--- abstract: 'Efficient allocation of scarce law enforcement resources is a hard problem to tackle. In a previous study (Barreras et al. (2019), forthcoming) it has been shown that a simplified version of the self-exciting point process explained in Mohler et al. (2011) predicts crime in the city of Bogotá, Colombia, better than other standard hotspot models such as plain KDE or ellipse models. This paper fully implements the Mohler et al. (2011) model in the city of Bogotá and explains its technological deployment for the city as a tool for the efficient allocation of police resources.' author: - | Mateo Dulce\ University of los Andes and Quantil\ `mate.dulce@quantil.com.co`\ Simon Ramirez\ University of los Andes and Quantil\ `simon.ramirez@quantil.com.co`\ Alvaro Riascos\ University of los Andes and Quantil\ `ariascos@uniandes.edu.co` nocite: '[@*]' title: Efficient allocation of law enforcement resources using predictive police patrolling --- Introduction {#introduction .unnumbered} ============ Criminality is one of the biggest challenges mega-cities face. Among many other decisions, policy makers have to efficiently allocate scarce law enforcement resources over a vast and highly dynamic environment. This is a hard problem with no trivial solution. For example, between 2012 and 2015 all murders and 25% of all crimes in Bogotá took place in just 2% of street segments. Yet, these same road segments received less than 10% of effective police patrolling time. Understanding the spatial and temporal dynamics of these so-called *hotspots* is needed to make highly effective police patrolling possible. In this paper we develop a *self-exciting point process* model to predict crime and present partial results of its deployment in field scenarios in Bogotá, Colombia. This paper is organized as follows. Section 2 describes the theoretical model used to approach crime prediction. Section 3 describes the training of the model and section 4 its validation. Finally, section 5 presents the technological deployment and visualization of the model. The model {#the-model .unnumbered} ========= The model developed to predict crime occurrences in Bogotá, Colombia, closely follows the methodology proposed by Mohler et al. (2011) in their work *Self-Exciting Point Process Modeling of Crime* (Mohler et al., 2011). This model is built on three assumptions: 1. Criminality concentrates in specific areas of the city. 2. Crime incidence is higher at certain times of the day and on certain days of the week. 3. Crime spreads from one place to another like a disease. With this in mind, crimes are classified into background and aftershock events, the former being those that arise independently given their spatio-temporal location, while the latter are triggered by past crimes nearby. Crime appearance is modeled as a self-exciting point process in which the past occurrence of crimes increases the probability of new crimes occurring in the future.
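As a purely temporal illustration of this self-exciting mechanism (the full spatio-temporal intensity used by the model is defined in the next section), the sketch below evaluates a Hawkes-type conditional intensity with a constant background rate and an exponentially decaying triggering kernel. The parameter values `mu`, `alpha` and `beta` are arbitrary illustrative choices, not parameters of the deployed model.

```python
import numpy as np

def hawkes_intensity(t, history, mu=0.5, alpha=0.8, beta=1.2):
    """Toy temporal self-exciting intensity:
    lambda(t) = mu + sum_{t_k < t} alpha * beta * exp(-beta * (t - t_k))."""
    history = np.asarray(history, dtype=float)
    past = history[history < t]
    return mu + np.sum(alpha * beta * np.exp(-beta * (t - past)))

# Each past event temporarily raises the intensity, which then decays back to mu.
events = [1.0, 1.3, 4.0]
for t in (0.5, 1.5, 2.5, 5.0):
    print(f"lambda({t}) = {hawkes_intensity(t, events):.3f}")
```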
A spatio-temporal point process $N(x,y,t)$ is uniquely characterized by its conditional intensity $\lambda(x,y,t)$, which can be defined as the expected number of points falling in an arbitrarily small spatio-temporal region, given the history of points $\mathcal{H}_t$ up to time $t$: $$\label{defIntensity} \lambda(x,y,t) = \lim_{\Delta x, \Delta y, \Delta t \downarrow 0} \frac{E[N\{(x,x+\Delta x)\times (y,y+\Delta y)\times (t,t+\Delta t)\}|\mathcal{H}_{t}]}{\Delta x \Delta y \Delta t}.$$ For the purpose of crime prediction, and according to the initial assumptions on the behavior of crime occurrence, it is assumed that the conditional intensity takes the following functional form $$\label{intensity} \lambda(x,y,t) = \mu(x,y)\nu(t) + \sum_{\{k:t_k<t\}}g(x-x_{k}, y-y_{k}, t-t_{k}),$$ where $\mu(x,y)$ and $\nu(t)$ capture the appearance patterns of background crimes according to their spatial and temporal location, respectively. In a similar fashion, $g(x-x_{k}, y-y_{k}, t-t_{k})$ captures how crime $(x_{k}, y_{k}, t_{k})$ propagates to other spatio-temporal locations. Training {#training .unnumbered} ======== We worked with criminal data from the *Delinquential, Contraventional and Operative Statistical Information System* (SIEDCO) of the National Police of Colombia. This dataset contains georeferenced crimes that occurred in Bogotá during 2017, along with the day and time of each crime. We aggregate the data according to the patrol shifts of the Bogotá police department into 3 daily and 21 weekly shifts. Then we construct the *circular\_time* variable, which summarizes the day and time of the week in which a crime occurs, and *linear\_time*, which keeps the temporal record of the crimes. *Circular\_time* is the input variable of the function $\nu$, which looks for temporal patterns of crime occurrence, while *linear\_time* is used in the triggering function $g$ to study the temporal distance between crimes. Finally, the function $\mu$ uses the latitude and longitude coordinates of historic crimes to find spatial patterns. To estimate the conditional intensity function, it is necessary to differentiate between background crimes and those triggered by past crimes, and to use each of these families of data to estimate the corresponding functions: $\mu$ and $\nu$ with background events and $g$ with aftershock crimes. The training of the model is then based on *stochastic declustering* techniques and kernel density estimation. Assuming that the functional form of the conditional intensity is correct, the probability that crime $i$ was triggered by crime $j$ is: $$\label{probRep} p_{ij} = \frac{g(x_{i}-x_{j}, y_{i}-y_{j}, t_{i}-t_{j})}{\lambda(x_{i},y_{i},t_{i})}.$$ On the other hand, the probability that crime $i$ is a background event is given by: $$\label{probTrans} p_{ii} = \frac{\mu(x_{i}, y_{i})\nu(t_{i})}{\lambda(x_{i},y_{i},t_{i})}.$$ Let $P$ denote the matrix with entries $p_{ij}$. Note that $P$ is an upper triangular matrix, given that a crime cannot be triggered by a future event, and that, by definition of $\lambda(\cdot)$, its columns sum to one. Then we perform the following iterative algorithm until the matrix $P$ converges: 1.
$P_{0}$ is initialized assuming that the crime triggering process decays exponentially in time and behaves as a bivariate normal distribution on the spatial coordinates (Rosser and Cheng, 2016): $$p_{ij} = \exp(-\alpha(t_{i}-t_{j}))\exp\left(\frac{-(x_{i}-x_{j})^{2}-(y_{i}-y_{j})^{2}}{2\beta^{2}}\right), ~~~ i\leq j,$$ normalizing its columns such that each one sums to one. In all the exercises performed we found that the parameters $\alpha = 0.03$ and $\beta=100$ yield consistent results. 2. Given $P_{n-1}$, sample background events $\{(x_{i}^{b},y_{i}^{b},t_{i}^{b})\}_{i=1}^{N_{b}}$ and triggered crimes $\{(\Delta x_{i}^{r},\Delta y_{i}^{r},\Delta t_{i}^{r})\}_{i=1}^{N_{r}}$, where $(\Delta x_{i}^{r}, \Delta y_{i}^{r}, \Delta t_{i}^{r})$ denotes the spatio-temporal distance of crime $i$ to its triggering crime. 3. Estimate the functions $\mu_{n}$ and $\nu_ {n}$ using the sampled background crimes, and $g_{n}$ using the sampled triggered crimes, via kernel density estimation. 4. Update the matrix $P_{n}$ using the functions $\mu_{n}$, $\nu_{n}$ and $g_{n}$ estimated in the previous step, and relations (\[probRep\]) and (\[probTrans\]). If $||P_{n}-P_{n-1}||_{2} \geq \epsilon$, go to step 2[^1]. Note that in each iteration the number of background and aftershock events varies, but the total number of crimes remains constant, $N^{b}+N^{a}=N$. Thus, in the kernel density estimation we use a variable bandwidth that is updated in each iteration by maximum likelihood using the events sampled for each function. Finally, we obtain the functions $\mu$, $\nu$ and $g$ from the training process and construct the conditional intensity $\lambda$ from them. Then, to predict the occurrence of crimes, we evaluate the intensity function at the spatial coordinates and shifts of interest. For this, we specify these two dimensions so that the estimates are useful to police authorities, according to their operating procedures: 1. Time dimension: eight-hour shifts. 2. Spatial dimension: two main hotspots identified per locality. [0.45]{} ![Competitors[]{data-label="fig:timing2"}](im/int_usaquen.PNG "fig:"){width="\linewidth"} [0.45]{} ![Competitors[]{data-label="fig:timing2"}](im/ic_usaquen.PNG "fig:"){width="\linewidth"} Validation {#validation .unnumbered} ========== To evaluate the predictive capacity of the model of crime as a self-exciting point process, and to select the parameters that maximize such predictive capacity, we use the standard *Hit Rate* measure, which indicates the portion of crimes correctly predicted by the model. For this, we divide the city into uniform cells and compute the intensity of each cell using a Monte Carlo method. Then, we choose the critical cells (hotspots) and evaluate how many of the known crimes occurred in these cells. $$\textnormal{Hit Rate} = \frac{\textnormal{\# Crimes in the predicted hotspots}}{\textnormal{Total \# crimes}}.$$ The training and validation processes were performed on the 10% of the city cells that include the Santa Fe sector. We train the proposed model with crime data from this area over ten weeks (June 22, 2017 to August 31, 2017), and test its predictive accuracy against the crimes that occurred in the following four weeks (September 1 to 28, 2017). The validation process shows that the model of crime as a self-exciting point process trained with a variable bandwidth predicts, on average, a greater number of crimes than the model with fixed bandwidth or a plain KDE.
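The following is a simplified sketch of the declustering iteration described in the Training section above. For brevity it uses the expected (probabilistic) assignments stored in $P$ as kernel weights instead of explicitly sampling background and triggered events, and it estimates $\mu$, $\nu$ and $g$ with `scipy.stats.gaussian_kde`; the maximum-likelihood bandwidth update, the circular-time construction and other details of the deployed system are omitted, so this is an illustration rather than the production implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

def decluster(x, y, t, t_week, alpha=0.03, beta=100.0, tol=1e-2, max_iter=50):
    """Sketch of the iterative declustering estimate of mu(x, y), nu(t_week)
    and g(dx, dy, dt).  Events must be sorted by linear time t.
    Column i of P holds the trigger probabilities of event i; the diagonal
    entry is its background probability, and each column sums to one."""
    n = len(t)
    dt = t[None, :] - t[:, None]          # dt[j, i] = t_i - t_j
    dx = x[None, :] - x[:, None]
    dy = y[None, :] - y[:, None]

    # Step 1: exponential decay in time, Gaussian in space, for earlier events only.
    P = np.exp(-alpha * dt) * np.exp(-(dx**2 + dy**2) / (2.0 * beta**2))
    P = np.triu(P, k=1)
    np.fill_diagonal(P, 1.0)              # initial background weight (an assumption)
    P /= P.sum(axis=0, keepdims=True)

    upper = np.triu(np.ones((n, n), dtype=bool), k=1)
    for _ in range(max_iter):
        w_bg = np.diag(P).copy()          # expected background assignments
        w_tr = np.triu(P, k=1)            # expected triggering assignments

        # Step 3: weighted kernel density estimates (fixed bandwidths here).
        mu = gaussian_kde(np.vstack([x, y]), weights=w_bg)
        nu = gaussian_kde(t_week, weights=w_bg)
        pairs = upper & (w_tr > 1e-6)
        g = gaussian_kde(np.vstack([dx[pairs], dy[pairs], dt[pairs]]),
                         weights=w_tr[pairs])

        # Step 4: rebuild P from the current estimates and renormalize columns.
        P_new = np.zeros_like(P)
        P_new[upper] = g(np.vstack([dx[upper], dy[upper], dt[upper]]))
        np.fill_diagonal(P_new, mu(np.vstack([x, y])) * nu(t_week))
        P_new /= P_new.sum(axis=0, keepdims=True)

        if np.linalg.norm(P_new - P) < tol:
            P = P_new
            break
        P = P_new
    return mu, nu, g, P
```

With such estimates in hand, a predicted intensity per cell and shift can be obtained by evaluating $\hat{\lambda}(x,y,t)=\hat{\mu}(x,y)\hat{\nu}(t)+\sum_{k}\hat{g}(\cdot)$ over a spatial grid, along the lines of the cell-based validation described below.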
- Accuracy per unit of area identified: $$\begin{aligned} \text{PAI}= \dfrac{\text{Hit Rate}}{\text{Percentage of Area}}\\ \text{Hit Rate}= \dfrac{\text{Crimes predicted in Hotspots}}{\text{Total Crimes}}\\ \text{Percentage of Area}= \dfrac{\text{Area of Hotspots}}{\text{Total Area}}\end{aligned}$$ - Hit Rate with 7 weeks of training data and 10% of covered area (i.e., hotspots):

  Prediction   KDE    fixed bw   variable bw
  ------------ ------ ---------- -------------
  Week 1       0.42   0.44       0.57
  Week 2       0.44   0.46       0.59
  Week 3       0.53   0.54       0.62
  Average      0.46   0.48       0.59

Finally, to assess whether the predictive accuracies of the models differ statistically, we used the non-parametric *Wilcoxon signed-rank test* to compare the obtained samples of crime predictions. The results show that the self-exciting point process model of crime performs statistically better at predicting crime in the city of Bogotá, Colombia, than other state-of-the-art crime prediction models.

  Comparison                 p-value
  -------------------------- ---------
  fixed bw vs. KDE           0.061
  fixed bw vs. variable bw   0.030
  variable bw vs. KDE        0.016

Field deployment and visualization {#field-deployment-and-visualization .unnumbered} ================================== We jointly developed a hybrid (web and mobile) application with local law enforcement authorities[^2] and Colombia’s main research center on security studies[^3] to deploy our model in real-life field scenarios in Bogotá. Our app is available both to planning agents via web browsers and to field agents via device-specific native containers. The application is integrated with local law enforcement information systems and uses the most recent available crime data to calibrate the self-exciting point process model. The app offers users two main features: - Display a crime-intensity heatmap over the neighborhoods under agent surveillance for weekly schedules (see Figure 2.a). - Display critical hotspots in the neighborhoods under agent surveillance for weekly schedules (see Figure 2.b). The application has been under pilot trial in 10 crime-ridden neighborhoods of Bogotá since November 2017. Early results are encouraging. Police agents have incorporated the app into their decision-making process and are looking forward to further developments. A more extensive and rigorous randomized controlled trial, covering 500 neighborhoods, is currently being developed to estimate causal effects on safety- and liveability-related variables. ![Web based view of crime intensity heatmaps over Bogotá](im/intensidades.png){width="0.9\linewidth"} ![Web based view of critical hotspots over Bogotá](im/hotspots.png){width="0.9\linewidth"} [00]{} Barreras, F., Diaz, C., Riascos, A. and M. Ribero (2019). Comparación de diferentes modelos para la predicción del crimen en Bogotá. *Economía y seguridad en el posconflicto.* H. Zuleta, Ed. Mohler, George O and Short, Martin B and Brantingham, P Jeffrey and Schoenberg, Frederic Paik and Tita, George E. (2011). Self-exciting point process modeling of crime. *Journal of the American Statistical Association, Volume 106, Issue 493.* Pages 100-108. Rosser, Gabriel and Cheng, Tao (2016). Improving the robustness and accuracy of crime prediction with the self-exciting point process through isotropic triggering. *Applied Spatial Analysis and Policy*. Pages 1–21. [^1]: $||.||_{2}$ denotes the matrix norm in $L_{2}$. $\epsilon=0.01$ was used as the convergence parameter for $P$.
[^2]: Secretaría de Seguridad, Convivencia y Justicia de Bogotá (SSCJ) and Policía Metropolitana de Bogotá (MEBOG) [^3]: Centro de Estudios sobre Seguridad y Drogas (CESED) - Universidad de los Andes
--- abstract: 'In the cell, protein complexes form relying on specific interactions between their monomers. Excluded volume effects due to molecular crowding would lead to correlations between molecules even without specific interactions. What is the interplay of these effects in the crowded cellular environment? We study dimerization of a model homodimer both when the monomers are free and when they are tethered to each other. We consider a structured environment: Two monomers first diffuse into a cavity of size $L$ and then fold and bind within the cavity. The folding and binding are simulated using molecular dynamics based on a simplified topology-based model. The [*confinement*]{} in the cell is described by an effective molecular concentration $C \sim L^{-3}$. A two-state coupled folding and binding behavior is found. We show that the maximal rate of dimerization occurs at an effective molecular concentration $C^{op}\simeq 1m$M, which is a relevant cellular concentration. In contrast, for tethered chains the rate stays at a plateau when $C<C^{op}$ but then decreases sharply when $C>C^{op}$. For both the free and tethered cases, the simulated variation of the rate of dimerization and of the thermodynamic stability with effective molecular concentration agrees well with experimental observations. In addition, a theoretical argument for the effects of confinement on dimerization is also made.' author: - 'Wei Wang$^\ast$, Wei-Xin Xu, Yaakov Levy, E. Trizac,' - 'P. G. Wolynes' title: Confinement Effects on the Kinetics and Thermodynamics of Protein Dimerization --- Many biological functions depend on protein complexes or multimeric proteins which must specifically form in a crowded cellular environment. There are several types of protein complexes. Homodimeric proteins, consisting of two identical chains or monomers with a symmetrical conformation, are the most typical [@dimer]. [*In vitro*]{} experiments show that the formation of dimeric proteins, termed dimerization, may be described as two-state [@Two-state] or three-state [@three-state]. Here, the term two-state indicates that the folding and binding of the monomers are directly coupled, while the term three-state signifies that binding starts from already folded monomers or that binding has a dimeric intermediate. Since dimerization involves the assembly of two monomers, its rate should depend on the monomer concentration. That is, when the separation distance of the monomers is large, the monomers must first diffuse close to each other and then dimerize. In [*in vitro*]{} experiments, where in general only one kind of molecule is involved, dimerization occurs easily when the concentration is high. [*In vivo*]{} dimerization of specific monomers is more complicated than [*in vitro*]{} because cells are rather crowded due to the presence of various macromolecules[@crowding1; @crowding2; @crowding3; @crowding4; @hxzhou-minton2008]. When the local concentration of the monomers is low, the monomers take a long time to diffuse together, and the diffusion may even be kinetically blocked by other molecules. This makes dimerization more difficult. Nevertheless, when the local concentration of the monomers is sufficiently high, dimerization occurs easily.
The concentration of total macromolecules in the cytoplasm is estimated to be $80\sim 200g/l$ [@crowding1; @crowding2; @crowding3], which is approximately $1m$M (or $100\mu$M) if an average molecular weight $\bar{m}=500\times 110$Da, i.e., 500 amino acids (or 5000 amino acids) on average per macromolecule, is assumed. Obviously, crowding must lead to excluded volume effects [@crowding4; @hxzhou-minton2008; @excluded1; @excluded2; @excluded3; @excluded4; @excluded5], which can be described using an effective concentration of the reacting molecules. Crowding can shift the balance between reactants and products, making association reactions highly favored. It has been suggested that association constants under crowded conditions could be several orders of magnitude larger than those in dilute solutions[@crowding2; @crowding3; @crowding4; @hxzhou-minton2008]. At the same time, crowding causes a decrease in the diffusion rate of molecules by a factor in the range of $3\sim 10$ [@crowding2; @crowding3; @verkman]. The translational diffusion of molecules in the cell is a kinetic process which can be described using Brownian dynamics [@verkman; @northrupjcp1984]. Dimerization, ultimately, involves the intimate contact between two specific monomers, a local dynamic process. Previously, the simultaneous folding and binding of a number of homodimers has been studied theoretically using topology-based models (Gō-like) by adding a covalent linkage between the two monomers of the dimer [@dimer-go1; @dimer-go2]. Such studies may be directly related to the [*in vitro*]{} situation. To study the [*in vivo*]{} dimerization of homodimeric proteins, both the interactions between the monomers and those between the monomers and other macromolecules must be considered. Including crowding effects in such studies may provide useful insights into the formation of various protein complexes and into protein-protein interactions, and thus enable one to understand intracellular protein networks and to design protein complexes that could act as pharmacological inhibitors. Here, we study the dimerization of two monomers encapsulated in a cavity of size $L$, which mimics the crowding in the cell by an effective molecular concentration $C \sim L^{-3}$. We study both the thermodynamics and the kinetics. The diffusion of the monomers into the confined space is described by Brownian dynamics. Dimerization depends on the size $L$, or the effective concentration $C$. The model predicts a maximal rate of dimerization at an optimal confined-space size $L^{op}=22$ (in units of $3.8$Å). Such an optimal $L^{op}$ corresponds to an effective concentration $C^{op}\sim 1m$M, which is of the order of the macromolecular concentration in cells [@crowding1; @crowding2; @crowding3]. This suggests that the rate of dimerization and the concentration of various macromolecules in cells may have been optimized by evolution. Based on the changes of the conformational and translational entropies due to confinement, we show that the heights of the free energy barriers for binding and the folding transition temperatures scale with the cavity size. Results and Discussion ======================= Molecular Crowding and Molecular Diffusion ------------------------------------------ Suppose that in a cubic box with size $L_b=1000\AA$, corresponding to a small compartment of the cell, there are about 1000 molecules. Among these, only two specific monomers can form a dimer.
The effective molecular concentration is $C_{e}=100g/l$, given an average molecular weight $\bar{m}=55$kDa (i.e., $\sim 500$ amino acids). That is, $1000=C_{e}\times L_{b}^{3}$. All molecules diffuse randomly in the large box. Once the distance between the two monomers reaches a smaller value, $L$, i.e., the monomers diffuse into a small confined space (Fig.1[*A*]{}), they can form a dimer there by folding and binding. The diffusion time is simulated using Brownian dynamics (see Methods), and a monotonic decrease of this time with the size $L$ is obtained (Fig.1[*B*]{}). In the Brownian dynamics each monomer is modeled as a single particle mimicking the protein chain. Clearly, the diffusion of such particles cannot fully describe the behavior of the protein chains, since the chains are soft and may change their conformations during the diffusion. However, for the sake of simplicity, an effective friction for the particles can be used to model the diffusion of the protein chains. Previously, a friction coefficient $\gamma_{a} \sim 0.05$ was used for a single amino acid of size $a$ [@Thirumalai1996]. For the subunit of the Arc repressor (with 53 amino acids) studied in this work, the corresponding friction coefficient is estimated to be $\gamma \sim 0.19$, since the size of the Arc monomer is approximately $53^{1/3}a$. Here, a spherical conformation for the monomer is assumed. Note that for a protein with $500$ amino acids the friction coefficient is $\gamma \sim 0.4$ \[if a friction coefficient $\gamma_{a}=0.2$, which was argued to be a factor of 10 larger than the value measured for amino acids in water[@pnas-Nymeyer], is used, one has $\gamma \sim 1.6$\]. Thus a value of $\gamma \sim 0.1$ is in a reasonable range to model the kinetics of the Arc protein. To show the effect of the friction coefficient on the diffusion rate of the monomers, several cases with different $\gamma$ values are simulated (see Fig.1[*B*]{}). Clearly, the diffusion slows down when the friction is large. It is noted that such simplified diffusion of Brownian particles is approximate but reasonable when the sizes of the two protein chains are negligible compared with the inter-chain distance between them. Obviously, if the density of the specific monomers is high, the diffusion time will be short and dimerization events will be more frequent. In the present work we include only two such monomers to model the limiting case. It is also worth noting that if the size of the confined space $L$ is roughly $5\sim 6$ times the radius of gyration $R_g$ of the monomers, the monomers need to diffuse further within the confined space. Here a value of $5\sim 6$ is used since the unfolded monomeric chain is extended [@excluded3]. ![ Molecular crowding and confinement. ([*A*]{}) A schematic particle model for the molecular crowding. Two monomers are distributed randomly in a box and diffuse into a confined space with size $L$. ([*B*]{}) The time for any two monomers to diffuse to within a separation distance $<L$, versus the size $L$, for four different friction coefficients. ([*C*]{}) The Arc dimer (PDB structure 1arr) confined in a cylindrical cavity.
([*D*]{}) The cavity size $L$ versus the effective molecular concentration $C\sim 1/(2\pi L^{3})$[]{data-label="fig1"}](Fig1-weiwang.eps){width=".4\textwidth"} Model Protein and Confinement ----------------------------- The homodimer studied in this work is the Arc repressor of bacteriophage P22, which consists of two chains each containing 53 residues and has a symmetrical native structure [@BREG]. Arc is taken as the model protein because it is small and has been studied experimentally. In a crowded environment, the confined space is taken as a cylindrical cavity (Fig.1[*C*]{}) whose diameter $2L$ and height $h$ are set equal to each other. The size of the cylinder is related to the effective molecular concentration by $C\sim 1/(2\pi L^3)$ if every cylinder contains only two molecules. Obviously, a big cylinder or weak confinement corresponds to a low $C$ and [*vice versa*]{} (Fig.1[*D*]{}). Thus, two Arc monomers may fold and bind in such a cavity, modelling the dimerization of a homodimer within a crowded cell. These processes are simulated with molecular dynamics using Gō-like potentials (see Methods). The confinement is modeled by a cylinder, which was previously used to study the confinement/crowding effect on protein folding [@excluded1; @excluded4; @excluded5; @Ziv05pnas]. Simulations using a spherical space indicate that using different confinement shapes does not qualitatively change the results [@Klimov02pnas; @excluded2; @Mittal2008]. Furthermore, a study of the molecular crowding effect on the folding of globular proteins suggested that, to depict a rather crowded [*in vivo*]{} environment, the optimal cavity to host a protein molecule may be cylindrical [@excluded3]. Two-state Behavior ------------------ Some features of the dimerization trajectories of the Arc dimer confined in a cavity with $L=20$ (or $C\sim 1.2m$M) at the related transition temperature $T^{L}_f$ are shown in Fig.2. The typical time evolution of the native contacts of chain-A and chain-B ($Q_A$ and $Q_B$), of the interfacial native contacts ($Q_{AB}$), and of the separation distance $d$ between the centers of mass of the two chains is shown (Fig.2[*A-B*]{}). One can see clearly that the folded state (with $Q_A$ or $Q_B \sim 0.9$) emerges only when the interface is formed (with $Q_{AB}\ge 0.85$). Interestingly, $d$ varies between the folded and unfolded states. The free energies of the folding and binding process projected onto three different sets of reaction coordinates show the most populated states, i.e., folded chains with a well-formed interface (both $Q_A$ and $Q_B\sim 0.9$, and $Q_{AB}\sim 0.85$) and unfolded chains without binding (both $Q_A$ and $Q_B\sim 0.5$, and $Q_{AB}< 0.1$) (Fig.2[*C-D*]{}). Note that the interfacial native contacts $N_{AB}=143$ are almost twice as numerous as the intra-chain native contacts $N_{A}$ (or $N_{B}$) $=77$; thus, energetically, the states with $Q_A$ and $Q_B\sim 0.5$ can still be referred to as unfolded states. These results indicate that the folding and binding occur in a cooperative two-state manner, consistent with previous [*in vitro*]{} experimental observations [@Two-state; @experiment5; @experiment6] and also with earlier simulations for linked chains [@dimer-go1; @dimer-go2]. ![The features of dimerization at $T^{L}_f$ within a cavity of $L=20$. ([*A*]{}) The time evolution of the native contacts: $Q_A$ for monomeric chain-A in green (or $Q_B$ for monomeric chain-B in blue), and $Q_{AB}$ for the interface in red.
([*B*]{}) The time evolution of the separation distance $d$ between the centers of mass of the two monomers. ([*C*]{}) The free energies projected onto $Q_A$ versus $Q_B$. ([*D*]{}) The free energies projected onto $Q_A$ versus $Q_{AB}$. []{data-label="fig2"}](Fig2-weiwang.eps){width=".4\textwidth"} ![The features of dimerization. ([*A*]{}) The folding transition temperature $T^{L}_f$ versus the cavity size $L$ for free and for tethered monomers. ([*B*]{}) The dimerization rate averaged over $100$ trajectories at $0.85T^{L}_f\sim 1.0$ versus the cavity size $L$. Curve-A shows the case without global diffusion for two free monomers, and curve-B shows the case with global diffusion for two free monomers at $\gamma =0.1$. Curve-C shows the case for two tethered monomers. The related concentrations are listed on the upper x-axes in both Fig.3[*A-B*]{}.[]{data-label="fig3"}](Fig3-weiwang.eps){width=".4\textwidth"} Effects of Concentration on Stability ------------------------------------- To study the influence of the various effective molecular concentrations $C$ on the dimerization, the transition temperatures $T^{L}_f$, which characterize the thermodynamic stability of the dimer (a high value of $T^{L}_f$ means high stability), are obtained. In Fig.3[*A*]{} it is shown that the value of $T^{L}_f$ decreases monotonically as $L$ increases (or $C$ decreases), implying that a small $C$, or a large space, results in a low value of $T^{L}_f$, i.e., low thermodynamic stability. Experimentally, both urea and thermal denaturation showed that the stability of the Arc dimer is low at low protein concentrations [@Two-state; @experiment5; @experiment6]. Experiments on other dimeric proteins have also shown that a high concentration improves the thermodynamic stability [@experiment3; @experiment4]. Our results are clearly consistent with these experimental observations. The value of $T^{L}_f$ at $C=22.9m$M (or $L=7$) increases by about $4\%$ with respect to that of the confinement-free case, i.e., $T_f^{bulk}=1.18$, defined roughly at $C=1\mu $M. Obviously, such a large enhancement in the thermodynamic stability is due to the crowding effect, or confinement, which reduces the conformational and translational entropies of the unfolded states of the two monomers more than it affects the native dimer, thus destabilizing the unfolded states (see the argument in the final part). Note that dimerization cannot occur if the confined space has $L<7$ (see Fig.3[*A*]{}); such a space is too crowded for the monomers to perform their folding and binding. Effects of Concentration on Kinetics ------------------------------------ The effect of concentration on the kinetics of dimerization is reflected in the rate of dimerization obtained by incorporating the diffusion, folding and binding processes together (Fig.3[*B*]{}). The rate $k_f$ changes nonmonotonically as $L$ increases (or $C$ decreases), showing a maximum at $C^{op} \sim 1m$M, which is comparable to the macromolecular concentration in cells (see curve-A and curve-B in Fig.3[*B*]{}). Here the rate $k_f$ is inversely proportional to the sum of the time for the two monomers to diffuse into the confined space and the time for the assembly of the two monomers within the confined space. Note that the assembly of the two monomers within a confined space may include local diffusion if the initial distance between the two monomers is large.
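To make the mapping between cavity size and effective concentration used above explicit, the short sketch below converts a cylindrical cavity of radius $L$ (in the reduced length unit of 3.8 Å) containing two monomers into a molar concentration via $C=N/(N_A\,2\pi L^{3})$, and also converts the cytoplasmic mass concentration quoted in the introduction into a molarity for an assumed average molecular weight. The numbers recover the order-of-magnitude estimates $C^{op}\sim 1m$M at $L^{op}=22$ and $\sim 1m$M for $100$ g/l of $\sim 55$ kDa macromolecules; the choice of two molecules per cavity follows the description in the text, while the function names and everything else are illustrative.

```python
import numpy as np

N_A = 6.022e23          # Avogadro's number, 1/mol
UNIT = 3.8e-10          # reduced length unit used in the simulations, m

def cavity_concentration_mM(L_reduced, n_molecules=2):
    """Effective molar concentration of n molecules in a cylindrical
    cavity of radius L and height 2L, i.e. volume 2*pi*L**3."""
    L = L_reduced * UNIT                       # m
    volume_litre = 2.0 * np.pi * L**3 * 1e3    # m^3 -> litre
    return n_molecules / (N_A * volume_litre) * 1e3   # mol/l -> mmol/l

def mass_conc_to_mM(grams_per_litre, molecular_weight_Da):
    """Convert a mass concentration (g/l) into a molarity (mM)."""
    return grams_per_litre / molecular_weight_Da * 1e3

print(cavity_concentration_mM(22))      # ~0.9 mM, i.e. C_op ~ 1 mM at L_op = 22
print(mass_conc_to_mM(100, 55_000))     # ~1.8 mM for 100 g/l of ~55 kDa proteins
```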
Clearly, here the diffusion of the two monomers in the cavity is simulated by the motion of two polymeric chains (Fig.1[*C*]{}), not of two point particles (Fig.1[*A*]{}). In Fig.3[*B*]{}, three cases are shown, namely the dimerization of two monomers with and without diffusion, and the single tethered mutant. Curve-A shows the case without nonlocal diffusion, which describes a situation of high local concentration of the monomers. It is found that when $L$ is small (or $C$ is high), the dimerization is slow and quite difficult, since the conformational space for the chains to search is limited. As $L$ increases, the dimerization becomes easier and faster. However, when $L$ is too large, the conformational space becomes very large and the chains must spend much time finding the folded state, resulting in slow dimerization. Thus, there exists an optimal size for the confinement, or an optimal effective concentration $C^{op}$. For $C<C^{op}$, the rate $k_f$ monotonically decreases as $C$ decreases (Fig.3[*B*]{}). When $C$ is low enough, the rate $k_f$ depends linearly on $C$, in agreement with the experimental observation[@experiment5]. As shown by curve-B, a similar change of the dimerization rate is also observed when the nonlocal diffusion is taken into account. Since the diffusion time decreases monotonically as the size of the confined space $L$ increases (Fig.1[*B*]{}), the decrease of the dimerization rate becomes slower. However, there still exists an optimal size of the confinement, or an optimal effective concentration, with about the same value of $C^{op}$ as obtained for the case without the nonlocal diffusion. The physical origin of this behavior is basically the same as in the case without diffusion; the nonlocal diffusion only increases the total time of the dimerization when the two monomers are initially farther apart. Actually, curve-B corresponds to a rather demanding scenario, since the local concentration of the monomers is low (only two monomers among 1000 molecules are assumed) and the average separation distance is large. Clearly, if the local concentration of the monomers is not so low, or the monomers are co-located, the effects of global diffusion are smaller. As a result, the actual dimerization-rate curve should be bounded by curves A and B. Effect of Confinement for Tethered Mutant ----------------------------------------- Clearly, for the tethered mutant, i.e., when the two chains of the Arc dimer are linked together, the thermodynamic stability is higher than for the non-tethered case, especially when $L\ge 15$ (see Fig.3[*A*]{}), and the rate of dimerization shows a plateau when $C<C^{op}$ (Fig.3[*B*]{}). Again, this agrees with experimental observations that tethering the two subunits of a dimeric protein significantly enhances both the thermal stability of the dimer and the rate of dimerization [@experiment6; @experiment7]. The physical reason is that the tethering significantly reduces the conformational and translational entropies of the two tethered chains, reducing the search time in the unfolded ensemble and destabilizing the unfolded states. In addition, the two chains of the Arc dimer do not need to diffuse much to be close to each other because they are already linked together. Therefore, it takes them less time to complete the folding and binding in comparison with the non-tethered case, especially for large confined spaces. Obviously, more time is needed for diffusion as the space available to the two monomers grows larger.
It is worth noting that the tethered case actually corresponds to a rather crowded case of non-tethered monomers, with an effective concentration $C_{e}=2.7m$M for the non-tethered Arc dimers[@experiment6]. This is comparable to the optimal effective concentration $C^{op}\sim 1m$M. Free Energy Profiles of Folding and Binding ------------------------------------------- To characterize the folding and binding of the two chains, we calculate the free energy profiles for both processes. As shown in Fig.4[*A*]{}, for the case of $L=20$ (or $C\sim 1.2m$M), the height of the free energy barrier $\Delta G_b^{\ddag}$ for binding is about $4.7\epsilon$, which is much larger than that for the folding of the monomeric chains, i.e., $\Delta G_f^{\ddag}\sim 1.7\epsilon$. This suggests that the binding is the dominant rate-limiting step in the dimerization. Interestingly, it is found that the value of $\Delta G_b^{\ddag}$ increases when $L< 22$ (or $C>1m$M) and then saturates at $5.0\epsilon$ when $L\ge 22$ (or $C\le 1m$M) (Fig.4[*B*]{}). However, the variation of $\Delta G_f^{\ddag}$ for the monomeric chain is rather small (Fig.4[*C*]{}). Therefore, the crowding effect mainly influences the binding rather than the folding of the monomers. The rate-limiting step, the binding of the two monomers, also requires overcoming frustrated polar interactions, or non-native contacts, formed at the interface between the two monomers in a relatively hydrophobic environment [@Waldburger1996pnas]. To further understand the dimerization of the two chains, the free energy profile as a function of $\Delta d$, the distance between the centers of mass of the two chains shifted by subtracting the native separation distance, is shown in Fig.4[*D*]{}. Free energy funnels can clearly be seen for all three cases. As an example, for the case of $L=14$, a deep well around $\Delta d \sim 0$ corresponds to quite stable binding, i.e., a localized state of the two chains. Note that the effective attraction is short-ranged, similar to that of the binding between a ligand and a receptor. It is also seen that the two chains have weak or even no interaction in a certain range around $\Delta d\sim 12$. However, due to the repulsive interaction between the protein chains and the cavity wall, which represents the excluded volume effects of other proteins, the free energy increases when $\Delta d=\Delta d^{*}>16$. Thus the dimerization is quite naturally guided cooperatively by the binding and the confinement. For the various cavity sizes, the ranges with weak interactions and the values of $\Delta d^{*}$ are different, indicating that the slopes of the free energy profiles are different. The presence of a free energy funnel allows the dimerization to be stabilized by confinement. This is very similar to the free energy of protein-ligand binding obtained theoretically and also to the forces measured for ligand-receptor association and dissociation [@Woopnas2005; @Moy]. The dimerization reaction under confinement conditions investigated in this study by the native topology-based model [@Shoji-pnas; @dimer-go1; @dimer-go2] focuses on the effect of the confinement on the configurational and translational entropy[@excluded2]. In the cell, confinement and crowding encapsulate the monomers closely and facilitate binding. It is possible that the cavity has another role besides restricting the available volume of protein motions and dynamics.
For example, other effects can arise due to interactions of the protein with the walls of the cavity or due to intra- or inter-monomeric non-native interactions. ![ The free energy profiles and barriers for folding and binding. ([*A*]{}) The free energy for the interface as a function of $Q_{AB}$ (solid circles), and the free energy for the monomeric chain as a function of $Q_A$ (open circles). ([*B*]{}) The height of free-energy barrier for the dimer $\Delta G^{\ddagger}_b$ (marked in Fig.4[*A*]{}) versus $L$. ([*C*]{}) The height of free-energy barrier for a monomeric chain $\Delta G^{\ddagger}_f$ (marked in Fig.4[*A*]{}) versus $L$. ([*D*]{}) The binding free energy between two monomers as a function of $\Delta d$.[]{data-label="fig4"}](Fig4-weiwang.eps){width=".4\textwidth"} ![ Scaling behavior of folding and binding. ([*A*]{}) The scaling of $\ln (T^{L}_{f}-T_{f}^{bulk})/T_{f}^{bulk}$ with $L$ for two tethered monomers. The data are taken from Fig.3[*A*]{} (open circles) and the line represents the theoretical argument $(T^{L}_{f}-T_{f}^{bulk})/T_{f}^{bulk} \sim L^{-15/4}$. ([*B*]{}) The barrier height $\Delta G^\ddagger_b$ versus the folding temperature $T_{f}^{L}$ of two free monomers confined in space with $L$. The data are taken from Fig.4[*B*]{} (solid circles). The line in the main graph is a guide to the eye. Inset: The changes of barrier heights of the single monomer $\Delta\Delta G^{\ddagger}_f$ scaled with the size $L$. The open circles show the data from Fig.4[*C*]{}, and the line shows the scaling $\Delta\Delta G^{\ddagger}_f \sim L^{-15/4}$.[]{data-label="fig5"}](Fig5-weiwang.eps){width=".4\textwidth"} Theoretical Interpretation of Confinement ----------------------------------------- It is well known that the folded state corresponds to a compact conformation, while the ensemble of unfolded states has a huge number of extended conformations. Thus, confinement primarily affects the free energy of the unfolded states through the conformational entropy. This effect can be quantified based on the theory of polymers with excluded volume [@deGennes]. From the scaling arguments, the conformational entropy cost reads [@deGennes; @Luijten; @Raphael; @excluded2] $\Delta S^{c}_{u}/k_B \, \propto \, -N^{9/4} (a/L)^{15/4}$ where $u$ denotes the unfolded states, $S^{c}$ the conformational entropy, $N$ the number of residues (or beads) with size $a$ of the beads in a chain, and $L$ the size of the confined space. The exponent $15/4$ is more generally equal to $3/(3\nu -1)$ where $\nu=3/5$ is the Flory exponent. In addition, since at the folding transition temperature the free energy differences between the unfolded states and native state $\Delta G=G^{u}-G^{n}$ for both cases with and without confinement are zero, we have a relationship between the folding temperatures and the entropies as $ T^{bulk}_{f} (S^{bulk}_{u}-S^{bulk}_{n}) = T^{L}_{f} (S^L_{u}- S^L_{n})$ where the superscript $L$ and $bulk$ indicate cases with and without the confinement. Thus, we have $(T^L_{f}-T^{bulk}_{f}) (S^{bulk}_{u}-S^{bulk}_{n}) = T^{L}_{f}[(S^L_{n}-S^{bulk}_{n})- (S^L_{u}-S^{bulk}_{u})]= T^{L}_{f}(\Delta S_{n} - \Delta S_{u})$. In general, we have both the conformational and translational parts for $\Delta S$, i.e., $\Delta S = \Delta S^{c} + \Delta S^{t}$. For the tethered case, two monomers actually become a “long” single chain, and their contributions of translational entropies to $\Delta S$ are cancelled in a first approximation. Only their contributions to conformational entropies remain. 
Thus we have $(T^L_{f}-T^{bulk}_{f})/T^L_{f} \propto - \Delta S_{u}^{c}/k_B \propto L^{-15/4}$. Since the relative shift $(T^L_{f}-T^{bulk}_{f})/T^{bulk}_{f}$ is much smaller than unity, we have $(T^L_{f}-T^{bulk}_{f})/T^{bulk}_{f} \propto L^{-15/4}$. As plotted in Fig.5[*A*]{} for the simulation data of Fig.3[*A*]{}, good agreement is seen. For the case of two free monomers, the process of folding and binding, i.e., a process $1+2 \to 12$, involves the loss of one independent chain or monomer, and the translational entropies correspondingly do not cancel. For the unfolded/unbound states with 2 chains, this contribution changes with the confined volume $V\simeq L^3$ as $2 \log V$, whereas it is only $\log V$ for the folded/bound state. Thus, we have $(T^L_{f}-T^{bulk}_{f})/T^{bulk}_{f} \propto -\log V - \Delta S_{u}^{c}/k_B$. The logarithmic term explains why, in the $T^L_{f}$ versus $L$ plot (Fig.3[*A*]{}), the curve for two monomers does not seem to converge towards a plateau, while the curve for the tethered case, where only the algebraic term is present, does saturate at large values of $L$. It is clear that the transition state between the unfolded and folded states is an ensemble with a non-vanishing conformational entropy, but with a smaller spatial extension than the ensemble of unfolded states. The transition state ensemble is therefore less sensitive to confinement, and its conformational entropy is not affected very much by confinement. When the system is confined at a given temperature, say at $T=T^{bulk}_f$, the relative positions of the free energies of the folded and transition states are not affected, while the unfolded state is destabilized. As a result, one then needs to increase the temperature by an amount $T^{L}_{f}-T^{bulk}_{f}$ to reach the folding temperature. The transition state is stabilized by an amount proportional to $T^{L}_{f}-T^{bulk}_{f}$. Thus, the barrier $\Delta G_b^{\ddag}$ should decrease linearly with $T^{L}_{f}$. Such an expectation is consistent with our simulation data, as shown in the main plot of Fig.5[*B*]{}, where a linear behavior is observed. The inset, which shows the difference between the bulk barrier and that at a given confinement, is thus an indirect way to check the above linear relation between the barrier height and the folding temperature shift. Conclusion ========== A model of confinement effects on the dimerization of a typical homodimeric protein was studied. It was found that both the thermodynamics and the kinetics of the dimerization are affected significantly by the effective molecular concentration characterized by the size of the cavity. The thermodynamic stability of the dimer can be enhanced and the dimerization can be accelerated as the concentration $C$ increases. An optimal value of $C^{op}\simeq 1.0m$M is obtained. This value is of the order of the concentration of macromolecules actually found in cells. The confinement and binding enhance the folding funnel, stabilizing the dimerization of the two monomers. Methods ======= Molecular diffusion ------------------- The diffusion of the molecules (i.e., particles) in a box is simulated using Brownian dynamics, $m^{p}_{i}{\bf \dot v_{i}}(t)={\bf F_{i}}(t)-\gamma {\bf v_{i}}(t) +{\bf \Gamma}_{i}(t)$ [@Thirumalai1996]. Here, ${\bf v}$, ${\bf \dot v}$ and $m^{p}$ are the velocity, acceleration and mass of the particles, respectively. The subscript index $i$ runs from $i=1$ to $i=2$ for the two specific particles, i.e., the two monomers of the dimer, and from $i=3$ to $i=1000$ for all other particles in the box.
For the sake of simplicity, all particles are taken to be identical. That is, all sizes are approximately equal to $53^{1/3} a$ and all masses are $m^{p}=53m$, since the Arc monomer has 53 amino acids. Here the size and mass of an amino acid are $a$ and $m$, respectively. ${\bf F_{i}}$ is the force arising from the interactions between the particles. A hard-core repulsive interaction between the particles, and also between the monomers and the other particles, is set as $V(r)=(\sigma^{p}_{0}/r)^{12}$, where the hard-core radius of a particle is $\sigma^{p}_{0}= (53)^{1/3}4.0$Å and $r$ is the distance between the particles. An attractive interaction with a 12-10 Lennard-Jones (LJ) potential between the two monomers is set as $V(r)=5(\sigma^{p}_{0}/r )^{12} - 6(\sigma^{p}_{0}/r)^{10}$. [**$\Gamma$**]{} is the white Gaussian random force modeling the solvent collisions, with variance related to the temperature by $\langle{\bf \Gamma }(t){\bf \Gamma }(t')\rangle=6\gamma k_B T\delta (t-t')$, where $k_B$ is the Boltzmann constant, $T$ is the absolute temperature, $t$ is time, and $\delta (t-t')$ is the Dirac delta function. Four values of the friction coefficient from $\gamma =0.01$ to $0.5$ are used in our simulations (see Fig.1[*B*]{}). The temperature is set as $T=300$K. The time unit $\tau$ is altered accordingly, following the formula applied for an amino acid [@Thirumalai1996], and other details of the simulation process are the same as for the folding and binding (see the following subsections). Based on 100 runs of molecular dynamics simulations starting from random positions of all the particles and monomers in the box, the average time for the two monomers to diffuse into the confined space is obtained for different sizes $L$. A periodic boundary condition is used to model the whole cell. Topology Based Model of the Homodimer ------------------------------------- A Gō-like potential is used to model the interactions within the Arc homodimer. For each monomeric chain the interactions include the virtual bonds $V^{s}_{bond}$, angles $V^{s}_{bond-angle}$, dihedral angles $V^{s}_{dihedral}$, and non-bonded pairs of the C$_{\alpha}$ atoms $V^{s}_{non-bond}$ \[for details see Ref.[@Clementi2000]\]. Here the superscript $s$ denotes chain-A or chain-B. Note that similar non-bonded interactions are also used for the native and nonnative contacts between inter-chain residues. A native contact is defined when the distance between any pair of non-hydrogen atoms belonging to two residues is shorter than $5.5$ Å in the native conformation of the dimer. Thus, the monomeric and interfacial native contacts can be defined. In addition, the crowding effect introduces a repulsive potential $V^{c}(r_i)$ between residue $i$ and the cylindrical wall when their distance $r_i$ is less than $\sigma_{0}=4$Å. Here $V^{c}(r_i) = \sum_i 50 [(\sigma_0/2r_{i})^{4} -2(\sigma_0/2r_{i})^{2}+1] \Theta(\sigma_0/2-r_{i})$ (for details see Ref.[@excluded1]). Simulations for folding and binding ----------------------------------- The simulations were carried out using Langevin dynamics and a leap-frog algorithm[@Thirumalai1996; @Levy2008]. The native Arc dimer is unfolded and equilibrated at high temperature, and then the unfolded conformations are taken as starting states for the folding simulations. The energy scale $\epsilon =1$ and time step $\delta t=0.005 \tau$ are used. Here $\tau =\sqrt {m a^2 /\epsilon}$ is the time scale, with the van der Waals radius of the residues $a=5$Å.
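As an illustration of the Langevin/Brownian dynamics used both for the diffusion of the particles and for the folding-and-binding simulations, the sketch below implements a generic single integration step of $m\dot{\bf v}={\bf F}-\gamma {\bf v}+{\bf \Gamma}$, with a Gaussian random force whose discrete variance per component is $2\gamma k_B T/\delta t$ (the discretized form of $\langle{\bf \Gamma}(t){\bf \Gamma}(t')\rangle=6\gamma k_B T\delta(t-t')$ for a three-dimensional vector). It uses a simple Euler update rather than the leap-frog integrator of the production runs, and the force routine and the numerical values in the usage example are placeholders in reduced units.

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_step(pos, vel, force_fn, m, gamma, kT, dt):
    """One Euler-type step of m*dv/dt = F - gamma*v + Gamma.
    pos and vel have shape (n_particles, 3); the random force has
    standard deviation sqrt(2*gamma*kT/dt) per Cartesian component."""
    noise = rng.normal(0.0, np.sqrt(2.0 * gamma * kT / dt), size=vel.shape)
    acc = (force_fn(pos) - gamma * vel + noise) / m
    vel = vel + dt * acc
    pos = pos + dt * vel
    return pos, vel

# Placeholder example: two non-interacting particles (zero force) in reduced units.
pos = rng.uniform(0.0, 10.0, size=(2, 3))
vel = np.zeros((2, 3))
for _ in range(1000):
    pos, vel = langevin_step(pos, vel, lambda p: np.zeros_like(p),
                             m=53.0, gamma=0.1, kT=1.0, dt=0.005)
```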
All the length is scaled by $\lambda =3.8 \AA$, i.e., the bond length between two C$_{\alpha}$ atoms. A friction coefficient $\gamma_{a} \sim 0.05$ is used. The thermodynamic variables (e.g., the free energy $F(Q)=E(Q)-T\log W(Q)$ with $E(Q)$ and $W(Q)$ are energy of the system and the density of conformations at $Q$, respectively) are obtained using the weighted histogram analysis method (WHAM) [@Clementi2000]. The free energies for a monomeric chain $F(Q_A)$ (or $F(Q_B)$ and for the chain-chain binding $F(Q_{AB})$ can be calculated. This work was supported by the National Basic Research Program of China (2006CB910302, 2007CB814806), and also the NNSF (10834002). The support of the Center for Theoretical Biological Physics (NSF PHY-0822283) is also gratefully acknowledged. [10]{} Marianayagam NJ, Sunde M, Matthews JM (2004) The power of two: protein dimerization in biology. [*Trends in Biochem Sci*]{} [**29**]{}: 618-625. Bowie JU, Sauer RT (1989) Equilibrium dissociation and unfolding of the Arc Repressor dimer. [*Biochemistry*]{} [**28**]{}: 7139-7143. Gloss LM Matthews CR (1998) The barriers in the bimolecular and unimolecular folding reactions of the dimeric core domain of Escherichia coli Trp Repressor are dominated by enthalpic contributions. [*Biochemistry*]{} [**37**]{}: 16000-16010. Zimmerman SB, Trach SO (1991) Estimation of macromolecule concentrations and excluded volume effects for the cytoplasm of [*Escherichia coli*]{}. [*J Mol Biol*]{} [**222**]{}: 599-620. Ellis RJ, Minton AP (2003) Join the crowd. [*Nature*]{}, [**425**]{}: 27-28. Ellis RJ (2001) Macromolecular crowding: obvious but underappreciated. [*Trends in Biochem Sci*]{} [**26**]{}: 597-604. Minton AP (2000) Implications of macromolecular crowding for protein assembly. [*Curr. Opin. Struct. Biol.*]{} [**10**]{}, 34-39. Zhou HX, Rivas G, Minton AP (2008) Macromolecular crowding and confinement: biochemical, biophysical, and potential physiological consequences. [*Annu Rev Biophys*]{} [**37**]{}: 375-397. Takagi F, Kogo N, Takada S (2003) How protein thermodynamics and folding mechanisms are altered by the chaperonin cage: Molecular simulations. [*Proc Natl Acad Sci USA*]{} [**100**]{}: 11367-11372. Thirumalai D, Klimov D, George HL (2003) Caging helps proteins fold. [*Proc Natl Acad Sci USA*]{} [**100**]{}: 11195-11197. Cheung MS, Klimov D, Thirumalai D (2005) Molecular crowding enhances native state stability and refolding rates of globular proteins. [*Proc Natl Acad Sci USA*]{} [**102**]{}:, 4753-4758. Xu WX, Wang J, Wang W (2005) Folding behavior of Chaperonin-mediated substrate protein. [*Proteins*]{} [**61**]{}: 777-794. Lucent D, Vishal V, Pande VS (2006) Protein folding under confinement: A role for solvent. [*Proc Natl Acad Sci USA*]{} [**104**]{}:, 10430-10434. Dix JA, Verkman AS (2008) Crowding effects on diffusion in solutions and cells. [*Annu Rev Biophys*]{} [**37**]{}: 247-263. Northrup SH, Allison SA, McCammon JA (1984) Brownian dynamics simulation of diffusion-influenced bimolecular reactions. [*J Chem Phys*]{} [**80**]{}: 1517-1524. Levy Y, Wolynes PG, Onuchic JN (2004) Protein topology determines binding mechanism. [*Proc Natl Acad Sci USA*]{} [**101**]{}: 511-516. Levy Y, Cho SS, Onuchic JN, Wolynes PG (2005) A Survey of flexible protein binding mechanisms and their transition states using native topology based energy landscapes. [*J Mol Biol*]{} [**346**]{}: 1121-1145. Guo Z, Thirumalai D (1996) Kinetics and thermodynamics of folding of a de Novo designed four-helix bundle protein. 
[*J Mol Biol*]{} [**263**]{}: 323-343. Nymeyer H, Garcia AE, Onuchic JN (1998) [*Proc Natl Acad Sci USA*]{} [**95**]{}: 5921-5928. Breg JN, van Opheusden JJ, Burgering MM, Boelens R, Kaptein R (1990) Structure of Arc repressor in solution: evidence for a family of beta-sheet DNA-binding proteins. [*Nature*]{} [**346**]{}: 586-588. Ziv G, Haran G, Thirumalai D (2005) Ribosome exit tunnel can entropically stabilize alpha-helices. [*Proc Natl Acad Sci USA*]{} [**102**]{}: 18956-18961. Klimov DK, Newfield D, Thirumalai D (2002) Simulations of beta-hairpin folding confined to spherical pores using distributed computing. [*Proc Natl Acad Sci USA*]{} [**99**]{}: 8019-8024. Mittal J, Best RB (2008) Thermodynamics and kinetics of protein folding under confinement. [*Proc Natl Acad Sci USA*]{} [**105**]{}: 20233-20238. Milla ME, Sauer RT (1994) P22 Arc Repressor: folding kinetics of a single-domain, dimeric protein. [*Biochemistry*]{} [**33**]{}: 1125-1133. Robinson CR, Sauer RT (1996) Equilibrium stability and sub-millisecond refolding of a designed single-chain. [*Biochemistry*]{} [**35**]{}: 13878-13884. Tamura A, Privalov PL (1997) The entropy cost of protein association. [*J Mol Biol*]{} [**273**]{}: 1048-1060. Tang Y, Ghirlanda G, Vaidehi N, Kua J, Mainz DT, Goddard III WA, DeGrado WF, Tirrell DA (2001) Stabilization of coiled-coil peptide domains by introduction of Trifluoroleucine. [*Biochemistry*]{} [**40**]{}: 2790-2796. Liang H, Sandberg WS, Terwilliger TC (1993) Genetic fusion of subunits of a dimeric protein substantially enhances its stability and rate of folding. [*Proc Natl Acad Sci USA*]{} [**90**]{}: 7010-7014. Waldburger CD, Jonsson T, Sauer RT (1996) Barriers to protein folding: formation of buried polar interactions is a slow step in acquisition of structure. [*Proc Natl Acad Sci USA*]{} [**93**]{}: 2629-2634. Woo HJ, Roux B (2005) Calculation of absolute protein-ligand binding free energy from computer simulations. [*Proc Natl Acad Sci USA*]{} [**102**]{}: 6825-6830. Moy VT, Florin EL, Gaub HE (1994) Intermolecular forces and energies between ligands and receptors. [*Science*]{} [**266**]{}: 257-259. Takada S (1999) Gō-ing for the prediction of protein folding mechanisms. [*Proc Natl Acad Sci USA*]{} [**96**]{}: 11698-11700. de Gennes PG (1979) [*Scaling concepts in polymer physics*]{} (Cornell University Press, Ithaca, NY). Cacciuto A, Luijten E (2006) Self-avoiding flexible polymers under spherical confinement. [*Nano Lett*]{} [**6**]{}: 901-905. Sakaue T, Raphaël E (2006) Polymer chains in confined spaces and flow-injection problems: some remarks. [*Macromolecules*]{} [**39**]{}: 2621-2628. Clementi C, Nymeyer H, Onuchic JN (2000) Topological and energetic factors: what determines the structural details of the transition state ensemble and “en-route” intermediates for protein folding? an investigation for small globular proteins. [*J Mol Biol*]{} [**298**]{}: 937-953. Mor A, Ziv G, Levy Y (2008) Simulations of proteins with inhomogeneous degrees of freedom: The effect of thermostats. [*J Comput Chem*]{} [**29**]{}: 1992-1998.
--- abstract: 'The dissociation energy of H$_2$ represents a benchmark quantity to test the accuracy of first-principles calculations. We present a new measurement of the energy interval between the EF $^1\Sigma_g^+(v=0,N=1)$ state and the 54p1$_1$ Rydberg state of H$_2$. When combined with previously determined intervals, this new measurement leads to an improved value of the dissociation energy $D_0^{N=1}$ of ortho-H$_2$ that has, for the first time, reached a level of uncertainty that is three times smaller than the contribution of about 1 MHz resulting from the finite size of the proton. The new result of 35999.582834(11) cm$^{-1}$ is in remarkable agreement with the theoretical result of 35999.582820(26) cm$^{-1}$ obtained in calculations including high-order relativistic and quantum electrodynamics corrections, as reported in the companion article (M. Puchalski, J. Komasa, P. Czachorowski and K. Pachucki, submitted). This agreement resolves a recent discrepancy between experiment and theory that had hindered a possible use of the dissociation energy of H$_2$ in the context of the current controversy on the charge radius of the proton.' author: - 'Nicolas H[ö]{}lsch$^1$, Maximilian Beyer$^1\footnote{Present address: Department of Physics, Yale University, New Haven, CT 06511, USA}$, Edcel J. Salumbides$^2$, Kjeld S. E. Eikema$^2$, Wim Ubachs$^2$, Christian Jungen$^3$ and Frédéric Merkt$^{1,2}\footnote{Corresponding author; merkt@xuv.phys.chem.ethz.ch}$' title: 'Benchmarking theory with an improved measurement of the ionization and dissociation energies of H$_2$' --- å[Astron. Astrophys.  ]{} The dissociation energy of molecular hydrogen, $D_0$(H$_2$), has been used as a benchmark quantity for first-principles quantum-mechanical calculations of molecular structure for more than a century. H$_2$ consists of two protons and two electrons and is the simplest molecule displaying all aspects of chemical binding. Whereas early calculations were concerned with explaining the nature of the chemical bond [@bohr13c; @heitler27a; @james33a], the emphasis later shifted towards higher accuracy of the energy-level structure, requiring the consideration of nonadiabatic, relativistic and radiative contributions [@kolos60a; @kolos63a; @kolos68b; @wolniewicz95c; @bubin03a; @piszczatowski09a; @matyus12a; @puchalski17a]. These theoretical developments were accompanied and regularly challenged by experimental determinations of $D_0$(H$_2$) [@langmuir12a; @witmer26a; @beutler34a; @herzberg61c; @herzberg69a; @stwalley70a; @eyler93a; @liu09b; @cheng18a]. Periods of agreement between theory and experiment have alternated with periods of disagreement and debate. The reciprocal stimulation of theoretical and experimental work on the determination of $D_0$(H$_2$) has been a source of innovation. With its ups and downs and the related controversies, it has long reached epistemological significance [@primas84a; @stoicheff01a]. In 2009, the experimental (36118.0696(4) cm$^{-1}$) and theoretical (36118.0695(10) cm$^{-1}$) values of $D_0^{N=0}$(H$_2$) reached unprecedented agreement at the level of the combined uncertainties of 30 MHz [@liu09b; @piszczatowski09a], apparently validating the treatment of the lowest-order ($\alpha^3$) QED correction and the one-loop term of the $\alpha^4$ correction, including several QED contributions that had not been considered for molecules until then. The insight that $D_0$(H$_2$) is a sensitive probe of the proton charge radius [@komasa11a; @puchalski16a] stimulated further work. 
On the theoretical side, Pachucki, Komasa and coworkers have improved their calculations based on nonadiabatic perturbation theory [@pachucki14a; @pachucki15a; @pachucki16a; @puchalski16a; @puchalski17a], significantly revised the 2009 result, and came to the unexpected conclusion that the excellent agreement of theoretical predictions with experimental $D_0$(H$_2$) values reached in 2009 was accidental, because of an underestimation of the contribution of nonadiabatic effects to the relativistic correction (see also Refs. [@wang18a; @puchalski18a]). In the companion article, Puchalski [*et al.*]{} report on the theoretical progress, with a determination of the leading relativistic correction using the full nonadiabatic wave function [@puchalski19a]. Recent experimental work has focused on the determination of the ionization energy $E_{\rm I}^{\rm ortho}({\rm H}_2)$ of ortho-H$_2$, from which the dissociation energy of ortho-H$_2$, $D_0^{N=1}({\rm H}_2)$, is obtained using (see Fig. \[fig1\]b) $$\label{cycle1} D_0^{N=1}({\rm H}_2) = E_{\rm I}^{\rm ortho}({\rm H}_2) + D_0^{N^+=1}({\rm H}_2^+) - E_{\rm I}({\rm H})$$ and the very accurately known values of the ionization energy of the H atom ($E_{\rm I}({\rm H})$ [@mohr16a]) and of the dissociation energy of ortho-H$_2^+$, $D_0^{N^+=1}({\rm H}_2^+)$ [@korobov17a; @korobova]. $E_{\rm I}^{\rm ortho}({\rm H}_2)$ is itself determined as the sum of energy intervals between the X $^1\Sigma_g^+(v=0,N=1)$ ground state and a selected low-$n$ Rydberg states, between this low-$n$ Rydberg state and a selected high-$n$ p Rydberg state, and the binding energy of the selected high-$n$ Rydberg state (see Ref. [@sprecher11a] for details). In the 2009 determination, the selected low-$n$ and high-$n$ Rydberg states were the EF $^1\Sigma_g^+(v=0,N=1)$ and the 54p1$_1$ Rydberg state. To check the 2009 experimental result, $D_0$(H$_2$) was first determined by measuring the energy intervals between the X (0,1) and GK $^1\Sigma_g^+(v=1,N=1)$ states and between the GK(1,1) state and the 56p1$_1$ Rydberg state  [@cheng18a], after reevaluation of the binding energy of the $n$p1$_1$ Rydberg states [@sprecher14b]. In this Letter, we describe a new determination of $D_0$ through the EF(0,1) state with an absolute accuracy improved by a factor of 30 over the 2009 result. The new measurement is also 2.3 times more accurate than, and fully independent of, the measurement via the GK state mentioned above. The accuracy of the 2009 result was limited by the uncertainties arising from (1) the frequency chirps and spectral bandwidths of the pulsed lasers used to record spectra of the EF(0,1) - X(0,1) and 54p1$_1$ - EF(0,1) transitions, (2) ac-Stark shifts affecting the Doppler-free two-photon spectra of the EF(0,1) - X(0,1) transition, (3) dc-Stark shifts of the 54p1$_1$ - EF(0,1) transition resulting from ions generated in the measurement volume when preparing the EF(0,1) state by two-photon one-color excitation from the X(0,1) ground state, and (4) by the frequency calibration procedure, which relied on comparison with I$_2$ lines. These limitations have all been overcome: The effects of frequency chirps and ac-Stark shifts were eliminated by using a two-pulse Ramsey-comb method to determine the frequency of the EF(0,1) - X(0,1) transition [@altmann18a] and by using single-mode continuous-wave (cw) ultraviolet (UV) laser radiation to measure the 54p1$_1$ - EF(0,1) transition. 
When recording spectra of the 54p1$_1$ - EF(0,1) transition, the generation of ions was entirely suppressed by preparing the EF(0,1) state through single-photon excitation from the X(0,1) state to the B$^\prime(0,0)$ state, followed by spontaneous emission (SE): $$\label{eq:excitationGK} \text{X}~^1\Sigma_g^+~(0,1)\xrightarrow{\rm VUV} \text{B}^\prime~^1\Sigma_u^+~(0,0)\xrightarrow{\rm SE} \text{EF}~^1\Sigma_g^+~(0,1).$$ Finally, the relevant frequencies were all calibrated using frequency combs. The measurement of the X(0,1) - EF(0,1) interval by Ramsey-comb spectroscopy has been reported separately [@altmann18a], and we describe here the measurement of the EF(0,1) - 54p1$_1$ interval, from which we derive $D_0$(H$_2$) with a 30-fold improved accuracy over the 2009 result [@liu09b]. ![(a) Schematic diagram of the experimental setup. (b) Potential energy functions of the relevant electronic states of H$_2$ and H$_2^+$ and excitation scheme used to determine the ionization and dissociation energies of H$_2$ (see text for details).[]{data-label="fig1"}](Fig1.pdf){width="1.0\columnwidth"} The interval between the EF(0,1) state and the 54p$1_1(v^+=0,S=0,F=0-2)$ Rydberg state of ortho-H$_2$ was measured using the same molecular-beam apparatus and procedures as described in Ref. [@beyer18a], see Fig. \[fig1\]a. We refer to this work for details on, e.g., the compensation of the stray electric fields to better than 1 mV/cm and the shielding of external magnetic fields. The measurements were performed in a skimmed, pulsed supersonic beam of pure H$_2$ expanding into vacuum from a cryogenically cooled reservoir. The pulsed vacuum-ultraviolet (VUV) radiation around 90.6 nm used to excite the ground-state molecules to the B$^\prime~^1\Sigma^+_u (v=0,N=0)$ state was produced in a four-wave mixing scheme as outlined in Ref. [@beyer18a]. The lifetime of the B$^\prime~^1\Sigma^+_u$ state is of the order of 1 ns because of rapid spontaneous emission to the lower-lying EF $^1\Sigma_g^+$ and X $^1\Sigma_g^+$ states [@astashkevich15a]. The angular-momentum selection rule $\Delta N = \pm 1$ and Franck-Condon factors ensure that almost all molecules decaying to the EF state populate the $(v=0,N=1)$ rovibrational level. Further excitation from the EF(0,1) state to high-$n$ Rydberg states using the cw UV laser was detected by pulsed-field ionization (PFI), as described in Ref. [@beyer18a]. The delay between the pulsed VUV radiation and PFI was set to 300 ns, i.e., longer than the lifetime of the EF state ($\tau({\rm EF})\approx200~\text{ns}$), to ensure that a maximum number of molecules could be excited to Rydberg states. The cw UV radiation used to perform the excitation from the EF(0,1) level to the high-$n$ Rydberg states was generated by frequency-doubling the output of a single-mode (bandwidth $<50$ kHz) Ti:Sa ring laser in an external enhancement cavity containing an LBO crystal. The fundamental frequency of the Ti:Sa laser was calibrated with a frequency comb (accuracy better than 3 kHz) referenced to a 10-MHz GPS-disciplined Rb oscillator. The UV laser beam crossed both the VUV laser beam and the molecular beam at $\approx90^\circ$, was retro-reflected by a mirror, and crossed the H$_2$ sample again. 
Overlapping the forward and reflected UV-laser beams to better than 0.1 mrad and introducing a small deviation from 90$^\circ$ of the angle between UV and H$_2$ beam led to two well-separated Doppler components for each transition, from which the Doppler-free transition frequency could be determined as the average of the two peak centers (see Fig. \[fig2\] and corresponding discussion). A telescope was used to place the focus of the UV beam onto the back-reflecting mirror and so ensure two identical Gaussian beams in the excitation region. The reflection angle was checked by monitoring the reflected beam through a 1-mm-diameter diaphragm located 8 m away from the reflection mirror. Complete realignment of the laser and molecular beams between measurements leads to a statistical uncertainty associated with the residual Doppler shift, instead of a systematic one. However, in the error budget, a systematic uncertainty of 200 kHz was included as upper limit for the effects of systematic misalignments. The cancellation of the first-order Doppler shift was verified independently using different angles (and therefore different Doppler shifts, see Fig. \[fig3\]) and by measuring the transition frequencies using fast and slow H$_2$ beams produced with the valve kept at room temperature and cooled to 80 K, respectively. Fig. \[fig2\] displays typical spectra of the 54p$1_1 \leftarrow {\rm EF}(0,1)$ (a) and 77p$1_1 \leftarrow {\rm EF}(0,1)$ (b) transitions of H$_2$. Each transition in Fig. \[fig2\] splits into two Doppler components, corresponding to photoexcitation with the forward-propagating and reflected UV laser beams, as explained above. The 54p$1_1 \leftarrow {\rm EF}(0,1)$ transition was selected because the binding energy and hyperfine structure (hfs) of the 54p$1_1$ Rydberg states are precisely known from previous studies combining millimeter-wave spectroscopy and multichannel quantum-defect theory (MQDT) [@osterwalder04a; @sprecher14b]. The 77p$1_1 \leftarrow {\rm EF}(0,1)$ transition was used as reference and was measured after each realignment to detect possible drifts of the stray fields and of the UV-laser propagation axes with respect to the molecular-beam axis. Because the polarizability of Rydberg states scales as $n^7$, the 77p$1_1$ state is more than 10 times more sensitive to stray fields than the 54p$1_1$ state, making stray-field drifts of 1 mV/cm easily detectable. Such drifts would shift the frequency of the 54p$1_1 \leftarrow {\rm EF}(0,1)$ transition by less than 10 kHz. The hyperfine splittings of the $n$p$1_1$ series become larger with increasing $n$ value [@sprecher14b]. Consequently, the hfs of the 77p$1_1$ state could be partially resolved, which enabled us to verify experimentally that the intensities of the transitions to the three accessible ($F=0-2$) components are proportional to $2F+1$. Systematic uncertainties resulting from fits of the lineshapes with our lineshape model could thus be reduced to 100 kHz (see below and Refs. [@beyer18a; @hoelsch18a]). ![Upper panels: Spectra (black dots with error bars) of the 54p$1_1 \leftarrow {\rm EF}(0,1)$ (a) and 77p$1_1 \leftarrow {\rm EF}(0,1)$ (b) transitions and corresponding fits based on a Voigt line-shape model (blue traces), taking into account the hfs of the Rydberg states (orange stick spectra). Lower panels: Corresponding relative weights (blue) and weighted residuals.[]{data-label="fig2"}](Fig2.pdf){width="0.95\columnwidth"} To determine the line positions, we fitted the lineshape model described in Ref. 
[@beyer18a], which consists, for each Doppler component, of a superposition of three line profiles corresponding to the three hyperfine components of the $n$p$1_1\,(F=0-2)$ Rydberg states, with intensities proportional to $2F+1$. For the 54p$1_1$ and 77p$1_1$ Rydberg states, we used the hyperfine splittings determined by millimeter-wave spectroscopy [@osterwalder04a] and MQDT calculations, respectively. Voigt profiles with a full width at half maximum of 9 MHz and a Lorentzian contribution of about 6 MHz were found to best reproduce the measured line profiles. The lineshape depends on the velocity distribution in the volume defined by the intersection of the VUV, UV and gas beams [@hoelsch18a; @beyer18a], with contributions from transit-time broadening and Doppler broadening originating from the photon recoil of the B$^\prime\rightarrow$ EF spontaneous emission.

|                          | Correction | Uncertainty |
|--------------------------|------------|-------------|
| Transition               |            |             |
| Measured frequency       |            |             |
| DC Stark shift           |            | 10 kHz      |
| AC Stark shift           |            | 5 kHz       |
| Zeeman shift             |            | 10 kHz      |
| Pressure shift           |            | 1 kHz       |
| 1st-order Doppler shift  |            | 200 kHz     |
| 2nd-order Doppler shift  | +8 kHz     | 1 kHz       |
| Line-shape model         |            | 100 kHz     |
| Hfs of EF(0,1)           |            | 100 kHz[^1] |
| Photon-recoil shift      | $-634$ kHz |             |
| Systematic uncertainty   |            | 250 kHz     |
| Final frequency          |            |             |

: Error budget for the determination of the $54\text{p}1_1 \leftarrow \text{EF}(0,1)$ transition frequency[]{data-label="table1"}

Repeated measurements of these transitions revealed a high sensitivity of their frequencies to the alignment of the forward-propagating and reflected UV laser beams. Misalignments were detectable through an intensity imbalance between the two Doppler components. This effect turned out to be more pronounced than in our previous study of $n$p/f$\leftarrow {\rm GK}(1,1)$ transitions, an observation we attribute to the frequency of the $n$p$\leftarrow {\rm EF}(0,1)$ transitions being about twice as high, and to the correspondingly increased Doppler effect. In the final analysis of the data and after careful calibration of the effects of intentional, well-defined misalignments, we rejected all measurements associated with intensity ratios of the two Doppler components lying outside the range \[0.8,1.25\], and included a systematic uncertainty of 200 kHz (see above and Table \[table1\]). Table \[table1\] also lists the other sources of systematic uncertainties considered in our analysis, which were estimated as explained in detail in Ref. [@beyer18a], and include uncertainties arising from DC and AC Stark shifts, Zeeman shifts, pressure shifts, Doppler shifts, and two contributions of 100 kHz each to account for uncertainties associated with the line-shape model and the unresolved (and unknown) hfs of the EF(0,1) state. The transition frequencies were corrected by adding 8 kHz for the second-order Doppler shift and subtracting $634$ kHz for the photon-recoil shift, which is more than twice as large as the combined statistical and systematic uncertainty of 300 kHz. The measurements used to determine the frequency of the 54p$1_1\leftarrow {\rm EF}(0,1)$ transition were carried out at a valve temperature of 80 K and are depicted in Fig. \[fig3\], which gives the central positions of the upper and lower Doppler components in the top and bottom panels, respectively, and the average (Doppler-free and hyperfine-free) frequency with their statistical uncertainties (1$\sigma$) in the middle panel.
The different colors and symbols indicate measurements carried out on different days and the sequence of full and open symbols indicates realignment of the laser beams. The dashed blue lines correspond to the standard deviation of the whole data set and the area shaded in blue to the standard deviation of the mean. Adding the corrections listed in Table \[table1\] and combining all uncertainties in quadrature yields the value of [755776720.21(30)]{} MHz ([25209.997785(10)]{} cm$^{-1}$) for the 54p$1_1- {\rm EF}(0,1)$ interval.

|     | Energy level interval | Value (cm$^{-1}$) | Uncertainty | Reference |
|-----|-----------------------|-------------------|-------------|-----------|
| (1) | EF$(v=0,N=1)$ – X$(v=0,N=1)$ | 99109.7312049(24) | (73 kHz) | [@altmann18a][^2] |
| (2) | 54p$1_1(v^+=0,S=0,{\text{center}})$ – EF$(v=0,N=1)$ | 25209.997785(10) | (300 kHz) | |
| (3) | X$^+(v^+=0,N^+=1,{\text{center}})$ – 54p$1_1(v^+=0,S=0,{\text{center}})$ | 37.509013(5) | (150 kHz) | [@sprecher14b] |
| (4) | $E^\text{ortho}_\text{I}$(H$_2$) = (1)+(2)+(3) | 124357.238003(11) | (340 kHz) | |
| (5) | $D^{N^+=0}_0$(H$_2^+$) | 21379.3502496(6) | (18 kHz) | [@korobov17a] |
| (6) | X$^+(v^+=0,N^+=1,{\text{center}})$ – X$^+(v^+=0,N^+=0)$ | 58.2336750974(8) | (25 Hz) | [@korobova] |
| (7) | $D^{N^+=1}_0$(H$_2^+$) = (5)-(6) | 21321.1165745(6) | (18 kHz) | |
| (8) | $E_\text{I}$(H) | 109678.77174307(10) | (3 kHz) | [@mohr16a] |
| (9) | $D^{N=1}_0$(H$_2$) = (4)+(7)-(8) | 35999.582834(11) | (340 kHz) | |

: Determination of the ionization and dissociation energies of ortho-H$_2$.[]{data-label="table2"}

![Frequency of the 54p$1_1 \leftarrow {\rm EF}(0,1)$ transition of H$_2$ measured on five different days (indicated by different colors and symbols) and after regular realignment of the laser beam and its reflection (indicated by changes from full to open symbols). The valve temperature was 80 K. The top and bottom panels present the frequencies of the two Doppler components and the central panel displays their average value. The symbols and error bars represent the line positions obtained by fitting the center positions and their corresponding statistical uncertainties (one standard deviation), respectively. The dashed blue lines and the blue area give the standard deviations of the full data set and of the mean, respectively.[]{data-label="fig3"}](Fig3.pdf){width="0.99\columnwidth"}

Table \[table2\] provides the details of the determination of the ionization and dissociation energies of H$_2$ from the three intervals (entries (1), (2) and (3) in the table) linking the X(0,1) and X$^+$(0,1) ground states of ortho-H$_2$ and H$_2^+$ and corresponding to a value of $E_{\rm I}^{\rm ortho}({\rm H}_2)$ of [124357.238003(11)]{} cm$^{-1}$. A value of [35999.582834(11)]{} cm$^{-1}$ can be derived for $D_0^{N=1}({\rm H}_2)$ using Eq. (\[cycle1\]). The error budget in Table \[table1\] also applies to the 77p$1_1 - {\rm EF}(0,1)$ transition, with the exception of the uncertainty resulting from the dc Stark shift (100 kHz). A determination of $E_{\rm I}^{\rm ortho}({\rm H}_2)$ using the binding energy of the 77p$1_1$ state is in agreement with the results given in Table \[table2\], proving the internal consistency of the MQDT analysis presented in Ref. [@sprecher14b]. Because of the very accurate value of the X-EF interval, our new result is more precise than the result of the measurement through the GK(1,1) state (35999.582894(25) cm$^{-1}$ [@cheng18a]), from which it differs by about $2\sigma$. It is in agreement with the theoretical result (35999.582820(26) cm$^{-1}$) obtained by Puchalski [*et al.*]{} (see companion article [@puchalski19a]).
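Before turning to the implications of this agreement, it is worth noting that the additive combinations in Table \[table2\] can be reproduced directly from the quoted values. The short script below is a sketch added here for convenience (it is not part of the original analysis, and the variable names are ours); it recombines the cm$^{-1}$ entries exactly as prescribed by Eq. (\[cycle1\]) and by rows (4), (7) and (9) of the table:

```python
# Sketch: recombine the quoted cm^-1 values of Table [table2].
interval_EF_X      = 99109.7312049    # (1) EF(0,1) - X(0,1)
interval_54p_EF    = 25209.997785     # (2) 54p1_1 - EF(0,1), this Letter
interval_Xplus_54p = 37.509013        # (3) X+(0,1) - 54p1_1
D0_H2plus_N0       = 21379.3502496    # (5) D_0^{N+=0}(H2+)
rot_Xplus          = 58.2336750974    # (6) X+(0,1) - X+(0,0)
EI_H               = 109678.77174307  # (8) E_I(H)

EI_ortho_H2  = interval_EF_X + interval_54p_EF + interval_Xplus_54p  # row (4)
D0_H2plus_N1 = D0_H2plus_N0 - rot_Xplus                              # row (7)
D0_ortho_H2  = EI_ortho_H2 + D0_H2plus_N1 - EI_H                     # row (9), Eq. (cycle1)

print(f"E_I^ortho(H2)   = {EI_ortho_H2:.6f} cm^-1")   # 124357.238003
print(f"D_0^(N+=1)(H2+) = {D0_H2plus_N1:.7f} cm^-1")  # 21321.1165745
print(f"D_0^(N=1)(H2)   = {D0_ortho_H2:.6f} cm^-1")   # 35999.582834
```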
This agreement between experiment and theory at the accuracy level of better than 1 MHz resolves the discrepancy noted in recent work [@puchalski17a] and may be regarded as unprecedented in molecular physics. The error margins within which theoretical and experimental values of $D_0^{N=1}$(H$_2$) agree are 30 times more stringent than in 2009. This agreement opens up the prospect of using $D_0$(H$_2$) to make a contribution to the solution of the proton-radius puzzle [@pohl10a] as well as in the search for, or exclusion of, fifth forces (see discussion in Ref. [@salumbides13a]). The experimental uncertainty of 340 kHz of the present result represents 30% of the expected total contribution of 1 MHz to $D_0$ from the finite size of the proton [@puchalski16a]. The main sources of uncertainty of the present result come from the (unresolved) hfs of the EF(0,1) level, which affects both the X(0,1)-EF(0,1) and the EF(0,1) - 54p$1_1$ intervals, and from uncertainties associated with the residual first-order Doppler shift and the line-shape model (see Table \[table1\]). These sources of uncertainty would be significantly reduced in a measurement in para-H$_2$, which should be the object of future efforts. In this context, theoretical work should consider the ionization energy of H$_2$ (see also [@puchalski19a]), which is the quantity we directly measure and which we obtain experimentally with a relative accuracy ($\Delta\nu/\nu$) of $9.1\cdot 10^{-11}$. FM, WU and KE acknowledge the European Research Council for ERC-Advanced grants under the European Union’s Horizon 2020 research and innovation programme (grant agreements No 670168, No 743121 and No 695677). KE and WU acknowledge FOM/NWO for a program grant (16MYSTP). FM acknowledges the Swiss National Science Foundation (grant 200020-172620).

[^1]: Estimated by multichannel quantum-defect theory in calculations of the type described in Ref. [@osterwalder04a]

[^2]: Note that the first two columns of Table II of reference [@altmann18a] unfortunately contain errors. The listed intervals in the first column add up to the ionization energy of ortho-H$_2$ instead of the dissociation energy of para-H$_2$, and the values given in the second column of the same table for the binding energy of the 54p1$_1$ Rydberg state and the dissociation energy of para-H$_2$ must be corrected to 37.509013(10) cm$^{-1}$ and 36 118.069 62(37) cm$^{-1}$, respectively.
--- author: - Michael Kalisch bibliography: - 'References.bib' title: 'Numerical construction and critical behavior of Kaluza-Klein black holes' ---
--- abstract: 'Let $X$ be a $2n$-dimensional torus manifold with a locally standard $T \cong \left( S^1 \right)^n$ action whose orbit space is a homology polytope. Smooth complete complex toric varieties and quasitoric manifolds are examples of torus manifolds. Consider a principal $T$-bundle $p : E \rightarrow B$ and let $\pi : E(X) \rightarrow B$ be the associated torus manifold bundle. We give a presentation of the singular cohomology ring of $E(X)$ as a $H^*(B)$-algebra and the topological $K$-ring of $E(X)$ as a $K^*(B)$-algebra with generators and relations. These generalize the results in [@masudapan] and [@param] when the base $B=pt$. These also extend the results in [@paramuma], obtained in the case of a smooth projective toric variety, to any smooth complete toric variety.' address: - 'Department of Mathematics, Indian Institute of Technology-Madras, Chennai, India' - 'Department of Mathematics, Indian Institute of Technology-Madras, Chennai, India' - 'Department of Mathematics, Indian Institute of Technology-Madras, Chennai, India' author: - Jyoti Dasgupta - Bivas Khan - 'V. Uma' title: Cohomology of Torus Manifold Bundles --- Introduction ============ A torus manifold is a $2n$-dimensional manifold acted upon effectively by an $n$-dimensional compact torus with non-empty fixed point set. Smooth complete complex toric varieties and quasitoric manifolds are examples of torus manifolds. The notion of torus manifolds was introduced by A. Hattori and M. Masuda in [@hm]. In [@masudapan] M. Masuda and T. Panov studied relationships between the cohomological properties of torus manifolds and the combinatorics of their orbit spaces. The topological $K$-ring of the torus manifolds with locally standard action and orbit space a homology polytope was described by P. Sankaran in [@param]. Let $p : E \rightarrow B$ be a principal bundle with fibre and structure group the complex algebraic torus $\mathbb{T} \cong ({\mathbb C}^*)^n$ over a topological space $B$. For a smooth projective $\mathbb{T}$-toric variety $X$, consider the toric bundle $\pi : E(X) \rightarrow B$, where $E(X)=E \times_{\mathbb{T}} X$, $\pi([e, x])=p(e)$. In [@paramuma], the authors describe the singular cohomology ring of $E(X)$ as a $H^*(B)$-algebra. Furthermore, when $B$ is compact Hausdorff, they describe the topological $K$-ring of $E(X)$ as a $K^*(B)$-algebra. In this paper we consider $p: E \rightarrow B$ to be a principal bundle with fibre and structure group the compact torus $T \cong \left( S^1 \right)^n$. We assume that $B$ has the homotopy type of a finite CW complex so that $H^*(B)$ and $K^*(B)$ are finitely generated abelian groups. Without loss of generality, we further assume that $B$ is compact and Hausdorff. Let $X$ be a $2n$-dimensional torus manifold with a locally standard action of $T$ such that the orbit space $Q:=X/T$ is a homology polytope. We call the associated bundle $\pi : E(X) \rightarrow B$ a torus manifold bundle, where $E(X)=E \times_T X$. In Theorem \[2\] we give a presentation of the singular cohomology ring of $E(X)$ as a $H^*(B)$-algebra. A presentation of the topological $K$-ring $K^*(E(X))$ as a $K^*(B)$-algebra is obtained in Theorem \[6\]. As an application, we describe the cohomology ring and $K$-ring of toric bundles for a smooth complete toric variety in Corollary \[15\], extending the results in [@paramuma]. The method of proof for Theorem \[2\] exploits the known presentation of the cohomology ring [@masudapan Corollary $7.8$] when the base $B$ is a point.
Applying the Leray-Hirsch theorem in cohomology we first prove that $H^*(E(X))$ is a free module over $H^*(B)$ of rank $\chi(X)$. Then we construct a surjective $H^*(B)$-algebra homomorphism from $R(B, \left( Q, \Lambda \right))$ (see Definition \[3\]) to $H^*(E(X))$. Here $\Lambda$ denotes the characteristic map of the torus manifold (see Section 2). To verify that this algebra homomorphism is injective, we recall from [@masudapan] that the equivariant cohomology ring which is isomorphic to the face ring of $Q$, is a free $H^*(BT)$-module of rank $\chi(X)$, where $BT$ denotes the classifying space of principal $T$-bundles. We then canonically extend the scalars of the face ring to $H^*(B)$ and use that it is a finitely generated abelian group to conclude injectivity. Similarly, the method of proof for Theorem \[6\] exploits the known presentation of the topological $K$-ring [@param Theorem $5.3$] when the base $B$ is a point. Applying the Leray-Hirsch theorem in $K$-theory we first prove that $K^*(E(X))$ is a free module over $K^*(B)$ of rank $\chi(X)$. Then we construct a surjective $K^*(B)$-algebra homomorphism from $\mathcal{R}(B, \left( Q, \Lambda \right))$ (see Definition \[5\]) to $K^*(E(X))$. Let $M:=\text{Hom}(T,{S}^1)$ denote the character lattice of $T$ and $RT:=\mathbb{Z}[\chi^u: u\in M]$ the ring of finite dimensional complex representations of $T$. In Proposition \[8\] we show that the [*$K$-theoretic face ring*]{} of $Q$ denoted by $\mathcal{K}(Q)$ (see Definition \[14\]) is a free $RT$-module of rank $\chi(X)$, using methods similar to [@vezzosi2003higher] and [@baggio2007equivariant] in the setting of smooth toric varieties. We then canonically extend the scalars of $\mathcal{K}(Q)$ to $K^*(B)$ and use that it is a finitely generated abelian group to conclude injectivity. In the case of a smooth complete toric variety, the $K$-theoretic face ring is in fact isomorphic to the algebraic $\mathbb{T}$-equivariant $K$-ring [@vezzosi2003higher Theorem 6.4]. The authors believe that the topological equivariant $K$-ring of any $T$-torus manifold is isomorphic to the $K$-theoretic face ring but could not find it in literature. We prove this statement for a quasitoric manifold in a parallel work [@dku]. In Section 6 we consider a torus manifold $X$ with a locally standard action of $T$ such that $X/T$ is not necessarily a homology polytope but only a face-acyclic nice manifold with corners (see Section 2 for the definition). The equivariant cohomology ring as well as the ordinary cohomology ring of $X$ have been described by Masuda and Panov in [@masudapan Theorem 7.7, Corollary 7.8]. Let $E(X)\longrightarrow B$ be the bundle with fiber $X$ associated to the principal $T$-bundle over a topological space $B$ which is of the homotopy type of a finite CW complex. We generalize [@masudapan Corollary 7.8] to give a presentation $H^*(E(X))$ as a $H^*(B)$-algebra in Theorem \[cohombuntorus\]. Similar to Theorem \[2\] we prove this by using the Leray-Hirsch theorem and the known presentation of $H^*_{T}(X)$ as a $H^*(BT)$-algebra [@masudapan Theorem 7.7]. We finally conjecture a similar presentation for $K^*(E(X))$ as a $K^*(B)$-algebra. We note that difficulties arise in extending the result to this setting especially because the cohomology ring $H^*(X)$ is not generated in degree $2$. [**Acknowledgements:**]{} The authors are grateful to Prof. P. Sankaran for drawing our attention to this problem and for his valuable comments on the initial versions of this manuscript. 
The first and the second author thank the Council of Scientific and Industrial Research (CSIR) for their financial support. The authors wish to thank the unknown referee for a careful reading of the manuscript and for very valuable comments and suggestions which led to improving the text. The final section has been added taking into account the referee’s suggestions. The extension of Theorem 3.3 to Theorem 6.1 was also suggested by Prof. M. Masuda in a prior email correspondence. We are grateful to him for this. Notation and Preliminaries ========================== We recall some notation and preliminaries from [@masudapan] and [@param]. Torus manifolds --------------- Let $T \cong \left( S^1 \right)^n$ denote the compact $n$-dimensional torus. A $2n$-dimensional closed connected orientable smooth manifold $X$ with an effective smooth action of $T$ such that the fixed point set $X^T$ is non-empty, is called a *torus manifold*. Since $X$ is compact it follows that $X^T$ is finite (see [@gmukherjee Section 3.4], [@bp Section 7.4]). A codimension-two connected submanifold is called a *characteristic submanifold* of $X$ if it is pointwise fixed by a circle subgroup of $T$. Since $X$ is compact, there are finitely many characteristic submanifolds, which we denote by $V_1, \ldots, V_d$. It can be shown that each $V_i$ is orientable. We say that $X$ is omnioriented, if an orientation is fixed for $X$ and for every characteristic submanifold $V_i$. We fix an omniorientation of $X$. The $T$-action on the torus manifold $X$ is said to be locally standard if it has a covering by $T$-invariant open sets $U$ such that $U$ is weakly equivariantly diffeomorphic to an open subset $U'\subset {\mathbb C}^n$ invariant under the standard $T$-action on ${\mathbb C}^n$. The latter means that there is an automorphism $\theta :T \rightarrow T$ and a diffeomorphism ${g}: U \rightarrow U'$ such that ${g}(ty)=\theta(t) {g}(y)$ for all $t \in T$, $y \in U$. Let $Q:=X/T$ be the orbit space and let $\Upsilon : X \rightarrow Q$ be the projection map. If $X$ is locally standard, then $Q$ becomes a nice manifold with corners (see [@masudapan Section 5.1 p. 724] [@bp Definition 7.1.3]). We denote by $Q_i$ the image of $V_i$ under $\Upsilon$ for $i=1, \ldots, d$; these are the *facets* or the codimension one faces of $Q$. A codimension-$k$ *preface* is defined to be a non-empty intersection of $k$ facets for $k=1,\ldots, n$. The connected components of prefaces are called *faces*. We regard $Q$ itself as a face of codimension zero. We say that $Q$ is [*face-acyclic*]{} if all its faces are acyclic i.e. $\tilde{H}_{i}(F)=0,~\mbox{for every}~ i$, for each face $F$ of $Q$. We say that $Q$ is a *homology polytope*, if $Q$ is face-acyclic and all its prefaces are faces. This is equivalent to saying that $Q$ is acyclic and all its prefaces are acyclic (in particular, connected). In this case the intersection of $r$ facets $Q_{i_1}, \ldots, Q_{i_r}$ is a codimension $r$ face $F$ of $Q$. Equivalently non-empty intersections of characteristic submanifolds are connected submanifolds of $X$. Unless otherwise specified, we shall assume henceforth that $X$ is a locally standard torus manifold with $Q$ a homology polytope. Note that, $H^{\ast}(X)$ is generated in degree two if and only if $X$ is locally standard and $Q$ is homology polytope (see [@masudapan Theorem $8.3$]). 
For every characteristic submanifold $V_i$, there is a primitive element $v_i \in \text{Hom }(S^1, T) \cong {\mathbb Z}^n$ determined up to sign, whose image is the circle subgroup fixing $V_i$ pointwise. The sign of $v_i$ is determined by the omniorientation. Define the characteristic map $\Lambda : \{ Q_1, \ldots, Q_d\} \rightarrow \text{Hom}(S^1, T)$, such that $\Lambda(Q_i)=v_i$. The local standardness of $X$ implies that the characteristic map $\Lambda$ satisfies the following smooth condition: if $Q_{i_1}\cap \cdots \cap Q_{i_k}$ is non-empty, then $\Lambda(Q_{i_1}), \ldots , \Lambda (Q_{i_k})$ is a part of a basis for the integral lattice $\text{Hom}(S^1, T)\cong {\mathbb Z}^n$. Moreover, under our assumption of local standardness and $Q$ being a homology polytope, the manifold $X$ is determined up to equivariant diffeomorphisms by the pair $(Q, \Lambda)$ (see [@masudapan Lemma $4.5$]). \[11\] - Let $\mathbb{T} \cong \left({\mathbb C}^* \right)^n$ be the algebraic torus, $M=\text{Hom }(\mathbb{T}, {\mathbb C}^*)\cong \text{Hom }(T, S^1)$ be the character lattice, and let $N=\text{Hom }(M, {\mathbb Z})$ be the dual lattice. Consider the smooth complete $\mathbb{T}$-toric variety $X=X(\Delta)$ corresponding to a fan $\Delta$ in $N_{{\mathbb R}}:=N \otimes_{{\mathbb Z}} {\mathbb R}\cong {\mathbb R}^n$ under the action of the torus $\mathbb{T}$. The orbit space of $X$ under the action of the compact torus $T \ (\subset \mathbb{T})$ is the manifold with corners $X_{\geq}$, which is formed by gluing $\left( U_{\sigma} \right)_{\geq}=\text{Hom}_{\text{sg}}(\sigma^{\vee} \cap M, {\mathbb R}_{\geq })$ $($see [@ful Section $4.1$]$)$. For each $\rho \in \Delta(1)$, let $v_{\rho} \in \text{Hom }(S^1, T)=N$ be the primitive ray generator of $\rho$. The characteristic submanifolds are given by the divisors ${\mathcal D}_{\rho}$ for $\rho \in \Delta(1)$, these are fixed by the circle subgroups $\text{Image}(v_{\rho})$. In this case the characteristic map $\Lambda$ is given by sending $({\mathcal D}_{\rho})_{\geq 0}$ to $v_{\rho}$. Since $H^{\ast}(X)$ is generated in degree two by [@dan Theorem $10.8$], $X_{\geq}$ is a homology polytope. - Another class of examples are quasitoric manifolds introduced by Davis and\ Januszkiewicz in [@davisjanu]. By definition a quasitoric manifold is locally standard under the $T$-action and the orbit space is a simple convex polytope and hence a homology polytope. [In [@suyama], the author has constructed smooth complete toric varieties of complex dimension $\geq 4$ whose orbit spaces by the action of the compact torus are not homeomorphic to simple polytopes (as manifolds with corners). These provide the first known examples of smooth complete toric varieties that are not quasitoric manifolds.]{} The following lemma is an equivariant version of [@param Lemma $5.1$] and [@uma Proposition $2.1$]. \[1\] Let $X$ be a locally standard torus manifold with orbit space $X/T=Q$. For each $i$, $1 \leq i \leq d$, there exists a $T$-equivariant complex line bundle $L_i$ such that $c_1(L_i)=[V_i] \in H^2(X)$, where $[V_i]$ denotes the cohomology class dual to $V_i$ and each $L_i$ admits an equivariant section $s_i : X \rightarrow L_i$ which vanishes precisely along $V_i$. [**Proof:**]{} Set $V=V_i$ and recall that $V$ is a closed $T$- invariant codimension $2$ submanifold of $X$ . Since $T$ is a compact Lie group we can assume that $X$ is endowed with a $T$-invariant Riemannian metric (see [@bredon Chapter VI, Theorem $2.1$]). Let $\nu$ denote the normal bundle to $V$ in $X$. 
We have the decomposition $T(V) \oplus \nu=T(X)\mid_V$. Since $V$ is $T$-invariant, $T(V)$ and $T(X) \mid_V$ are $T$-equivariant vector bundles. Moreover, since the Riemannian metric is also $T$-invariant, $\nu=T(V)^{\perp} \subseteq T(X)\mid_V$ is naturally a $T$-equivariant real vector bundle. Furthermore, we see that $\nu$ is a canonically oriented real $2$-plane bundle since $T(V)$ and $T(X)\mid_{V}$ are oriented by the choice of the omniorientation. Thus $\nu$ admits a reduction of structure group to $SO(2,\mathbb{R})\cong S^1$ giving $\nu$ the structure of a complex line bundle. Since $T$ is a connected Lie group and $\nu$ is $T$-equivariant, $T$ preserves the orientation under the linear action on the fibre. (Fixing an oriented basis for the $\mathbb{R}$-vector space $\nu_x, ~ \mbox{for every} ~x\in V$, $t\mapsto \psi_t\in \text{Hom}(\nu_x, \nu_{tx})$ defines a continuous map from $T$ to $SO(2,\mathbb{R}) \subseteq O(2,\mathbb{R})$.) This implies that $T$ preserves the complex structure on the fibre, making $\nu$ a $T$-equivariant complex line bundle. Now (by [@bredon Chapter VI, Theorem $2.2$]) $V$ has a closed invariant tubular neighbourhood denoted by $D$ which is equivariantly diffeomorphic to the disk bundle associated to the normal bundle $\nu$. The restriction of the equivariant diffeomorphism to the zero section of $\nu$ is the inclusion of $V$ in ${D}\subset X$. We denote by $\varpi : {D} \rightarrow V$ the projection map of the disk bundle. The complex line bundle $\varpi^*(\nu)$ admits an equivariant section $s: {D} \rightarrow \varpi^*(\nu)$ which vanishes precisely along $V$. Consider the trivial complex line bundle ${\mathcal E}:=(X \setminus \text{int}~ {D}) \times {\mathbb C}$ on $(X\setminus \text{int} ~{D})$, with the canonical $T$-action on $(X \setminus \text{int} ~{D})$ and the trivial $T$-action on the fibre ${\mathbb C}$. Consider the equivariant bundle isomorphism $\eta : {\mathcal E} \mid_{\partial {D}} \rightarrow \varpi^*(\nu)\mid_{\partial {D}}$ given by $(x, \lambda) \mapsto \lambda s(x)$, for all $x \in \partial {D}$. Now using clutching of bundles (see [@karoubi Theorem $3.2$]), glueing ${\mathcal E} \mid_{\partial {D}}$ along $\varpi^*(\nu)\mid_{\partial {D}}$ using the equivariant identification $\eta$ we get an equivariant line bundle, say $L$ on $X$. Note that $L$ admits an equivariant section $\tilde{s}$ (which restricts to $s$ on $D$ and $x \mapsto (x, 1)$ on $(X \setminus \text{int}~ {D})$) that vanishes precisely along $V$. Hence $c_1(L)=[V]$ and this completes the proof. $\hfill\square$ \[12\] Let $p': ET \rightarrow BT$ be the universal principal $T$-bundle with the associated bundle $\pi' : ET \times_T X \rightarrow BT$. For a $T$-equivariant line bundle $q:L \rightarrow X$, we obtain the line bundle $ET \times_T L$ on $ET \times_T X$ with the projection $[e,l] \mapsto [e, q(l)]$. If $L$ has a $T$-invariant section $s$ which vanishes precisely along $V \subseteq X$, we obtain a section $\tilde{s}$ of $ET \times_T L$, defined by $[e,x] \mapsto [e, s(x)]$. Thus $\tilde{s}$ vanishes precisely along $ET \times_T V\subseteq ET\times_{T} X$. It follows that $c_1^T(L)=c_1(ET \times_T L)=[ET \times_T V]:=[V]_T$. \[hompol\] [*Note that in Lemma \[1\] we do not assume that $Q$ is a homology polytope or even face-acyclic. 
It holds when $Q$ is simply a nice manifold with corners.*]{} [*Throughout this text by $H^*(~~)$ we shall always mean cohomology ring with $\mathbb{Z}$-coefficients unless specified otherwise.*]{} Cohomology ring and $K$-ring of torus manifolds ----------------------------------------------- We now recall the presentation of the cohomology ring and $K$-ring of torus manifolds from [@masudapan] and [@param]. \[cohom\] $($[@masudapan Corollary $7.8$], [@param Proposition $5.2$]$)$ Let $I$ be the ideal in ${\mathbb Z}[x_1, \ldots, x_d]$ generated by the elements: 1. $x_{i_1} \cdots x_{i_r}$ whenever $V_{i_1} \cap \cdots \cap V_{i_r} = \emptyset$, 2. $\displaystyle{\sum_{1 \leq i \leq d} \langle u, v_i \rangle x_i}$ where $u \in \text{Hom}(T, S^1)$. We have an isomorphism of $\mathbb{Z}$-algebras $\displaystyle\frac{{\mathbb Z}[x_1, \ldots, x_d]}{I}\stackrel{\sim}{\rightarrow} H^{\ast}(X)$ which maps $x_i$ to $c_1(L_i)=[V_i]\in H^2(X)$ for $1\leq i\leq d$. Furthermore, by [@masudapan Equation $(5.2)$, Section $7.2$], $H^{\ast}(X)$ is a free abelian group of rank $\chi(X)=\mid X^T\mid=m$. Here $m$ equals the number of vertices of $Q$. \[kring\] [@param Theorem $5.3$] Let $J'$ be the ideal in ${\mathbb Z}[x_1, \ldots, x_d]$ generated by the following elements: - $x_{i_1} \cdots x_{i_r}$, whenever $V_{i_1} \cap \cdots \cap V_{i_r} = \emptyset$, - $\displaystyle{\prod_{\{1 \leq i \leq d: \langle u, v_i \rangle > 0 \}} (1- x_{i})^{ \langle u, v_i \rangle}- \prod_{\{1 \leq j \leq d: \langle u, v_j \rangle < 0 \}} (1- x_{j})^{- \langle u, v_j \rangle}}$ for $u \in \text{Hom}( T, S^1)$. We have an isomorphism of $\mathbb{Z}$-algebras $\displaystyle\frac{{\mathbb Z}[x_1, \ldots, x_d]}{J'} \stackrel{\sim}{\rightarrow} K^*(X)$ which maps $x_i$ to $1-[L_i]$, $1 \leq i \leq d$. Furthermore, $K^*(X)$ is a free abelian group of rank equal to $\chi(X)=m$ $($see [@param Remark 3.2]$)$. \[10\] Let $J$ the ideal in $\mathbb{Z}[y^{\pm 1}_1,\ldots, y^{\pm 1}_d]$ generated by the following elements: - $\displaystyle{\prod_{1 \leq j \leq r} \left(1- y_{i_j} \right)}$, whenever $V_{i_1} \cap \cdots \cap V_{i_r} = \emptyset$, - $\displaystyle{\prod_{1 \leq i \leq d} y_i^{\langle u, v_{i} \rangle}}$ for $u \in \text{Hom}( T, S^1)$. In Theorem \[kring\], by making the transformation $y_i=1-x_i$, $1 \leq i \leq d$ we get the following alternative presentation $\displaystyle \frac{{\mathbb Z}[y_1^{\pm 1}, \ldots, y_d^{\pm 1}]}{J}$ for $K^*(X)$ which sends $y_i$ to $[L_i]$, $1 \leq i \leq d$ (see [@param Remark 4.2]). Cohomology ring of torus manifold bundles ========================================= Let $p: E\rightarrow B$ be a principal bundle with fibre and structure group the compact torus $T$ over a topological space $B$. Then one has the associated fibre bundle $\pi : E(X)\rightarrow B$ with fibre the torus manifold $X$, where $E(X):= E \times_T X $, and $\pi ([e,x])=p(e)$. For $u \in \text{Hom }(T,S^1)$, let ${\mathbb C}_u$ denote the corresponding $1$-dimensional $T$-representation. One has a $T$-line bundle $\xi_u$ on $B$ whose total space is $E \times_T {\mathbb C}_u$. For the $T$-equivariant complex line bundle $L_i$ from Lemma \[1\], let $E(L_i):=E \times_T L_i$ denote the associated line bundle on $E(X)$. 
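As a simple illustration of this setup (a worked example added here for orientation; the identifications below are standard and are made only up to the usual sign and orientation conventions), take $X=\mathbb{CP}^1$ with its standard $T=S^1$-action. Its orbit space $Q$ is an interval with two facets $Q_1, Q_2$ (the images of the two fixed points $V_1, V_2$), $Q_1\cap Q_2=\emptyset$, and the characteristic map can be taken to be $\Lambda(Q_1)=v_1=1$, $\Lambda(Q_2)=v_2=-1$ in $\text{Hom}(S^1,T)\cong {\mathbb Z}$. Writing $\mathbb{CP}^1=P({\mathbb C}_0\oplus {\mathbb C}_u)$ for the standard character $u$, the associated bundle becomes a projectivization, $$E(X)=E\times_T P({\mathbb C}_0\oplus {\mathbb C}_u)\cong P(\mathcal{O}_B\oplus \xi_u),$$ a $\mathbb{CP}^1$-bundle over $B$. With Definition \[3\] and Theorem \[2\] below one then gets $$H^*(E(X))\cong \frac{H^*(B)[x_1,x_2]}{\left(x_1x_2,\; x_1-x_2-c_1(\xi_u)\right)}\cong \frac{H^*(B)[x]}{\left(x(x+c_1(\xi_u))\right)},$$ which agrees, up to the choice of conventions for the tautological class, with the projective bundle formula for $P(\mathcal{O}_B\oplus \xi_u)$; for $B$ a point this reduces to $H^*(\mathbb{CP}^1)\cong {\mathbb Z}[x]/(x^2)$, as given by Theorem \[cohom\].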
\[3\]Let $R(B, \left( Q, \Lambda \right))$ denote the ring $ \displaystyle \frac{H^*(B)[x_1, \ldots, x_d]}{\mathcal{I}} $ where the ideal $\mathcal{I}$ is generated by the following elements: - $x_{i_1} \cdots x_{i_r}$, whenever $Q_{i_1} \cap \cdots \cap Q_{i_r} = \emptyset$, - $\displaystyle{\sum_{i=1 }^d \langle u, v_i \rangle x_{i}-c_1(\xi_u)}$ for $u \in \text{Hom}( T, S^1)$. Recall that the *face ring* ${\mathbb Z}[Q]$ of the homology polytope $Q$ is defined to be $ \displaystyle \frac{{\mathbb Z}[x_1, \ldots, x_d]}{I_1} $ where the ideal $I_1$ is generated by elements of the form $$x_{i_1} \cdots x_{i_r}, \text{ whenever } Q_{i_1} \cap \cdots \cap Q_{i_r} = \emptyset.$$ For $h(x_1,\ldots,x_d)\in \mathbb{Z}[x_1,\ldots,x_d]\subseteq H^*(B)[x_1,\ldots,x_d]$ we shall denote by $\bar{h}(x_1,\ldots,x_d)$ its class in $\mathbb{Z}[Q]$ and by $\bar{\bar{h}}(x_1,\ldots,x_d)$ its class in $R(B, \left( Q, \Lambda \right))$. We have a canonical $H^{\ast}(BT)$-algebra structure on ${\mathbb Z}[Q]$ given by the ring homomorphism $H^{\ast}(BT) \rightarrow {\mathbb Z}[Q]$ which maps $u$ to $\displaystyle{\sum_{i=1}^d \langle u, v_i \rangle \bar{x_i}}$. A canonical $H^*(B)$-module structure on $H^*(B) \otimes_{H^{\ast}(BT)} {\mathbb Z}[Q]$ is obtained by extending scalars to $H^*(B)$ via the homomorphism $H^*(BT) \rightarrow H^*(B)$ that sends $u$ to $c_1(\xi_u)$. \[4\] We have an isomorphism $R(B, \left( Q, \Lambda \right)) \cong H^*(B) \otimes_{H^*(BT)} {\mathbb Z}[Q]$ of $H^*(B)$-modules. In particular, $R(B, \left( Q, \Lambda \right))$ is a free $H^*(B)$-module of rank $m$. [**Proof:**]{} Define $\alpha : H^*(B)[x_1,\ldots,x_d] \rightarrow H^*(B) \otimes_{H^*(BT)} {\mathbb Z}[Q]$ by sending $x_i \mapsto 1 \otimes \bar{x_i}$ and $b \mapsto b \otimes 1$ for $b \in H^*(B)$. Clearly the generators of $\mathcal{I}$ listed in $(i)$ of Definition \[3\] map to zero under $\alpha$. Now $$\alpha(\sum_{i=1 }^d \langle u, v_i \rangle x_{i}-c_1(\xi_u))=1 \otimes \sum_{i=1 }^d \langle u, v_i \rangle \bar{x_{i}}- c_1(\xi_u) \otimes 1=1 \otimes u \cdot 1- u \cdot 1 \otimes 1$$ which is zero in $ H^*(B) \otimes_{H^*(BT)} {\mathbb Z}[Q]$. Hence $\alpha$ induces a well defined $H^*(B)$-module homomorphism $\bar{\alpha} : R(B, \left( Q, \Lambda \right))\rightarrow H^*(B) \otimes_{H^*(BT)} {\mathbb Z}[Q]$.\ We define $\beta : H^*(B) \otimes_{{\mathbb Z}}{\mathbb Z}[Q] \rightarrow R(B, \left( Q, \Lambda \right))$ by $b \otimes \bar{h}(x_1, \ldots, x_d) \mapsto b \bar{\bar{h}}(x_1, \ldots, x_d)$, for $b \in H^*(B)$ and $\bar{h}(x_1, \ldots, x_d) \in {\mathbb Z}[Q]$. Clearly this is well defined. Now $$\beta(1 \otimes u \cdot 1- u \cdot 1 \otimes 1)=\sum_{i=1 }^d \langle u, v_i \rangle \bar{\bar{x_i}}-c_1(\xi_u)$$ which is zero in $R(B, \left( Q, \Lambda \right))$. Hence $\beta$ induces a map $\bar{\beta} : H^*(B) \otimes_{H^*(BT)} {\mathbb Z}[Q] \rightarrow R(B, \left( Q, \Lambda \right))$. Noting that $\bar{\alpha}$ and $\bar{\beta}$ are inverses of each other proves the first assertion. Now ${\mathbb Z}[Q]$ is free $H^*(BT)$-module of rank $m$ by [@masudapan Theorem $7.7$, Lemma $2.1$], which proves the second assertion. $\hfill\square$ The following is the main theorem of this section. \[2\] Let $B$ have the homotopy type of a finite CW complex. The map $\Phi : R(B, \left( Q, \Lambda \right)) \rightarrow H^*(E(X))$ which sends $x_i$ to $c_1(E(L_i))$ is an isomorphism of $H^*(B)$-algebras. 
[**Proof:** ]{}Suppose that $Q_{i_1} \cap \cdots \cap Q_{i_r} = \emptyset$, which implies $V_{i_1} \cap \cdots \cap V_{i_r} = \emptyset$. So by Lemma \[1\], the bundle $L_{i_1}\oplus \cdots \oplus L_{i_r}$ has a nowhere vanishing $T$-equivariant section. Hence by Remark \[12\], the bundle $E(L_{i_1})\oplus \cdots \oplus E(L_{i_r})$ admits a nowhere vanishing section. This shows that $$c_1(E(L_{i_1}) \cdots c_1( E(L_{i_r}))=0$$ in $H^{2r}(E(X))$. Hence the elements listed in (i) of Definition \[3\] map to zero under $\Phi$. Let $L_u$ be the trivial line bundle $X \times {\mathbb C}_u$ on $X$. Consider the associated line bundle ${\xi}'_u:=ET \times_T {\mathbb C}_u$ on $BT$. Note that $ET \times_T L_u$ is isomorphic to the pullback $\pi'^*({\xi}'_u)$ where $\pi'$ is as in Remark \[12\]. By the naturality of Chern classes $$\label{equivchern} c_1^{T}(L_u):=c_1(ET \times_T L_u)=\pi'^*(c_1({\xi}'_u)).$$ By [@masudapan Proposition $3.3$], $$\label{equivchern1}\pi'^*(c_1({\xi}'_u))=\sum_{i=1}^d \langle u, v_i \rangle [V_i]_T$$ and by Remark \[12\], $$\label{equivchern2} c^T_1\left( \prod_{i=1}^d L_i^{\langle u, v_i \rangle}\right)=\sum_{i=1}^d \langle u, v_i \rangle [V_i]_T$$ in $H^2_{T}(X)$. Now, (\[equivchern\]), (\[equivchern1\]) and (\[equivchern2\]) together imply $\displaystyle {c_1^{T}(\prod_{i=1}^d L_i^{\langle u, v_i \rangle}) =c_1^{T}(L_u)}$. This in turn implies that $\displaystyle {L_u \cong \prod_{i=1}^d L_i^{\langle u, v_i \rangle}}$ as $T$-equivariant line bundles by [@moment Theorem C$.47$]. Thus $$\label{associatediso}\pi^*(\xi_u) \cong E(L_u) \cong \prod_{i=1}^d E(L_i)^{\langle u, v_i \rangle} .$$ Taking first Chern classes on both sides of (\[associatediso\]) we get $$\label{chernassociatediso} \sum_{i=1}^d \langle u, v_i \rangle c_1(E(L_i))=c_1(\pi^*(\xi_u)).$$ This implies that the generators of $\mathcal{I}$ listed in (ii) of Definition \[3\] map to zero under $\Phi$, hence it is a well-defined ring homomorphism. By Theorem \[cohom\], there exist $p_i(x_1, \ldots, x_d) \in {\mathbb Z}[x_1, \ldots, x_d]$, $1 \leq i \leq m$ such that $$p_i:=p_i(c_1(L_1),\ldots, c_1(L_d)) :1\leq i\leq m$$ form a ${\mathbb Z}$-basis of $H^*(X)$. Consider $$\displaystyle {P}_i:=p_i(c_1(E(L_1)),\ldots, c_1(E(L_d))) :1\leq i\leq m$$ in $H^*(E(X))$. Since $E(L_i)|_X=L_i$, it follows that ${P}_j\mid_X=p_j$. Since $H^k(X)$ is free for all $k$, by the Leray-Hirsch theorem, (see [@top Theorem $4D.1)$]) $H^*(E(X))$ is a free $H^*(B)$-module with ${P}_1, \ldots, {P}_m$ as a basis. Moreover, since $\Phi(x_i)=c_1(E(L_i))$, each ${P}_i$ has a preimage under $\Phi$. Hence by Lemma \[4\], $\Phi$ is a surjective $H^*(B)$-module map between two free $H^*(B)$-modules of the same rank. Furthermore, since $H^*(B)$ is a finitely generated abelian group, it follows that $\Phi$ is a surjective map from a finitely generated abelian group to itself, and hence an isomorphism. (More generally, a surjective morphism from a finitely generated module over a Noetherian commutative ring to itself is an isomorphism (see [@AM Chapter $6$, Exercise $1.(i)$])). 
$\hfill\square$ $K$-ring of torus manifold bundles ================================== \[14\] The $K$-theoretic face ring of the homology polytope $Q$ is defined to be $ \displaystyle {\mathcal K}(Q):=\frac {{\mathbb Z}[y_1^{\pm 1}, \ldots, y_d^{\pm 1}]}{J_1} $ where $J_1$ is the ideal generated by elements of the form $$\left(1- y_{i_1} \right) \cdots \left(1- y_{i_r} \right), \text{ whenever } Q_{i_1} \cap \cdots \cap Q_{i_r} = \emptyset.$$ We show that ${\mathcal K}(Q)$ is a free $RT$-module in Proposition \[8\]. We first set up the notation. Recall that $v_i \in \text{Hom }(S^1, T)$ determines the circle subgroup of $T$ fixing $V_i$ for $i=1, \ldots,d$. Let $\mathcal{V}$ denote the set of vertices of $Q$ and let $a \in \mathcal{V}$. Write $a=Q_{i_1} \cap \cdots \cap Q_{i_n}$ as an intersection of facets. Then by [@masudapan Proposition $3.3$], the elements $v_{i_1}, \ldots, v_{i_n}$ form a basis of $\text{Hom }(S^1,T)$. We set $$RT_a:={\mathbb Z}[\chi^{\pm u_{i_1}}, \ldots, \chi^{\pm u_{i_n}}]$$ where $u_{i_1}, \ldots, u_{i_n}$ denotes the dual basis of $v_{i_1}, \ldots, v_{i_n}$. For any $b \in \mathcal{V}$, denote by $a \vee b$ the minimal face of $Q$ containing both $a$ and $b$. If $a \vee b=Q$, set $RT_{a \vee b}={\mathbb Z}$ and the projection map $RT_a \rightarrow RT_{a \vee b}$ to be the augmentation map. Otherwise when $a \vee b$ is a proper face, write $a \vee b=Q_{i_1} \cap \cdots \cap Q_{i_l}$ and set $$RT_{a \vee b}:={\mathbb Z}[\frac{M}{\langle v_{i_1}, \ldots, v_{i_l} \rangle^{\perp}}]={\mathbb Z}[\chi^{\pm u_{i_1}}, \ldots, \chi^{\pm u_{i_l}}].$$ Then we have the canonical projection map $RT_a \rightarrow RT_{a \vee b}$ given by $\chi^{u_{i_j}} \mapsto \chi^{u_{i_j}}$ for $j=1, \ldots, l$ and $\chi^{u_{i_j}} \mapsto 1$ for $j= l+1, \ldots, n$. The following lemma is analogous to [@vezzosi2003higher Theorem $6.4$] in the setting of torus manifolds. We prove it along similar lines. \[7\] There is an inclusion of rings $$\displaystyle \bar{\phi}: {\mathcal K}(Q) \hookrightarrow \prod_{a \in \mathcal{V}} RT_a.$$ The image consists of elements of the form $\displaystyle{\left(r_a \right) \in \prod_{a \in \mathcal{V}} RT_a}$, where for any two distinct $a, \ b \in \mathcal{V}$, the restriction of $r_a$ and $r_b$ to $RT_{a \vee b}$ coincide. [**Proof:** ]{} Define the map $\displaystyle{\phi:{\mathbb Z}[y_{1}^{\pm 1}, \ldots, y_{d}^{\pm 1}] \rightarrow \prod_{a \in \mathcal{V}} RT_a}$ given by $y_i \mapsto r_i:=(r_{ia})$ where $$r_{ia} = \left\{ \begin{array} {r@{\quad \quad}l} 1 & \text{if } a \notin Q_i \\ \chi^{u_i} & \text{if } a \in Q_i \\ \end{array} \right.$$ Set $\displaystyle {W=\{\left(r_a \right) \in \prod_{a \in \mathcal{V}} RT_a: r_a \mid_{a \vee b}=r_b \mid_{a \vee b}, \text{for all } a \neq b \in \mathcal{V} \}}$. Note that $W$ is a subring of $\displaystyle{\prod_{a \in \mathcal{V}} RT_a}$. Let $a, b \in \mathcal{V}$ be distinct. If $a \vee b=Q$, then there is nothing to prove. Otherwise write $a \vee b =Q_{i_1} \cap \cdots \cap Q_{i_l}$, where $a=Q_{i_1} \cap \cdots \cap Q_{i_n}$ and $b=Q_{i_1} \cap \cdots \cap Q_{i_l} \cap Q_{j_{l+1}} \cap \cdots \cap Q_{j_n}$. Now consider the following cases: 1. $a, b \notin Q_i$: Then $r_{ia}=1=r_{ib}$, hence $r_a \mid_{a \vee b}=r_b \mid_{a \vee b}$. 2. $a \notin Q_i$ and $b \in Q_i$: Then $r_{ia}=1$ and $r_{ib}=\chi^{u_i}$. Note that under the restriction map $RT_b \rightarrow RT_{a \vee b}$, $\chi^{u_i} \mapsto 1$, since $u_i \in \langle v_{i_1}, \ldots, v_{i_l} \rangle^{\perp}$. Hence we are done in this case. 3. 
$a, b \in Q_i$: Then $r_{ia} \mid_{a \vee b}=\chi^{u_i}=r_{ib} \mid_{a \vee b}$ and under the respective projection they map to the same image since $i \in \{i_1, \ldots, i_l \}$. This proves that $r_i \in W$ for $1 \leq i \leq d$. We show that elements of $W$ can be written as Laurent polynomials in $r_i$’s. Set $$\mathcal{V}=\{a_1, \ldots, a_m \}$$ and let $\alpha=\left( \alpha_{a_i} \right) \in W$. Let $a_1=Q_{i_1} \cap \cdots \cap Q_{i_n}$, then $\alpha_{a_1} \in RT_{a_1}={\mathbb Z}[\chi^{\pm u_{i_1}}, \ldots, \chi^{\pm u_{i_n}}]$ and hence we can find a Laurent polynomial $p_1(y_{i_1}, \ldots, y_{i_n})$ such that $p_1(r_{i_1}, \ldots, r_{i_n})_{a_1}=\alpha_{a_1}$. Let $\alpha_1:= \alpha-p_1(r_{i_1}, \ldots, r_{i_n})$. Then we see that $\alpha_{1_{a_1}}=0$. Now let $a_2=Q_{i_1} \cap \cdots \cap Q_{i_l} \cap Q_{j_{l+1}} \cap \cdots \cap Q_{j_n}$ such that $a_1 \vee a_2= Q_{i_1} \cap \cdots \cap Q_{i_l}$. Similarly as above there is a Laurent polynomial $p_2(y_{i_1}, \ldots, y_{i_l}, y_{j_{l+1}}, \ldots, y_{j_n})$ such that $p_2(r_{i_1}, \ldots, r_{i_l}, r_{j_{l+1}}, \ldots, r_{j_n})_{a_2}=\alpha_{1_{a_2}}$. Note that $$p_2(r_{i_1}, \ldots, r_{i_l}, r_{j_{l+1}}, \ldots, r_{j_n})_{a_1}=p_2(\chi^{u_{i_1}}, \ldots, \chi^{u_{i_l}}, 1, \ldots, 1)$$ whose projection to $RT_{a_1 \vee a_2}$ remains unchanged, i.e. $$\label{eq1} p_2(r_{i_1}, \ldots, r_{i_l}, r_{j_{l+1}}, \ldots, r_{j_n})_{a_1}= p_2(r_{i_1}, \ldots, r_{i_l}, r_{j_{l+1}}, \ldots, r_{j_n})_{a_1} \mid _{a_1 \vee a_2}.$$ Since $\alpha_1\in W$, $\alpha_{1_{a_2}} \mid_{a_1 \vee a_2}=\alpha_{1_{a_1}} \mid_{a_1 \vee a_2}=0$. Moreover, $p_2(r_{i_1}, \ldots, r_{i_l}, r_{j_{l+1}}, \ldots, r_{j_n}) \in W$ implies $$p_2(r_{i_1}, \ldots, r_{i_l}, r_{j_{l+1}}, \ldots, r_{j_n})_{a_1} \mid _{a_1 \vee a_2}= p_2(r_{i_1}, \ldots, r_{i_l}, r_{j_{l+1}}, \ldots, r_{j_n})_{a_2} \mid _{a_1 \vee a_2}=\alpha_{1_{a_2}}\mid_{a_1\vee a_2}=0.$$ Now, (\[eq1\]) implies $$p_2(r_{i_1}, \ldots, r_{i_l}, r_{j_{l+1}}, \ldots, r_{j_n})_{a_1}=0.$$ Letting $\alpha_2 := \alpha_1-p_2(r_{i_1}, \ldots, r_{i_l}, r_{j_{l+1}}, \ldots, r_{j_n})$, we have $\alpha_{2_{a_1}}=0=\alpha_{2_{a_2}}$. Repeating this process for $a_3, \ldots, a_m$, where $a_k=Q_{k_1}\cap\cdots\cap Q_{k_n}$ for $k=1,\ldots, m$, we get that $\alpha_{m_{a_1}}=\alpha_{m_{a_2}}=\cdots=\alpha_{m_{a_m}}=0$, for $\alpha_m=\alpha-\sum_{k=1}^m p_k(r_{k_1},\ldots,r_{k_n})$. Thus $\alpha_m=0$, so that $\alpha$ is in the image of $\phi$. Since $\alpha\in W$ was arbitrary, $\phi$ is surjective. It remains to show that $\text{ker}(\phi)=J_1$. For $a \in \mathcal{V}$, consider the map $\phi_a:{\mathbb Z}[y_{1}^{\pm 1}, \ldots, y_{d}^{\pm 1}] \rightarrow RT_a$ which sends $y_i \mapsto r_{ia}$ for $1 \leq i \leq d$. We see that $\text{ker}(\phi_a)=J_a:=\langle y_j-1: a \notin Q_j \rangle $ and clearly $\text{ker}(\phi)= \cap_{a \in \mathcal{V}} J_a$. Then $\cap_{a \in \mathcal{V}} J_a= J_1$ follows from [@vezzosi2003higher Lemma $6.5$]. Hence we get the induced ring homomorphism $\displaystyle{\bar{\phi}:\mathcal{K}(Q)\stackrel{\sim}{\rightarrow} W\hookrightarrow \prod_{a\in \mathcal{V}} RT_a}$ as required. $\hfill\square$ Note that one has a monomorphism of rings $RT \stackrel{\iota}{\rightarrow }{\mathcal K}(Q)$ defined by $\displaystyle{\chi^u \mapsto \prod_{1 \leq i \leq d} y_{i}^{\langle u, v_{i} \rangle}}$, $u \in \text{Hom}( T, S^1)=M$, which gives an $RT$-algebra structure on ${\mathcal K}(Q)$. 
Moreover, for every $a_k=Q_{k_1}\cap\cdots \cap Q_{k_n}$ in $\mathcal{V}$, we have the isomorphism $\zeta_{k}:\mathbb{Z}[M]=RT \rightarrow RT_{a_k}$ which maps $\displaystyle{\chi^u\mapsto \prod_{j=1}^n\chi^{\langle u,v_{k_j}\rangle u_{k_j}}}$ for $1\leq k\leq m$. Thus $\displaystyle{\prod_{k=1}^m \zeta_k}$ identifies $(RT)^m$ with $\displaystyle{\prod_{a\in \mathcal{V}} RT_a=\prod_{k=1}^m RT_{a_k}}$. Now, $(RT)^m$ has a canonical $RT$-algebra structure via the diagonal embedding $\delta$. Hence $\displaystyle{\zeta=(\prod_{k=1}^m\zeta_k)\circ \delta:RT\longrightarrow \prod_{k=1}^m RT_{a_k}}$ which maps $\displaystyle{\chi^u\mapsto (\prod_{j=1}^n\chi^{\langle u,v_{k_j}\rangle u_{k_j}})}$ gives the canonical $RT$-algebra structure on $\displaystyle{\prod_{a\in \mathcal{V}} RT_a}$. \[algegramono\] The inclusion of rings $\bar{\phi}$ in [*Lemma \[7\]*]{} is a monomorphism of $RT$-algebras. [**Proof:**]{} The proof follows readily since it can be seen that $\bar{\phi}\circ \iota=\zeta$.$\hfill\square$ \[8\] ${\mathcal K}(Q)$ is a free $RT$-module of rank $\chi(X)$. [**Proof:** ]{} We see that ${\mathcal K}(Q)$ is isomorphic to a localization of ${\mathbb Z}[Q]$ by a similar argument as in the proof of [@baggio2007equivariant Theorem $2.3$]. Explicitly, we see that there is a ring isomorphism $ \displaystyle {\mathbb Z}[Q] \cong \frac{{\mathbb Z}[y_1, \ldots, y_d]}{J_1 \cap {\mathbb Z}[y_1, \ldots, y_d] }$ which sends $x_i$ to $y_i-1$. This remains an isomorphism if we localize at the respective multiplicative systems $S_I=\{(x_i+1)^k\}_{k \in {\mathbb N}}$ and $S_J=\{y_i^k\}_{k \in {\mathbb N}}$: $$S_I^{-1}{\mathbb Z}[Q] \cong S_J^{-1}\frac{{\mathbb Z}[y_1, \ldots, y_d]}{J_1 \cap {\mathbb Z}[y_1, \ldots, y_d] }={\mathcal K}(Q)$$ Now ${\mathbb Z}[Q]$ is Cohen-Macaulay by [@masudapan Lemma $8.2$], hence ${\mathcal K}(Q)$ is also Cohen-Macaulay. Note that ${\mathcal K}(Q)$ is a finite $RT$-module since it is a submodule of a Noetherian module $\prod_{a \in \mathcal{V}} RT_a\simeq RT^m$ by Lemma \[7\]. Hence $\iota:RT \subseteq {\mathcal K}(Q)$ is an integral extension. Since $RT$ is an integrally closed domain and ${\mathcal K}(Q)$ is a torsion free $RT$-module, by the Going Down Theorem ([@huneke Corollary $2.2.8$]), for any maximal ideal $\mathfrak{M}$ of ${\mathcal K}(Q)$ which contracts to the maximal ideal $\mathfrak{m}$ of $RT$, $\text{ht }{\mathfrak{m}}= \text{ht }{\mathfrak{M}}$. Then by [@baggio2007equivariant Lemma $2.4$], ${\mathcal K}(Q)$ is a projective $RT$-module. Moreover, since $RT$ is a Laurent polynomial ring, ${\mathcal K}(Q)$ is in fact a free $RT$-module. Now note that the presentation of $K^*(X)$ in [@param Theorem 5.3] and Remark \[10\] implies that $$K^*(X) \cong \frac {{\mathbb Z}[y_1^{\pm 1}, \ldots, y_d^{\pm 1}]}{J} \cong \mathbb{Z}\otimes_{RT} {\mathcal K}(Q)$$ where the extension of scalars to $\mathbb{Z}$ is via the augmentation homomorphism $RT \stackrel{\epsilon}{\rightarrow} \mathbb{Z}$. On the other hand it is also known that $K^*(X)$ is a free abelian group of rank $\chi(X)$. Hence the proposition follows. $\hfill\square$ \[5\]Let $\displaystyle\mathcal{R}(B, \left( Q, \Lambda \right)):= \frac{K^*(B)[y_1^{\pm 1}, \ldots, y_d^{\pm 1}]}{\mathcal{J}} $ where the ideal $\mathcal{J}$ is generated by the following elements: - $(1-y_{i_1}) \cdots (1-y_{i_r})$, whenever $Q_{i_1} \cap \cdots \cap Q_{i_r} = \emptyset$, - $\displaystyle{\prod_{1 \leq i \leq d} y_i^{\langle u, v_{i} \rangle}-[\xi_u]}$ for $u \in \text{Hom}( T, S^1)$.
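Continuing the $X=\mathbb{CP}^1$ illustration given in the previous section (again an added example, subject to the same caveats about sign conventions): there $d=2$, $Q_1\cap Q_2=\emptyset$ and $v_1=1=-v_2$, so the relations of Definition \[5\] read $(1-y_1)(1-y_2)=0$ and $y_1y_2^{-1}=[\xi_u]$ for the standard character $u$. Eliminating $y_1=[\xi_u]\,y_2$ gives $$\mathcal{R}(B,(Q,\Lambda))\cong \frac{K^*(B)[y^{\pm 1}]}{\left((1-y)(1-[\xi_u]\,y)\right)},$$ which is a free $K^*(B)$-module with basis $\{1,y\}$, of rank $2=\chi(\mathbb{CP}^1)$, since $[\xi_u]$ is a unit in $K^*(B)$; this is consistent with Lemma \[9\] below and, once Theorem \[6\] is available, with the familiar projective-bundle description of $K^*(P(\mathcal{O}_B\oplus\xi_u))$.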
Consider the ring $K^*(B) \otimes_{RT} {\mathcal K}(Q)$ obtained from the $RT$-algebra $\mathcal{K}(Q)$ by extending scalars to $K^*(B)$ via the homomorphism $RT \rightarrow K^*(B)$ which maps $\chi^u \mapsto [\xi_u]$. In particular, by Proposition \[8\], $K^*(B) \otimes_{RT} {\mathcal K}(Q)$ is a free $K^*(B)$-module of rank $\chi(X)=m$. \[9\] We have an isomorphism $ \displaystyle \mathcal{R}(B, \left( Q, \Lambda \right)) \cong K^*(B) \otimes_{RT} {\mathcal K}(Q)$ as $K^*(B)$-modules. In particular, $\mathcal{R}(B, \left( Q, \Lambda \right))$ is a free $K^{\ast}(B)$-module of rank $\chi(X)$. The proof is similar to the proof of Lemma \[4\]. \[6\] Let $B$ have the homotopy type of a finite CW complex. Then we have an isomorphism $ \displaystyle \Psi:\mathcal{R}(B, \left( Q, \Lambda \right)) \stackrel{\sim}{\rightarrow} K^*(E(X)) $ of $K^*(B)$-modules, which maps $ y_i \mapsto [E(L_i)]$. Suppose that $Q_{i_1} \cap \cdots \cap Q_{i_r} = \emptyset$. Recall from the proof of Theorem \[2\] that the bundle $E(L_{i_1})\oplus \cdots \oplus E(L_{i_r})$ admits a nowhere vanishing section. Then applying the $\gamma^r$-operation, we obtain $\gamma^r([L_{i_1}\oplus \cdots \oplus L_{i_r}]-r)=(-1)^r c_r(L_{i_1}\oplus \cdots \oplus L_{i_r})=0$. Also note that $\displaystyle{\gamma^r([L_{i_1}\oplus \cdots \oplus L_{i_r}]-r)=\prod_{1 \leq j \leq r} \left([L_{i_j}]-1 \right)}$. This shows that the elements listed in (i) of Definition \[5\] map to zero under $\Psi$. Note that we have $\displaystyle{\pi^*(\xi_u) \cong E(L_u) \cong \prod_{i=1}^d E(L_i)^{\langle u, v_i \rangle}}$ from the proof of Theorem \[2\]. This implies that the generators of $\mathcal{J}$ listed in (ii) of Definition \[5\] map to zero under $\Psi$. The surjectivity of $\Psi$ follows from the same argument as in the proof of Theorem \[2\], using a version of the Leray-Hirsch theorem in the setting of K-theory (see [@hatcher2003vector Theorem $2.25$]). Then using Lemma \[9\], arguments similar to those in the proof of Theorem \[2\] show that $\Psi$ is an isomorphism. $\hfill\square$ Some applications ================= As an illustration of the above results, we derive both the cohomology and $K$-ring of $E(X)$, where $X=X(\Delta)$ is a smooth complete toric variety (see Example \[11\]). For a smooth complete fan $\Delta$ we define the following rings. 1. Let $R(H^*(B), \Delta)$ denote the ring $ \displaystyle \frac{H^*(B)[X_1, \ldots, X_d]}{\mathcal{I}} $ where the ideal $\mathcal{I}$ is generated by the following elements: - $X_{i_1} \cdots X_{i_r}$, whenever $\rho_{i_1}, \ldots, \rho_{i_r}$ do not generate a cone in $\Delta$, - $\displaystyle{\sum_{i=1 }^d \langle u, v_i \rangle X_{i}-c_1(\xi_u)}$ for $u \in \text{Hom}( T, S^1)$. 2. Let $\mathcal{R}(K^*(B), \Delta)$ denote the ring $ \displaystyle \frac{K^*(B)[Y_1^{\pm 1}, \ldots, Y_d^{\pm 1}]}{\mathcal{J}} $ where the ideal $\mathcal{J}$ is generated by the following elements: - $(1-Y_{i_1}) \cdots (1-Y_{i_r})$, whenever $\rho_{i_1}, \ldots, \rho_{i_r}$ do not generate a cone in $\Delta$, - $\displaystyle{\prod_{1 \leq i \leq d} Y_i^{\langle u, v_{i} \rangle}-[\xi_u]}$ for $u \in \text{Hom}( T, S^1)$. \[15\] Let $X=X(\Delta)$ be a smooth complete $\mathbb{T} \cong ({\mathbb C}^*)^n$-toric variety. Let $p: E \rightarrow B$ be a principal $\mathbb{T}$-bundle, where $B$ has the homotopy type of a finite CW complex. 1. 
The cohomology ring of $E(X)$ is isomorphic as an $H^*(B)$-algebra to $R(H^*(B), \Delta)$ under the isomorphism $ \displaystyle \Phi : R(H^*(B), \Delta) \rightarrow H^*(E(X))$ which sends $X_i$ to $c_1(E(L_i))$. 2. The topological $K$-ring of $E(X)$ is isomorphic as a $K^*(B)$-algebra to $\mathcal{R}(K^*(B), \Delta)$ under the isomorphism $ \displaystyle \Psi : \mathcal{R}(K^*(B), \Delta)\rightarrow K^*(E(X))$ which sends $Y_i$ to $[E(L_i)]$. [**Proof:**]{} We consider $X$ as a torus manifold with locally standard $T \cong (S^1)^n$ action and orbit space the homology polytope $X_{\geq}$ (see Example \[11\]). We then have the principal $T$-bundle $p' : E \rightarrow E/T$ since $T$ is an admissible subgroup of $\mathbb{T}$ (i.e. $\mathbb{T} \rightarrow \mathbb{T}/T$ is a principal $T$-bundle). Note that $E \times _T X$ and $E \times _{\mathbb{T}} X$ are homotopy equivalent, since $\mathbb{T}=T \times ({\mathbb R}_{\geq})^n$ and $({\mathbb R}_{\geq})^n$ is contractible. Similarly $B$ and $E/T$ are homotopy equivalent. The assertions $(1)$ and $(2)$ of the corollary now follow by applying Theorem \[2\] and Theorem \[6\] respectively for the space $E\times_{T} X$ associated to the principal $T$-bundle $p' : E \rightarrow E/T$. Note that in the proof of assertion $(2)$ above, Proposition \[8\] is immediate from [@vezzosi2003higher Theorem $6.9$] because the ring ${\mathcal K}(Q)$ in Proposition \[8\] is the algebraic $\mathbb{T}$-equivariant $K$-ring of $X$. $\hfill\square$ [Let $X$ be a torus manifold with locally standard action and orbit space a homology polytope $Q$ whose nerve is a shellable simplicial complex (see [@uma]), e.g. quasitoric manifolds. Then Theorem \[2\] (respectively, Theorem \[6\]) can be proved for $B$ any topological space (respectively, $B$ compact Hausdorff topological space) using [@paramuma Lemma $2.1$, Lemma $2.2$]. In particular, this gives a relative version of [@uma Theorem 1.3] and a generalization of [@MR2313027 Theorem 1.2].]{} Torus manifold bundles when $X/T$ is not a homology polytope ============================================================ In the preceding sections we considered torus manifolds $X$ defined in Section 2.1 with the additional assumption that $X/T=Q$ is a homology polytope. This ensured that the cohomology ring $H^*(X)$ was generated by the degree $2$ classes corresponding to the fundamental classes $[V_i]$ of the characteristic submanifolds (see Theorem \[cohom\]). In this section we shall consider a torus manifold $X$ with a locally standard action of $T$ as defined in Section 2.1, with the exception that $X/T=Q$ is not assumed to be a homology polytope but only face acyclic. In particular, we do not assume that the prefaces are connected. Since $Q$ is face acyclic the cohomology ring of $X$ satisfies the property that $H^{odd}(X)=0$ (see [@bp Theorem 7.4.46]), which in particular also implies by the universal coefficient theorem that $H^*(X)$ is torsion free and hence free of rank $\chi(X)$. Moreover, [@masudapan Corollary 7.8] gives an explicit presentation of the ring $H^*(X)$. Let $p : E\longrightarrow B$ be a principal $T$-bundle and let $E(X):=E\times_{T} X$ be the associated torus manifold bundle. Let $B$ be a topological space having the homotopy type of a finite CW complex. We then have the following theorem which gives a presentation of $H^*(E(X))$ as an $H^*(B)$-algebra. 
\[cohombuntorus\] Let $\mathfrak{I}$ be the ideal in the ring ${\mathfrak R}:= H^*(B) [x_{F}:F ~\mbox{a face of}~ Q]$ generated by the following relations: \(i) $\displaystyle{x_Gx_{H}-x_{G\vee H}\sum_{E\in G\cap H} x_{E}};$ \(ii) $\displaystyle{\sum_{i=1}^d \langle u,v_i\rangle x_{Q_i}}-c_1(\xi_u)$ for $u\in Hom(T,S^1)$ where $Q_i$ are the facets of $Q$, $v_i=\Lambda(Q_i)$ is the primitive vector in $Hom(S^1,T)\simeq \mathbb{Z}^n$ which determines the circle subgroup of $T$ fixing the characteristic submanifold $V_i$ for $1\leq i\leq d$, and $\xi_u=E\times_{T} \mathbb{C}_u$ is the line bundle on $B$ associated to the character $u\in M$. Since $X$ is omnioriented $v_i$ is well defined. [*(See Section 2 and Section 3)*]{}. The map $\Phi_1 : {\mathfrak R} \rightarrow H^*(E(X))$ which sends $x_F$ to $[E(V_{F})]$ defines an isomorphism of $H^*(B)$-algebras from ${\mathfrak R}/{\mathfrak I} \rightarrow H^*(E(X))$. Here $V_F$ denotes the connected $T$-stable submanifold $\Upsilon^{-1}(F)$ of $X$ corresponding to a face $F$ of $Q$ and $[E(V_F)]$ denotes the ${Poincar\acute{e}}$ dual of $E(V_{F}):=E\times_{T}V_{F}$ in $H^*(E(X))$. [**Proof:**]{} By [@masudapan Corollary 7.8] it follows that $H^*(X)$ is a free $\mathbb{Z}$-module of rank $\chi(X)$ and that there exists $p_1,\ldots, p_m$ polynomials in $\mathbb{Z}[x_{F}: F ~\mbox{a face of} ~Q]$ such that $p_i([V_{F}])$ for $1\leq i\leq m$ form a $\mathbb{Z}$-module basis of $H^*(X)$. Since $E(V_{F})\mid_{X}=V_{F}$ for each face $F$ of $Q$, by the Leray-Hirsch theorem $P_i:=p_i([E(V_F)])$ for $1\leq i\leq m$ form a basis of $H^*(E(X))$ as an $H^*(B)$-module. Recall from (\[associatediso\]) that we have the isomorphism of line bundles $\displaystyle{\prod_{i=1}^d (E(L_i))^{\langle u, v_i\rangle} \simeq \pi^*(\xi_u)}$ over $E(X)$ (see Lemma \[1\], Remark \[hompol\] for the definition of the line bundles $L_i$ on $X$). Since $c_1(E(L_i))=[E(V_i)]$ we see that the relation $(ii)$ holds in $H^*(E(X))$ by (\[chernassociatediso\]). Consider the classifying map $f:B\longrightarrow BT$ of the principal $T$-bundle $E\longrightarrow B$. Thus we have the map $\widetilde{f}:E(X)\longrightarrow ET\times_{T} X$ over $f$ since $E(X)$ is the pull back of $ET\times_{T}X$ under $f$. This induces the canonical maps of cohomology rings $\widetilde{f}^*: H_{T}^*(X)\longrightarrow H^*(E(X))$ over $f^*:H^*(BT)\longrightarrow H^*(B)$ giving a commuting square $$\begin{array}{lllllll} H_{T}^*(X)&\stackrel{\widetilde{f}^*}{\longrightarrow} & H^*(E(X))\\ ~~~~\uparrow~{\pi}'^* & &~~~~\uparrow~\pi^*\\ H^*(BT)&\stackrel{f^*}{\longrightarrow} & H^*(B) \end{array}$$(see Remark \[12\]). Furthermore, the submanifold $ET\times_{T} V_{F}$ of $ET\times_T X$ pulls back to the submanifold $E(V_{F})$ of $E(X)$ under $f$. Thus the class $\tau_{F}:=[ET\times_{T} V_F]\in H_{T}^*(X)$ maps to the class $\tau'_{F}:=[E(V_{F})]$ in the cohomology ring $H^*(E(X))$. This in particular implies that the element $\displaystyle{\tau_{G}\tau_{H}-\tau_{G\vee H}\sum_{E\in G\cap H}\tau_{E}}$ maps to $\displaystyle{\tau'_{G}\tau'_{H}-\tau'_{G\vee H}\sum_{E\in G\cap H}\tau'_{E}}$ in $H^*(E(X))$. However, by [@masudapan Theorem 7.7], $\displaystyle{\tau_{G}\tau_{H}-\tau_{G\vee H}\sum_{E\in G\cap H}\tau_{E}}=0$ in $H_{T}^*(X)$. Hence the relation $(i)$ holds in $H^*(E(X))$. Thus $\Phi_1$ induces a well defined map from $\mathfrak{R}/\mathfrak{I}\longrightarrow H^*(E(X))$. 
On the other hand [@masudapan Theorem 7.7, Corollary 7.8] imply that as an $H^*(BT)$-algebra, the ring $H^*_{T}(X)$ has the presentation $\mathfrak{R}'/\mathfrak{I}'$, where $\mathfrak{R}':=H^*(BT)[x_{F}: F~ \mbox{a face of}~ Q]$ and $\mathfrak{I}'$ is the ideal in $\mathfrak{R}'$ generated by the relations $(i)$ above and the relations $(ii)'$ $\displaystyle{\sum_{i=1}^d \langle u,v_i\rangle x_{Q_i}}-u$ for $u\in Hom(T,S^1)=H^2(BT)$. This further implies that $\mathfrak{R}/\mathfrak{I}$ is isomorphic to the ring $\displaystyle{H_{T}^*(X)\otimes _{H^*(BT)} H^*(B)}$ which is a free $H^*(B)$-module of rank $\chi(X)$ as in Lemma \[4\] above. Here $H^*(B)$ is an $H^*(BT)$-module by the map $f^*$ which sends $u\in Hom(T,S^1)$ to the class $c_1(\xi_u)\in H^*(B)$. Since $H^*(B)$ is a finitely generated abelian group, the proof follows by arguments similar to those in the proof of Theorem \[2\]. $\Box$ *(see [@masudapan Example 3.2, Example 5.8] and [@bp Example $7.4.36$]) Let $X=S^4$ be the $4$-sphere identified with the following subset $$\{(z_1,z_2,y)\in \mathbb{C}^2\times \mathbb{R}: |z_1|^2+|z_2|^2+|y|^2=1\}.$$ Define a $T=S^1\times S^1$-action on $X$ given by $(t_1,t_2)\cdot (z_1,z_2,y)=(t_1z_1,t_2z_2,y)$. The $T$-action on $X$ is locally standard with $X/T$ homeomorphic to $$Q=\{(x_1,x_2,y)\in \mathbb{R}^{3} :x_1^2+x_2^2+y^2=1, x_1\geq 0, x_2\geq 0\}.$$ It has two characteristic submanifolds, $\{z_1=0\}$ and $\{z_2=0\}$. The intersection of the two characteristic submanifolds is disconnected and it is the union of the two $T$-fixed points $(0,0,1)$ and $(0,0,-1)$. The circle subgroup of $T$ which fixes $\{z_1=0\}$ is given by $\{(t,1)~:~t \in S^1\}$ which corresponds to $e_1 \in \text{ Hom }(S^1, T)\cong {\mathbb Z}^2$. Similarly the circle subgroup fixing $\{z_2=0\}$ is given by $\{(1,t)~:~t \in S^1\}$ which corresponds to $e_2 \in \text{ Hom }(S^1, T)\cong {\mathbb Z}^2$. Here $e_1, e_2$ denote the standard basis of ${\mathbb Z}^2$. The orbit space $Q$ is a $2$-ball with two $0$-faces denoted by $a$ and $b$ and two $1$-faces denoted by $G$ and $H$. Thus the orbit space is not a homology polytope, but is a face-acyclic manifold with corners.* Let $B=\mathbb{C}\mathbb{P}^1$ and $E\longrightarrow B$ denote the principal $T$-bundle associated to the direct sum of the line bundles ${\mathcal O}\oplus {\mathcal O}(1)$ where ${\mathcal O}$ denotes the trivial line bundle and ${\mathcal O}(1)$ denotes the tautological line bundle on $\mathbb{C}\mathbb{P}^1$. Consider the associated $S^4$ bundle $E(S^4)$ over $B$. By Theorem \[cohombuntorus\], $H^*(E(S^4))$ has the presentation $\mathfrak{R}/\mathfrak{I}$ where $\mathfrak{R}=H^*(\mathbb{C}\mathbb{P}^1)[x_G, x_H,x_a,x_b]$, with $x_G$ and $x_H$ of degree $2$ and $x_a$ and $x_b$ of degree $4$, and $\mathfrak{I}$ is the ideal in $\mathfrak{R}$ generated by the following relations $(i)$ $x_G\cdot x_{H}-x_a-x_b;~~ x_ax_b$   $(ii)$ $x_{G}-c_1({\mathcal O})=x_{G}-0=x_{G}; ~~x_{H}-c_1({\mathcal O}(1))$. $K$-ring of a torus manifold bundle ----------------------------------- Since $H^{odd}(X)=0$ the Atiyah–Hirzebruch spectral sequence with $E_2^{p,q}=H^p(X; K^q(pt))$ collapses at the $E_2$ term and converges to $K^{p+q}(X)$ (see [@atiyah_hirzebruch_adams_shepherd_1972 p. 208 ]). Moreover, since $H^*(X)$ is free abelian of rank $\chi(X)$ by [@atiyah_hirzebruch_adams_shepherd_1972 p. 209] we have $K^r(X)=0$ when $r$ is odd and $K^r(X)\simeq \mathbb{Z}^m$ when $r$ is even. 
Here $m=\chi(X)$ is also equal to the number of vertices of $Q$. In particular, $K^0(X)$ is free abelian of rank $m$. Let $E\longrightarrow B$ be a principal $T$-bundle and $E(X):=E\times_{T} X$ the associated bundle over a base $B$ having the homotopy type of a finite CW complex. Let $\mathfrak{S}:=K^*(B)[x_{F}: F~ \mbox{a face of}~ Q]$ and $\mathfrak{J}$ denote the ideal in $\mathfrak{S}$ defined by the following relations: $(i)$ $\displaystyle{x_Gx_{H}-x_{G\vee H}\sum_{E\in G\cap H} x_{E}};$ $(ii)$ $\displaystyle{\prod_{i: \langle u,v_i\rangle>0} (1-x_{Q_i}})^{\langle u,v_i\rangle} -[\xi_{u}]\prod_{i: \langle u,v_i\rangle<0} (1-x_{Q_i}) ^{-\langle u,v_i\rangle}$ for $u\in Hom(T,S^1)$. We have the following conjecture on $K^*(E(X))$ as a $K^*(B)$-algebra. When $B=pt$ this shall give a presentation of the $K$-ring of $X$ which will generalize Sankaran’s result stated in Theorem \[kring\]. For arbitrary $B$ this shall generalize our Theorem \[6\] proved above. \[kringtorusbun\] The ring $K^*(E(X))$ is a free $K^*(B)$ module of rank $m=\chi(X)$ and is isomorphic to $\mathfrak{S}/\mathfrak{J}$. *The difficulty in this case is because the cohomology is not generated in degree $2$ (see [@masudapan Example 4.10]), we cannot find canonical complex line bundles whose classes generate the $K$-ring as in [@param Section 3].* On the other hand it may be useful to define the analogue of the $K$-theoretic face ring $\mathcal{K}'(Q)$ when $Q$ is a nice manifold with corners so that when $Q$ is a homology polytope it agrees with $\mathcal{K}(Q)$ (see Definition \[14\]). One can then check whether $\mathcal{K}'(Q)$ has the structure of a free $RT$-module of rank $m$ generalizing the Proposition \[8\] above. Consider the fibration $E(X) \longrightarrow B$ where $X$ is as above. When $B$ is a path connected, finite-dimensional CW-complex then by [@daviskirk Theorem $9.22$], there exists a cohomology spectral sequence with $E^{p,q}_2=H^p(B, K^q(X)) \Rightarrow K^{p+q}(E(X))$. Since $K^q(X)=0$ for $q$ odd this spectral sequence collapses at the $E_2$ term. We wonder if this gives enough information to deduce the structure of $K^*(E(X))$ as a $K^*(B)$-module. [9]{} , pages 196–222. London Mathematical Society Lecture Note Series. Cambridge University Press, 1972. . Addison-Wesley Publishing Co., Reading, Mass.-London-Don Mills, Ont. 1969. *Equivariant [$K$]{}-theory of smooth toric varieties,* Tohoku Mathematical Journal Second Series **59**(2) (2007), 203–231. . Academic Press, New York-London, 1972. Pure and Applied Mathematics, Vol. **46**. , volume 204 of [*Mathematical Surveys and Monographs*]{}. American Mathematical Society, Providence, RI, 2015. *The geometry of toric varieties* Uspekhi Mat. Nauk, **33**(2(200)) :85–134, 247, 1978. *Equivariant [$K$]{}-ring of quasitoric manifolds,* ArXiv e-prints; arXiv:1805.11373 \[math.AT\], 2018. , volume **35** of [*Graduate Studies in Mathematics*]{}. American Mathematical Society, Providence, RI, 2001. *Convex polytopes, [C]{}oxeter orbifolds and torus actions,* Duke Math. J. **62**(2) (1991), 417–451. , volume 131 of [*Annals of Mathematics Studies*]{}. Princeton University Press, Princeton, NJ, 1993. The William H. Roever Lectures in Geometry. , volume **98** of [*Mathematical Surveys and Monographs*]{}. American Mathematical Society, Providence, RI, 2002. Appendix J by Maxim Braverman. . Cambridge University Press, Cambridge, 2002. *Vector bundles and [$K$]{}-theory,* In Internet under http://www. math. cornell. edu/\~ hatcher, 2003. 
*Theory of multi-fans,* Osaka J. Math. **40**(1) (2003), 1–68. , volume **336** of [*London Mathematical Society Lecture Note Series*]{}. Cambridge University Press, Cambridge, 2006. . Classics in Mathematics. Springer-Verlag, Berlin, 2008. An introduction, Reprint of the 1978 edition, With a new postface by the author and a list of errata. *On the cohomology of torus manifolds,* Osaka J. Math. **43** (2006), 711–746. Hindustan Book Agency, New Delhi, 2005. Symplectic torus actions and toric manifolds, With contributions by Chris Allday, Mikiya Masuda and P. Sankaran. *[$K$]{}-rings of smooth complete toric varieties and related spaces,* Tohoku Math. J. **60** (2008), 459–469. *Cohomology of toric bundles,* Comment. Math. Helv. **78**(4) (2003), 540–554. *[$K$]{}-theory of quasitoric manifolds,* Osaka J. Math. **44**(1) (2007), 71–89. *Examples of smooth compact toric varieties that are not quasitoric manifolds,* Algebr. Geom. Topol. **14**(5) (2014), 3097–3106. *[$K$]{}-theory of torus manifolds,* Toric topology, Contemp. Math. **460** (2008), 85–389. *Higher algebraic [$K$]{}-theory for actions of diagonalizable groups,* Inventiones mathematicae **153**(1) (2003), 1–44.
--- abstract: 'Deep learning techniques, namely convolutional neural networks (CNN), have previously been adapted to select gamma-ray events in the TAIGA experiment, having achieved a good quality of selection as compared with the conventional Hillas approach. Another important task for the TAIGA data analysis was also solved with CNN: gamma-ray energy estimation showed some improvement in comparison with the conventional method based on the Hillas analysis. Furthermore, our software was completely redeveloped for the graphics processing unit (GPU), which led to significantly faster calculations in both of these tasks. All the results have been obtained with the simulated data of TAIGA Monte Carlo software; their experimental confirmation is envisaged for the near future.' author: - Evgeny Postnikov - Alexander Kryukov - Stanislav Polyakov - Dmitry Zhurov title: 'Deep Learning for Energy Estimation and Particle Identification in Gamma-ray Astronomy[^1]' --- Introduction ============ Gamma-ray astronomy ------------------- Gamma-ray detection is very important for observing the Universe as gamma-rays are particles without electric charge and are unaffected by a magnetic field. Detected gamma-rays can therefore be extrapolated back to their origin. For that reason, they are currently the best “messengers” of physical processes from the relativistic Universe. With specially designed telescopes, gamma-rays can be detected on Earth (ground-based gamma-ray astronomy) at very high energies. These instruments are called Imaging Air Cherenkov Telescopes (IACTs) [@1]. Gamma-rays are observed on the ground optically via the Cherenkov light emitted by extensive showers of secondary particles in the air when a very-high-energy gamma-ray strikes the atmosphere. However, very-high-energy gamma-rays contribute only a minuscule fraction to the flux of electrically charged cosmic rays (below one per million [@2]). This circumstance makes it necessary to learn to distinguish gamma-rays against charged cosmic rays, mostly protons, on the basis of the images they produce in the telescope camera. Data Life Cycle project in Astroparticle Physics ------------------------------------------------ The Russian-German Initiative of a Data Life Cycle in Astroparticle Physics (also referred to as Astroparticle.online) [@3; @4] aims to develop an open science system for collecting, storing, and analyzing astroparticle physics data including gamma-ray astronomy data. Currently it works with the TAIGA [@5] and KASCADE [@6] experiments and invites astrophysical experiments to participate. In this work, two important problems of gamma-ray astronomy data analysis are solved within the framework of deep learning approach (convolutional neural networks). These are the background rejection problem (removal of cosmic ray background events), and the gamma-ray energy estimation problem, in imaging air Cherenkov telescopes. The data to solve the both problems were simulated using the complete Monte Carlo software for the TAIGA-IACT installation [@7]. Convolutional Neural Networks (CNNs) ------------------------------------ CNNs are well adapted to classify images; that is why they were also chosen for all deep learning applications to the IACT technique [@8; @9; @10]. Their advantage is a fully automatic algorithm, including automatic extraction of image features instead of a set of empirical parameters (‘Hillas parameters’ [@11]). CNNs are implemented in various free software packages, including PyTorch [@12] and TensorFlow [@13]. 
In contrast to the camera with square pixels [@8], the pixels of the TAIGA-IACT camera are hexagonal in shape and arrangement, and this geometrical feature has not yet been fully taken into account. Data simulations ================ Data simulations were performed to obtain datasets with the response of a real IACT telescope for two classes of particles to be identified: gamma-rays and background particles (protons). The development of the shower of secondary particles in the atmosphere was simulated with the CORSIKA package [@14]. The response of the IACT system was simulated using the OPTICA-TAIGA software developed at JINR, Dubna [@7]. It describes the real TAIGA-IACT setup configuration: 29 constituent mirrors with an area of about 8.5 m$^2$ and a focal length of 4.75 m, and the 560-pixel camera located at the focus. Each pixel is a photomultiplier (PMT) collecting light from the mirrors. The telescopic image was formed using dedicated software developed at SINP MSU, taking into account the night sky background fluctuations, PMT characteristics, and triggering and readout procedures of the data acquisition system. Image cleaning ============== Image cleaning is a conventional procedure to remove images and image parts produced by the night sky background fluctuations but not by a shower of secondary particles. The conventional procedure is two-parametric: it excludes from subsequent analysis all image pixels except the “core pixels”, i.e. those with an amplitude above a “core threshold” and at least one neighbour pixel above a “neighbour threshold”, and the neighbour pixels themselves. If the image contains too few pixels after cleaning (for example, 2 or fewer), the entire image is excluded from the analysis. Deep learning algorithms were trained on images both without and with cleaning. For the reference technique, a test sample was first subjected to the image cleaning procedure in any case. No training sample was needed for the reference technique. Deep learning ============= Data sample ----------- Training datasets for the CNN contained gamma-ray and proton images (Monte Carlo of TAIGA-IACT) for the task of background suppression, and only gamma-ray images for the energy estimation. Image examples are presented in Figure \[image\_ex\]. ![Simulated image examples: generated by high-energy gamma-ray (left) and proton (right).[]{data-label="image_ex"}](gamma "fig:"){width="14pc"} ![Simulated image examples: generated by high-energy gamma-ray (left) and proton (right).[]{data-label="image_ex"}](hadron "fig:"){width="14pc"} The dataset consisted of 2.7$\times$10$^4$ simulated events after strong image cleaning (70% training + 30% test) for PyTorch, and of 5.6$\times$10$^4$ events after soft image cleaning (of which 60% were used for training, 15% for validation, and 25% for testing) for TensorFlow. The images in the training dataset were rotated around the camera center by multiples of 60$^o$, thereby producing 6 times the initial sample size. Finally, the total number of events was about 2$\times$10$^5$ for training (with rotations), 0.8$\times$10$^4$ for validation, and 1.5$\times$10$^4$ for testing. CNN implementation {#secImplem} ------------------ Since convolutional operations are optimized for a square grid, the TAIGA-IACT hexagonal camera grid had to be represented in a form convenient for a square grid. 
For that purpose, a transformation to oblique coordinate system was applied to each image, so that each hexagonal image with 560 pixels was transformed to the 31x30 square grid. These square grid images were fed to the input layer of the CNN. For the background suppression, test datasets of gamma-ray and proton images in random proportion (blind analysis) were classified by each of the packages: TensorFlow and PyTorch. The energy was either directly predicted as a scalar parameter by the CNN, or the ratio of the energy to the total sum of the amplitudes in the image was predicted and then multiplied back by the value of the total sum to obtain the energy estimate. The reason for the second way to estimate the energy is that the above mentioned total sum of the amplitudes, referred to as ‘image size’, is correlated with the energy for gamma-rays incident closer than $\approx$100 m from the telescope [@15]. Therefore, image size can be in some way directly included in the estimation algorithm to account for this strong correlation at least for nearby gamma-rays. Beyond this ‘Cherenkov radius’ of about 100–120 m, the Cherenkov light intensity varies rapidly with the distance from gamma-ray to the telescope, which may also lead to a substantial increase of the resulting uncertainty in the energy estimation. Various networks with different parameters were tested to find the one maximizing background suppression and the one minimizing the relative energy error. CNN architecture ---------------- The first part of the convolutional neural network consists of convolutional layers (Figure \[cnn\]). Each layer includes convolution with 32 kernels of 3x3 pixels and the ReLU activation function, average pooling layer with 3x3 pooling size and strides of 2 pixels. To avoid overfitting during the training, a dropout layer with a dropout rate of 10% is added after each pooling layer. ![Convolutional neural network for classification/regression. The network accepts square grid images in oblique coordinate system at the input of convolutional layers. Output of the convolutional layers (extracted features) is fed to the classifier/regressor (full-connected layers) that evaluate the output value.[]{data-label="cnn"}](cnn_architecture-crop){width="\textwidth"} Output of the convolutional layer is fed to the full-connected layers of classifier or regressor. The full-connected layers consist of 32 neurons with the ReLU activation function in the first layer and 16 neurons in the second one. Dropout with a 50% rate after each full-connected layer is used to avoid overfitting. Sigmoid was set as the output neuron activation function for the classification task, whereas no activation function was set to the output neuron for the energy estimation. Adagrad optimizer with the learning rate set at 0.05 and the binary cross-entropy as the loss function were used for classification. The energy estimation was performed using Adam optimizer and the mean square error as the loss function. The early stop criterion was set to interrupt the training procedure when the loss function for the validation dataset shows no decrease for 30 epochs. The training lasted for 144 epochs (runtime $\sim$9 minutes). The computational graph was run on NVIDIA GPU Tesla P100. Accuracy on the training and validation sample after training was 91.29% and 90.02% respectively. ROC AUC score (an area under the receiver operating characteristic curve [@16]) was 0.9712 for training and 0.9647 for validation. 
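For reference, a minimal sketch of a network of this kind is given below, written with tf.keras; it is not the authors' code. The number of convolutional blocks, the `same` padding, the single input channel and the activation of the second fully connected layer are assumptions, while the kernel counts, pooling, dropout rates, optimizers, loss functions and early-stopping patience follow the description above.

```python
# A minimal sketch (not the authors' code) of the CNN described in the text.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(task="classification", n_conv_blocks=3):
    # 560-pixel hexagonal image mapped to an oblique 31x30 grid, one amplitude channel (assumed)
    inputs = layers.Input(shape=(31, 30, 1))
    x = inputs
    for _ in range(n_conv_blocks):                      # number of blocks is an assumption
        x = layers.Conv2D(32, (3, 3), padding="same", activation="relu")(x)
        x = layers.AveragePooling2D(pool_size=(3, 3), strides=2, padding="same")(x)
        x = layers.Dropout(0.10)(x)                     # 10% dropout after each pooling layer
    x = layers.Flatten()(x)
    x = layers.Dense(32, activation="relu")(x)
    x = layers.Dropout(0.50)(x)
    x = layers.Dense(16, activation="relu")(x)          # activation of this layer is an assumption
    x = layers.Dropout(0.50)(x)
    if task == "classification":                        # gamma/proton separation
        outputs = layers.Dense(1, activation="sigmoid")(x)
        model = models.Model(inputs, outputs)
        model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.05),
                      loss="binary_crossentropy", metrics=["accuracy"])
    else:                                               # energy (or energy/size ratio) regression
        outputs = layers.Dense(1)(x)                    # no activation on the output neuron
        model = models.Model(inputs, outputs)
        model.compile(optimizer="adam", loss="mse")
    return model

# Early stopping as described: interrupt training when the validation loss
# shows no decrease for 30 epochs.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=30)
```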
Results ======= Scalar quality criterion (Q-factor) ----------------------------------- As a quality criterion of particle identification, the selection quality factor $Q$ was estimated. This factor indicates the improvement in the significance of the statistical hypothesis that the events do not belong to the background, in comparison with the significance before selection. For a Poisson distribution (that is, for a large number of events), the selection quality factor is: $$Q=\epsilon_{nuclei}/\sqrt{\epsilon_{bckgr}},$$ where $\epsilon_{nuclei}$ and $\epsilon_{bckgr}$ are the relative numbers of selected events and of background events after selection. For our task we consider protons as the background above which gamma-rays are selected. Quality factor (1) values obtained by the best CNN configuration among all the trained networks are assembled in Table \[tab1\] together with the quality factor for the reference technique (simplest Hillas analysis [@17]). The CNN was accelerated on a graphics processing unit (GPU), which led to a significantly faster calculation. Its implementation was approximately 6 times faster than an equivalent implementation on CPU and revealed no quality loss (the last column of Table \[tab1\]).

  Image cleaning    Reference    CNN(PyTorch)    CNN(TensorFlow)    CNN(TensorFlow GPU)
  ----------------  -----------  --------------  -----------------  ---------------------
  No                             1.76            1.74               1.48
  Yes               1.70         2.55            2.91               2.86

  : Q-factor for gamma/proton identification.[]{data-label="tab1"}

Q-factor variability -------------------- Comparison of different CNN versions for both software packages is illustrated in Figure \[fig1\]. PyTorch had more stable results in a wide range of CNN output parameter values. However, significant improvement was obtained with the TensorFlow CNN version trained on a modified training sample, which contained both original simulated images and additional ones obtained by rotating images from the initial sample by the symmetry angles of the hexagonal structure. Thus the modified training sample consisted of $\sim$2$\times10^5$ events instead of $\sim$3$\times10^4$. Therefore, the performance of the two software packages was approximately the same, indicating that the training sample size was crucial for the identification quality. ![Quality factor vs CNN output parameter (a scalar parameter between 0 and 1 characterizing image similarity to gamma-ray or proton).[]{data-label="fig1"}](QCNNvsProb_GPU5crop.png){width="80.00000%"} ![Predicted energy vs true energy: top panel TensorFlow, bottom panel PyTorch (dashed lines are the ‘ideal case’ y=x).[]{data-label="reg_im"}](pred_en_on_true_en "fig:"){width=".88\textwidth"} ![Predicted energy vs true energy: top panel TensorFlow, bottom panel PyTorch (dashed lines are the ‘ideal case’ y=x).[]{data-label="reg_im"}](pred_en_on_true_en_PTfont11 "fig:"){width=".79\textwidth"} ![Absolute energy error distributions.[]{data-label="figdeltaE"}](delta_E_hist_TF_PTcrop){width="80.00000%"} ![Relative energy error vs angular distance of the image: TensorFlow (‘TF’, left), PyTorch (‘PT’, right).[]{data-label="figE"}](dE_vs_Rc_CNNTFfont19crop.png "fig:"){width="14pc"} ![Relative energy error vs angular distance of the image: TensorFlow (‘TF’, left), PyTorch (‘PT’, right).[]{data-label="figE"}](dE_vs_Rc_CNNPTfont19crop.png "fig:"){width="14pc"} Energy error ------------ The predicted energy RMSE is 2.56 TeV for the training sample, 3.31 TeV for the validation dataset, and 4.76 TeV for the TensorFlow test sample (7.82 TeV for the PyTorch test sample). 
The dependence of the predicted energy on the primary energy is shown in Figure \[reg\_im\]. Absolute error distribution is presented in Figure \[figdeltaE\]. The accuracy of the energy estimate depends on the image distance from the camera center, which corresponds to the distance of the gamma-ray induced shower to the telescope. The energy error is presented in Figure \[figE\] for various angular distances from the image centre of gravity to the centre of the camera’s field of view. This angular distance is strongly correlated with the distance from the gamma-ray-induced shower to the telescope, but is measurable in experiment unlike the unknown distance to the telescope. The angular distance of $\sim$$1.5^o$ corresponds roughly to $\sim$100–150 m [@18]. Though for the nearest gamma-rays there is no improvement over the simplest conventional technique of a linear proportionality to the image size (section \[secImplem\]), for the distances above $1^o$ CNN gives significantly better results, and especially does the CNN predicting the ratio of the energy to the image size instead of predicting the energy itself. However, it is not the optimal way to incorporate the image size information in the CNN, and therefore the energy estimation still contains some potential to further improve accuracy. Conclusion ========== Convolutional neural networks were implemented to solve two important tasks of data analysis in gamma-ray astronomy: cosmic ray background suppression and gamma-ray energy estimation. Background rejection quality strongly depends on the learning sample size but in any case is substantially higher than for conventional techniques. The energy estimation achieves significantly better accuracy than conventional approach for gamma-rays incident in the area outside of a narrow circle around the telescope ($\sim$100–150 m on the ground or $\sim$1–1.5$^o$ on the camera plane). Because of the wide acceptance of the TAIGA-IACT camera, this technique is capable of measuring energy of most gamma-rays detected by the installation. We also note that there is still considerable potential to further improve the results by taking into account the hexagonal pixel shape and increasing training sample size by one order of magnitude, which is a challenge for the immediate future. [8]{} Weekes, T.C. et al. \[Whipple collaboration\]: Observation of TeV gamma rays from the Crab Nebula using the atmospheric Cerenkov imaging technique. Astroph. J. **342**, 379 (1989) Lorenz, E., Wagner, R.: Very-high energy gamma-ray astronomy. EPJ H **37**, 459 (2012) Bychkov, I. et al.: Russian–German Astroparticle Data Life Cycle Initiative. Data **3**(4), 56 (2018) Astroparticle.online Homepage, url[https://astroparticle.online]{}. Last accessed 15 May 2019 TAIGA Homepage, url[http://taiga-experiment.info]{}. Last accessed 15 May 2019 KASCADE Homepage, url[http://web.ikp.kit.edu/KASCADE]{}. Last accessed 15 May 2019 Postnikov, E.B., et al.: Hybrid method for identifying mass groups of primary cosmic rays in the joint operation of IACTs and wide angle Cherenkov timing arrays. J. Phys.: Conf. Series **798**, 012030 (2017) Nieto, D. et al. for the CTA Consortium: Exploring deep learning as an event classification method for the Cherenkov Telescope Array. Proceedings of Science **301**, PoS(ICRC2017)809 (2017) Shilon, I. et al.: Application of deep learning methods to analysis of imaging atmospheric Cherenkov telescopes data. Astroparticle Physics **105**, 44–53 (2019) Feng, Q., Jarvis, J. 
on behalf of the VERITAS Collaboration: A citizen-science approach to muon events in imaging atmospheric Cherenkov telescope data: the Muon Hunter. Proceedings of Science **301**, PoS(ICRC2017)826 (2017) Hillas, A.M.: Cerenkov light images of EAS produced by primary gamma rays and by nuclei. In: Proc. 19th Int. Cosmic Ray Conf., La Jolla, 1985, p. 445. NASA, Washington, D.C. (1985) PyTorch Homepage, url[http://pytorch.org]{}. Last accessed 15 May 2019 TensorFlow Homepage, url[http://www.tensorflow.org]{}. Last accessed 15 May 2019 Heck, D. et al.: CORSIKA: A Monte Carlo Code to Simulate Extensive Air Showers. Report FZKA 6019. Forschungszentrum Karlsruhe (1998) Hofmann, W. et al.: Improved energy resolution for VHE gamma ray astronomy with systems of Cherenkov telescopes. Astroparticle Physics **12**, 207–216 (2000) Hanley, J.A., McNeil, B.J.: The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology **143**, 29–36 (1982) Postnikov, E.B., et al.: Gamma/Hadron Separation in Imaging Air Cherenkov Telescopes Using Deep Learning Libraries TensorFlow and PyTorch. J. Phys.: Conf. Series **1181**, 012048 (2019) Dhar, V.K. et al.: ANN-based energy reconstruction procedure for TACTIC $\gamma$-ray telescope and its comparison with other conventional methods. Nucl. Instrum. Meth. A **606** 795–805 (2009) [^1]: Supported by the Russian Science Foundation, project 18-41-06003.
--- abstract: 'We estimate the main-sequence age and heavy-element abundance of the Sun by means of an asteroseismic calibration of theoretical solar models using only low-degree acoustic modes from the BiSON. The method can therefore be applied also to other solar-type stars, such as those observed by the NASA satellite Kepler and the planned ground-based Danish SONG network. The age, 4.60$\pm$0.04 Gy, obtained with this new seismic method, is similar to, today’s commonly adopted values, and the surface heavy-element abundance by mass, $Z_{\rm s}$=0.0142$\pm0.0005$, lies between the values quoted recently by Asplund et al. (2009) and by Caffau et al. (2009). We stress that our best-fitting model is not a seismic model, but a theoretically evolved model of the Sun constructed with ‘standard’ physics and calibrated against helioseismic data.' date: 'Accepted 2011 August 2. Received 2011 August 2; in original form 2011 April 29' title: 'On the seismic age and heavy-element abundance of the Sun' --- \[firstpage\] stars: abundances – stars: interiors – stars: oscillations – Sun: abundances – Sun: fundamental parameters – Sun: interiors – Sun: oscillations. INTRODUCTION {#sec:intro} ============ The only way by which the age of the Sun can be estimated directly to a useful degree of precision is by accepting the basic tenets of solar-evolution theory and measuring those aspects of the structure of the Sun that are predicted by the theory to be indicators of age. We recognize also that there is not a precise moment that one can uniquely define to be the time at which the Sun arrived on the main sequence. However, after initial transients, the central hydrogen abundance $X_{\rm c}$ declined almost linearly with time [e.g. @dog95], so one can extrapolate $X_{\rm c}(t)$ backwards quite well to the time when $X_{\rm c}=X_0$, the initial hydrogen abundance. That is the time that we adopt as our fiducial origin. The solar structure measurements must be carried out seismologically, and one is likely to expect greatest reliability of the results when all the available pertinent helioseismic data are employed. Of these, the most pertinent are the frequencies of the modes of lowest degree, because it is they that penetrate the most deeply into the energy-generating core where the helium-abundance variation records the integrated history of nuclear transmutation. Moreover, it is also only they that can be measured in other stars. Therefore, there has been some interest in calibrating theoretical stellar models using only low-degree modes – here we use modes of degrees $l$=0, 1, 2 and 3. The prospect was first discussed in detail by @jcd84 [@jcd88], [@ulrich86] and [@dog87], although prior to that it had already been pointed out that the helioseismic frequency data that were available at the time indicated that either the initial helium abundance $Y_0$, or the age $t_\odot$, or both, are somewhat greater than the generally accepted values [@dog83; see also @gk90]. 
Subsequent, more careful, calibrations were discussed by [@guenther89], [@gn90], [@guenther-demarque97], [@weiss-schlattl98], [@wd99], [@dog01], @bsp02 and @dbc11; most of them did not address the influence of uncertainties in chemical composition on the determination of $t_\odot$; for example, [@weiss-schlattl98] adopted in their calibration the helioseismically determined values for the helium abundance in the convection zone. As a main-sequence star ages, helium is produced in the core, increasing the mean molecular mass $\mu$ preferentially near the centre, and thereby inducing a local positive gradient of the sound speed. The resulting functional form of the sound speed $c(r)$ depends not only on age $t_\odot$ but also on the relative augmentation of $\mu(r)$, which itself depends on the initial absolute value of $\mu$, and hence on $Y_0$ and, to a lesser degree, $Z_0$. [@dog01] tried to separate these two dependencies using the degree dependence of the small separation $d_{n,l}=3(2l+3)^{-1}(\nu_{n,l}-\nu_{n-1,l+2})$ between cyclic multiplet frequencies $\nu_{n,l}$, where $n$ is order and $l$ is degree. This is possible, in principle, because modes of different degree and similar frequency sample the core differently. However, the difference between the effects of $t_\odot$ and $Y_0$ on the functional form of $c(r)$ in the core is not very great, and consequently the error in the calibration produced by errors in the observed frequency data is uncomfortably high. This lack of sensitivity can be overcome by using, in addition to core-sensitive seismic signatures, the relatively small oscillatory component of the eigenfrequencies induced by the sound-speed glitch associated with helium ionization [@dog02], whose amplitude is close to being proportional to helium abundance $Y$ [@hg07b]. The neglect of that component in the previously employed asymptotic signature had not only omitted an important diagnostic of $Y$, but had imprinted an oscillatory contamination in the calibration as the limits $(k_1, k_2)$, where $k=n+\frac{1}{2}l$, of the adopted mode range were varied [@dog01]. It therefore behoves us to decontaminate the core signature from glitch contributions produced in the outer layers of the star (from both helium ionization and the abrupt variation at the base of the convection zone, and also from hydrogen ionization and the superadiabatic convective boundary layer immediately beneath the photosphere). To this end a helioseismic glitch signature has been developed by [@hg07b], from which the glitch contributions $\delta\nu_{n,l}$ to the frequencies can be computed and subtracted from the raw frequencies $\nu_{n,l}$ to produce effective glitch-free frequencies $\nu_{{\rm s}n,l}$ to which a glitch-free asymptotic formula – equation (\[e:asymp\]) – can be fitted. The solar calibration is then accomplished as previously [@dog01] by fitting theoretical seismic signatures to the observations by Newton-Raphson iteration, using a carefully computed grid of calibrated models to compute derivatives with respect to the calibration parameters. The result of the first preliminary calibration by this method, using BiSON data, has been reported by [@hg07a]. Here we enlarge on our discussion of the analysis, taking a more consistent account of the surface layers of the star, augmenting the number of diagnostic frequency combinations used in the calibration, and adding a second starting reference solar model to demonstrate the insensitivity of the iterated solution to starting conditions. 
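As a concrete illustration of the small separation $d_{n,l}$ defined above, the following minimal sketch (not part of the original analysis) evaluates it from a table of low-degree mode frequencies; the frequency values used are hypothetical placeholders.

```python
# Minimal illustration (not from the original analysis) of the small separation
# d_{n,l} = 3 (2l+3)^{-1} (nu_{n,l} - nu_{n-1,l+2}) between cyclic multiplet frequencies.
def small_separation(freqs, n, l):
    """freqs: dict mapping (n, l) to a cyclic frequency (e.g. in microHz)."""
    return 3.0 / (2 * l + 3) * (freqs[(n, l)] - freqs[(n - 1, l + 2)])

# Hypothetical frequency table, for illustration only.
freqs = {(21, 0): 3033.8, (20, 2): 3024.7,
         (21, 1): 3098.2, (20, 3): 3082.3}

print(small_separation(freqs, 21, 0))   # uses nu_{21,0} - nu_{20,2}
print(small_separation(freqs, 21, 1))   # uses nu_{21,1} - nu_{20,3}
```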
![ Functional forms $f_X$ of the integrands $\phi_X$ in $X=\int_0^R \phi_X {\rm d}r$, where $X =$ $A$, $C$, or $F$, plotted for Model S of @jcd96 over the inner half of the interval $(0,R)$ of $r$. The parameters $A$, $C$ and $F$ are sensitive particularly to the structure of the core, being progressively more centrally concentrated. []{data-label="f:integrals"}](fig3.eps){width="1.00\linewidth"} The calibration procedure {#sec:calibproc} ========================= Introductory remarks {#sec:introremarks} -------------------- Naively fitting eigenfrequencies of parametrized solar models to observed solar oscillation frequencies is temptingly straightforward, and was one of the earliest procedures to be adopted in the present context [@cdg81]. However, it is unwise to adopt so crude a strategy because the raw frequencies are affected by properties of the Sun that are not directly pertinent to the particular investigation in hand, as was quickly realized at the time [e.g. @dog83; @cdg84]. An example is the effect of the near-surface layers, unwanted here, yet a serious contaminant because the region is one of low sound speed. It is more prudent to design seismic diagnostics that are sensitive only to salient properties of the structure. This we accomplish by noticing the roles of various structural features in asymptotic analysis, and relating functionals arising in that analysis to corresponding combinations (not necessarily linear) of oscillation frequencies. It is these combinations that are then used for the calibration. We emphasize that the calibration is carried out by processing numerically computed eigenfrequency diagnostics in precisely the same manner as the observed frequencies. After the diagnostics have been designed, asymptotics play no further role. The precision of the calibration itself is independent of the accuracy of the asymptotic analysis; it is only the accuracy of the conclusions drawn from these calibrations that is so reliant, for those conclusions depend in part on the degree to which the diagnostic quantities (in our present study, age and heavy-element abundance) are divorced from extraneous influences. Diagnosis of the smoothed structure {#sec:diagnosis} ----------------------------------- The principal age-sensitive diagnostics are contained in the asymptotic expression $$\begin{aligned} \nu_{{\rm s}{\boldsymbol i}}\!&\sim&\!(n+{\textstyle\frac{1}{2}}\,L+\varepsilon)\nu_0 -\frac{AL^2\!\!-\!B}{\nu_{{\rm s}{\boldsymbol i}}}\,\nu^2_0 -\frac{CL^4\!\!-\!DL^2\!+\!E}{\nu_{{\rm s}{\boldsymbol i}}^3}\,\nu^4_0\cr &&-\frac{FL^6\!\!-\!GL^4\!+\!HL^2\!-\!I}{\nu_{{\rm s}{\boldsymbol i}}^5}\,\nu^6_0 =:S_{\boldsymbol i}\,, \label{e:asymp}\end{aligned}$$ in which ${\boldsymbol i}=(n,l)$ labels the mode, $L=l+1/2$, and the coefficients $\xi_\beta:=(\nu_0, \varepsilon, A, B, ..., I)$, $\beta=1,...,11$, are functionals of the solar structure alone, independent of ${\boldsymbol i}$. This formula can be obtained by expanding in inverse powers of frequency the coupled pair of second-order differential equations governing the linearized adiabatic oscillations of a spherically symmetric star, as did @tass80, and at each order solving the resulting equation-pairs successively in JWKB [@g07] approximation. 
Alternatively, perhaps more conveniently, but maybe less accurately, one can adopt an approximate second-order equation. The formula (\[e:asymp\]) approximates the actual (adiabatic) eigenfrequencies, for finite $n$, only if the scale $H$ of variation of the background equilibrium state is everywhere much greater than the inverse vertical wavenumber of the oscillation mode. That is accomplished by regarding the solar model, $\cal M$, as having been replaced by a smooth model, ${\cal M}_{\rm s}$, from which the acoustic glitches have been removed. We denote its frequencies by $\nu_{{\rm s}{\boldsymbol i}}$. The coefficients in expression (\[e:asymp\]) that are most sensitive to the stratification of the core are those multiplying the highest powers of $L$ at each order in $\nu_0/\nu_{{\rm s}{\boldsymbol i}}$, namely $A,\ C,$ and $F$. (The $L$-dependent part of the leading term is also sensitive to the core, but merely to indicate, in the spherical environment, that there is no seismically detectable physical singularity at the centre of the star; there is, of course, a coordinate singularity in spherical polar coordinates.) The next terms in core sensitivity are $D$ and $G$, and then $H$. These are also sensitive to the structure of the envelope, so we ignore them in the calibration. Below the near-surface layers of a spherically symmetrical star the integrands for $A,\ C$ and $F$ (which here we denote by the parameter $\alpha=1, 2, 3$ respectively) are given approximately by expression (\[e:integrands\]) [@g11], where $r$ is a radial co-ordinate; they are plotted in Fig. \[f:integrals\]. Notice that the higher the order in the expansion, the more concentrated near the centre of the star is the integrand of the most sensitive functional. The integrands depend on progressively higher derivatives of the sound speed. Moreover, their evaluation is more susceptible to frequency errors. Granted that we use frequencies of modes of only four different degrees, $l$=0, 1, 2 and 3, we cannot even in principle determine coefficients arising in terms of higher order than those presented in the truncated expansion (\[e:asymp\]). One can see from expression (\[e:integrands\]) for the integrands of the coefficients $A$, $C$, and $F$ that they depend also on $\nu_0$, which is sensitive to the outer layers of the star where the sound speed is low. We remove that sensitivity by eliminating $\nu_0$ from expression (\[e:integrands\]), and using instead for our diagnostics the parameters $\hat A=\nu_0A,\ \hat C=\nu_0^3C$ and $\hat F=\nu_0^5 F$. Glitch contributions {#sec:glitchcont} -------------------- The abrupt variation in the stratification of a star (relative to the scale of the inverse radial wavenumber of a seismic mode of oscillation), associated with the depression in the first adiabatic exponent $\gamma_1=(\partial {\ln p}/\partial{\ln\rho})_s$ (where $p$, $\rho$ and $s$ are pressure, density and specific entropy) caused by helium ionization, imparts a glitch in the sound speed $c(r)$, which induces an oscillatory component in the spacing of the eigenfrequencies of low-degree seismic modes [@g90a]. The amplitude of the oscillations is an increasing function of the helium abundance $Y$, and, for a given adiabatic ‘constant’ $p/\rho^{\gamma_1}$, is very nearly proportional to it [@hg07b]. It is therefore a good diagnostic of $Y$. 
To determine the amplitude we construct a deviant $$\delta\nu_{\boldsymbol i}:=\nu_{\boldsymbol i} - \nu_{{\rm s}{\boldsymbol i}}\, \label{e:nudiff}$$ from the frequency $\nu_{{\rm s}{\boldsymbol i}}$ of a similar smoothly stratified star, presuming that $\nu_{{\rm s}{\boldsymbol i}}$ is described approximately by equation (\[e:asymp\]). ![ The symbols in the [**upper panels**]{} denote second differences $\Delta_{2{\boldsymbol i}}\nu:=\nu_{n-1,l}-2\nu_{n,l}+\nu_{n+1,l}$ of low-degree modes obtained from the BiSON (Basu et al. 2007). The solid curve is a fit of the seismic diagnostic (equation\[e:all\_glitches\]) to the data by appropriately weighted least squares. The dashed curve is the smooth contribution, including a third-order polynomial in $\nu^{-1}_{\boldsymbol i}$ to represent the upper-glitch contribution from near-surface effects The [**lower panel**]{} displays the remaining individual oscillatory contributions (with zero means) from the acoustic glitches to $\Delta_{2{\boldsymbol i}}\nu$: the dotted and solid curves are the contributions from the first and second stages of helium ionization, and the dot-dashed curve is the contribution from the acoustic glitch at the base of the convective envelope.[]{data-label="f:BiSON"}](fig1.eps){width="0.9\linewidth"} A convenient and easily executed procedure for estimating the amplitude of the oscillatory component is via the second multiplet-frequency difference with respect to order $n$ amongst modes of like degree $l$: $$\Delta_2\nu_{\boldsymbol i}:=\nu_{n-1,l}-2\nu_{n,l}+\nu_{n+1,l}\,. \label{e:secdiff}$$ Taking such a difference suppresses smoothly varying (with respect to $n$) components. The oscillatory component in $\Delta_2\nu$, produced by an acoustic glitch, has a ‘cyclic frequency’ approximately equal to twice the acoustic depth $$\tau=\int_{r}^R c^{-1}\,{\rm d}r$$ of the glitch. The amplitude depends on the amplitude $\Gamma$ and radial extent $\Delta$ of the glitch, and decays with $\nu$ once the inverse radial wavenumber of the mode becomes comparable with or less than $\Delta$. The effects on the frequencies of a solar model $\cal M$ of a specific glitch perturbation $\delta\gamma_1$ can most readily be estimated from a variational principle in the form $\nu={\cal K}/{\cal I}$, as have @g90a, and @mt05. @hg07b have found that a good approximation to the outcome is $$\delta_\gamma\nu=\frac{\delta_\gamma{\cal K}}{8\pi^2{\cal I}\nu}\,, \label{e:vp}$$ where $${\cal I}:=\int\rho\;\bxi\!\cdot\!\bxi\;r^2{\,\rm d}r \label{e:inertia}$$ is the mode inertia and $$\delta_\gamma{\cal K}\simeq \int\delta\gamma_1\;p({\rm div}\bxi)^2r^2\,{\rm d}r\,. \label{e:K1}$$ The function $\bxi$ is the displacement eigenfunction associated with either $\cal M$ or a corresponding smooth model; here we implicitly use ${\cal M}_{\rm s}$. Several terms in equations (\[e:vp\]) and (\[e:K1\]) are missing from the exactly perturbed equation; these are relatively small, and in any case to a substantial degree they cancel. The next step of the estimation is to select a convenient representation for $\delta\gamma_1$. Several formulae have been suggested and used, by e.g., @mt98 [@mt05], @b04, @bm04, and @dog02, not all of which are derived directly from explicit acoustic glitches representing helium ionization [e.g. @b97]. Gough used a single Gaussian function; in contrast, Monteiro & Thompson assumed a triangular form; Basu et al. 
adopted a simple discontinuity. The artificial discontinuities in the sound speed and its derivatives that the latter two possess cause the amplitude of the oscillatory signal to decay with frequency too gradually, although that deficiency may not be immediately noticeable within the limited frequency range in which adequate asteroseismic data are or will imminently be available. Subsequently, another Gaussian function was added to take account of the first stage of helium ionization, its location, $\tau_{\rm I}$, amplitude factor, $\Gamma_{\rm I}$, and width, $\Delta_{\rm I}$, being related to those of the second stage according to a standard solar model; this brought considerable improvement. Accordingly, we adopt that procedure here, and set $$\frac{\delta\gamma_1}{\gamma_1}=-\frac{1}{\sqrt{2\pi}}\sum_{i=1}^2 \frac{\Gamma_i}{\Delta_i} {\rm e}^{-(\tau-\tau_i)^2/2{\Delta^2_i}}\,, \label{eq:dgog}$$ summing over the two stages $i$ (=$\,$I and II) of ionization. We set $\Gamma_{\rm I}\Delta_{\rm I}/\Gamma_{\rm II}\Delta_{\rm II} = \tilde\beta$, $\tau_{\rm I}/\tau_{\rm II}=\tilde\eta$, and $\Delta_{\rm I}/\Delta_{\rm II}=\tilde\mu$. We have found that $\tilde\beta, \tilde\eta$ and $\tilde\mu$ hardly vary as $Y_0$ and $t_\odot$ are varied in calibrated solar models, and we set them to the constant values 0.45, 0.70, and 0.90 respectively, which gives the best fit [@hg07b]. The quantities $\tau_{\rm II}, \Gamma_{\rm II}$ and $\Delta_{\rm II}$, or equivalently $\tau_{\rm I }, \Gamma_{\rm I }$ and $\Delta_{\rm I }$, are adjustable parameters of the calibration. ![ The symbols denote contributions $\delta\nu_{\boldsymbol i}$ to the frequencies $\nu_{\boldsymbol i}$ produced by the acoustic glitches of the Sun [see also @hg09b].[]{data-label="f:all_glitches"}](fig0.eps){width="1.0\linewidth"} Following @hg07b we estimate the components of the displacement eigenfunctions of ${\cal M}_{\rm s}$, and their divergence, in separated form as products of spherical harmonics and functions of radius $r$, using the (hybrid) JWKB asymptotic approximation [e.g. @g07] for high order $n$: $$\xi\simeq\left(\frac{K}{r^2\rho}\right)^{1/2}\cos\psi\,,\quad {\rm div}\bxi\simeq \left(\frac{\pi\omega^3|x|}{\gamma_1 pcr^2K}\right)^{1/2}{\rm Ai}(-x)\,, \label{e:divxi}$$ where $\xi(r)$ is the $r$-dependent factor in the vertical component of $\bxi$, $K$ is the vertical wavenumber, and $\omega=2\pi\nu$ is the angular frequency of oscillation; the argument $x$ of the Airy function Ai is given by $$x:={\rm sgn}(\psi)\big\vert\frac{3}{2}\psi{\big\vert}^{2/3}$$ in terms of the phase $\psi(\tau)=\int K\,{\rm d}r$, which we approximate using a plane-parallel polytropic envelope of index $m$: $$\mbox{$\psi(\tau)\simeq$} \left\{ \begin{array}{lll} \mbox{$\kappa\omega\tilde\tau-(m+1)\cos^{-1}\left(\frac{m+1}{\omega\tilde\tau}\right)$}& \mbox{for $\tilde\tau>\tau_{\rm t}$}\,,\\ \\ \mbox{$|\kappa|\omega\tilde\tau-(m+1)\ln\left(\frac{m+1}{\omega\tilde\tau}+|\kappa|\right)$}& \mbox{for $\tilde\tau\le\tau_{\rm t}$}\,, \end{array} \right. \label{eq:phase}$$ in which $\tilde\tau=\tau+\omega^{-1}\epsilon$, with $\epsilon$ being a phase constant, and $\tau_{\rm t}$ is the associated acoustical depth of the upper turning point, at which the wavenumber $K$ vanishes. The function $$\kappa(\tau)=\left[1-\left(\frac{m+1}{\omega\tilde\tau}\right)^2\right]^{1/2}\,$$ results from approximating $K$ as $c^{-1}(\omega^2-\omega^2_{\rm c})^{1/2}$ in which the acoustic cutoff frequency $\omega_{\rm c}$ is approximated by $(m+1)/\tilde\tau$. 
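Before proceeding, the two-Gaussian representation of $\delta\gamma_1/\gamma_1$ in equation (\[eq:dgog\]), with the fixed ratios quoted above, can be illustrated numerically by the following minimal sketch; it is not the calibration code, and the values chosen for $\tau_{\rm II}$, $\Gamma_{\rm II}$ and $\Delta_{\rm II}$ are illustrative placeholders rather than fitted parameters.

```python
# Minimal numerical sketch of the two-Gaussian model of equation (eq:dgog).
# The ratios 0.45, 0.70, 0.90 are those quoted in the text; tau_II, Gamma_II
# and Delta_II below are illustrative placeholders, not fitted values.
import numpy as np

BETA, ETA, MU = 0.45, 0.70, 0.90   # Gamma_I*Delta_I/(Gamma_II*Delta_II), tau_I/tau_II, Delta_I/Delta_II

def dgamma_over_gamma(tau, tau2, gamma2, delta2):
    # He I parameters follow from the He II parameters via the fixed ratios
    tau1   = ETA * tau2
    delta1 = MU * delta2
    gamma1 = BETA * gamma2 * delta2 / delta1
    total = 0.0
    for t_i, g_i, d_i in ((tau1, gamma1, delta1), (tau2, gamma2, delta2)):
        total += g_i / d_i * np.exp(-(tau - t_i) ** 2 / (2.0 * d_i ** 2))
    return -total / np.sqrt(2.0 * np.pi)

tau = np.linspace(0.0, 1500.0, 3001)                 # acoustic depth (s), illustrative range
profile = dgamma_over_gamma(tau, tau2=700.0,         # placeholder He II glitch parameters
                            gamma2=0.05, delta2=50.0)
```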
The Airy function must be adopted in the expression (\[e:divxi\]) for div$\,\bxi$, which appears in the integral (\[e:K1\]) for $\delta_\gamma{\cal K}$, because the upper turning point of the highest-frequency modes is within the He$\,$I ionization zone where $\delta\gamma_1$ is nonzero. It is adequate to use the sinusoidal (JWKB) expression for both $\xi$ and the horizontal component of the displacement $\bxi$ – which is determined as a horizontal derivative in div$\,\bxi$ – in computing the inertia, given by equation (\[e:inertia\]), because almost all of the integral comes from regions far from the turning points. It is approximated by ${\cal I}\simeq\frac{1}{2}T\omega-\frac{1}{4}(m+1)\pi$ [@hg07b], where $T=\tau(0)$ is the acoustic radius of the star. The phase factor $\epsilon$ was introduced to take some account of the variation with $\omega$ of the location of the upper turning point. Inserting these expressions into equations (\[e:vp\])–(\[e:K1\]) yields the helium-glitch frequency component: $$\begin{aligned} \delta_\gamma\nu&=&-\sqrt{2\pi}A_{\rm II}\Delta^{-1}_{\rm II} \left[\nu+\textstyle\frac{1}{2}(m+1)\nu_0\right]\cr &&\hspace{-8pt} \times\Bigl[\tilde\mu\tilde\beta\int_0^T\kappa^{-1}_{\rm I} {\rm e}^{-(\tau-\tilde\eta\tau_{\rm II})^2/2\tilde\mu^2\Delta^2_{\rm II}}|x|^{\frac{1}{2}} |{\rm Ai}(-x)|^2\,{\rm d}\tau\cr &&\;\;+\int_0^T\kappa^{-1}_{\rm II} {\rm e}^{-(\tau-\tau_{\rm II})^2/2\Delta^2_{\rm II}}|x|^{\frac{1}{2}} |{\rm Ai}(-x)|^2\,{\rm d}\tau\Bigr]\,, \label{eq:delgamnu}\end{aligned}$$ where $\kappa_i:=\kappa(\tau_i)$, and where we have introduced a frequency amplitude factor $A_{\rm II}=\frac{1}{2}\Gamma_{\rm II}T^{-1}$.

There are three additional components to $\Delta_2\nu_i$ that we must consider. The first is due to the abrupt variation in the vicinity of the base of the convection zone at $\tau_{\rm c}$. We model it with a discontinuity in $\omega^2_{\rm c}$ at $\tau_{\rm c}$ coupled with an exponential relaxation to the smooth model ${\cal M}_{\rm s}$ in the radiative zone beneath, with acoustical scale time $\tau_0 = 80\,{\rm s}$, as did @hg07b. This leads to $$\begin{aligned} \delta_{\rm c}\nu&\simeq&A_{\rm c}\nu_0^3\nu^{-2} \left(1+1/16\pi^2\tau_0^2\nu^2\right)^{-1/2}\cr &\times&\hspace{-3pt}\left\{\cos[2\psi_{\rm c}+\tan^{-1}(4\pi\tau_0\nu)] \!-\!(16\pi^2\tilde{\tau}_{\rm c}^2\nu^2\!+\!1)^{1/2} \right\}\,, \label{eq:delcnu}\end{aligned}$$ where $\psi_{\rm c}:=\psi(\tau_{\rm c})$ and $\tilde\tau_{\rm c}:=\tilde\tau(\tau_{\rm c})$, and $A_{\rm c}$ is proportional to the jump in $\omega^2_{\rm c}$. The other two components, whose sum we denote by $\delta_{\rm u}\nu_i$, contain a part that is generated in the very outer layers of the star – by the ionization of hydrogen, the abrupt stratification of the upper superadiabatic boundary layer of the convection zone, and by nonadiabatic processes and Reynolds-stress perturbations associated with the oscillations, which are difficult to model [e.g. @rosen95; @h10] – and a part that results from the incomplete removal of the smooth component when taking a second difference. The latter, which follows from equation (\[e:asymp\]), can be estimated approximately; its degree-dependent term is much smaller than the rest, and it is adequate here to regard the entire contribution as part of the essentially degree-independent upper (near-surface) glitch term, even though it actually arises in part from refraction in the radiative interior.
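To fix ideas about how the helium-glitch integrals in equation (\[eq:delgamnu\]) behave, the following sketch (our own illustration, not the code used for the calibration, and with purely illustrative parameter values) evaluates the polytropic phase of equation (\[eq:phase\]) above the turning point, the corresponding Airy argument $x$, and the weight $|x|^{1/2}|{\rm Ai}(-x)|^2$ that multiplies the Gaussian glitch profile in the integrand:

```python
import numpy as np
from scipy.special import airy

def airy_weight(tau, omega, m=3.5, eps=0.0):
    """Weight |x|^(1/2) |Ai(-x)|^2 in the integrands of (eq:delgamnu),
    using the first branch of the polytropic phase (eq:phase)."""
    tt = tau + eps / omega                      # tau_tilde
    arg = (m + 1.0) / (omega * tt)              # < 1 above the upper turning point
    psi = np.sqrt(1.0 - arg**2) * omega * tt - (m + 1.0) * np.arccos(arg)
    x = np.sign(psi) * np.abs(1.5 * psi) ** (2.0 / 3.0)
    Ai = airy(-x)[0]                            # scipy returns (Ai, Ai', Bi, Bi')
    return np.abs(x) ** 0.5 * Ai**2

# Toy He II-like glitch: Gaussian of width 50 s centred at acoustic depth 700 s,
# sampled by a ~3 mHz mode; all numbers here are placeholders, not calibrated values.
omega = 2.0 * np.pi * 3.0e-3                    # angular frequency (rad/s)
tau = np.linspace(300.0, 1500.0, 601)           # acoustic-depth grid (s)
profile = np.exp(-(tau - 700.0) ** 2 / (2.0 * 50.0**2))
print(np.trapz(profile * airy_weight(tau, omega), tau))
```

Replacing the toy Gaussian by the two calibrated ionization profiles and scaling by the prefactor in equation (\[eq:delgamnu\]) would give the corresponding frequency contribution.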
We approximate this upper-glitch contribution as a series in inverse powers of $\nu$, truncated at cubic order: $$\Delta_2\delta_{\rm u}\nu_{\boldsymbol i}=\sum_{k=0}^3a_k\nu_{\boldsymbol i}^{-k}\,. \label{e:del2poly}$$ We appreciate that in principle there should be an additional contribution from the stellar atmosphere which, because it is produced far in the upper evanescent region of the mode, is a high power of $\nu$ [@cdg80]. However, for the Sun and Sun-like stars its contribution to the second differences, used for determining $\Gamma_i$, is small, as can be adduced from the work by @kbc08. Its effect on the fitting of the smooth components $\nu_{{\rm s}{\boldsymbol i}}$ is to distort the values of $B$, $E$ and $I$. However, these coefficients are not used for the $t_\odot$ and $Y_0(Z_0)$ calibration. Accordingly, we can safely ignore this surface contribution. @dbc11 have recently illustrated this general point.

The complete second difference $$\Delta_{2{\boldsymbol i}}\nu \simeq\Delta_{2}(\delta_\gamma\nu_{\boldsymbol i}+\delta_{\rm c}\nu_{\boldsymbol i}+\delta_{\rm u}\nu_{\boldsymbol i}) =:g_{\boldsymbol i}(\nu_{\boldsymbol j}; \eta_\alpha) \label{e:all_glitches}$$ was then fitted to the second differences of the solar, or solar-model, frequencies to determine the coefficients $\eta_\alpha$. From the outcome, putative frequency contributions $\delta_{\rm u}\nu_i$ were obtained by summing the second differences (\[e:del2poly\]) to yield $$\begin{aligned} \delta_{{\rm u}}\nu_{\boldsymbol i} &\simeq&\tilde A+\tilde B\nu_i\cr &\!\!+\!\!&{\frac{1}{2}}\left[a_0\nu_i^2+2a_1\nu_i(\ln\nu_{\boldsymbol i}\!-\!1)-2a_2\ln\nu_{\boldsymbol i}+a_3\nu_i^{-1}\right]\cr & &\times(\nu_{n+1,0}-\nu_{n-1,0})\cr &\equiv&\tilde A+\tilde B\nu_{\boldsymbol i}+F_{{\rm u}{\boldsymbol i}}\,. \label{e:delnu_surf}\end{aligned}$$ The initially arbitrary constants of summation $\tilde A$ and $\tilde B$ were selected in such a way as to minimize the $L_2$ norm of $\delta_{\rm u}\nu_{\boldsymbol i}$, namely $\sum_{\boldsymbol i}(\tilde A+\tilde B\nu_{\boldsymbol i}+F_{{\rm u}{\boldsymbol i}})^2$, as did @hg09a.

The fitting of the second differences was accomplished by minimizing $$E_{\rm g}=(\Delta_{2{\boldsymbol i}}\nu-g_{\boldsymbol i}){\rm C}^{-1}_{\Delta {\boldsymbol {ij}}}(\Delta_{2{\boldsymbol j}}\nu-g_{\boldsymbol j}) \label{e:minsecdiff}$$ using the value of $\nu_0$ obtained from the fitting of expression (\[e:asymp\]). (That fitting was accomplished by minimizing the appropriately weighted mean-square difference $E_{\rm s}$ from the smooth frequencies $\nu_{{\rm s}{\boldsymbol i}}$, which are themselves derived from the raw frequencies by subtracting the glitch contribution obtained by minimizing $E_{\rm g}$; the two minimizations were carried out iteratively in tandem.) Here C$^{-1}_{\Delta {\boldsymbol {ij}}}$ is the $({\boldsymbol i},{\boldsymbol j})$ element of the inverse of the covariance matrix C$_\Delta$ of the observational errors in $\Delta_{2{\boldsymbol i}}\nu$, computed, perforce, under the assumption that the errors in the frequency data $\nu_{\boldsymbol i}$ are independent. The resulting covariance matrix C$_{\eta\alpha\gamma}$ of the errors in $\eta_\alpha$ was established by Monte Carlo simulation, using 6000 realizations of Gaussian-distributed errors in the raw data with variance in accord with the published standard errors. In carrying out the simulations we omitted the surface term $\delta_{\rm u}\nu_{\boldsymbol i}$, which has insignificant influence on the statistics.
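As a schematic illustration (ours; the function and variable names are hypothetical, not those of the actual pipeline), the minimization of $E_{\rm g}$ in equation (\[e:minsecdiff\]) is a covariance-weighted nonlinear least-squares fit of the glitch formula to the second differences, which can be written compactly by whitening the residuals with the Cholesky factor of ${\rm C}_\Delta$:

```python
import numpy as np
from scipy.linalg import cholesky
from scipy.optimize import least_squares

def second_differences(nu):
    """Second differences with respect to order n for one degree l, cf. (e:secdiff)."""
    nu = np.asarray(nu, dtype=float)
    return nu[:-2] - 2.0 * nu[1:-1] + nu[2:]

def fit_glitch_model(nu, d2, C_delta, model, eta0):
    """Minimise (d2 - g)^T C^-1 (d2 - g), cf. equation (e:minsecdiff).
    model(nu, eta) plays the role of g_i(nu_j; eta_alpha); eta0 are starting values."""
    L = cholesky(C_delta, lower=True)          # C_delta = L L^T
    def whitened(eta):
        r = d2 - model(nu, eta)
        return np.linalg.solve(L, r)           # so that |.|^2 = r^T C^-1 r
    return least_squares(whitened, eta0).x
```

The tandem iteration described above would alternate calls to such a routine with the corresponding weighted fit of the smooth expression (\[e:asymp\]).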
![ Denotation of the eleven solar models which we have used for calculating the partial derivatives $H_{\alpha\beta}$, calibrated to a present radius ${\rm R}_\odot=6.9599\times10^{10}$ cm and luminosity ${\rm L}_\odot=3.846\times10^{33}$erg$\,$s$^{-1}$. The ‘central model’ is Model 0; the sequence of the five models (0-4) has a constant value of $Z_0=0.02$ but varying age $t_\star$ (4.15, 4.37, 4.60, 4.84, 5.10)$\,$Gy; the second sequence, of seven models (0,5-10), has constant age $t_\star=4.60\,$Gy but varying $Z_0$ (from 0.016 to 0.022 in steps of 0.001). []{data-label="f:model_grid"}](fig2.eps)

The outcome of the fitting to the BiSON data is displayed in Fig.\[f:BiSON\]: the upper panels display the second differences, together with the complete fitted formula (\[e:all\_glitches\]) (solid curve) and its individual smooth contribution $\delta_{\rm u}\nu_{\boldsymbol i}$ (dashed curve); the corresponding oscillatory frequency contributions from the two stages of helium ionization (dotted and solid curves) and from the acoustic glitch at the base of the convective envelope (dot-dashed curve) are illustrated in the lower panel of Fig.\[f:BiSON\]. All the frequencies displayed in the figure have been used in equation (\[e:minsecdiff\]) for fitting expression (\[e:all\_glitches\]). In Fig.\[f:all\_glitches\] is displayed the sum of all acoustic glitch contributions $\delta\nu_{\boldsymbol i}$ to the frequencies.

![ Integrand $-({\rm d}c/{\rm d}x)/x$ of $A$ as a function of radius fraction $x:=r/R$ for the calibrated solar models 5, 0 and 10 with varying $Z_0$ at constant age $t_\odot=4.60\,$Gy. The dashed, solid and dot-dashed curves are the results for models 5 ($Z_0$=0.016), 0 ($Z_0=0.020$) and 10 ($Z_0$=0.022) respectively. []{data-label="f:AZ0"}](fig11.eps){width="1.00\linewidth"}

Calibration for age and chemical composition {#s:calibration}
--------------------------------------------

We subtract the glitch contributions $\delta\nu_{\boldsymbol i}$ from the full frequencies to obtain corresponding glitch-free frequencies $\nu_{{\rm s}{\boldsymbol i}}$. The procedure is carried out for the solar observations, for the eigenfrequencies of the reference solar model, and for the grid of models used for evaluating derivatives of the fitting parameters with respect to $t_\star$ and $Z_0$. Then we iterate the parameters defining the reference model by minimizing $E_{\rm s}:=(\nu_{{\rm s}{\boldsymbol i}}-S_{\boldsymbol i}){\rm C}^{-1}_{{\rm s}{\boldsymbol {ij}}}(\nu_{{\rm s}{\boldsymbol j}}-S_{\boldsymbol j})$, where C$_{\rm s}$ is the covariance matrix of the statistical errors in $\nu_{{\rm s}{\boldsymbol i}}$, which are determined from the independent observational errors in $\nu_{\boldsymbol i}$ and the covariance matrix C$_{\eta_{\alpha\beta}}$, to obtain both the coefficients $\xi_\beta$ and the covariance matrix C$_{\xi\beta\delta}$ of their errors. In this iteration process for $\xi_\beta$, only glitch-free frequencies $\nu_{{\rm s}{\boldsymbol i}}$ with $k=n+\frac{1}{2}l\ge15$ were considered, because the asymptotic expression (\[e:asymp\]) is not sufficiently accurate for lower values of $k$. Each component of $\xi_\beta$ is an integral of a function of the equilibrium stratification. Some of these are displayed in Fig.\[f:integrals\]. The integrals $A, C$ and $F$ are those of particular importance to our analysis, because $C$ and $F$ are dominated by conditions in the core, and, although the contributions to $A$ from the core and the rest of the star are roughly equal in magnitude (and potentially have opposite signs), the contribution from the envelope is relatively insensitive to $t_\star$ [@gn90] and $Z_0$ (Fig.\[f:AZ0\]).
The integrands in the remaining integrals are either more evenly distributed throughout the Sun or are concentrated near the surface. The differences between the smoothed frequencies $\nu_{{\rm s}{\boldsymbol i}}$ and the fitted asymptotic expression $S_{\boldsymbol i}$ given by equation (\[e:asymp\]) are displayed in Fig. \[f:nus\_residuals\] for the BiSON data (left panel) and for the central model m0 (right panel). ![image](fig10.eps){width="0.40\linewidth"} We have carried out age calibrations using various combinations of the parameters $$\zeta_\alpha=(\hat A,\hat C,\hat F,-\delta\gamma_1/\gamma_1),\qquad \alpha=1,...,4\,, \label{eq:xi}$$ where $-\delta\gamma_1/\gamma_1=A_{\rm II}/\sqrt{2\pi}\nu_0\Delta_{\rm II}$ is a measure of the maximum depression in $\gamma_1$ in the second helium ionization zone, and which for convenience we sometimes denote by $\hat\Gamma$. The values of $-\delta\gamma_1/\gamma_1$ and the asymptotic coefficients $A, C, F$ appearing in expression (\[e:asymp\]), , are listed in Table \[t:BiSON\_coefficients\] for the Sun, and are plotted in Fig. \[f:calib-models\] for the eleven calibrated grid models. Presuming, as is normal, that the reference model is parametrically close to the Sun, we first carry out a single iteration by approximating the reference value $\zeta^{\rm r}_\alpha$ by a two-term Taylor expansion about the value $\zeta^\odot_\alpha$ of the Sun: $$\zeta^{\rm r}_\alpha=\zeta^{\odot}_\alpha -\left(\frac{\partial\zeta_\alpha}{\partial t_\star}\right)_{\!\!Z_0}\Delta\,t_\star -\left(\frac{\partial\zeta_\alpha}{\partial Z_0}\right)_{\!\!t_\star}\Delta Z_0 +\epsilon_{\zeta\alpha}\, ,$$ where $\Delta\,t_\star$ and $\Delta Z_0$ are the deviations of the age $t_\odot$ and initial heavy-element abundance $Z_0$ of the Sun from the corresponding values of the reference model; $\epsilon_{\zeta\alpha}$ are the formal errors in the calibration parameters, whose covariance matrix C$_{\zeta\alpha\beta}$ can be derived from C$_{\xi\beta\delta}$ and C$_{\eta\alpha\gamma}$. A (parametrically local) maximum-likelihood fit then leads to the following set of linear equations: $$H_{\alpha j}{\rm C}^{-1}_{\zeta\alpha\beta}H_{\beta k}\Theta_{0k}= H_{\alpha j}{\rm C}^{-1}_{\zeta\alpha\beta}\Delta_{0\beta}\,, \label{eq:calib1}$$ in which $\Theta_k=(\Delta t_\star, \Delta Z)+\epsilon_{\Theta k}= \Theta_{0k}+\epsilon_{\Theta k}$, $k=1,2$, is the solution vector subject to (correlated) errors $\epsilon_{\Theta k}$, and $\Delta_\beta=\zeta^\star_\beta-\zeta^{\rm r}_\beta+\epsilon_{\zeta\beta} =\Delta_{0\beta}+\epsilon_{\zeta\beta}$; the partial derivatives are denoted by $H_{\alpha j}=[(\partial\zeta_\alpha/\partial t_\star)_{Z_0}, (\partial\zeta_\alpha/\partial Z)_{t_\star}]$, $j=1,2$. A similar set of equations is obtained for the formal errors $\epsilon_{\Theta k}$: $$H_{\alpha j}{\rm C}^{-1}_{\zeta\alpha\beta}H_{\beta k}\epsilon_{\Theta k}= H_{\alpha j}{\rm C}^{-1}_{\zeta\alpha\beta}\epsilon_{\zeta\beta}\,, \label{eq:calib2}$$ from whose solution the error covariance matrix C$_{\Theta kq}=\overline{\epsilon_{\Theta k}\epsilon_{\Theta q}}$ can be computed. [ccccc]{} $\nu_0$ ($\mu$Hz)& $A$& $C$& $F$& $-\delta\gamma_1/\gamma_1$\ 136.71&0.3005&1.912&69.83&0.04538\ \[t:BiSON\_coefficients\] The partial derivatives $H_{\alpha j}$ were obtained from the set of eleven calibrated evolutionary models (see Fig.\[f:model\_grid\]) of the Sun that were used in a similar calibration by [@hg07a]. 
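In matrix form, equations (\[eq:calib1\])–(\[eq:calib2\]) are ordinary weighted normal equations; a minimal sketch (ours, not the authors' code) of a single calibration step might read:

```python
import numpy as np

def single_iteration(H, C_zeta, Delta):
    """One maximum-likelihood step, cf. equations (eq:calib1)-(eq:calib2).
    H: matrix of partial derivatives H_{alpha j} (n_zeta x 2);
    C_zeta: error covariance of the calibration parameters zeta_alpha;
    Delta: differences zeta_sun - zeta_reference."""
    Cinv = np.linalg.inv(C_zeta)
    N = H.T @ Cinv @ H                              # 2 x 2 normal matrix
    Theta = np.linalg.solve(N, H.T @ Cinv @ Delta)  # (Delta t_star, Delta Z_0)
    C_Theta = np.linalg.inv(N)                      # error covariance of Theta
    return Theta, C_Theta
```

Here the error covariance of $\Theta$ follows as the inverse of the normal matrix, which is what equation (\[eq:calib2\]) implies when the errors $\epsilon_{\zeta\beta}$ have covariance C$_{\zeta\alpha\beta}$.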
The models were computed with the evolutionary programme by [@jcd08], adopting the Livermore equation of state and the OPAL92 opacities. The set comprises two sequences: one has a constant value of the heavy-element abundance $Z_0=0.020$ but varying age ($t_\star=4.15,...,5.10\,$Gy $\ln t_\star$); the other has constant age $t_\star=4.60\,$Gy but varying $Z_0$ ($Z_0$=0.016,...,0.022 ). Note that, for prescribed relative abundances of heavy elements, the condition that the luminosity and radius of the Sun agree with observation defines a functional relation between $Y_0, Z_0$ and . In Fig.\[f:calib-models\] are plotted the seismic parameters $A, C, F$ and $-\delta\gamma_1/\gamma_1$ of the eleven models, each calibrated to the solar radius and luminosity, for determining the partial derivatives of $A, C, F$ and $-\delta\gamma_1/\gamma_1$ with respect to stellar age $t_\star$ and heavy-element abundance $Z_0$. The values of the partial (logarithmic) derivatives $H_{\alpha j}$ so obtained are listed in Table\[t:hij\]. Notice that within the range of model parameters that we have considered, the derivatives are almost constant. ![image](fig6.eps){width="0.51\linewidth"} $\left(\frac{\partial\ln\nu_0}{\partial\ln t_\star}\right)_{Z_0}$ $\left(\frac{\partial\ln\nu_0}{\partial\ln Z_0}\right)_{t_\star}$ $\left(\frac{\partial\ln A}{\partial\ln t_\star}\right)_{Z_0}$ $\left(\frac{\partial\ln A}{\partial\ln Z_0}\right)_{t_\star}$ $\left(\frac{\partial\ln C}{\partial\ln t_\star}\right)_{Z_0}$ $\left(\frac{\partial\ln C}{\partial\ln Z_0}\right)_{t_\star}$ ------------------------------------------------------------------- ------------------------------------------------------------------- ---------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------- ------------------------------------------------------------------ ------------------------------------------------------------------ -- -- -- -- 0.0220 -0.00997 -0.733 -0.107 1.771 0.231 $\left(\frac{\partial\ln F}{\partial\ln t_\star}\right)_{Z_0}$ $\left(\frac{\partial\ln F}{\partial\ln Z_0}\right)_{t_\star}$ $\left(\frac{\partial\ln(-\delta\gamma_1/\gamma_1)}{\partial\ln t_\star}\right)_{Z_0}$ $\left(\frac{\partial\ln(-\delta\gamma_1/\gamma_1)}{\partial\ln Z_0}\right)_{t_\star}$ $\left(\frac{\partial\ln Y_0}{\partial\ln t_\star}\right)_{Z_0}$ $\left(\frac{\partial\ln Y_0}{\partial\ln Z_0}\right)_{t_\star}$ 1.057 0.539 -0.607 0.163 -0.173 0.334 \[t:hij\] Results {#sec:results} ======= Provided that the reference model is close to the Sun, the single iteration described in the previous section should provide as reliable an estimate of ($\Delta t_\odot,\;\Delta Z_0$) as the calibration is currently able to provide. We therefore discuss at first the results of single iterations. Calibrations were carried out using different combinations of the parameters $\zeta_\alpha$ and two different reference models. They are summarized in Table\[t:calibration\]. The older reference model is the central ‘Model 0’ which has age $t_\star=4.600\,$Gy; the second is ‘Model 2’, which has an age $t_\star=4.370\,$Gy. We adopted the same physics as in Model S [@jcd96] in the evolutionary calculations both models. We notice by comparing rows 4 and 6 with rows 5 and 7 in Table\[t:calibration\] that calibrations without $\delta\gamma_1/\gamma_1$ are less stable to a change in the reference model than are the calibrations including $\delta\gamma_1/\gamma_1$. 
Such calibrations are presumably also less reliable, for the reasons explained in the introduction, although the result may perhaps be simply a symptom of slower convergence. To ascertain whether the entire calibration procedure converges, we have performed several additional iterations. At each iteration the corrections $\Delta t_\star$ and $\Delta Z_0$ are used to define parameters of a new reference model, which is then constructed by performing another evolutionary calculation, followed by the evaluation of a new set of corrections $\Delta t_\star$ and $\Delta Z_0$ as before. We repeated this for five iterations, for each of the two reference models, obtaining the two ‘final’ reference models, listed in Table \[t:results\], for two different combinations of $\zeta_\alpha$, displayed in Fig.\[f:Zprofile\]. The corrections $\Delta t_\star$ and $\Delta Z_0$ are plotted in Fig.\[f:caliter\]. In carrying out the iterations we did not recompute the partial derivatives $H_{\alpha j}$ and the corresponding error covariance matrices. To have done so would have been computationally much more expensive, and would have been unlikely to have speeded up convergence by very much.

[lccccccccc]{} $\zeta_\alpha$& $t_\odot$ (Gy)& $Z_0$& $Y_0$& $t_\odot$ (Gy)& $Z_0$& $Y_0$& C$^{1/2}_{\Theta 11}$& $-(-{\rm C}_{\Theta 12})^{1/2}$& C$^{1/2}_{\Theta 22}$\ $\hat A,\hat C,\hat F,-\delta\gamma_1/\gamma_1$&4.592&0.0156&0.252&4.597&0.0155&0.251&0.039&0.0013&0.0005\ $\hat A,\hat F,-\delta\gamma_1/\gamma_1$ &4.580&0.0157&0.252&4.582&0.0156&0.251&0.045&0.0016&0.0006\ $\hat C,\hat F,-\delta\gamma_1/\gamma_1$ &4.591&0.0157&0.252&4.595&0.0155&0.251&0.044&0.0004&0.0005\ $\hat A,\hat C,-\delta\gamma_1/\gamma_1$ &4.597&0.0160&0.254&4.603&0.0160&0.253&0.045&0.0036&0.0008\ $\hat A,\hat C,\hat F $ &4.619&0.0153&0.252&4.632&0.0151&0.248&0.095&0.0104&0.0013\ $\hat A,\hat C $ &4.638&0.0147&0.246&4.654&0.0143&0.245&1.049&0.1791&0.0306\ $\hat A,-\delta\gamma_1/\gamma_1 $ &4.588&0.0159&0.253&4.592&0.0158&0.253&0.149&0.0222&0.0039\ \[t:calibration\]

![image](fig7.eps){width="0.65\linewidth"}

Error contours corresponding to the calibration from Model 0 in the first row of Table\[t:results\] are plotted in Fig.\[f:errellipse\]. Corresponding contours for Model 2 are the same, except that their centres are displaced to (4.603Gy, 0.0155). One can adduce from our description of the analysis in §\[s:calibration\] that our current treatment of the errors is not completely unbiassed, because, aside from $\nu_0$, we assess the error covariances of the parameters defining the smooth and the glitch components independently; however, the potential bias is of the order of only $|\delta\nu_i/\nu_i|$ or less, which is small. Fig.\[f:Zprofile\] depicts the heavy-element profiles after five iterations from the two reference models. Both models have a surface value $Z_{\rm s}=0.0142\pm0.0005$, which is about 6% higher than the value of $Z_{\rm s}=$0.0134 reported by @asp09 and about 9% smaller than the value of $Z_{\rm s}=0.0156\pm0.0011$ reported by @caf09. The error bars of Caffau’s $Z_{\rm s}$ value, obtained from numerical simulations, are indicated by the shaded region. The calibrated age inferred from Model 0 after five iterations is 4.604$\pm0.039\,$Gy, and that from Model 2 is 4.603$\pm0.039\,$Gy, using the parameter combination $\hat A, \hat C, \hat F$ and $-\delta\gamma_1/\gamma_1$. The corresponding calibrations from Models 0 and 2 for the combination $\hat C, \hat F$ and $-\delta\gamma_1/\gamma_1$ are $4.602\pm0.044\,$Gy and $4.601\pm0.044\,$Gy, respectively.
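Schematically, the iterative recalibration just described can be summarized as follows; `evolve_reference_model` and `seismic_parameters` are hypothetical placeholders for the evolutionary calculation and the frequency fitting of the preceding sections, and `single_iteration` is the linear step sketched after equations (\[eq:calib1\])–(\[eq:calib2\]):

```python
# Schematic outer loop (not the actual pipeline): rebuild the reference model
# from the corrected (t_star, Z_0) and repeat the linear calibration step.
t_star, Z0 = 4.60, 0.020                    # starting reference parameters (Model 0)
for _ in range(5):                          # five iterations, as in the text
    reference = evolve_reference_model(t_star, Z0)      # hypothetical helper
    zeta_ref = seismic_parameters(reference)            # hypothetical helper
    Theta, C_Theta = single_iteration(H, C_zeta, zeta_sun - zeta_ref)
    t_star += Theta[0]                      # Delta t_star
    Z0 += Theta[1]                          # Delta Z_0
```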
Table\[t:results\] summarizes the calibrations after five iterations from the two reference models.

![ []{data-label="f:diff_residuals"}](fig12.eps){width="1.00\linewidth"}

Discussion {#sec:discussion}
==========

In attempting to estimate the main-sequence age of the Sun it is prudent to adopt diagnostic quantities that are insensitive to properties that one believes not to be directly pertinent. As the Sun evolves on the main sequence it converts hydrogen into helium in the core. The rate at which it does so in theoretical models is not very sensitive to uncertain parameters defining those models, such as the initial heavy-element abundance $Z_0$, provided that the models have been calibrated to reproduce the luminosity and radius observed today. The same is true of the quantity of hydrogen consumed, mainly because the nuclear reactions are dominated by a single branch of the pp chain, namely ppI, for which there is a tight link between fuel consumption and energy release. There is therefore a robust link between main-sequence age and the total amount of hydrogen consumed: the integral $\Delta H:=\int h(r)\rho r^2\,{\rm d}r:=4\pi\int\left[X_0-X(r)\right]\rho r^2\,{\rm d}r$ is a good indicator of the age $t_\odot$. It can be calibrated using seismic diagnoses of the mean molecular mass $\mu(r)$ provided that processes other than nuclear reactions that can change $X(r)$, such as gravitational settling and diffusion, are taken adequately into account.

In perhaps its simplest form, solar evolution involves computing models of constant mass in hydrostatic equilibrium. The models usually depend on three initial parameters: the abundances of, say, helium, $Y_0$, and the heavy elements, $Z_0$, and a mixing-length parameter $\alpha_{\rm c}$ which is normally held constant. It is usual to fix the relative abundances of all elements other than hydrogen and helium, a procedure which we too have adopted here. Demanding that the luminosity and radius of the model agree with present-day observation relates two of those parameters, say $\alpha_{\rm c}$ and $Y_0$, to the third, $Z_0$, for any $t_\star$. Thus one obtains a two-parameter set of potentially acceptable models, which here we characterize by the values of $t_\star$ and $Z_0$, and which we attempt to calibrate with helioseismic data.

Several diagnostics have been used in the past. As mentioned in the introduction, the first to be used for a full calibration were the two mean small separations $d_0$ and $d_1$ [@dog01], averages over $n$ of $d_{n,0}$ and $d_{n,1}$, the hope being that the differences in the way in which the two quantities sample the core would be adequate to disentangle $t_\odot$ and $Z_0$. Unfortunately, given the precision of the data at the time, that could not be accomplished to a useful precision. Moreover, by inspecting the dependence of the calibration on the range of frequencies over which the averages $d_0$ and $d_1$ were determined, there was evidence of contamination by an oscillatory component to the signatures from seismically abrupt variations of the stratification in and at the base of the convection zone. This component is particularly visible in second- and higher-order frequency differences with respect to $n$ [e.g. @g90a; @btc04; @bm04]. There were several obvious improvements in order to obtain a more reliable calibration. The first that we have made is to isolate much of the signal from the abrupt variation in the convection zone.
The intention was two-fold: First, by removing the oscillatory component from the stratification one is left with a smooth model for which the simple asymptotic expression(\[e:asymp\]) is more nearly valid; second, its amplitude provides an independent measure of the helium abundance $Y_{\rm s}$ in the convection zone [@hg07b] through the magnitude of the depression in $\gamma_1$ in the ionization zones. The latter provides, via stellar-evolution theory, the value of $Y_0$ – and therefore $X_0(Z_0)$ – in the core, which is required for determining the hydrogen deficit $h(r)$. In carrying out the analysis, the variation in $\gamma_1$ has been represented by two Gaussian functions of acoustic depth, as by @dog02 and @hg04, which has been found to reproduce the oscillation frequencies more faithfully than either the simple discontinuity that was adopted by @b04, @bm04 and @mm10, and the triangular form adopted by @mt98 [@mt05] ; presumed discontinuities in $\gamma_1$ or its derivatives cause the amplitude of the predicted oscillatory feature to decay too slowly with frequency [@hg04], which, although apparently not very deleterious for the Sun, may be a serious deficiency for other stars. ![ Error ellipses for the calibration using the combination $\hat A, \hat C, \hat F$, $-\delta\gamma_1/\gamma_1$ and Model 0 as the reference model: solutions $(t_\odot,Z_0)$ satisfying the frequency data within 1, 2 and 3 standard errors in those data reside in the inner, intermediate and outer ellipses, respectively. []{data-label="f:errellipse"}](fig9.eps){width="0.95\linewidth"} Another improvement is to remove from the diagnostics more of the influence of regions of the Sun that are outside the core. The absolute frequency of a low-degree mode of oscillation feels almost all of the interior structure of the star in inverse proportion to the sound speed near the surface where the influence of the rapid variation of the acoustical cutoff frequency $\omega_{\rm c}$ dominates. The latter is largely eliminated in the small frequency separation, because the eigenfunctions in the very surface layers are almost independent of $L$, and therefore subtracting two modes of nearly the same frequency entails a high level of cancellation. However, the cancellation is not complete, simply because the frequencies of the two modes are not exactly the same. As @ulrich86 has pointed out, the ratio $R_l$ of the small separation $d_l$ to the large separation $\Delta_l$ is a more direct measure of age, for it isolates more effectively the nonhomologous aspects of the evolution , and it more effectively eliminates the influence of the outermost layers of the Sun, as can easily be appreciated by comparing the formulae for $d_l$ and $R_l$ implied by the asymptotic expression (\[e:asymp\]). @rv03 and @fct05 have advocated that it be used for core calibration instead of $d_l$, and recently @dbc11 have illustrated its robustness numerically. Here we have gone further by adopting as diagnostics the factors $\hat A$, $\hat C$, and $\hat F$, integrals of the solar structure which sense variations in conditions even more concentrated towards the centre of the star. ![ Heavy-element abundance $Z$ as a function of the depth-coordinate $\log(p)$ obtained from the reference Models 0 ($t_\star=4.60\,$Gy, solid black curve) and 2 ($t_\star=4.37\,$Gy, dashed red curve) after five calibration iterations. 
Results from numerical simulations by @asp09 (dot-dashed line) and @caf09 (triple-dot-dashed line) are indicated; the shaded area indicates the reported error bars by @caf09. The initial (zero age) value $Z_0$ of the heavy-element abundance is indicated by the dotted line. After five calibration iterations both reference models have a surface heavy-element abundance $Z_{\rm s}=0.0142$; the age obtained from the 4.60$\,$Gy reference model (Model 0) is 4.604$\pm0.039\,$Gy, and that from the 4.37$\,$Gy reference model (Model 2) is 4.603$\pm0.039\,$Gy (see also Table \[t:results\]). []{data-label="f:Zprofile"}](fig8.eps){width="1.00\linewidth"}

One could consider going even further by trying to replace these diagnostics with combinations designed to eliminate the influence of the surface layers as much as possible, analogous to the procedure adopted by @gk88 and @ketal92. Because $\hat A$, $\hat C$ and $\hat F$ depend differently on the core stratification, the simultaneous use of all three quantities provides some information about the manner in which $X$ varies with $r$. It is therefore to be hoped that the calibration is more secure than one using just $d_l$ or $R_l$. It is worth mentioning at this juncture that the integrand for $\hat A$ is not actually negligible outside the core, as can be seen from Fig. \[f:integrals\]; indeed it has been known for some time that the integrand continues to the surface with approximately the same magnitude as it has at $r/R=0.5$, and that the integral is dominated by conditions outside the core. However, it appears that only the inner parts change as $t_\star$ and $Z_0$ vary, and therefore that $\hat A$ is a good diagnostic for our purposes. It is also important to include the diagnostic $\hat\Gamma:=-\delta\gamma_1/\gamma_1$, which measures the helium abundance $Y_{\rm s}$ in the convection zone, for that reflects a rather different aspect of the core structure and enables a much more precise determination of $t_\odot$ and $Z_0$, as evinced by Table \[t:calibration\], and which was already evident in an earlier phase of the investigation [@hg07a]. Whether or not the outcome is more accurate depends on the reliability of the procedure to account for gravitational settling, which relates $Y_{\rm s}$ at the surface to $Y_0$, which controls conditions in the core. It should be pointed out also that $\hat\Gamma$ is not an uncontaminated measure of $Y_{\rm s}$, because it depends also on the entropy in the deep adiabatically stratified convection zone [@hg07b]. Our procedure could be made more reliable if we could find an alternative diagnostic that senses $Y_{\rm s}$ more directly.

Further remarks about the influence of the outer layers, or the elimination thereof, are in order: In fitting the resolvable glitch contribution to the data an approximation to the unresolvable contribution from hydrogen ionization and the upper superadiabatically stratified boundary layer was included, equivalent to a cubic form in $\nu^{-1}$ added to the second differences [@hg07b]. Associated with the resolvable glitches are smooth contributions which were ignored in the initial calibration for $t_\odot$ and $Z_0$ [@hg07a]. Subsequently they were taken explicitly into account, thereby removing a bias in the procedure [@hg09a; @hg09b] and, it is to be supposed, improving the accuracy of the calibration. It should be mentioned, however, that we have not taken explicit account of putative errors in our modelling of the outermost layers of the Sun.
@cdg80 found that were the oscillations to be adiabatic, the effect of the atmosphere would be to add to the frequencies a term $\delta$ that is itself a rapid function of frequency: $\delta\propto\nu^b$ with $b=2(m+1)$ for $\nu\ll\nu_{\rm c}$, where $\nu_{\rm c}$ is the cyclic cutoff frequency; ${\rm d}\ln\delta/{\rm d}\ln\nu$ decreases with increasing $\nu$ as $\nu/\nu_{\rm c}$ approaches and exceeds unity. @kbc08 found that in the Sun $b$ decreases to about 4.9, which is not entirely inconsistent with this finding. @dbc11 found that taking the correction into account obviates the necessity to use $R_l$ instead of $d_l$ in a simple model calibration for $t_\odot$ in which $Z_0$ is held fixed, and yields results similar to those obtained from $R_l$ with no surface term. This suggests that our neglect of the near-surface adjustment – a device which we adopted to maintain a workable number of unknown parameters in the fitting – may not be severely deleterious. Nonetheless, the approximation deserves further investigation.

We have also been somewhat cavalier in our modelling of the acoustic glitches at the base of the convection zone. In particular, we have modelled them as a simple discontinuity in the second derivative of the density of standard solar models. Again, we have taken this approach for our convenience; after all, the sole purpose of modelling the glitch was to remove it. However, we are aware that we have not adequately taken account of the stratification of the tachocline, and that by so doing we risk not having eliminated adequately its contribution to the frequencies, and thereby may have biassed our final result. Indeed, it is evident that we have not been able to fit the rapidly oscillating component of the second differences to the solar data as well as we have to the frequencies of a standard solar model, suggesting that there might be room for further improvement of the theory. @mt05 and @cdmrt11 have gone some way in making such improvements, with the intention of studying the stratification at the base of the convection zone itself. It behoves us to do so too.

[lccccccc]{} &&$Z_0\,=\,$0.019628&&&&$Z_0\,=\,$0.014864&\ $\zeta_\alpha$& $t_\odot$ (Gy)& $t_\odot$ (Gy)& $\epsilon_\Theta$& $\qquad$& $t_\odot$ (Gy)& $t_\odot$ (Gy)& $\epsilon_\Theta$\ &(Model 0)&(Model 2)&&&(Model 0)&(Model 2)&\ $\hat A,\hat C,\hat F $&4.272&4.264&0.050&$\qquad$&4.414&4.408&0.054\ $\hat A,\hat C $&4.486&4.490&0.061&$\qquad$&4.585&4.587&0.061\ $\hat A $&4.437&4.439&0.081&$\qquad$&4.559&4.561&0.081\ \[t:fixedZ\]

It is one of our intentions to refine our core diagnostic by combining the integrals $\hat A$, $\hat C$ and $\hat F$ into a single quantity $\cal{T}$ which measures most closely the total hydrogen consumption $\Delta H$, rather than merely using the three different aspects of the deficiency function $h(r)$ in parallel. By so doing, properties of the core that are not direct indicators of age should be partially eliminated, thereby increasing the accuracy of the calibration; furthermore, the reduction of the number of final calibration parameters from four to two would increase the precision, although that is of secondary concern. The construction of the diagnostic $\cal{T}$ is a tedious, although, we believe, relatively straightforward task which we have not yet completed. Another of our unaccomplished intentions is to report on varying the bounding values $k_1$ and $k_2$ of $k=n+\frac{1}{2}l$ between which the modes used in the calibration are chosen to lie, as did @dog01 [see also @hg08].
This should give a better indication of the robustness of the calibration. We have carried out a partial survey, but we are not yet satisfied with the outcome. The reason is that the function $E_{\rm g}$ defined by equation (\[e:minsecdiff\]), when evaluated with the coefficients of a corresponding smooth model represented by the coefficients in the expansion (\[e:asymp\]), has several local minima. The calibration we report here adopts the lowest of those minima. But we have found that as $k_1$ and $k_2$ are varied the relative depths of the minima change, and always selecting the lowest can lead to sudden jumping from one to another. The situation is superficially not unlike the earliest direct solar model calibration [@cdg81], which also used only low-degree modes, and for which the acceptable minimum had eventually to be determined from other, rather different, seismic data. Maybe the resolution here will turn out to be similar. The standard calibration errors quoted in Tables \[t:calibration\]–\[t:fixedZ\] and illustrated in Fig. \[f:errellipse\] are the result of propagating quoted observational errors in the raw frequencies. They indicate the precision of the calibration. In the absence of information to the contrary, we have assumed that the raw-frequency errors are uncorrelated. It is important to realize that, given that some correlation is inevitable, this assumption can not only cause the precision of the calibration to be overestimated, but can also lead to bias in the results [@dog96; @gs02]. The calibration errors evidently overestimate the precision. And, of course, they certainly overestimate the accuracy. Our calibration yields $Z_{\rm s}=$0.0142 for the current surface heavy-element abundance of the Sun. This is significantly smaller than that of Model S of @jcd96, which has almost the correct sound-speed and density distribution throughout. Therefore our ‘best’ model is wrong. What does that imply about the values we infer for $Z_{\rm s}$ and $t_\odot$? others [e.g. @wd99; @bsp02; @dbc11] have instead simply adopted a value for $Z_0$ or $Z_{\rm s}$ that was perhaps acceptable by other criteria, and carried out a much more straightforward single-parameter calibration to estimate $t_\odot$. The precision of such a calibration is greater than it would have been had $Z_{\rm s}$ (or, equivalently, $Z_0$) been included as a fitting parameter, but not necessarily the accuracy, even if the true value of $Z_0$ had been adopted. This matter is discussed by @g11, who suggests that under conditions such as these, a simple, but admittedly not reliable, rule of thumb is that accuracy tends to decrease as precision increases. One cannot be sure that that is the case here without a much deeper understanding of the properties of the models against which the Sun is calibrated. It is of some interest to record how the outcome of such single-parameter calibrations depend on the values assumed for $Z_0$. It is summarized in Table \[t:fixedZ\] for two constant heavy-element abundances: $Z_0$=0.019628, the value adopted for Model S [@jcd96], and $Z_0$=0.014864, the value adopted for the @asp09 abundances [see @jcdhg10]. It is evident that a lower fixed value of $Z_0$ results in a greater solar age: [an increase of 3% associated with a 30% decrease in $Z_0$. ]{} This is as one would expect. Reducing $Z_0$ requires also a reduction of $Y_0$ at fixed age, resulting in a less centrally condensed star and consequently a greater value of $\hat A$. 
Moreover, increasing the age at fixed $Z_0$ and $Y_0$ reduces $\hat A$. Therefore, to maintain $\hat A$ constant, lowering $Z_0$ for the calibration must be compensated by a rise in the inferred value for $t_\odot$. It should be noted that the use of a value of $Z_0$ that is consistent with inferences from intermediate- and high-degree modes is a procedure which is not available for calibrating stars other than the Sun.

Of course the reliability of the results of a calibration can be no greater than the reliability of the models that are used. Thus one should address the validity of the assumptions that are made, and estimate their influence on the inferred values of $t_\odot$ and $Z_0$. For making the estimate we note that the final calibration is based on the values of the parameters $\zeta_\alpha$: the coefficients $\hat A$, $\hat C$ and $\hat F$ of the most $L$-sensitive terms at each order in the asymptotic expression (\[e:asymp\]), and the measure $\hat\Gamma$ of the depression in $\gamma_1$ due to He$\,$II ionization. Were the Sun to be spherically symmetrical and nonmagnetic, the former would be indicators of the acoustic stratification of the core, and the latter a (model-dependent) measure of the surface helium abundance which is related, via the solar model, to the helium abundance in the core in a weakly $t_\odot$- and $Z$-dependent way. Here we address the potential errors in our estimates of the values of these two quantities. For illustrative purposes we lump the first three parameters together, and consider only $\zeta_1=\hat A$, together with $\zeta_4=\hat\Gamma$. Then, from the derivatives listed in Table \[t:hij\] one can deduce that for small errors $\delta\hat A$, $\delta\hat\Gamma$ in $\hat A$ and $\hat\Gamma$ the corresponding errors in $t_\odot$ and $Z_0$ are determined by $$\left( \begin{array}{c} \delta\ln t_\odot \\ \delta\ln Z_0 \end{array} \right) = \left( \begin{array}{cc} -0.91 & -0.58 \\ -3.2& 3.8 \end{array} \right) \left( \begin{array}{c} \delta\ln\hat A \\ \delta\ln\hat\Gamma \end{array} \right)\,. \label{e:caliberr}$$

The value of $\hat A$ obtained by fitting the expression (\[e:asymp\]) to the ‘smooth’ frequencies can be misinterpreted by ignoring asphericity, which arises principally from solar activity in the superficial layers of the Sun. The magnitude of the effect can be estimated from the analysis of @cemv07, who plot mean frequency differences at different epochs, averaged over $n$, for different values of degree $l$. These can be fitted to $L^2$ to estimate the corruption $\delta_{\rm a}\hat A$ to the coefficient $\hat A$. Averaging over the interval of observations of the @basu07 data set that we use for our calibration implies that $t_\odot$ would be in error by 0.06$\,$Gy and $Z_0$ overestimated by 0.009. These systematic changes are not negligible: the change in $t_\odot$ is comparable with, although somewhat larger than, the typical random errors listed in Tables \[t:calibration\] and \[t:results\] in calibrations that use $\hat\Gamma$; the change in $Z_0$ is many times larger. Asphericity arising from the centrifugal force of rotation is negligible for the Sun, but it can be significant in rapidly rotating stars. The asphericity of the solar tachocline is also insignificant at our present level of precision.
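As a worked example of equation (\[e:caliberr\]) (with arbitrary illustrative input errors, not measured ones), the propagation is a simple matrix–vector product:

```python
import numpy as np

J = np.array([[-0.91, -0.58],
              [-3.2,   3.8]])              # matrix in equation (e:caliberr)

d_lnA, d_lnGamma = 0.01, 0.0               # e.g. a 1 per cent misestimate of A-hat only
d_lnt, d_lnZ0 = J @ np.array([d_lnA, d_lnGamma])
print(d_lnt, d_lnZ0)                       # -> -0.0091 in ln(t_sun), -0.032 in ln(Z_0)
```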
[lccccccccc]{} $\zeta_\alpha$& $t_\odot$ (Gy)& $Z_{\rm 0}$& $Y_{\rm 0}$& $Z_{\rm s}$& $Y_{\rm s}$& C$^{1/2}_{\Theta 11}$& -(-C$_{\Theta 12})^{1/2}$& C$^{1/2}_{\Theta 22}$\ (Model 0)\ $A,C,F,-\delta\gamma_1/\gamma_1$&4.604&0.0155&0.250&0.0142&0.224&0.039&0.0013&0.0005\ $C,F,-\delta\gamma_1/\gamma_1 $&4.602&0.0155&0.251&0.0142&0.224&0.044&0.0004&0.0005\ (Model 2)\ $A,C,F,-\delta\gamma_1/\gamma_1$&4.603&0.0155&0.250&0.0142&0.224&0.039&0.0013&0.0005\ $C,F,-\delta\gamma_1/\gamma_1 $&4.601&0.0155&0.251&0.0142&0.224&0.044&0.0004&0.0005\ \[t:results\]

Errors in the diagnostics of $Y_0$ have two obvious main sources. The first is the relation between $\delta\gamma_1$ and $Y_{\rm s}$, which depends on the equation of state, which we know is not accurate to a degree of precision that we would like. The matter has been discussed extensively by @ketal92, @cdd92 and by @betal00, and we do not pursue it here. The second is the relation between $Y_{\rm s}$ and $Y_0$, and also $Z_{\rm s}$ and $Z_0$, which depend on gravitational settling. It is difficult to assess the accuracy of currently used prescriptions, and it is likely that the uncertainty will remain with us for some time. Yet we note that it is not unlikely that the uncertainty exceeds our statistical errors arising from data errors. We note first that $Y_0$ and $Y_{\rm s}$ differ, by about 0.026 in our calibrated models (Table \[t:results\]). This difference represents the amount of gravitational settling out of the convection zone that has taken place over the lifetime of the Sun. If the computed settling rate were underestimated by 20%, say, then the effective $\delta\hat\Gamma$ would be underestimated likewise, and according to equation (\[e:caliberr\]) corresponding errors would be imparted to $t_\odot$ and $Z_0$, the latter of about $-0.001$. What is perhaps more serious is the possibility of material redistribution in the energy-generating core, either by large-scale convection or by small-scale turbulence induced possibly by rotational shear. @gk90 argued that there is evidence for that having occurred, concomitant with a reduction of the sound-speed gradient in the innermost regions, and thereby making the Sun appear younger than it really is. This matter should perhaps be investigated further in the future. But what requires serious consideration now is the degree to which a magnetic field might suppress the acoustic glitch associated with helium ionization. @bm04 and @vce06 have reported a diminution during solar cycles 22 and 23 in the amplitude of the acoustic signature of the glitch with increasing magnetic activity (gauged by the 10.7$\,$cm radio flux $F_{10.7}$), with an average slope ${\rm d}\ln\hat\Gamma/{\rm d}F_{10.7}\simeq-0.001$ (in units of 10$^{22}$J$^{-1}$s$\,$m$^2\,$Hz). It has already been pointed out that that requires magnetic field strength variations of order 10$^5$G [@dog06]. Moreover, it is much greater than that implied by Libbrecht & Woodard’s (1990) observations in the previous cycle. Given that the average of $F_{10.7}$ over the interval of observation of the BiSON data was about 120, this would imply, had the magnetic perturbations been small, that $\hat\Gamma$ has been underestimated by about 10%, namely 7$\times$10$^{-3}$, implying that $t_\odot$ has been overestimated by about 10% and, formally, $Z_0$ underestimated by about 90%. This result appears to render hopeless any attempt to calibrate the glitch to determine $Y_{\rm s}$. It behoves us, therefore, urgently to investigate the matter further.
Magnetic-field issues aside, the relation between $\hat\Gamma$ and $Y_{\rm s}$ is on the equation of state, which we know to be deficient [e.g. @ketal92; @betal00]. There are other assumptions that are implicit in most solar evolution calculations. Two which have obvious serious implications regarding the apparent age are the constancy of the total mass $M$ of the Sun – the assumption is that there has been no significant accretion nor mass loss on the main sequence – and that physics has not evolved such that, in appropriate units, Newton’s gravitational constant $G$ varies with time. Failure of either of those two assumptions can lead to a substantial deviation of the Sun’s evolution from the usual standard. Numerical computations of the effect of varying $G$ were carried out long ago by @ps64, @ec65, @rd66 and @sb69, and computations with mass loss have been performed by @gu87, ; the results . In particular, if $G$ or had been greater in the past, then the solar luminosity would have been greater, and more hydrogen would have been consumed, Such issues go beyond the scope of this investigation. Conclusion ========== We have attempted a seismic calibration of solar models with a view to improving earlier estimates of the main-sequence age $t_\odot$ and the initial heavy-element abundance $Z_0$. Our current best estimates are summarized in Table\[t:results\]: the age is close to the previous preferred values – in particular, the age adopted for Christensen-Dalsgaard’s Model S – and the implied present-day surface heavy-element abundance lies between the modern spectroscopic values quoted by @asp09 and @caf09. However, we emphasize that there remain many uncertainties in our procedure, and that future revision is not unlikely. ACKNOWLEDGEMENTS {#acknowledgements .unnumbered} ---------------- We are grateful to Bill Chaplin for supplying the BiSON data plotted in Fig.\[f:BiSON\]. Support by the Austrian FWF Project P21205-N16 is gratefully acknowledged. DOG is grateful to the Leverhulme Trust for an Emeritus Fellowship. [99]{} natexlab\#1[\#1]{}\[1\][“\#1”]{} url \#1[`#1`]{}urlprefix Amelin Y., Krot A.N., Hutcheon I.D., Ulyanov A.A., 2002, Sci, 297, 1678 Asplund M., Grevesse N., Sauval A.J., Scott P., 2009, ARA&A, 47, 481 Ballot J., Turck-Chièze S., García R.A., 2004, A&A, 423, 1051 Balmforth N.J., 1992, MNRAS, 255, 632 Balmforth N.J., Gough D.O., 1990, ApJ, 362, 256 Basu S., 1997, MNRAS, 288, 572 Basu S., Antia H.M., Narasimha D., 1994, MNRAS, 267, 209 Basu S., Mandel A., 2004, ApJ, 617, 155 Basu S., Mazumdar A., Antia H.M., Demarque P., 2004, MNRAS, 350, 277 Basu S., Chaplin W.J., Elsworth Y., New A.M., Serenelli G., Verner G.A., 2007, ApJ, 655, 660 Baturin V.A., D[ä]{}ppen W., Gough D.O., Vorontsov S.V., 2000, MNRAS, 316, 71 , 2002, [A&A]{}, 390, 1115 Bouvier A., Wadhwa M., 2010, NatGe, 3, 637 Caffau E., Maiorca E., Bonifacio P., Faraggiana R., Steffen M., Ludwig H.-G., Kamp I., Busso M., 2009, A&A, 498, 877 Chaplin W.J., Elsworth Y., Miller B.A., Verner G.A., 2007, ApJ, 659, 1749 , 1984, in Mangeney A., Praderie F., eds, [Space Research Prospects in Stellar Activity and Variability]{}. Paris Observatory Press, Paris, p.11 , 1988, in Christensen-Dalsgaard J., Frandsen S., eds, [Proc. IAU Symp.123, Advances in helio- and asteroseismology]{}. Reidel, Dordrecht, p.295 , 2008, Ap&SS, 316, 13 , 2009, in Mamajek E.E., Soderblom D.R., Wyse R.F.G., eds, IAU Symp.258, The Ages of Stars. 
CUP, Cambridge, p.431 Christensen-Dalsgaard J., Gough D.O., 1980, Nat, 288, 544 Christensen-Dalsgaard J., Gough D.O., 1981, A&A 104, 173 Christensen-Dalsgaard J., Gough D.O., 1984, in Ulrich R.K., Harvey J., Rhodes Jr E.J., Toomre J., eds, Solar Seismology from Space. JPL Publ. 84-84, Pasadena, p.199 Christensen-Dalsgaard J., Gough D.O., Thompson M.J., 1991, ApJ, 378, 413 Christensen-Dalsgaard J., D[ä]{}ppen W., 1992, A&ARv, 4, 267 Christensen-Dalsgaard J., Houdek G., 2010, Ap&SS 328, 51 Christensen-Dalsgaard J., etal., 1996, Sci, 272, 1286 Christensen-Dalsgaard J., Monteiro M.J.P.F.G., Rempel M., Thompson M.J., 2011, MNRAS, 414, 1158 , 2011, in Roca Cortés T., Pallé P., Jiménez Reyes S., eds, Seismological Challenges for Stellar Structure. AN, 331, 949 , 1999, [A&A]{} 343, 990 Ezer D., Cameron A.G.W., 1965, Canadian J. Phys, 44, 593 Goldreich P., Murray N., Willette G., Kumar P., 1991, ApJ, 370, 387 , 1983, in Shaver P.A., Kunth D., Kj[ä]{}r K., eds, [Primordial helium]{}. Southern Observatory, p.117 Gough D.O., 1986a, in Gough D.O., ed., Seismology of the Sun and distant Stars. Nato ASI C169, Reidel, Dordrecht, p.125 Gough D.O., 1986b, in Osaki Y., ed., Hydrodynamic and magnetohydrodynamic problems in the Sun and stars. University Tokyo Press, Tokyo, p.117 Gough D.O., 1987, Nat, 326, 257 Gough D.O., 1990a, in Osaki Y., Shibahashi H., eds, Progress of Seismology of the Sun and Stars. Lecture Notes in Physics, Vol.367, Springer Verlag, Heidelberg, p.283 Gough D.O., 1990b, in Mat-fys Roy. Danish Acad. Sci., 42:4, 13 Gough D.O., 1993, in Zahn J.-P. & Zinn-Justin J., eds, Astrophysical fluid dynamics - Les Houches 1987. Elsevier Science Publishers, North-Holland, p.399 Gough D.O., 1995, in Rhodes Jr E.J., D[ä]{}ppen W., eds, GONG’94: Helio- and Asteroseismology. ASP Conf. Ser. 76, Astron. Soc. Pac., San Francisco, p.551 Gough D.O., 1996, in Roca Cortes T., ed., The Structure of the Sun. Cambridge Univ. Press, Cambridge, p.141 Gough D.O., 2001, in von Hippel T., Simpson C., Manset N., eds, ASP Conf. Ser.245, Astrophysical Ages and Timescales. Astron. Soc. Pac., San Francisco, p.31 , 2002, in Favata F., Roxburgh I.W., Gadalí-Enríquez D., eds, [Proc 1st Eddington Workshop: Stellar structure and habitable planet finding]{}. ESA SP-485, Noordwijk, p.65 , 2004, in: [Č]{}elebonović V., Däppen W., Gough D.O., eds, Equation-of-State and Phase Transition Issues in Models of Ordinary Astrophysical Matter. AIP Conf. Proc., Vol. 731, American Institute of Physics, New York, p. 119 Gough D.O., 2006, in Lacostz H., Ouwehand L., Proc. SOHO 17: 10 years of SOHO and beyond. ESA SP-617, Noordwijk, p. 1.1 Gough D.O., 2007, AN, 328, 273 Gough D.O., 2011, in Shibahashi H., Takata M., eds, Proc. Progress in solar/stellar physics with helio- and asteroseismology. Springer, Heidelberg, in preparation Gough D.O., Kosovichev A.G, 1988, in Domingo V., Rolfe E., eds, Seismology of the Sun and Sun-like Stars. ESA SP-286, Noordwijk, p.195 Gough D.O., Kosovichev A.G, 1990, in Berthomeieu F., Cribier M., eds, Inside the Sun. Kluwer Academic Publishers, Netherlands, p. 327 , 1990, Solar Phys., 128, 143 Gough D.O., Sekii T., 2002, MNRAS, 335, 170 Grevesse N., Noels, A., 1993, in Prantzos N., Vangioni-Flam E., Cass’e M, eds, Origin and Evolution of the Elements. Cambridge University Press, Cambridge, p. 
15 Guzik J.A., Wilson L.A., Brunish W, 1987, ApJ, 319, 957 Guzik J.A., Cox A.N., 1995, ApJ, 448, 905 Guzik J.A., Mussack K., 2010, ApJ, 713, 1108 , 1989, ApJ, 339, 1156 , 1997, ApJ, 484, 937 Houdek G., 2004, in: [Č]{}elebonović V., Däppen W., Gough D.O., eds, Equation-of-State and Phase Transition Issues in Models of Ordinary Astrophysical Matter. AIP Conf. Proc., Vol. 731, American Institute of Physics, New York, p. 193 Houdek G., 2010, AN, 331, 998 , 2004, in Danesy D., ed., Proc. SOHO 14/GONG 2004: Helio- and Asteroseismology: Towards a Golden Future. ESA SP-559, Noordwijk, p. 464 , 2006, in Fletcher K., ed., Proc. SOHO 18/GONG 2006/HelAs I, Beyond the Spherical Sun. ESA SP-624, Noordwijk, p. 88.1 , 2007a, in Stancliffe R.J., Dewi J., Houdek G., Martin R.G., Tout C.A., eds, Unsolved Problems in Stellar Physics. AIP Conf. Proc., Vol. 948, American Institute of Physics, Vol. 948, New York, p. 219 , 2007b, MNRAS, 375, 861 Houdek G., Gough D.O., 2008, in Deng L., Chan K.L., Chiosi C., eds, The Art of Modelling Stars in the 21st Century. IAU Symp., Vol.[252]{}, CUP, Cambridge , p.149 Houdek G., Gough D.O., 2009a, CoAst, 159, 27 Houdek G., Gough D.O., 2009b, in Cunha M., Thompson M., eds, Proc. HELAS Workshop: New insights into the Sun. Centro de Astrofísica da Universidade do Porto (CD-Rom), T9 (arXiv:0911.5044) Jacobsen B., Yin Q.-Z., Moynier F., Amelin Y., Krot A.N., Nagashima K., Hutcheon I.D., Palme H., 2008, E&PSL, 272, 353 Kjeldsen H., Bedding T.R., Christensen-Dalsgaard J., 2008, ApJ, 683, L175 Kosovichev A.G., Christensen-Dalsgaard J., Däppen W., Dziembowski W.A., Gough D.O., Thompson M.J., 1992, MNRAS, 259, 536 Libbrecht K.G, Woodard M.F., 1990, Nature, 345, 779 Mazumdar A., Michel E., 2010, in T. Roca Cortés, P. Pallé, S. Jiménez Reyes, eds, Proc. HEALS-IV, Seismological Challenges for Stellar Structure (arXiv:1004.2739) Michaud G., Proffitt C.R., 1993, in Weiss W.W., Baglin A., eds, Proc. IAU Colloq.137, Inside the stars. ASP Conf. Ser. Vol. 40, Astron. Soc. Pac., San Francisco, p.246 Monteiro M.J.P.F.G., Thompson M., 1998, in Deubner F.-L., Christensen-Dalsgaard J., Kurtz D., eds, Proc. IAU Symp.185, New Eyes to see inside the Sun and Stars. Kluwer, Dordrecht, p.317 Monteiro M.J.P.F.G., Thompson M., 2005, MNRAS, 361, 1187 Otí Floranes H., Christensen-Dalsgaard J., Thompson M.J., 2005, MNRAS, 356, 671 Pochoda P., Schwarzschild M., 1964, ApJ, 139, 587 Roeder R.C., Demarque P.R., 1966, ApJ, 144, 1016 Rosenthal C.S., Christensen-Dalsgaard J., Houdek G., Monteiro M.J.P.F.G., Nordlund [Å.]{}, Trampedach R., 1995, in Hoeksema J.T., Domingo V., Fleck B., Battrick B., eds, [Proc. 4th SOHO Workshop: Helioseismology]{}. ESA SP-376, Noordwijk, p. 459 Roxburgh I.W., Vorontsov S.V., 2003, A&A, 411, 215 Sackmann I.-J., Boothroyd A.I., 2003, ApJ, 583, 1024 Shaviv G., Bahcall J.N., 1969, ApJ, 155, 136 Swenson F.J., Faulkner J., 1992, ApJ, 395, 654 Tassoul M., 1980, ApJS, 43, 469 Ulrich R.K., 1986, ApJ, 306, L37 Verner G.A., Chaplin W.J., Elsworth Y., 2006, ApJ, 640, 95 , 1998, A&A, 332, 215
--- author: - Bram Wallace - Bharath Hariharan bibliography: - 'supp.bib' title: Supplementary --- Experimental Setup ================== Architectures ------------- ### ResNet26 See Figure 3 and Section 3.2 of [@vdc1] for the original description. ### Autoencoder Generator The architecture is laid out in Table \[table:generator\]. Layer Output Shape -------------------- --------------------- Input 256 ConvTranspose2d-1 \[-1, 512, 4, 4\] BatchNorm2d-2 \[-1, 512, 4, 4\] ReLU-3 \[-1, 512, 4, 4\] ConvTranspose2d-4 \[-1, 256, 8, 8\] BatchNorm2d-5 \[-1, 256, 8, 8\] ReLU-6 \[-1, 256, 8, 8\] ConvTranspose2d-7 \[-1, 128, 16, 16\] BatchNorm2d-8 \[-1, 128, 16, 16\] ReLU-9 \[-1, 128, 16, 16\] ConvTranspose2d-10 \[-1, 64, 32, 32\] BatchNorm2d-11 \[-1, 64, 32, 32\] ReLU-12 \[-1, 64, 32, 32\] ConvTranspose2d-13 \[-1, 3, 64, 64\] Tanh-14 \[-1, 3, 64, 64\] : Generator architecture. First convolution has stride of 1 and no padding, all subequent convolutions have stride of 2 with padding 1. All kernels have size 4. []{data-label="table:generator"} Training & Evaluation {#sec:training} ===================== All networks are trained using stochastic gradient descent for 120 epochs with an initial learning rate of 0.1 decayed by a factor of 10 at 80 and 100 epochs, with momentum of 0.9. One addition to our training process was that of “Earlier Stopping" for the Rotation and Jigsaw pretext tasks. We found that even with traditional early stopping, validation accuracy could oscillate as the pretext overfit to the training data (especially in the Scenes & Textures or Biological cases), potentially resulting in a poor model as the final result. We stabilized this behavior by halting training when the training accuracy improves to 98%, effect on accuracy shown in Table \[table:early\]. --------------- -------------- ---------------- ---------------- ------------------ Jigsaw Early Jigsaw Regular Rotation Early Rotation Regular aircraft 8 9 9 11 cifar100 19 24 42 37 cub 9 9 12 14 daimlerpedcls 67 80 87 87 dtd 15 14 15 14 gtsrb 68 67 82 79 isic 57 59 60 62 merced 57 53 70 58 omniglot 18 24 46 54 scenes 33 33 42 40 svhn 50 53 80 78 ucf101 25 22 42 45 vgg-flowers 22 19 23 22 bach 47 46 41 36 protein atlas 21 21 22 25 kather 79 78 57 61 --------------- -------------- ---------------- ---------------- ------------------ : Comparison of test accuracies with early stopping vs without. Rotation in particular was stabilized and improved by this method. Jigsaw was stabilized, but sometimes hampered. For Jigsaw with less permutations than the 2000 reported the net effect was more positive. The only qualitative difference in results was Jigsaw matching Instance Discrimination on the Internet domains instead of being outperformed. Both methods still fell far behind Rotation. \[table:early\] Dataset Splits {#sec:splits} -------------- We use provided dataset splits when available, taking our validation data from training data when a train-validation split is not predetermined.[^1] If no split was given, we generally used a 60-20-20 split within each class. *Full train-validation-test splits will be released along with our code and models*. Data Augmentation, Weight Decay, and Other Regularization --------------------------------------------------------- A sensitive topic in any deep learning comparison is that of data augmentation or other forms of regularization, which can substantially alter performance. 
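For concreteness, a hypothetical PyTorch-style sketch of the training recipe described in the preceding section, together with the minimal augmentation detailed below, might look as follows; the image sizes, `model` and `train_loader` are placeholders, and none of this is the released code.

```python
import torch
from torchvision import transforms

# Minimal augmentation: resize, random crop, horizontal flip (sizes are assumptions).
train_tf = transforms.Compose([
    transforms.Resize(72),
    transforms.RandomCrop(64),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Training schedule: SGD, lr 0.1, momentum 0.9, decayed by 10x at epochs 80 and 100,
# 120 epochs total, with "earlier stopping" at 98% training accuracy for the pretexts.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[80, 100], gamma=0.1)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(120):
    correct = total = 0
    for images, targets in train_loader:
        optimizer.zero_grad()
        logits = model(images)
        loss = criterion(logits, targets)
        loss.backward()
        optimizer.step()
        correct += (logits.argmax(1) == targets).sum().item()
        total += targets.numel()
    scheduler.step()
    if correct / total >= 0.98:        # "earlier stopping" for Rotation/Jigsaw
        break
```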
In this work we are determined to give as fair an apples-to-apples comparison as possible, and as such we apply minimal data augmentation and do not employ weight decay or other regularization methods. The data augmentation used consists solely of resizing, random crops, and horizontal flips. Note that horizontal flips are not typically used on the symbolic domains, but are considered standard everywhere else. We elected to go with the logical choice for 13 out of 16 of our domains, and employ horizontal flips in all of our main experiments. We present results without flipping below.

Effect of Horizontal Flipping on Symbolic
-----------------------------------------

As seen in Table \[table:flips\], taking away horizontal flipping generally does not have major effects *except* for improving Rotation-Omniglot substantially and hurting Jigsaw-SVHN significantly. The former we attribute to the learning load of Rotation being used, while the latter we posit is due to the lack of horizontal flips allowing Jigsaw to use simpler cues for classification.

             Autoencoding   Jigsaw     ID         Rotation   Supervised
  ---------- -------------- ---------- ---------- ---------- ------------
  GTSRB      (57,58)        (66, 67)   (43, 39)   (82, 78)   (93, 93)
  SVHN       (31, 33)       (55, 26)   (37, 34)   (80, 81)   (95, 95)
  Omniglot   (18,19)        (26, 27)   (45, 47)   (46, 53)   (79, 80)

  : Each tuple is normal accuracy (with horizontal flips, as in the paper) and accuracy without flips. In general we see performance changes of only a few percentage points, and qualitative comparisons largely hold. The biggest differences are Rotation’s improvement on Omniglot and Jigsaw’s worsening on SVHN. \[table:flips\]

Implicit Dimensionality
=======================

We observe that the largest variations in explained variance between pretexts occur in the first dimension (Table \[table:pca\]), and investigate its use as a predictor of downstream performance. Correlations are shown in Figure \[fig:pca\_dim\_vs\_performance\]. We do observe a moderate correlation between the explained variance in the first component and downstream normalized accuracy for Instance Discrimination. While weak, this trend holds for PCA performed on both the training and validation images. More significantly, we note the distinct separation formed around 0.5 on the x-axis and perform a t-test to determine that there is a moderately significant difference in downstream accuracies across this interval ($p=0.052$). Thus implicit dimensionality is mildly predictive of downstream performance for Instance Discrimination. (A brief sketch of this computation is given at the end of this supplementary.)

![ Downstream normalized classification accuracy vs. the fraction of variance explained by the first component. Top row is PCA on the entire training feature set, the bottom on validation. The only moderately significant trends are those of Instance Discrimination, but we note that the trend holds with comparable strength for both sets.
[]{data-label="fig:pca_dim_vs_performance"}](figures/pca_dim_vs_performance.png){width="\linewidth"}

  ----- ------ ------ ------ ------ ------ ------ ------ ------
  $n$    256    4096   256    4096   256    4096   256    4096
  1      0.67   0.47   0.53   0.29   0.43   0.35   0.33   0.15
  2      0.75   0.54   0.69   0.39   0.51   0.41   0.51   0.23
  3      0.79   0.58   0.78   0.46   0.56   0.45   0.60   0.29
  4      0.82   0.61   0.83   0.50   0.61   0.48   0.67   0.33
  5      0.84   0.64   0.86   0.53   0.66   0.51   0.71   0.36
  10     0.89   0.72   0.92   0.63   0.79   0.61   0.82   0.46
  15     0.92   0.77   0.94   0.69   0.86   0.66   0.87   0.53
  20     0.94   0.81   0.95   0.72   0.90   0.70   0.90   0.57
  30     0.96   0.85   0.97   0.76   0.94   0.76   0.93   0.63
  40     0.97   0.88   0.97   0.79   0.96   0.79   0.95   0.67
  50     0.98   0.90   0.98   0.81   0.97   0.82   0.96   0.71
  60     0.99   0.92   0.98   0.83   0.98   0.84   0.96   0.73
  70     0.99   0.92   0.99   0.84   0.98   0.85   0.97   0.75
  80     0.99   0.93   0.99   0.85   0.99   0.86   0.97   0.77
  90     0.99   0.94   0.99   0.86   0.99   0.88   0.98   0.78
  100    0.99   0.94   0.99   0.87   0.99   0.88   0.98   0.80
  110    1.00   0.94   0.99   0.87   0.99   0.89   0.98   0.81
  120    1.00   0.95   0.99   0.88   0.99   0.90   0.99   0.82
  130    1.00   0.95   0.99   0.88   1.00   0.91   0.99   0.82
  140    1.00   0.95   1.00   0.89   1.00   0.91   0.99   0.83
  150    1.00   0.95   1.00   0.89   1.00   0.92   0.99   0.84
  ----- ------ ------ ------ ------ ------ ------ ------ ------

  : Fraction of variance explained by the first $n$ principal components. \[table:pca\]

Nearest Neighbors
=================

Nearest neighbor examples are linked from the Github.

Correlations of Pretexts with Downstream Accuracy
=================================================

Correlations for each task are shown in Figures \[fig:rot\], \[fig:ae\], \[fig:jigsaw\], \[fig:inst\_disc\]. The x-axis is pretext accuracy for Rotation and Jigsaw, and pretext loss for Autoencoding and Instance Discrimination.

![ Downstream normalized classification accuracy vs. performance on pretext task for Rotation. []{data-label="fig:rot"}](figures/rot_pretext_vs_downstream.png){width="\linewidth"}

![ Downstream normalized classification accuracy vs. performance on pretext task for Jigsaw. []{data-label="fig:jigsaw"}](figures/jigsaw_2000_pretext_vs_downstream.png){width="\linewidth"}

![ Downstream normalized classification accuracy vs. performance on pretext task for Autoencoding. []{data-label="fig:ae"}](figures/ae_pretext_vs_downstream.png){width="\linewidth"}

![ Downstream normalized classification accuracy vs. performance on pretext task for Instance Discrimination. []{data-label="fig:inst_disc"}](figures/inst_disc_pretext_vs_downstream.png){width="\linewidth"}

[^1]: Despite using overlapping domains with the VDC, we are forced to use different splits in some cases due to the Visual Decathlon challenge not releasing the corresponding test labels.
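As a pointer for the implicit-dimensionality analysis above, the following is a minimal sketch (not our released code) of how the explained-variance fraction and the t-test across the 0.5 split can be computed with scikit-learn and SciPy. The synthetic feature matrices and accuracies below are placeholders standing in for the per-domain pretext features and downstream results.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import ttest_ind

def explained_variance_fraction(features, n_components=1):
    """Fraction of variance captured by the first n principal components of a
    (num_samples x feature_dim) matrix of pretext-task features."""
    pca = PCA(n_components=n_components)
    pca.fit(features)
    return float(pca.explained_variance_ratio_.sum())

# Synthetic stand-ins for per-domain feature matrices and downstream accuracies;
# in practice these come from the trained pretext encoders and the per-domain results.
rng = np.random.default_rng(0)
per_domain_features = [rng.normal(size=(500, 256)) for _ in range(16)]
per_domain_accuracy = rng.uniform(0.3, 0.9, size=16)

fracs = np.array([explained_variance_fraction(f) for f in per_domain_features])

# Split domains at 0.5 on the explained-variance axis and test whether downstream
# accuracy differs across the split, as in the t-test reported above.
low, high = per_domain_accuracy[fracs < 0.5], per_domain_accuracy[fracs >= 0.5]
if len(low) and len(high):
    t_stat, p_value = ttest_ind(low, high)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```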
--- abstract: 'Income tax systems with “pass-through” entities transfer a firm’s incomes to the shareholders, which are taxed individually. In 2014, a Chilean tax reform introduced this type of entity and changed to an accrual basis that distributes incomes (but not losses) to shareholders. A crucial step for the Chilean taxation authority is to compute the final income of each individual, given the complex network of corporations and companies, usually including cycles between them. In this paper, we show the mathematical conceptualization and the solution to the problem, proving that there is only one way to distribute incomes to taxpayers. Using the theory of absorbing Markov chains, we define a mathematical model for computing the taxable incomes of each taxpayer, and we propose a decomposition algorithm for this problem. This allows us to compute the solution accurately and with the efficient use of computational resources. Finally, we present some characteristics of the Chilean taxpayers’ network and computational results of the algorithm using this network.' author: - | Javiera Barrera\ Facultad de Ingeniería y Ciencias\ Universidad Adolfo Ibáñez - | Eduardo Moreno\ Facultad de Ingeniería y Ciencias\ Universidad Adolfo Ibáñez - | Sebastián Varas K.\ CIRIC - INRIA Chile bibliography: - 'sii.bib' date: 'April 7, 2016' title: 'A decomposition algorithm for computing income taxes with pass-through entities and its application to the Chilean case' --- Introduction {#intro} ============ In income tax systems, a “pass-through” entity (also known as a flow-through entity) refers to companies or corporations that are not subject to income taxes but whose income is “passed through” to their owners, who are taxed individually. Pass-through entities are very common in many countries. For example, in the USA, this type of firm (including sole proprietorships, general partnerships, limited partnerships, LLCs and S-corporations) increased from 83% of all firms in 1980 to 94% in 2007 [@CBOreport]. In particular, in Chile, a comprehensive tax reform was approved in 2014 that includes this type of firm and changes the tax basis to an accrual basis (“attributed income”) for both companies and individuals. More specifically, at the end of a calendar year, the taxable profits of a company would be attributed to the shareholders, proportionally to their participation. However, when a company or corporation incurs losses, they are not attributed to their shareholders but can be used as a credit for subsequent years. This is different from other countries, where losses are also passed through to the owners. A natural question for this taxation system is how to compute the final attributed income of each taxpayer. The difficulty of this comes from the fact that many companies and corporations are partially owned by other companies and corporations, successively constructing complex networks of companies, usually including cycles between them (i.e., a company can “own” a fraction of itself). A natural way to compute this is to iteratively distribute the positive income to shareholders and repeat this until all income has been assigned. However, many questions arise from this procedure: Is there relevance in the order in which companies are chosen to distribute at each step? Note that a company with negative income can receive sufficient attributed income to cover it losses and in a future iteration will start to distribute its received income to the shareholders. 
Does a unique final state exist for this system, independently of the order in which the incomes are attributed? Can we compute this final state efficiently? This article is motivated by a request of the Chilean taxation authority (Servicio de Impuestos Internos) to study all of these questions. In this paper, we mathematically formalize the problem and use the theory of Markov chains to prove that there exists a unique final state. We also prove that this state can be obtained by decomposing the network in a strongly connected component, in which the final attributed income can be computed efficiently. This leads to a fast algorithm to compute the final state, even for a large number of taxpayers. Finally, we show some results of the implementation of the algorithm on the real network of taxpayers in Chile. To our knowledge, there is no such study in the literature. Nevertheless, similar questions are studied in the context of discrete games, particularly for chip firing games [@merino2005chip]. In the case of chip firing games over directed graphs [@bjorner1992chip], each node contains a set of chips, and at each iteration, a node is selected and one chip is sent to each of its neighbors (if it has enough chips). The game stops if there exists no node with more chips than the number of its outgoing arcs. Note that multiple arcs are allowed between a given pair of nodes, so this game can be viewed as a discrete version of our problem when all starting incomes are positive. For this problem, the authors prove that the final state of the game (if it exists) is reached independently of the sequence of nodes chosen. Additionally, the authors remark that this problem can be seen as a tool for computing the absorption probabilities of certain Markov chains [@engel1975probabilistic]. This paper is organized as follows. In Section \[concep\], we show the mathematical conceptualization of the problem, formulating the problem and defining the notation used in the subsequent sections. In Section \[calc\], we analyze how to calculate the attribution of income to shareholders, showing two particular cases first and then formulating the general case. In this section, we also show that the problem has only one finale state, which can be computed using the theory of absorbing Markov chains. In Section \[teoria\], we show the theoretical results needed to understand the validity of the algorithm that is explained in Section \[algoritmo\] along with the two main proofs of this paper. In Section \[algoritmo\], we show the algorithm used to accurately and efficiently compute the attributed income. In addition, we prove the validity of this algorithm based on the theoretical results of the previous section. In Section \[resultados\], we analyze the network of Chilean taxpayers and the results of the algorithm, comparing its performance with that of alternative algorithms. Finally, we conclude and discuss the impact of this work on the Chilean taxation authority. Conceptualization {#concep} ================= For simplicity, we refer to corporations as any pass-through entity and to individuals as any person or corporation that does not distribute its income. Let $N$ be a set of taxpayers that consists of a subset of corporations $N_S$ and a subset of individuals $N_P$, such that $N_S \cup N_P=N$. Each taxpayer $i\in N$ has an initial income $E^{(0)}_i$, which defines the vector $E^{(0)}$. 
Each corporation can be owned by corporations and individuals, represented by matrices $Q$ and $R$, respectively, where row $i$ represents the shares of corporations or individuals in corporation $i$. Therefore, $q_{ij}$ is the percentage of corporation $i$ owned by corporation $j$, and $r_{ij}$ is the percentage of corporation $i$ owned by individual $j$. Furthermore, we can assume that every individual is owned by himself. Thus, we define the matrix of shares $P$, where $p_{ij}$ is the percentage of taxpayer $i$ owned by taxpayer $j$. Matrix $P$ has the following form: $$\label{matriz_P} P=\begin{bmatrix} Q&R \\ 0&I \end{bmatrix}.$$

Moreover, $Q$ and $R$ have the following properties:

1. $0\leq Q \leq 1$ and $0\leq R \leq 1$,
2. $\lim_{n\to \infty} Q^n=0$,
3. $\sum_{j \in N} p_{ij}=1,\forall{i \in N}$.

We denote by $\mathcal{P}$ the set of matrices $P$ with these properties.

\[obs1\] These properties ensure that every corporation is, directly or indirectly, owned by individuals.

Given the vector of initial incomes $E^{(0)}$ and the matrix of shares $P$, we want to distribute the initial income to taxpayers and compute the attributed income of each taxpayer. A corporation with positive initial income must distribute its income proportionally to the share of each taxpayer. A corporation with negative initial income distributes its income only if the sum of its attributed income plus its initial income is greater than 0. Hence, for the distribution of income, a corporation with negative income would behave as an individual. To consider corporations with negative income that do not distribute their income, we define the matrix of shares restricted to a subset of corporations as follows. Let $S\subseteq N_S$ be a subset of corporations; we define the matrix of shares restricted to $S$, named $P_S$, as: $$\label{Def_1} \left(P_{S}\right)_{i\bullet} =\begin{cases} P_{i\bullet} & \qquad \mbox{if } i \not\in S, \\ e_i & \qquad \mbox{if } i \in S, \end{cases}$$ where $e_i$ is the $i$-th canonical row vector. Replacing row $i$ of $P$ by the $i$-th canonical row vector is equivalent to saying that corporation $i$ will not distribute its income (like individuals). In our case, we want to define a set $S$ that contains all corporations with negative income. Thus, we use the following notation:

\[Def\_2\] Given a vector of incomes $E$, we define the subset of corporations with negative income as $$S(E)= \{i\in N_S : E_i < 0 \}.$$

Accordingly, we see that we can compute the attribution of income iteratively. In the first iteration, we distribute the initial income of each corporation proportionally to the share of each taxpayer, considering that corporations with negative initial income do not distribute. In the second iteration, we then compute the new income of each corporation, and these corporations distribute their new income to taxpayers again. In this second iteration, a corporation with negative initial income could now have positive income, so this corporation would distribute its income to its shareholders. The algorithm iterates until all income has been distributed to individuals or to corporations with negative attributed income. Using the notation that we introduced, this algorithm can be formalized as follows:

\[alg\_1\] Let $E^{(n)}$ be the vector of attributed incomes in the $n$-th iteration of the algorithm, where $E_j^{(n)}$ is the income of taxpayer $j$.
As in each iteration a corporation with negative attributed income does not distribute its income; the iteration can then be defined as $$\label{ec_gen} E^{(n+1)}=E^{(n)}P_{S^{(n)}},\ \text{ where } S^{(n)} = S(E^{(n)}).$$ We want to study whether $\lim_{n\to \infty} E^{(n)}$ exists and is unique, i.e.,, if the value of $E^{(n)}$ converges to a vector of final attributed income denoted by $E^{(\infty)}$. Additionally, we can derive from equation that if $E^{(\infty)}$ does exist, then $$\label{eq:extension} E^{(\infty)}=E^{(0)}\cdot P_{S^{(0)}}\cdot P_{S^{(1)}}\cdot P_{S^{(2)}} \cdots$$ It is important to note that $P_{S^{(n)}}$ is not necessarily different from $P_{S^{(n+1)}}$; indeed, $S^{(n)}$ probably does not change during several iterations. Moreover, there exists an iteration in which all corporations with negative income maintain negative income during all future iterations. Note also that a taxpayer with nonnegative income will remain nonnegative for all remaining iterations. These two observations imply that there is a final matrix $P_{S}$ that is multiplied infinite times to compute the vector of final attributed incomes. Therefore, there is a succession of integer numbers $n_1,\ldots,n_k$ such that the sets $S^{(n)}$ can be written as $$S^{(i)} = \begin{cases} S^{(n_1)} &i=0...n_1\\ S^{(n_j)} & i=n_{j-1}+1\ldots n_j, \ j=2\ldots k-1\\ S^{(n_k)} &i\ge n_{k-1}+1 \end{cases}.$$ Hence, equation can be written as $$\label{ec_gen_2} E^{(\infty)}=E^{(0)}\cdot P_{S^{(n_1)}}^{n_1}\cdot P_{S^{(n_2)}}^{n_2-n_1}\cdot P_{S^{(n_3)}}^{n_3-n_2} \cdots P_{S^{(n_{k-1})}}^{n_{k-1}-n_{k-2}} \cdot P_{S^{(n_k)}}^\infty.$$ The difficulty of the problem results from the fact that some corporations are owners of other corporations. If this does not occur, only one iteration would be necessary to compute the final attributed incomes of all taxpayers. In that case, the vector of final attributed income would be $$E^{(\infty)}=E^{(1)}=E^{(0)}P_{S^{(0)}}.$$ Because corporations are owners of other corporations, a corporation could be an owner of itself. This is why infinite iterations could be required, thus creating complex instances that could require a long time for computation. However, as we show in Section \[calc\], we can use the theory of absorbing Markov chains to compute exactly the vector of final attributed income without iterating infinite times. This theory is the basis for designing the algorithm presented in Section \[algoritmo\]. Computing the final attributed income {#calc} ===================================== We start with two simple examples: 1) When all corporations have positive initial income and 2) When corporations with negative initial income have negative final attributed income. *Final attributed income when all initial incomes are positive*. When all initial incomes are positive, all corporations distribute their income to their shareholders. Therefore, $S^{(n)}=\emptyset,\forall{n\ge0}$, which implies that $P_{S^{n}}=P,\forall{n\ge0}$; hence, we can see from equation that $$\label{all_+} E^{(\infty)}=E^{(0)}P^{\infty}.$$ *Final attributed income when corporations with negative initial income have negative final attributed income*. 
When corporations with negative initial income have negative final income, it is clear that $S^{(n)}=S,\forall{n\ge0}$, which implies that $P_{S^{(n)}}=P_{S^{(0)}},\forall{n\ge0}$, and we know from equation that the matrix $P_{S^{(0)}}$ is known and that $P_{S^{(0)}}\in \mathcal{P}$; therefore, we can see from equation that $$\label{some_very_-} E^{(\infty)}=E^{(0)}P_{S^{(0)}}^{\infty}.$$ In both cases, equations and require computation of $P^{\infty}$ and $P_{S^{(0)}}^{\infty}$, respectively, which has a closed formula, using a formula known from the absorbing Markov chains theory. Analogy with Absorbing Markov Chains ------------------------------------ An absorbing Markov chain has a set of transient states $N_S$ and a set of absorbing states $N_P$, defining a transition matrix $P$, $$P=\begin{bmatrix} Q&R \\ 0&I \end{bmatrix}$$ where $q_{ij}$ is the probability of moving from a transient state $i$ to a transient state $j$ in one time step and $r_{ij}$ is the probability of moving from a transient state $i$ to an absorbing state $j$ in one time step. At the same time, $0$ is a matrix of zeros, which means that the probability of moving from an absorbing state to a transient state is 0, and $I$ is the identity matrix, which implies that the probability of moving from an absorbing state $i$ to an absorbing state $j$ is 1 if $i=j$ and 0 otherwise. The analogy between an absorbing Markov chain and our problem of attribution of incomes is evident. Taxpayers are defined as the states of the Markov chain, in which the percentages of each taxpayer owned by other taxpayers are represented as the transition probabilities of the Markov chain. Thus, corporations with negative initial incomes and individuals are defined as absorbing states, and the remaining corporations are defined as transient states. Moreover, it is known that a transition matrix of an absorbing Markov chain has the same properties as a matrix $P\in \mathcal{P}$. Thus, the matrices $Q$, $R$, $0$ and $I$ forming the matrix $P$ have the same properties and characteristics as the matrices of shares defined in equation . When all initial incomes are positive, the network of taxpayers is modelled as an absorbing Markov chain, in which each individual is an absorbing state and each corporation is a transition state. Conversely, when all corporations with negative initial income have negative final attributed income, corporations with positive initial income remain as transition states, and individuals remain as absorbing states, but corporations with negative initial income are absorbing states. The Chapman–Kolmogorov equation of Markov chains defines the probability of moving from a state $i$ to a state $j$ in $k$ time steps as the component $(i,j)$ of the matrix $P^k$. It is known that in an absorbing Markov chain, when $k$ tends to infinity, the process will be absorbed by one state and remain there forever. Therefore, when there are multiple absorbing states, the question is to compute the probability of being absorbed by a certain state. These are called absorption probabilities, which are precisely $P^\infty=\lim_{k\to \infty} P^k$, where $(P^\infty)_{ij}$ is the probability of being absorbed by an absorbing state $j$ given that the state of the Markov chain is $i$. 
Indeed, $P^\infty$ can be computed as $$\label{P_infinity} P^\infty= \begin{bmatrix} Q^\infty &(\sum_{i=0}^\infty Q^i) \cdot R \\ 0&I \end{bmatrix} = \begin{bmatrix} 0&(I-Q)^{-1}R \\ 0&I \end{bmatrix}$$ Extending the analogy of absorbing Markov chains with the problem shown in this work, component $(i,j)$ of matrix $P^k$ is the percentage of income of taxpayer $i$ distributed to taxpayer $j$ after iteration $k$ with Algorithm \[alg\_1\]. Similarly, the percentage of income of taxpayer $i$ attributed to taxpayer $j$ is component $(i,j)$ of matrix $P^\infty$. Therefore, using formula , we can compute the final attributed income of each taxpayer in both particular cases shown in Examples 1 and 2. General case ------------ In general, some taxpayers with negative initial income have positive final attributed income. Equation defined that $$E^{(\infty)}=E^{(0)}\cdot P_{S^{(n_1)}}^{n_1}\cdot P_{S^{(n_2)}}^{n_2-n_1}\cdot P_{S^{(n_3)}}^{n_3-n_2} \cdots P_{S^{(n_{k-1})}}^{n_{k-1}-n_{k-2}} \cdot P_{S^{(n_k)}}^\infty,$$ where $E^{(0)}$ is the vector of initial incomes, $P_{S^{(n_i)}}$ are existent and unique matrices $\forall i=1,\ldots,k$ and, as we can see in equation , $P_{S^{(n_k)}}^{\infty}$ exists and is unique. Therefore, the vector of final attributed incomes $E^{(\infty)}$ exists and is unique. This ensures that given a vector $E^{(0)}$ and a matrix of shares $P\in \mathcal{P}$, if we iterate with Algorithm \[alg\_1\], we have only one final distribution of the initial income to individuals. However, as we show in the next section, it is unnecessary to distribute the income of all corporations simultaneously. Indeed, if we distribute only the income of arbitrary subsets of corporations (keeping in mind that a corporation with negative income does not distribute), the vector of final attributed incomes is the same as that computed using Algorithm \[alg\_1\]. Mathematical properties of the matrix of shares {#teoria} =============================================== The next lemma shows that it is unnecessary to iterate infinitely, and the only reason to compute the attributed income is to know which corporations will have negative final attributed income $S^{(\infty)}=\{i\in N_S:E_{i}^{(\infty)}<0\}$. \[lem:base\] Let $P\in \mathcal{P}$ be a matrix of shares with a set of corporations $N_S$. Then, for any subset of corporations $S\subseteq N_S$ $$P_S \cdot P^{\infty}=P^{\infty}.$$ Note that this lemma implies that $P_S^k P^\infty = P^\infty$ for any $k=1\ldots \infty$. This will be key for the algorithm proposed in the next section. By equation , the $i$-th row of $P_S \cdot P^\infty$ in the case in which $i\notin S$ is given by $$\left(P_S\cdot P^{\infty}\right)_{i \bullet} = (P_S)_{i \bullet} \cdot P^{\infty} = P_{i \bullet} \cdot P^{\infty} = P^{\infty}_{i \bullet}$$ where the last equality is given because $P\cdot P^{\infty}=P^{\infty}$. In contrast, if $i\in S$, then $$\left(P_S\cdot P^{\infty}\right)_{i \bullet} = (P_S)_{i \bullet} \cdot P^{\infty} = e_i \cdot P^{\infty} = P^{\infty}_{i \bullet}$$ proving the result. The foregoing lemma says that if in one iteration we do not attribute income for a subset of corporations $S$, and in the following iterations we attribute income as usual with matrix $P$, then the final attributed income will be the same as if we do not skip any iterations. 
Moreover, because for all steps $i$, the subset $S^{(n_i)}$ contains the last subset $S^{(n_k)}$, Lemma \[lem:base\] says that $$P_{S^{(n_1)}}^{n_1}\cdot P_{S^{(n_2)}}^{n_2-n_1}\cdot P_{S^{(n_3)}}^{n_3-n_2} \cdots P_{S^{(n_{k-1})}}^{n_{k-1}-n_{k-2}} \cdot P_{S^{(n_k)}}^\infty=P_{S^{(n_k)}}^\infty \label{eq:proc}$$ Moreover, it is also true that $$P_{S^{(n_1)}}^{\infty}\cdot P_{S^{(n_2)}}^{\infty}\cdot P_{S^{(n_3)}}^{\infty} \cdots P_{S^{(n_{k-1})}}^{\infty} \cdot P_{S^{(n_k)}}^\infty=P_{S^{(n_k)}}^\infty. \label{eq:procinfty}$$ Therefore, from equation , the vector of final attributed incomes can be computed as $$E^{(\infty)} = E^{(0)} P_{S^{(n_k)}}^\infty$$ where $S^{(n_k)}=S(E^{(\infty)})$. This property says that if we are able to guess the subset of corporations that will finish with negative attributed income, we require only one step to find this final state. Unfortunately, it is impossible to know a priori the set $S(E^{(\infty)})$ of corporations that finish with negative attributed income from the initial information $P$ and $E^{(0)}$. However, the next theorem shows that when starting to distribute an arbitrary subset of corporations in each iteration (keeping in mind that a corporation with negative attributed incomes does not distribute) and then returning to the usual iterations, the algorithm still converges to the vector of final attributed incomes $E^{(\infty)}$. \[teo:principal\] Let $\{\hat{E}^{(j)}\}_{j\geq 0}$ be a sequence of vectors of incomes such that $$\hat{E}^{(0)}=E^{(0)} \quad \text{ and } \quad \hat{E}^{(j)} = \hat{E}^{(j-1)} P_{T^{(j-1)}} \quad \text{for } j=1\ldots k$$ where $T^{(0)},\ldots,T^{(k-1)}$ are subsets of corporations such that $T^{(j)} \supseteq S(\hat{E}^{(j)})$, and let $$\hat{E}^{(j)} = \hat{E}^{(j-1)} P_{S(\hat{E}^{(j-1)})} \quad \text{for }j>k$$ then, $$\lim_{j\to\infty} \hat{E}^{(j)} = E^{(\infty)}.$$ Note that if $\hat{E}^{(k)}$ satisfies $S(\hat{E}^{(k)})\supseteq S(E^{(\infty)})$, then from Lemma \[lem:base\], we know that $$\label{eq:arg_teo} P_{T^{(0)}} \cdots P_{T^{(k-1)}}\cdot P^\infty_{S(E^{(\infty)})} = P^\infty_{S(E^{(\infty)})}$$ Therefore, $\hat{E}^{(\infty)}=E^{(0)}\cdot P^\infty_{S(E^{(\infty)})} = E^{(\infty)}$. Let us assume that there is an $i^*\in S(E^{(\infty)})$ such that $\hat{E}^{(k)}_{i^*} > 0$. Without loss of generality, we can assume that this is the first iteration in which $i^*$ exists. Therefore, $S(E^{(\infty)}) \nsubseteq S(\hat{E}^{(k)})$ but $S(E^{(\infty)}) \subseteq S(\hat{E}^{(k-1)})$. In this case, equation is still true, and $$\underbrace{E^{(0)}\cdot P_{T^{(0)}} \cdots P_{T^{(k-1)}}}_{\hat{E}^{(k)}}\cdot P^\infty_{S(E^{(\infty)})} = E^{(0)}\cdot P^\infty_{S(E^{(\infty)})} = E^{(\infty)}$$ However, this is impossible because $\hat{E}^{(k)}_{i^*}>0$ and $i^*\in S(E^{(\infty)})$, so $i^*$ does not distribute its income, and component $i^*$ of $\hat{E}^{(k)}\cdot P^\infty_{S(E^{(\infty)})}$ must be greater than 0, resulting in a contradiction. This theorem implies that if we start from $E^{(0)}$, we are able to find an income vector $\bar{E}$ such that $\bar{E}_i\leq 0$ $\forall i\in N_S$, using subsets $T^{(0)},\ldots,T^{(k-1)}$ with the properties noted in the theorem (even if we iterate infinite times with a matrix $P_{T^{i}}$), $\bar{E}$ is the vector $E^{(\infty)}$, because $\bar{E} P_{S(\bar{E})} = \bar{E}$. This assertion is the key to understanding the validity of the algorithm proposed in the next section. 
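To make the preceding construction concrete, the following is a small illustrative sketch in Python/NumPy (not the implementation used later in the paper) of one attribution step: it builds $P_S^{\infty}$ from the blocks of the restricted share matrix via $(I-Q)^{-1}R$ and applies it to a vector of incomes. The toy share matrix, the incomes and the helper name `absorption` are invented for the example.

```python
import numpy as np

def absorption(P, absorbing):
    """P^infinity for a share matrix whose rows indexed by `absorbing` are identity rows."""
    n = P.shape[0]
    absorbing = list(absorbing)
    transient = [i for i in range(n) if i not in set(absorbing)]
    Q = P[np.ix_(transient, transient)]                 # transient -> transient block
    R = P[np.ix_(transient, absorbing)]                 # transient -> absorbing block
    A = np.linalg.solve(np.eye(len(transient)) - Q, R)  # (I - Q)^{-1} R
    P_inf = np.zeros_like(P, dtype=float)
    P_inf[np.ix_(transient, absorbing)] = A
    P_inf[absorbing, absorbing] = 1.0                   # absorbing taxpayers keep their income
    return P_inf

# Toy instance: taxpayers 0 and 1 are corporations, 2 and 3 are individuals.
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.2, 0.0, 0.3, 0.5],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
E0 = np.array([100.0, -20.0, 0.0, 0.0])

S = [1]                                   # corporation 1 currently has negative income
E1 = E0 @ absorption(P, absorbing=S + [2, 3])
print(E1)                                 # -> [0. 30. 50. 0.]: corporation 1 turns positive
```

Repeating this step with an updated set $S(E)$ until no corporation leaves $S$ reproduces the fixed point $E^{(\infty)}$ discussed above.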
Algorithms for computing the final attributed income {#algoritmo}
====================================================

Recall that Algorithm \[alg\_1\] could require an infinite number of iterations to converge to the final attributed income, which is not feasible in practice. This could be limited by iterating until only a small amount remains unattributed, but that could still require many iterations. A different way to compute the vector of final attributed incomes results from Lemma \[lem:base\] and equation , by iteratively computing $$E^{(n+1)}=E^{(n)}P^\infty _{S^{(n)}},\ \text{ where } S^{(n)} = S(E^{(n)}).$$ This procedure will be completed in no more than $|N_S|$ iterations. However, from the computational point of view, it is costly to invert a large matrix. Specifically, inversion of a dense matrix of size $k\times k$ has a computational complexity of $\mathcal{O}(k^{2.3728})$ [see @le2014powers; @coppersmith1987matrix]. Nevertheless, most software, including the de facto standard library LAPACK [@laug], implements a classic $\mathcal{O}(k^3)$ algorithm. Hence, the proposed algorithm to solve the problem can be computationally intractable for a large number of corporations $N_S$.

Theorem \[teo:principal\] proves that we can decompose the problem into smaller subproblems to obtain the final attributed income. A natural way to decompose the problem is by using strongly connected components. Given the participation matrix $P$, we can define a directed graph $G=(V,A)$ in which each vertex $v \in V$ is a taxpayer, and we add an arc $(u,v)\in A$ if and only if $p_{uv}>0$. A strongly connected component of this graph represents a subset of corporations in which all are indirectly owned by themselves. It is a well-known property that any directed graph can be decomposed into a set of strongly connected components such that if we contract each strongly connected component into a vertex, we obtain a directed acyclic graph [see @bang2008digraphs]. Moreover, Tarjan’s algorithm [@tarjan1972depth] allows the decomposition of the graph into strongly connected components and returns an acyclic ordering of them, in time $\mathcal{O}(|V|+|E|)$. Hence, we can apply this decomposition to solve our problem more efficiently. Given an acyclic ordering of the strongly connected components, we can compute the attributed income of each taxpayer in a given component and distribute it to the corresponding shareholders in other strongly connected components. Because this is an acyclic ordering, taxpayers in the initial component will not receive further incomes, so the obtained attributed income will be definitive.

**Algorithm \[alg:main\].** *Input:* taxpayers $N_S,N_P$, participation matrix $P$, initial income $E^{(0)}$. *Output:* attributed income $E$.

1. $E \gets E^{(0)}$.
2. Execute Tarjan’s algorithm to compute an acyclic ordering of the strongly connected components.
3. Process each strongly connected component $C$ in this acyclic order, pushing income distributed outside the component to the corresponding shareholders with updates of the form $E_u \gets E_u + E_v \cdot p_{uv}$, followed by zeroing the distributed entry ($E_u \gets 0$).
4. Within $C$, repeat until $Redo = 0$: set $Redo \gets 0$, $S \gets N_S \setminus \{v\in C : E_v > 0\}$, $E \gets E \cdot P^{\infty}_S$, and set $Redo \gets 1$ whenever a corporation in $C$ that had negative income finishes with positive income.

Pseudocode of the proposed algorithm is presented in Algorithm \[alg:main\]. Note that for each strongly connected component $C$, obtaining the final attributed income of its taxpayers requires inversion of a matrix of size no more than $|C|$, which is repeated each time a taxpayer in $C$ with negative income finishes with positive income. Hence, the computational complexity to obtain the attributed income of a component requires no more than $|C|$ inversions of a matrix of size $|C|$.
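To make the decomposition concrete, here is a small, self-contained sketch of the same idea in Python (using NumPy and networkx rather than the C/BLAS/LAPACK implementation described later). The `absorption` helper re-implements the closed-form limit used above, the toy share matrix and incomes are invented, and the redo test follows the description of Algorithm \[alg:main\]; it is an illustration, not the production code.

```python
import networkx as nx
import numpy as np

def absorption(P, absorbing):
    """P^infinity for a share matrix whose rows indexed by `absorbing` are identity rows."""
    n = P.shape[0]
    absorbing = list(absorbing)
    transient = [i for i in range(n) if i not in set(absorbing)]
    A = np.linalg.solve(np.eye(len(transient)) - P[np.ix_(transient, transient)],
                        P[np.ix_(transient, absorbing)])      # (I - Q)^{-1} R
    P_inf = np.zeros_like(P, dtype=float)
    P_inf[np.ix_(transient, absorbing)] = A
    P_inf[absorbing, absorbing] = 1.0
    return P_inf

def attribute_by_components(P, E0, corporations):
    """Distribute incomes component by component, in an acyclic order of the SCCs."""
    n = P.shape[0]
    E = np.asarray(E0, dtype=float).copy()
    G = nx.DiGraph()
    G.add_nodes_from(range(n))
    G.add_edges_from((u, v) for u in range(n) for v in range(n) if u != v and P[u, v] > 0)
    dag = nx.condensation(G)                       # contract each SCC into a single node
    for c in nx.topological_sort(dag):             # acyclic ordering of the components
        C = set(dag.nodes[c]["members"]) & set(corporations)
        negative = {v for v in C if E[v] < 0}      # members that keep their losses, for now
        while True:
            transient = sorted(C - negative)       # corporations in C that distribute
            if not transient:
                break
            absorbing = [i for i in range(n) if i not in transient]
            E = E @ absorption(P, absorbing)       # inverts a block of size <= |C|
            turned_positive = {v for v in negative if E[v] > 0}
            if not turned_positive:                # redo only if a member changed sign
                break
            negative -= turned_positive
    return E

# Toy instance: corporations 0 and 1 own shares in each other; 2 and 3 are individuals.
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.2, 0.0, 0.3, 0.5],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
print(attribute_by_components(P, [100.0, -20.0, 0.0, 0.0], corporations=[0, 1]))
# -> approximately [0. 0. 63.33 16.67]
```

Each linear solve in this sketch touches only the currently distributing members of a component, so the matrix being inverted never exceeds the component size $|C|$.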
Hence, it will be appropriate to use this algorithm only if the size of the strongly connected components is small, which is expected in a network of corporations, as we will see in the following section. At the end of the algorithm, the attributed income $E$ will be nonpositive for the corporations, and nonnegative for the individuals. Hence, our final attributed income $E$ satisfies that $E=E\cdot P_{S(E)}$, so by Theorem \[teo:principal\] and its following discussion, the obtained income is effectively the vector of final attributed incomes $E^{(\infty)}$. Computational Experiments and Data Analysis {#resultados} =========================================== We have presented an algorithm to obtain the final attributed incomes and we showed that its performance is given by the size of its connected components, so strongly depending on the topological characteristic of the network. Therefore, in this section we provide an analysis of the Chilean taxpayer network to understand its characteristics, and we benchmark our algorithm on this network. The data described in this section are provided by the Chilean taxation authority and correspond to the $2015$ fiscal year. The original taxpayers’ network consists of $1\,240\,809$ individuals and $786\,293$ corporations, which are connected by $2\,568\,182$ links of which $\approx 90\%$ connect corporations with individuals. However, we can simplify this network by removing trivial components, which consist of one corporation that owns nothing and is owned only by individuals. These components are trivial because the distribution of income can be solved in one iteration of Algorithm \[alg\_1\]. After this simplification, the new taxpayer network consists of $356\,372$ individuals and $152\,914$ corporations connected by $1\,122\,875$ links of which $\approx 75\%$ connect corporations with individuals. In the simplified network, corporations own an average of $1.78$ corporations, and the average quantity of owners per corporation is $7.23$, of which $5.48$ are individuals and $1.75$ are corporations. Moreover, $40\%$ and $34\%$ of corporations have in-degrees of $0$ and $1$, respectively, and $45\%$ of corporations have an out-degree of $2$. Moreover, the average number of corporations owned per individual is $2.39$, among which $56\%$ and $20\%$ of individuals have in-degrees of $1$ and $2$, respectively. The complexity of the problem is given by corporations, not individuals. Therefore, we analyze the corporation network by removing individuals. This network has $152\,135$ strongly connected components, of which only $268$ contain more than one node. As we can see in Figure \[fig\_hist\_str\_com\], $200$ of these $268$ components consist of $2$ nodes, and the largest component consists of $396$ nodes. Therefore, using the algorithm described in this paper, the largest matrix that we must invert is a $396 \times{} 396$ matrix. [Size of strongly connected components\[fig\_hist\_str\_com\]]{} ![Size of strongly connected components[]{data-label="fig_hist_str_com"}](hist_str_com "fig:"){width=".7\linewidth"} As we can see in Figure \[fig\_hist\_str\_com\_large\_entry\] and Figure \[fig\_red\_str\_com\_large\], the corporations inside the largest strongly connected component are highly connected. 
Indeed, the average out-degree and in-degree of the nodes of this component are both $8.21$ (considering only the links between two nodes that belong to the component); there are $10$ corporations directly connected to more than $100$ other nodes of the component, and more than $10\%$ of nodes are directly connected to at least $40$ nodes.

![In-degree and out-degree of nodes in largest strongly connected component[]{data-label="fig_hist_str_com_large_entry"}](hist_str_com_large_entry "fig:"){width=".48\linewidth"} ![In-degree and out-degree of nodes in largest strongly connected component[]{data-label="fig_hist_str_com_large_entry"}](hist_str_com_large_exit "fig:"){width=".48\linewidth"}

![Diagram of the largest strongly connected component[]{data-label="fig_red_str_com_large"}](red_str_com_large-crop){width=".5\linewidth"}

Although there is only one complex strongly connected component, many strongly connected components are weakly connected, creating a large weakly connected component of $90\,322$ strongly connected components for a total of $91\,011$ corporations (if we do not remove individuals for the analysis of weakly connected components, the largest weakly connected component would connect $462\,649$ taxpayers, more than $90\%$ of the nodes of the simplified network). This adds complexity to the problem because the algorithm must respect the precedence of the strongly connected components. However, $81\%$ of the weakly connected components consist of no more than $3$ corporations.

The algorithm described in this paper was implemented using the C programming language, using the BLAS/LAPACK libraries to invert the matrices. Note that even if the matrices $Q$ are sparse, the inverse of $(I-Q)$ is not necessarily sparse, so memory must be allocated for the whole matrix $Q$. This makes it impossible to solve the problem using equation directly, because $Q$ has size $152\,914 \times 152\,914$, requiring more than 150 GB of RAM to invert just this matrix. We implemented Algorithm \[alg:main\] for this data instance. The algorithm required 344 matrix inversions to compute the final attributed incomes. The complete algorithm runs in less than 10 s. As a comparison, an implementation of Algorithm \[alg\_1\], iterated until the maximum income of a corporation is less than \$1, requires several hours to finish.

Conclusions
===========

Using an analogy to Markov chains, we construct an efficient algorithm to compute the final attributed income of taxpayers in a pass-through tax system. The complexity of the algorithm is governed by the size of the largest strongly connected component of the taxpayer network. We also prove that this final income is unique and robust to any order in which corporations attribute their incomes. This fact allows us to decompose the problem into strongly connected components, which can be obtained using Tarjan’s algorithm. The decomposition is the key property of the proposed algorithm, allowing us to solve large-scale networks in a few seconds.

An algorithm that computes income taxes sufficiently fast could allow a taxation authority to perform further analyses, such as evaluating the impact of a mixed system in which entities may choose whether to attribute their incomes, forecasting tax collection for future years, evaluating the impact of tax exemptions, or simply computing the income obtained from different types of entities (e.g., foreign companies).
In a more general setting, for any country (with or without pass-through entities), this methodology allows a better estimation of the distribution of wealth to be obtained for the richest deciles, which are usually underestimated in inquiry-based studies [@EstudioBancoMundial].

### Acknowledgments {#acknowledgments .unnumbered}

The authors gratefully acknowledge the Department of Studies of the Servicio de Impuestos Internos, particularly Carlos Recabarren, for introducing us to the problem and its relevance, and for the valuable collaboration that led us to obtain these results.
--- abstract: 'Deep learning has the potential to have the impact on robot touch that it has had on robot vision. Optical tactile sensors act as a bridge between the two subjects by allowing techniques from vision to be applied to touch. In this paper, we apply deep learning to an optical biomimetic tactile sensor, the TacTip, which images an array of papillae (pins) inside its sensing surface analogous to structures within human skin. [Our main result is that the application of a deep CNN can give reliable edge perception and thus a robust policy for planning contact points to move around object contours.]{} Robustness is demonstrated over several irregular and compliant objects with both tapping and continuous sliding, using a model trained only by tapping onto a disk. These results relied on using techniques to encourage generalization to tasks beyond those on which the model was trained. We expect this is a generic problem in practical applications of tactile sensing that deep learning will solve.' author: - 'Nathan F. Lepora$^*$, Alex Church, Conrad De Kerckhove, Raia Hadsell, John Lloyd$^*$[^1][^2] [^3][^4][^5]' bibliography: - 'library.bib' title: ' From pixels to percepts: Highly robust edge perception and contour following using deep learning and an optical biomimetic tactile sensor ' ---

Force and Tactile Sensing; Biomimetics; Deep Learning in Robotics and Automation

INTRODUCTION
============

Robot touch differs from robot vision: to touch, an agent must physically interact with its environment, which constrains the form and function of its tactile sensors (for example, to be robust, compliant and compact). Likewise, tactile perception differs from visual perception: to perceive touch, an agent interprets the deformation of its sensing surface, which depends on the shape and mechanics of the sensor, unlike vision, where the eye does not change what can potentially be seen [@Hayward2011]. Therefore, the application of deep learning to robot touch will be different from robot vision, just as robot vision poses different research questions than computer vision [@Sunderhauf2018a].

Thus far, there have been relatively few studies of deep learning for tactile perception compared with the explosion of work on robot vision. Those studies have mainly considered one particular device – the Gelsight [@Yuan2017], an optical tactile sensor that images the shading from 3 internal RGB-colored LEDs to transduce surface deformation (and complements this with painted markers to detect shear [@Yuan2015]). Use of an optical tactile sensor seems an appropriate starting place for applying deep learning to touch, given the rapid progress for vision.

[Fig. \[fig:1\]: (a) Robot: Arm-mounted sensor; (b) Tactile sensor; (c) Tactile image.]

In this paper, we apply deep learning to an optical biomimetic tactile sensor, the TacTip [@Chorley2009; @Ward-Cherrier2018], which images an array of 3D pins inside its sensing surface (Fig. \[fig:1\]c). [These pins mimic the papillae that protrude across the dermal/epidermal layers of tactile skin, within which mechanoreceptors sense their displacement [@Cramphorn2017].]{} In past work, we used image-processing to track the pin positions [@Lepora2015; @Lepora2016]. These were then passed to a distinct perception system, using statistical models [@Lepora2015; @Lepora2016; @Lepora2017; @Ward-Cherrier2018; @Ward-Cherrier2017a; @Cramphorn2017; @Cramphorn2016], dimensionality reduction [@Aquilina2018] or support vector machines [@James2018].
Here we use a deep Convolutional Neural Network (CNN) for direct (end-to-end) perception from the tactile images. BACKGROUND AND RELATED WORK =========================== Initial applications of deep learning to artificial tactile perception were with taxel-based sensors. The first used a four-digit robot hand covered with 241 distributed tactile skin sensors (with another 71 motor angles, currents and force/torque readings) to recognize 20 grasped objects, using a CNN and $\sim\!1000$ training samples [@Schmitz2014]. There have since been several other studies with taxel-based sensors [@Cao2015; @Meier; @Baishya2016; @Kwiatkowski2017]. More recently, the Gelsight optical tactile sensor has found a natural match with deep learning, beginning with applications of CNNs to shape-independent hardness perception [@Yuan2017c] and grasp stability [@Calandra2017], then considering visuo-tactile tasks for surface texture perception [@Yuan2017b] and slip detection [@Li2018]. The original Gelsight domed form-factor has been modified to a slimmer design for better integration with a two-digit gripper [@Donlon2018], enabling study of tactile grasp readjustment [@Hogan2018] and visuo-tactile grasping [@Calandra2018]. The majority of these studies use a CNN trained with several thousand examples, sometimes including an LSTM layer for sequential information. In this work, we consider tactile edge perception and contour following. In humans, edges and vertices are the most salient local features of 3D-shape [@Plaiser2009; @Lederman1997], and are thus a key sensation for artificial tactile systems such as robot hands or prosthetics [@PonceWong2013a]. Work in robotic tactile contour following dates back a quarter century [@Berger1988; @Berger1991; @Chen1995], with a more recent approach adopting a control framework for tactile servoing [@Li2013]. However, these studies have relied on applying image processing techniques ([*e.g.*]{} image moments) to planar taxel arrays. For curved biomimetic sensors such as the iCub fingertip, another approach is to use a non-parametric probabilistic model of the taxel outputs [@Martinez-Hernandez2013b; @Martinez-Hernandez2013a; @Martinez-Hernandez2017]. The TacTip tactile sensor has been used for contour following, using a combination of servo control and a probabilistic model of the pin displacements [@Lepora2017]. After tuning the control policy, the robot could tap around shapes such as a circle, volute and spiral. However, the trajectories [@Lepora2017 Figs 7-10] were not robust to parameter changes and failed when applied to sliding rather than tapping motion. Controlled sliding using touch is a challenge because the sensing surface undergoes motion-dependent shear due to friction against the object surface [@Chen2018a]. Training data thus differs from the sensor output during the task, which will cause supervised learning methods to fail unless shear invariance is somehow applied. METHODS {#sec:3} ======= Robotic system: Tactile sensor mounted on a robot arm {#sec:3a} ----------------------------------------------------- ### Tactile sensor {#sec:3a1} We use an optical biomimetic tactile sensor developed in Bristol Robotics Laboratory: the TacTip [@Chorley2009; @Ward-Cherrier2018]. The version used here is 3D-printed with a 40mm-diameter hemispherical sensing pad (Fig. \[fig:1\]b) and 127 tactile pins in a triangular hexagonal lattice (Fig. \[fig:1\]c). 
Deformation of the sensing pad is imaged with an internal camera (ELP USB 1080p module; used at $640\times480$ pixels and $30$fps). The pin deflections can accurately characterize contact location, depth, object curvature/sharpness, edge angle, shear and slip. For more details, we refer to recent studies with this tactile sensor [@Ward-Cherrier2018; @Ward-Cherrier2017a; @Ward-Cherrier2017b; @Ward-Cherrier2016a; @Cramphorn2018; @Cramphorn2017; @Cramphorn2016; @Lepora2017; @Lepora2016; @Lepora2016a; @Lepora2015; @Pestell2018; @Aquilina2018; @James2018]. ### Robot arm mounting {#sec:3a2} The TacTip is mounted as an end-effector on a 6-DoF robot arm (IRB 120, ABB Robotics). The removable base of the TacTip is bolted onto a mounting plate attached to the rotating (wrist) section of the arm, then the other two modular components (central column and tip) are attached by bayonet fittings (Fig. \[fig:1\]a). ### Software infrastructure {#sec:3a3} Our integrated sensorimotor framework has four components (Fig. \[fig:2\]): (1) Stacks of tactile images are collected in the OpenCV library, then preprocessed and either saved (for use in training) or used directly for prediction; (2) These images are first cropped and subsampled to $(128\times128)$-pixel grey-scale images (Fig. \[fig:1\]c) and then passed to the Deep Learning system in Keras; (3) The resulting predictions are passed to control and visualization software in MATLAB; (4) The computed change in sensor pose is sent to a Python client that interfaces with a RAPID API for controlling the robot arm. Training is on a Titan Xp GPU hosted on a Windows 10 PC. [The components run in real-time on the tasks (cycle time: $\sim\!1$ms for prediction and $\sim\!100$ms for control).]{} Deep learning system {#sec:3b} -------------------- Two types of CNN architecture are used here (Fig. \[fig:3\]): the first for tapping-based experiments; and the second, more complex architecture, to cope with the additional challenges associated with continuous-contact sliding motion. The first architecture (Fig. \[fig:3\]a) passes a $128\times 128$ grey-scale image through a sequence of convolutional and max-pooling layers to generate a high-level set of tactile features. These features are then passed through a fully-connected regression network to make predictions. This configuration is based on a simple convolutional network pattern [@Goodfellow2016], which was scaled and regularized by restricting the number of filters/ReLUs in each layer and using a dropout of 0.25 between the convolutional stage and fully-connected net. The second architecture (Fig. \[fig:3\]b) was introduced to cope with the effects of non-linear shear on the sensor pin positions during continuous sliding motion. Under this type of motion, the pins tend to be displaced from the positions they occupy normally during tapping and thus their relative positions convey more useful information than their absolute positions. Initially, we tried to encourage the original network architecture (Fig. \[fig:3\]a) to make use of relative pin positions by using data augmentation to introduce randomly shifted copies of the sensor images into the training data (shifting each image randomly by 0-2% in the horizontal and vertical directions on each presentation). However, this alone was not enough to achieve good performance for continuous-contact contour following around objects other than the disk. 
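For reference, the first (tapping) architecture just described can be sketched in Keras roughly as follows. This is a hedged sketch rather than the exact network of Fig. \[fig:3\]a: the number of convolutional blocks, the filter counts, the dense width and the MSE loss are our assumptions, since only the overall pattern (convolution and max-pooling stages, 0.25 dropout, then a fully-connected regressor for the edge pose) is stated here.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_tapping_cnn(input_shape=(128, 128, 1)):
    """Sketch of a conv/max-pool feature stage followed by a small regression head."""
    inputs = keras.Input(shape=input_shape)
    x = inputs
    for filters in (16, 32, 64):                      # assumed filter counts
        x = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
        x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Flatten()(x)
    x = layers.Dropout(0.25)(x)                       # dropout before the dense regressor
    x = layers.Dense(64, activation="relu")(x)        # assumed width
    outputs = layers.Dense(2)(x)                      # predicts (r, theta) for the edge pose
    model = keras.Model(inputs, outputs)
    # Adam with learning rate 1e-4 as stated below; the 1e-6 learning-rate decay is set
    # differently across Keras versions, so it is omitted from this sketch.
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4), loss="mse")
    return model

model = build_tapping_cnn()
model.summary()
```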
We then extended the network by adding an initial convolution stage without any max-pooling layers before the original network (Fig. \[fig:3\]b), which did achieve better performance. Combined with the data augmentation, this allowed the network to learn broader features over larger groups of pixels, allowing the network to capture the spatial relationship between groups of adjacent pins. Once again, we used a dropout of 0.25 between the second convolutional stage and the fully-connected net to help over-regularize the model with respect to the validation and test data.

Both network architectures were trained using the Adam optimizer, with learning rate $10^{-4}$ and learning rate decay $10^{-6}$. [All networks were trained from scratch, using the default Keras weight initializers (‘glorot$\_$uniform’).]{} Limiting the number of filters/ReLUs in each layer, using 0.25 dropout before the fully-connected net and early stopping (patience parameter 5) all helped prevent overfitting.

Task: Tactile servoing along a contour {#sec:3d}
--------------------------------------

Here we consider tasks in which a tactile sensor moves along a continuously-extended tactile feature such as the edge of an object, while rotating to maintain alignment with that feature. [The control policy plans the next point to move along the contour around the object edge.]{} These tasks are performed on a range of objects chosen for a variety of shapes and material properties: a $105\operatorname*{{\rm mm}}$-diameter circular disk, a tear-drop and a clover (all 3D-printed in ABS plastic); a lamina volute with radii of curvature $30, 40, 50, 60\operatorname*{{\rm mm}}$ in $90\deg$ segments and a $5\operatorname*{{\rm mm}}$-wide ridge in a volute spiral with radii of curvature $20, 30, 40, 50, 60\operatorname*{{\rm mm}}$ in $180\deg$ segments (both laser-cut from acrylic sheet); and two objects from the YCB object set [@CAlli2015], one chosen to be compliant (a soft rubber brick) and the other irregular (a plastic banana). Only the circular disk is used to gather training data (at the 12 o’clock position).

[Fig. \[fig:5\]: (a) Position error; (b) Rotation error.]

[Figure: (a) Position error; (b) Rotation error.]

[We use a policy based on tactile servoing of the sensor pose to align with the predicted edge orientation $(r,\theta)$, with three components [@Lepora2017]: (i) a radial move $\Delta r$ along the (predicted) normal to the edge; (ii) an axial rotation of the sensor $\Delta\theta$; and (iii) a translation $\Delta e$ along the (predicted) tangent to the edge. This can be represented by a proportional controller $$\label{eq:1} \Delta r = g_r\left(r_0 - r\right),\hspace{1em} \Delta\theta = g_{\theta} \left(\theta_0-\theta\right),$$ with unit gains $(g_r,g_\theta)=(1,1)$, set-point $(r_0,\theta_0)=(0\operatorname*{{\rm mm}}, 0\deg)$ (the edge pose in these coordinates), and we choose a default step $\Delta e=3$mm, as used previously in ref. [@Lepora2017]. This control policy plans points for following an edge or contour. In the simple case considered here, the actions $(\Delta r,\Delta\theta)=(-r,-\theta)$ correspond to the training-data labels and the tangential step $\Delta e$ is assumed sufficiently small to not lose a curved edge.
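A minimal sketch of this proportional controller (our own Python illustration, not the MATLAB control code of the experimental setup) is given below. The first function is a direct transcription of Eq. \[eq:1\] plus the tangential step; the second converts one step into a planar sensor move, and its sign conventions and use of the sensor yaw `phi` are assumptions for the example.

```python
import math

def servo_step(r_pred, theta_pred, g_r=1.0, g_theta=1.0,
               r_set=0.0, theta_set=0.0, step_e=3.0):
    """Return (dr, dtheta, de): radial correction [mm], axial rotation [deg],
    tangential advance [mm] along the predicted edge (Eq. 1 with default gains)."""
    dr = g_r * (r_set - r_pred)                  # move back onto the edge (normal direction)
    dtheta = g_theta * (theta_set - theta_pred)  # re-align the sensor with the edge
    return dr, dtheta, step_e

def next_pose(x, y, phi_deg, r_pred, theta_pred):
    """Apply one servo step in the horizontal plane; phi_deg is the sensor yaw (assumed)."""
    dr, dtheta, de = servo_step(r_pred, theta_pred)
    phi = math.radians(phi_deg + dtheta)         # rotate first, then translate (assumption)
    x += dr * math.cos(phi) - de * math.sin(phi)
    y += dr * math.sin(phi) + de * math.cos(phi)
    return x, y, phi_deg + dtheta

print(next_pose(0.0, 0.0, 0.0, r_pred=2.0, theta_pred=10.0))
```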
The policy may then be considered as end-to-end from the tactile images to controlled actions.]{} Data collection {#sec:3c} --------------- For training, the tactile robotic system samples a local region of the edge of one object ($105\operatorname*{{\rm mm}}$ disk at 12 o’clock position) over a range of radial positions $r$ and axial (roll) rotation angles $\theta$. Here we used 2000 uniform random tapping contacts sampled over ranges $(-6,+9)\operatorname*{{\rm mm}}$ and $(-45,+45)\deg$, with each tap from $\sim1.5\operatorname*{{\rm mm}}$ above the object, down $5\operatorname*{{\rm mm}}$, over $\sim0.7\sec$ ($\sim20$ frames). The origin $(0\operatorname*{{\rm mm}}, 0\deg)$ has the sensor tip centred on the edge with camera and pin lattice aligned normal to the edge. The training set was split into 1600 samples for learning the weights and 400 for hyperparameter optimization. Another dataset of 2000 contacts over new random positions and angles was used for evaluating perceptual performance. [We used the 7 frames around the peak displaced frame (measured by the change in RMS pixel intensity from the initial frame) to capture data near the deepest contact part of the tap, including some variation over depth but excluding non-contact data.]{} The images were then cropped and subsampled to a $(128\times128)$-pixel region containing the pins (Fig. \[fig:4\]). There is modest scope for improving the results by fine-tuning these experiment parameters. However, trial and error (and experience with the experiment [@Lepora2017]) showed these to be reasonable and natural choices. The only non-obvious choice was to include a random shift between $\pm 1\deg$ of the sensor yaw/pitch in all data: this reduced specialisation in the trained network to small but noticeable non-normal sensor alignments that otherwise biased the angle predictions. RESULTS {#sec:4} ======= End-to-end edge perception from tactile images {#sec:4a} ---------------------------------------------- In a major departure from recent work with the TacTip optical biomimetic sensor, here we predict the percepts directly from tactile images with a deep CNN. Prior work with this sensor has used specialised preprocessing to detect then track the pin positions [@Ward-Cherrier2018; @Ward-Cherrier2017a; @Ward-Cherrier2017b; @Ward-Cherrier2016a; @Cramphorn2018; @Cramphorn2017; @Cramphorn2016; @Lepora2017; @Lepora2016; @Lepora2016a; @Lepora2015; @Pestell2018; @Aquilina2018; @James2018]. Here, this preprocessing is subsumed into the trained neural network. We report the performance of the first CNN architecture (Fig. \[fig:3\]a). During our preliminary investigations, we found that networks with more filters/ReLUs in each layer and less regularization achieved better performance on the validation and test data collected at the same point on the disk as where the training data was collected. However, they failed to generalize well to other regions of the disk or to other objects. Over-regularizing the network beyond the point required for good generalization on the test data helped solve this problem and produced models that perform well on a broader range of tapping-based contour following tasks. The overall perceptual performance is then most accurate near the central positions for all rotations (Fig. \[fig:5\]). In this region ($-3$ to $+3\operatorname*{{\rm mm}}$), errors are generally less than $1\operatorname*{{\rm mm}}$ and $5\deg$. 
Overall, the contacts are less informative further from the edge ($9\operatorname*{{\rm mm}}$ into free space; $-6\operatorname*{{\rm mm}}$ onto the disk), consistent with the edge being no longer visible in those tactile images. Considering only the dependence on position (Fig. \[fig:6\]), the mean absolute errors are $\sim\!0.3\operatorname*{{\rm mm}}$ and $\sim\!2\deg$ in the central region where perception is most accurate (red curve), appropriate for the contour-following tasks below.

(Fig. \[fig:7\], taps: (a) Initial contact; (b) Step size; (c) Contact radius; (d) Contact depth.)

(Fig. \[fig:8\], taps: (a) Volute lamina; (b) Spiral ridge; (c) Foil; (d) Clover; (e) Compliant object; (f) Irregular object.)

(Fig. \[fig:10\], slide: (a) Initial contact; (b) Step size; (c) Contact radius; (d) Contact depth.)

(Fig. \[fig:11\], slide: (a) Volute; (b) Spiral ridge; (c) Foil; (d) Clover; (e) Compliant object; (f) Irregular object.)

Robust contour following around a disk {#sec:4b}
--------------------------------------

The deep CNN model for predicting edge pose angle and radial position is now applied to contour following around the disk with tapping contacts. A servo control policy (Eq. \[eq:1\]) plans contact points to maintain the relative pose to the edge. The completed trajectories are near-perfect circles around the disk under a range of conditions (Fig. \[fig:7\]; Table \[tab:1\]). [Trajectories are repeatable, as indicated by running the experiment from different starting positions relative to the edge (Fig. \[fig:7\]a), with a small offset in the radial displacement (0-1$\operatorname*{{\rm mm}}$) and sensor angle (6-9$\deg$) relative to the edge.]{} Increasing the policy step size from $3\operatorname*{{\rm mm}}$ keeps the trajectory within $2\operatorname*{{\rm mm}}$ of the edge, but offsets the angle by another 4-11$\deg$, consistent with the sensor moving from its predicted location (Fig. \[fig:7\]b; $\Delta e\!=\!6,9\operatorname*{{\rm mm}}$). Taking a set-point radius inside or outside the edge keeps a circular trajectory without contributing to the angle offset (Fig. \[fig:7\]c; $r_0\!=\!-3,+6\operatorname*{{\rm mm}}$). The circular trajectories are also robust to changing the tapping depth (Fig. \[fig:7\]d, depth change $\Delta=-1.5,+2.5\operatorname*{{\rm mm}}$), from shallow taps ($2.5\operatorname*{{\rm mm}}$ above, down $5\operatorname*{{\rm mm}}$) to deep taps ($-1.5\operatorname*{{\rm mm}}$ above, down $5\operatorname*{{\rm mm}}$). Shallow taps advance the sensor angle by $8\deg$ and deep taps lag it by $20\deg$.
This additional offset arises because the task data are taken at a different depth from that used to train the model; although the model is most accurate at the training depth, its performance declines gracefully and the contour-following task is still completed. These results are a major improvement over those obtained in a previous study on the same task with a probabilistic model of pin displacements [@Lepora2017] (comparison in Table \[tab:1\]), which failed to complete the task in many circumstances.

  **experiment**                       **parameter variation**   **probabilistic model [@Lepora2017]**   **deep CNN**
  ------------------------------------ -------------------------- ---------------------------------------- ---------------
  tapping contact (Fig. \[fig:7\]a)    $r_{\rm init}=-6$ mm       1 mm, 23 deg                             0 mm, 6 deg
                                       $r_{\rm init}=0$ mm        2 mm, 25 deg                             1 mm, 6 deg
                                       $r_{\rm init}=9$ mm        2 mm, 41 deg                             0 mm, 9 deg
  tapping contact (Fig. \[fig:7\]b)    $\Delta e=6$ mm            2 mm, 25 deg                             1 mm, 10 deg
                                       $\Delta e=9$ mm            fail                                     1 mm, 17 deg
  tapping contact (Fig. \[fig:7\]c)    $r_0=-2$ mm                fail                                     1 mm, 6 deg
                                       $r_0=+6$ mm                3 mm, 16 deg                             3 mm, 7 deg
  tapping contact (Fig. \[fig:7\]d)    $\Delta=-1.5$ mm           fail                                     0 mm, 8 deg
                                       $\Delta=+2.5$ mm           4 mm, 25 deg                             3 mm, 20 deg
  sliding contact (Fig. \[fig:10\]a)   $r_{\rm init}=-6$ mm       fail                                     1 mm, 15 deg
                                       $r_{\rm init}=0$ mm        fail                                     1 mm, 12 deg
                                       $r_{\rm init}=9$ mm        fail                                     1 mm, 11 deg
  sliding contact (Fig. \[fig:10\]b)   $\Delta e=6$ mm            fail                                     1 mm, 15 deg
                                       $\Delta e=9$ mm            fail                                     3 mm, 34 deg
  sliding contact (Fig. \[fig:10\]c)   $r_0=-3$ mm                fail                                     2 mm, 25 deg
                                       $r_0=+2$ mm                fail                                     2 mm, 9 deg
  sliding contact (Fig. \[fig:10\]d)   $\Delta=-1$ mm             fail                                     1 mm, 18 deg
                                       $\Delta=+3$ mm             fail                                     1 mm, 4 deg

  : Accuracy of exploration around the disk, showing the mean absolute errors of radial position and rotation angle for the trajectories in Figs \[fig:7\],\[fig:10\]. A comparison is shown with the original probabilistic model [@Lepora2017] for each experiment.[]{data-label="tab:1"}

Robustness to non-uniform object shapes
---------------------------------------

To demonstrate further robustness, the task is extended to four fabricated planar shapes chosen to have non-uniform curvature (a volute lamina, spiral ridge, tear drop and clover) and two household objects from the YCB object set [@CAlli2015]: one compliant (a soft rubber brick) and one irregular (a plastic banana). Task completion shows generalization to novel contours differing from the disk edge used in training. For the four fabricated shapes, the completed trajectories match the object shapes (Figs \[fig:8\]a-d). When the radius of curvature is close to that of the disk, the sensor angle aligns to the edge normal. For smaller or larger curvatures, there is an offset in the sensor angle (advancing for smaller and lagging for larger). The large changes in orientation at corners on the volute and foil cause overshoots, but the task still completes. The task also completes on the compliant object (Fig. \[fig:8\]e, rubber brick). Again, the policy advances the edge angle on the straight sections. There is less overshoot at the compliant corners than at those of the rigid objects; however, the angle offset on the straight edges would make turning easier. The task does not quite complete on the irregular object (Fig. \[fig:8\]f, banana). The task was challenging because there are only slight ridges and the height varies by a few millimetres. However, the policy managed to traverse most of the object, failing only at the tip where there is both a sharp change in orientation and no well-defined edge to follow.

Robustness to sliding contact
-----------------------------

A far more demanding test of robustness is to use a continuous sliding motion with the same training data from tapping the disk. We repeat the above experiments, dropping the sensor $3\operatorname*{{\rm mm}}$ ($1.5\operatorname*{{\rm mm}}$ into the object) and collecting data between the exploratory movements ($\sim\!0.15\sec$ duration; $\sim\!5$ frames). The second CNN architecture (Fig. \[fig:3\]b) was used, which has an initial convolution stage without any max-pooling layers to help generalize over tactile features. Under a sliding motion, the task completion was robust to changes in starting point (Fig. \[fig:10\]a), step size (Fig. \[fig:10\]b; $\Delta e\!=\!6,9\operatorname*{{\rm mm}}$), set-point radius (Fig. \[fig:10\]c; $r_0=-3,+2\operatorname*{{\rm mm}}$) and contact depth (Fig. \[fig:10\]d; $-1\operatorname*{{\rm mm}}$ shallower, $+3\operatorname*{{\rm mm}}$ deeper). The range of depths where the task completes is likely greater, but concerns about damaging the sensor limited further testing. The angle offset improved to $4\deg$ with the deepest contact, which gives the best overall trajectory. Three objects (the spiral, clover and compliant brick) were successfully traversed with sliding (Figs \[fig:11\]b,d,e). The two rigid objects have similar sliding and tapping trajectories ([*c.f.*]{} Figs \[fig:8\]b,d and \[fig:11\]b,d). For the rubber brick, the angle offset on the straight edges appears less than for the tapping motion, but there is a larger overshoot around the corners ([*c.f.*]{} Figs \[fig:8\]e and \[fig:11\]e). We interpret this as a consequence of the sliding motion inducing a shear of the sensing surface that makes corners more challenging.
[The policy failed on the other three objects (the volute, tear drop and banana) at the sharp corners (Figs \[fig:11\]a,c,f), but was successful otherwise. The first two were objects with large overshoots at the corners when tapping (Figs \[fig:8\]a,c), which appear to have caused the failure under the more demanding condition of sliding. The policy also failed at the tip of the banana for both tapping (Fig. \[fig:8\]f) and sliding motion (Fig. \[fig:11\]f), where there is no well-defined edge to follow.]{}

Discussion
==========

This work is the first application of deep learning to an optical biomimetic tactile sensor, and the first such application to tactile edge perception and contour following. We found robust generalization of the contour-following policy to tasks beyond those on which the model was trained, such as continuous sliding around compliant or irregular objects after training with taps on part of a disk. We used two techniques to encourage generalization. The first was to anticipate nuisance variables to marginalise out, and then either modify the data collection (e.g. training over shifts in yaw/pitch) or augment the data set with artificially generated data (e.g. randomly shifting frames). The second technique was to over-regularize the architecture to avoid specialization to the training task; this may work because it encourages the development of simpler features throughout the network. In both cases, we introduce inductive bias into the model to improve performance in situations different from its training. This is necessary because generalizing beyond the task a model is trained on cannot be reliably achieved by trying to validate on data from the original task.

We emphasise that the generalization from discrete tapping contacts to continuous sliding motion is a challenging test for the policy. Sliding causes a friction-dependent shear of the sensing surface that depends on the motion direction and recent history of the interaction [@Chen2018a]. Hence, the tactile data during a task can differ in complex ways from those during training. Although this caused our previous statistical model to fail [@Lepora2017], the deep learning model performed robustly.

[The principal failure mode of the deep learning model was on sharp corners under sliding motion. This is unsurprising, as the model was only trained on edge data from the disk, so corners are both outside its experience and give a singularity in the prediction. The model degraded gracefully, with corners successfully followed with a tapping motion, albeit with some overshoot, and also for sliding around reflex angles and the compliant object. In principle, this limitation could be solved by crafting a more complete policy that can predict points around corners, [*e.g.*]{} by training on corners of various angles. A complementary method would be to adapt the step size of the policy based on the predicted curvature of the object.]{}

In our view, the greatest benefit of using a deep CNN to learn a tactile control policy is its capability to generalize beyond the training data. Previous studies with the same biomimetic tactile sensor found good performance when the task and training were similar [@Lepora2015; @Lepora2016; @Lepora2016a; @Lepora2017]; however, we were aware that these results were fragile to small changes in the task ([*e.g.*]{} sensor orientation). Since practical applications of tactile sensing require robust performance in situations beyond those previously experienced, we expect this is a generic problem in robot touch that deep learning will solve.
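As a minimal illustration of the frame-shifting augmentation mentioned in the Discussion, a possible implementation is sketched below; the shift range and function name are our own assumptions rather than values reported in the paper.

```python
import numpy as np

def augment_shift(image, max_shift=3, rng=None):
    """Randomly translate a (H, W) tactile image by up to max_shift pixels in
    each direction, padding with edge values so the output keeps its shape."""
    rng = np.random.default_rng() if rng is None else rng
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    padded = np.pad(image, max_shift, mode='edge')
    h, w = image.shape
    return padded[max_shift + dy: max_shift + dy + h,
                  max_shift + dx: max_shift + dx + w]
```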
[*Acknowledgements:*]{} We thank NVIDIA Corporation for the donation of the Titan Xp GPU used for this research [^1]: Manuscript received: September 10, 2018; Revised December 6, 2018; Accepted January 23, 2019. [^2]: This paper was recommended for publication by Editor Dan Popa upon evaluation of the Associate Editor and Reviewers’ comments. This work was supported by an award from the Leverhulme Trust on ‘A biomimetic forebrain for robot touch’ (RL-2016-39). \* NL and JL contributed equally to this work. [^3]: $^{1}$NL, AC, CDK and JL are with the Department of Engineering Mathematics and Bristol Robotics Laboratory, University of Bristol, Bristol, U.K. [{n.lepora, ac14293, cd14192, jl15313}@bristol.ac.uk]{} [^4]: $^{2} $RH is with Google DeepMind. [raia@google.com]{} [^5]: Digital Object Identifier (DOI): see top of this page.
--- abstract: 'Quantum chaos can be characterized by an exponential growth of the thermal out-of-time-order four-point function up to a scrambling time ${\hat{u}}_*$. We discuss generalizations of this statement for certain higher-point correlation functions. For concreteness, we study the Schwarzian theory of a one-dimensional time reparametrization mode, which describes $AdS_2$ gravity and the low-energy dynamics of the SYK model. We identify a particular set of $2k$-point functions, characterized as being both “maximally braided” and “$k$-OTO", which exhibit exponential growth until progressively longer timescales ${\hat{u}}_*^{(k)} \sim (k-1) \, {\hat{u}}_*$. We suggest an interpretation as scrambling of increasingly fine-grained measures of quantum information, which correspondingly take progressively longer time to reach their thermal values.' author: - 'Felix M. Haehl' - Moshe Rozali bibliography: - 'OTOreferences.bib' title: 'Fine-Grained Chaos in $AdS_2$ Gravity' --- Introduction ============ The out-of-time-order (OTO) four-point function $F({\hat{u}}) = \langle V({\hat{u}}) W(0) V({\hat{u}}) W(0) \rangle/(\langle VV \rangle \langle WW \rangle)$ in a thermal state serves as a diagnostic of quantum chaos [@Larkin:1969aa; @Shenker:2013pqa; @Shenker:2013yza; @Leichenauer:2014nxa; @Maldacena:2015waa; @Kitaev:2015aa]. A manifestation of this is the existence of a time regime where the (connected and regularized) part of $F({\hat{u}})$ grows exponentially:[^1] $F({\hat{u}})_{conn.} \sim e^{\lambda_L({\hat{u}}-{\hat{u}}_*)}$. The [*scrambling time*]{} ${\hat{u}}_*$ is larger than the typical timescale of thermal dissipation by a factor of the logarithm of the entropy of the system. It has thus been suggested that it quantifies a more fine-grained aspect of thermalization, a process that has been coined [*scrambling*]{} [@Hayden:2007cs; @Sekino:2008he; @Lashkari:2011yi]. In this letter we aim to explore generalizations of these statements. We consider higher-point correlation functions in OTO configurations. We will suggest a particular generalization of the four-point chaos correlator, which we call the “maximally braided” OTO correlator. As we will see, the maximally braided $2k$-point function is a function of $k$ Lorentzian insertion times and has several interesting features: 1. There exist Lorentzian insertion time configurations for which it exhibits exponential growth up until a time ${\hat{u}}_*^{(k)} \sim (k-1) {\hat{u}}_*$. These configurations are such that the correlator is [*maximally OTO*]{}, i.e., they display the highest possible number of switchbacks in real time. 2. The Lyapunov exponent describing the speed of this growth is the same $\lambda_L$ as for the four-point function. The longer time scales are associated with the higher-point correlators being more fine grained quantities, thus can be made progressively smaller initially. We demonstrate these features in a particular model, which is known to be maximally chaotic (i.e., the Lyapunov exponent is as large as universally allowed in any quantum system, $\lambda_L = \frac{2\pi}{\beta}$ [@Maldacena:2015waa; @Maldacena:2016hyu; @Kitaev:2017awl]): the Schwarzian theory of a single time reparametrization mode, describing the fluctuations of the location of the boundary in $AdS_2$ gravity coupled to scalar matter fields. OTO Correlation Functions ========================= Backreaction in $AdS_2$ ----------------------- Our starting point is the calculation of backreaction of matter fields in Euclidean $AdS_2$ space. 
We follow previous discussions in [@Almheiri:2014cka; @Maldacena:2016upp; @Engelsoy:2016xyb; @Jensen:2016pah], which the reader is invited to consult for further details. The gravitational action reduces to a boundary term, which describes the dynamics of the soft mode $t(u)$: $$\label{eq:Igrav} -I_{grav}=\frac{1}{\kappa^2}\int du \, \bigg[-\frac{1}{2}\left(\frac{t''}{t'}\right)^2+\left(\frac{t''}{t'}\right)' \bigg]$$ This is the Schwarzian action, which is determined by a pattern of spontaneous and explicit conformal symmetry breaking. The coupling $\kappa$ is our expansion parameter: in gravity it is proportional to $G_N^{1/2}$ (the bulk Newton constant) and it scales as $N^{-1/2}$ in the SYK model. Note that the SYK model [@Kitaev:2015aa; @Maldacena:2016hyu] has an additional energy scale $J$, which appears in the gravity calculation as a UV cutoff. The dominance of the soft modes over the massive modes of the SYK model, for certain quantities, stems from those quantities being UV sensitive. We believe this is the case for the special class of correlation functions discussed here, and therefore that the time scales we unravel are also relevant to the SYK model. However, for simplicity we restrict our attention to the purely gravitational calculation, representing the contribution of the soft mode to correlation functions. We couple the gravity theory to a matter action which represents external massless particles: $$\label{matter} -I_{matter}= \frac{1}{2\pi}\int du_1 du_2 \frac{t'(u_1) t'(u_2)}{(t(u_1)-t(u_2))^2}\,j(u_1)j(u_2)$$ where $j$ is a source for the (dimension 1) operator whose correlator we are calculating. To compute correlators perturbatively in a black hole background, we transform $t(u)=\tan(\frac{\tau(u)}{2})$, corresponding to working with temperature $\beta = 2 \pi$, and expand around the saddle: $\tau(u)=u+\kappa \, {\upvarepsilon}(u)$. To leading order in $\kappa$ the Schwarzian action gives a quadratic term, and hence a propagator for the mode ${\upvarepsilon}(u)$. This propagator can be written as $$\label{prop} \begin{split} \langle {\upvarepsilon}(u){\upvarepsilon}(0)\rangle&= \frac{1}{2 \pi}\bigg[\frac{ 2 \sin \, u - (\pi +u)}{2} \,(\pi+u) \\ &\qquad\qquad\qquad + 2 \pi \Theta(u) (u-\sin \, u) \bigg] \end{split}$$ where we take the coefficients $a,b$ appearing in [@Maldacena:2016upp] to zero (this corresponds to a gauge choice). Further expansion of the Schwarzian action gives self-interaction terms for ${\upvarepsilon}(u)$, suppressed by factors of $\kappa$. These are required for calculating general correlation functions. We will see that those interactions terms are not needed for our purposes. Similarly, we can expand the matter action (\[matter\]). We write the expansion in $\kappa$ as $$-I_{matter} = \frac{1}{2\pi} \int du_1 du_2 \, \frac{j(u_1)j(u_2)}{4\sin^2 (\frac{u_{12}}{2})} \sum_{p\geq 0} \kappa^p \, {\cal B}^{(p)}(u_1,u_2)$$ where $u_{12} \equiv u_1-u_2$. The leading order contribution comes from the two-point function in the absence of backreaction. It is the conformal correlator at finite temperature, i.e., $\B[0]= 1$. 
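For readers who wish to experiment with these ingredients, a minimal sympy sketch of the soft-mode propagator (\[prop\]) at $\beta=2\pi$ is given below; the variable names are ours and the snippet is purely illustrative.

```python
import sympy as sp

u = sp.symbols('u', real=True)

# Soft-mode propagator <eps(u) eps(0)> at beta = 2*pi, Eq. (prop).
prop = (1/(2*sp.pi)) * (
    ((2*sp.sin(u) - (sp.pi + u))/2) * (sp.pi + u)
    + 2*sp.pi*sp.Heaviside(u)*(u - sp.sin(u))
)

# Only the step-function piece survives the truncation used later for the
# maximally braided correlators (appendix on technical simplifications).
prop_trunc = sp.Heaviside(u)*(u - sp.sin(u))

print(sp.simplify(prop.subs(u, 0)))   # should evaluate to -pi/4
```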
We will also need the first and second order expansions, corresponding to the way the matter sources the soft mode ${\upvarepsilon}(u)$ to orders $\kappa$ and $\kappa^2$ [@Sarosi:2017ykf]: $$\begin{split} {{\cal B}^{{\scriptscriptstyle(1)}}}(u_1,u_2)&= {\upvarepsilon}'(u_1)+{\upvarepsilon}'(u_2)-\frac{{\upvarepsilon}(u_1)-{\upvarepsilon}(u_2)} {\tan(\frac{u_{12}}{2})} \\ \B[2](u_1,u_2)&= \frac{1}{4 \sin^2(\frac{u_{12}}{2})} \Big[ (2+\cos u_{12})\, ({\upvarepsilon}(u_1)-{\upvarepsilon}(u_2))^2 \\ & + 4 \sin^2 \big(\frac{u_{12}}{2}\big)\,{\upvarepsilon}'(u_1){\upvarepsilon}'(u_2) \\ & - 2 \,\sin u_{12} \,\left( {\upvarepsilon}(u_1) - {\upvarepsilon}(u_2) \right) \left( {\upvarepsilon}'(u_1)+{\upvarepsilon}'(u_2)\right) \Big] \end{split} \label{Bfactor}$$ In order to compute a Euclidean $2k$-point function up to ${\cal O}(\kappa^n)$, one has to sum the relevant diagrams arising from this expansion: first, one writes all possible products of $k$ instances of $\B[p_i](u_{2i-1},u_{2i})$, which are relevant at $n$-th order in perturbation theory (i.e., $\sum_i p_i \leq n$). In this product, one then contracts ${\upvarepsilon}$’s either with propagators (\[prop\]), or with higher-point vertices arising from expanding the action to higher orders in $\kappa$. This quickly gets complicated (see appendix \[app:sixpt\] for examples). We will now present a particularly interesting class of observables for which this task simplifies considerably.

Systematics of the Calculation {#sec:conventions}
------------------------------

Consider coupling the Schwarzian theory, describing gravity in $AdS_2$ space, to $k$ distinguishable matter fields representing the coupling to external operators $V_i$ with $i=1,...,k$. Our aim is to calculate $2k$-point correlation functions involving the operators $V_1(u_1), V_1(u_2), \ldots , V_k(u_{2k -1}) ,V_k(u_{2k})$. We proceed as follows: $(1)$ We calculate the Euclidean correlators. Without loss of generality, for each pair of insertions of the same operator, say $V_i(u_{2i-1})$ and $V_i(u_{2i})$, we order the Euclidean times as $u_{2i-1}>u_{2i}$. The remaining relations between Euclidean insertion times determine the order in which the operators occur in the correlation function. $(2)$ Then, to discuss Lorentzian times we analytically continue by setting $u_r \rightarrow \delta_r + i {\hat{u}}_r$ for all $r=1,\ldots,2k$. We then analyze the late time dependence on Lorentzian times ${\hat{u}}_r$. $(3)$ Ultimately we are interested in putting equivalent operators at coincident Lorentzian times, ${\hat{u}}_{2i-1} = {\hat{u}}_{2i}$. The short time regulators $\delta_r$ (which are ordered in the same way as the original Euclidean times) serve to regulate the divergence in this limit. We write below terms at leading order in $\delta_{ij} \equiv \delta_i - \delta_j$, which are universal in the sense that they contain the exponential behavior we are interested in (see the discussion in [@Roberts:2014ifa]).

We start by discussing the computation of Euclidean correlators. The Euclidean time ordering determines the ordering of operators in the correlator. We are interested in a specific set of orderings, which we will call [*maximally braided*]{} correlators, for which the calculation becomes particularly simple. To describe those correlators we need to introduce some conventions. The backreaction calculation involves, in intermediate steps, Heaviside $\Theta$-functions resulting from the propagator (\[prop\]) of the soft mode. Organizing these will be crucial.
We choose to write all step functions canonically as $\Theta(u_i-u_j)$ with $i>j$, using $\Theta(x)=1-\Theta(-x)$. We then use the configuration of these step functions to uniquely characterize the different possible operator orderings of the correlation function. For example, the time ordered correlator $\langle V_1(u_1) V_1(u_2) \cdots V_k(u_{2 k -1}) V_k(u_{2k}) \rangle$, with the canonical ordering $u_1 > u_2> \ldots > u_{2k}$, is characterized as being the term in the generic Euclidean $2k$-point function with no step functions. Since we are interested in the exponential growth in the chaos regime, we will only keep terms that are dominant at late times. Longest living modes can be characterized as a coefficient in the generic Euclidean correlator with the maximum number of step functions. It is simpler to evaluate, and subtracting off all other time orderings does not influence the information we are interested in. Maximally Braided Correlator ---------------------------- Our subtracted maximally braided correlator can be characterized by the appearance of precisely $k-1$ step functions, “braiding" every pair of operators with the consecutive pair. Elementary combinatorics shows that this characterization is equivalent to computing a product of commutators (see appendix \[app:simplifications\] for details). We thus define our basic observable of interest as $$\label{eq:F2kdef} F_{2k}(u_1,\ldots,u_{2k}) = \frac{ \big{\langle} V_1(u_1) [V_2(u_3), V_1(u_2)]\, [V_3(u_5), V_2(u_4)]\, [V_4(u_7), V_3(u_6)] \cdots [V_{k}(u_{2k-1}), V_{k-1}(u_{2k-2})] V_{k}(u_{2k}) \big{\rangle} }{ \langle V_1(u_1) V_1(u_2) \rangle \cdots \langle V_k(u_{2k-1}) V_k(u_{2k}) \rangle}$$ The maximally braided configuration is obtained by dropping all commutator brackets (see Fig. \[general\]). The commutators in $F_{2k}$ serve to subtract subleading pieces from the maximally braided configuration. $F_{2k}$ is then just the coefficient of a term in the generic Euclidean correlator with $k-1$ step functions. We argue below that to leading order in perturbation theory, $F_{2k}$ can be computed using only the Feynman diagrams of the type illustrated in Fig. \[general\]. ![General maximally braided $2k$-point correlator (first term obtained by expanding out commutators in $F_{2k}$): only diagrams of the type shown contribute to the connected correlator $F_{2k}$ at leading order in $\kappa$. The arrangement of insertions along the circle indicates the ordering in Euclidean time.[]{data-label="general"}](chained2.png "fig:"){width=".28\textwidth"} (-139,13)[$1$]{} (-70,0)[$2$]{} (-102,-7)[$3$]{} (-50,18)[$5$]{} (-42,40)[$4$]{} (-50,71)[$7$]{} (-68,93)[$6$]{} (-97,100)[$9$]{} (-123,93)[$8$]{} (-153,30)[$2k$]{} (-150,40)[$\vdots$]{} (-137,83)[$\iddots$]{} (0,80)[$\;= \; {{\cal B}^{{\scriptscriptstyle(#1)}}}(u_{2i-1},u_{2i})$]{} (0,46)[$\;= \; \B[2](u_{2i-1},u_{2i})$]{} (0,15)[$\;= \; \langle {\upvarepsilon}{\upvarepsilon}\rangle$]{} $\qquad\qquad\qquad$ Note that thus far we are discussing the Euclidean time ordering, or equivalently the operator ordering in the correlator. This determines the combinatorics of the calculation, and is the source of the simplification we exploit. We discuss the independent issue of the Lorentzian time ordering, which is crucial to the understanding of the different time scales involved in the correlator we compute, further below. Example: OTO Four-Point Function -------------------------------- Consider the correlator $\langle V_1(u_1)[V_2(u_3),V_1(u_2)]V_2(u_4) \rangle$. 
We demonstrate here the simplified calculation that picks out this particular combination (which describes precisely the dominant term in the chaos regime), without the need to calculate the full Euclidean or Lorentzian 4-point function. We then generalize that process for higher-point functions. Using the simplifications described in appendix \[app:simplifications\], we compute $F_4$ as the coefficient of $\Theta(u_{32})$ in the exchange of a soft mode between two bilinears: $$\label{2OTO} \begin{split} F_4 &=\kappa^2 \, \langle {{\cal B}^{{\scriptscriptstyle(1)}}}(u_1,u_2)\, {{\cal B}^{{\scriptscriptstyle(1)}}}(u_3,u_4)\rangle\big{|}_{\Theta(u_{32})} + {\cal O}(\kappa^3) \\ & = \left\{ \frac{4\,\kappa^2}{\delta_{12}\delta_{34}}\, [ (u_{23}-\sin u_{23}) ] + {\cal O}(\delta_{ij}^{-1}) \right\} + {\cal O}(\kappa^3) \end{split}$$ Introducing $\delta_{ij}\equiv \delta_i - \delta_j$, we have already used the benefit of hindsight and extracted the leading divergence as $\delta_{ij}\rightarrow 0$ for the analytic continuation $u_r \rightarrow \delta_r + i {\hat{u}}_r$ with ${\hat{u}}_1={\hat{u}}_2$, ${\hat{u}}_3 = {\hat{u}}_4$. We can thus complete the analytic continuation to the OTO chaos region by simply setting $u_{23} \rightarrow i {\hat{u}}_{23}$.[^2] The term $\sin u_{23}$ in (\[2OTO\]) then gives an exponentially growing term $e^{\lambda_L |{\hat{u}}_2-{\hat{u}}_3|}$, with $\lambda_L =\frac{2 \pi}{\beta}= 1$ for $\beta= 2 \pi$, as expected. The time scale associated with this exponential growth, where the correlator becomes of order one, is the [*scrambling time*]{} $\hat{u}_* \sim \log(\kappa^{-2}) \sim \log(G_N^{-1}) \sim \log(N)$, or with units: ${\hat{u}}_* \sim \frac{\beta}{2\pi} \, \log(\frac{2\pi}{\beta \kappa^2}) $. Indeed, (\[2OTO\]) is the result obtained by evaluating the full 4-point function, specializing to the operator ordering $\langle V_1(u_1)V_2(u_3)V_1(u_2)V_2(u_4) \rangle$, subtracting off the time-ordered part, and expanding in small $\delta_{ij}$ (c.f. [@Maldacena:2016upp]). Note that the exponentially growing factor is associated with one soft mode propagator, relating the even and odd parts of two matter perturbations. We see below that such a pattern persists for higher-point correlators, where one such exponential factor is associated with any exchange of operators relative to the canonical ordering. Any such exchange is reflected by the presence of a (canonically ordered) step function we use to organize the calculation. Each such step function is accompanied by a similar propagator factor and hence by an exponentially growing mode. This is the basic structure of the results derived below.

Higher-Point Correlators
========================

Consider the six point function $F_6$ as defined in (\[eq:F2kdef\]), following the process outlined and demonstrated in the previous section. The combination $\langle V_1(u_1) [V_2(u_3), V_1(u_2)] [V_3(u_5), V_2(u_4)] V_3(u_6)\rangle $ is obtained from the generic Euclidean six-point function by isolating the terms involving the product of step functions $\Theta(u_{32}) \Theta(u_{54})$. We claim that the necessary presence of this product of step functions specifies a unique diagram that can contribute to the (connected and subtracted) maximally braided correlator, to leading order in $\kappa$. Indeed the diagram depicted in Fig. \[general\] (for $k=3$) contains the minimal ingredients necessary to produce the two step functions defining the maximally braided ordering we are interested in. Such a diagram is of order $\kappa^4$.
Other diagrams of the same order, for example disconnected ones or those involving a 3-point self-interaction of the soft mode, will have fewer step functions. They contribute only to other correlators, where the braiding is less than maximal, or get subtracted off in the combination $F_6$. Similarly, diagrams involving more than two ${\upvarepsilon}$-propagators contribute to $F_6$ but at higher orders in $\kappa$. We are therefore faced with the relatively easy calculation of the following contribution to Fig. \[general\]: $$\label{eq:F6} F_6 = \kappa^4 \, \langle {{\cal B}^{{\scriptscriptstyle(1)}}}(u_1,u_2) \, \B[2](u_3,u_4)\,{{\cal B}^{{\scriptscriptstyle(1)}}}(u_5,u_6)\rangle \big{|}_{\Theta(u_{32})\Theta(u_{54})}$$ up to corrections of ${\cal O}(\kappa^5)$. Since we will eventually set $u_r = \delta_r + i {\hat{u}}_r$, we can further use the simplifications of appendix \[app:simplifications\]. The result to leading order in $\kappa$ and to leading order in the regulators $\delta_{ij}$ is $$\label{eq:F6res} \begin{split} F_6&\sim\frac{24\,\kappa^4}{\delta_{12}\delta_{34}^2\delta_{56}}\, (u_{23}-\sin u_{23})(u_{45}-\sin u_{45}) \end{split}$$ In appendix \[app:sixpt\] we illustrate how to calculate the full six-point function and reproduce this simple result for the maximally braided subtracted correlator. The calculation of the eight-point function is similar. To leading order in $\kappa$ and $\delta_{ij}$ we find: $$\label{eq:F8res} \begin{split} F_8 &\sim \frac{144\,\kappa^6}{\delta_{12}\delta_{34}^2\delta_{56}^2\delta_{78}}\, \prod_{i=1}^3\; (u_{2i,2i+1} - \sin u_{2i,2i+1}) \end{split}$$ Similar results are obtained for higher order maximally braided correlators $F_{2k}$. Those continue to obey the pattern evident from extrapolating (\[eq:F6res\]) and (\[eq:F8res\]).

Lorentzian Times {#sec:lorentzian}
================

We now turn to the analytic continuation $u_r \rightarrow \delta_r + i {\hat{u}}_r$ in more detail. Our assumptions so far concerned the Euclidean time ordering, and the first term in $F_{2k}$ (dropping all commutators) corresponds to the choice $\delta_1 > \delta_3 > \delta_2 > \delta_5 > \ldots$. The late time growth indicating quantum chaos is, however, sensitive to the ordering of real Lorentzian times ${\hat{u}}_r$. As we will now see, there is an independent way to characterize the real time ordering of the correlator. The [*proper-OTO number*]{} of $F_{2k}$ is determined by the real time ordering and it affects both the associated Lyapunov exponents and the associated scrambling time scales. We will see that the correlator we discuss involves the time scale $\hat{u}_*$, but also longer time scales, depending on the proper-OTO number.

Types of OTO Correlators
------------------------

Our maximally braided correlators involve $k$ swaps of neighbouring operators as compared to the canonical (time ordered) configuration. They also have the distinguishing feature that they can be [*maximally OTO*]{}: their analytic continuation allows for configurations that are as much out of time order as any $2k$-point function can be. The [*proper-OTO number*]{} indicates the minimal number of switchbacks in the complex time contour that is required to represent a correlator [@Haehl:2017qfl]. The proper-OTO number of a $2k$-point function is at most $k$. In the case of $F_{2k}$, maximal OTO number is achieved by the real time ordering ${\hat{u}}_1={\hat{u}}_2 > {\hat{u}}_3 = {\hat{u}}_4 > \ldots > {\hat{u}}_{2k-1} = {\hat{u}}_{2k}$, which we focus on. The associated contour is shown in Fig. \[contour\].
Most other configurations of real times lead to a smaller proper-OTO number (i.e., the correlator can be represented on a contour with fewer switchbacks). We now show the significance of this characterization of the possible Lorentzian time orderings of our correlators. ![Complex time contour representation of the maximally braided correlator. We show the first (and dominant) term in the expansion of commutators in $F_{2k}$. Lorentzian time runs horizontally. We depict the Lorentzian time configuration which is maximally OTO. Operators are separated by small imaginary times, which enforces the operator ordering along the contour.[]{data-label="contour"}](chainContour.png "fig:"){width=".4\textwidth"} (-183,168)[$V_k$]{} (-183,129)[$V_k$]{} (-153,155)[$V_{k-1}$]{} (-153,105)[$V_{k-1}$]{} (-122,130)[$V_{k-2}$]{} (-122,80)[$V_{k-2}$]{} (-91,105)[$V_{k-3}$]{} (-55,31)[$V_{2}$]{} (-24,19)[$V_{1}$]{} (-24,57)[$V_{1}$]{} (-197,12)[$\beta$]{} Time Scales ----------- Let us now discuss the physical significance of the proper-OTO number. Using the result from the previous section, we have the following behavior for real time separations $|{\hat{u}}_{2i} - {\hat{u}}_{2i+1}| \gg 1 =\frac{\beta}{2\pi}$:[^3] $$\label{eq:F2exp} \begin{split} F_{2k} &\sim {\cal N}\, \frac{\exp \left( \sum_{i=1}^{k-1}|{\hat{u}}_{2i} - {\hat{u}}_{2i+1}|-(k-1)\, {\hat{u}}_* \right)}{\delta_{12}\delta_{34}^2 \cdots \delta_{2k-3,2k-2}^2\delta_{2k-1,2k}} \end{split}$$ with scrambling time ${\hat{u}}_* \sim \log (\kappa^{-2})$, associated with the growth of the 4-point function. The normalization ${\cal N}$ is ${\cal O}(1)$ and has an alternating sign depending on the sign of ${\hat{u}}_{2i}-{\hat{u}}_{2i+1}$. Note the appearance of the term $(k-1)\, {\hat{u}}_{*}$ in the exponent, reflecting the fact that the connected $2k$-point functions are proportional to $\kappa^{2 (k -1)}$. Depending on the real time ordering, the connected $2k$-point function $F_{2k}$ exhibits different growth patterns as function of different time separations. We focus on the proper $k$-OTO configurations: these are maximally OTO, i.e ${\hat{u}}_{2i} > {\hat{u}}_{2i-1}$ for all $i$. The time differences in the exponent in are then all positive and cancel telescopically (recalling that we set ${\hat{u}}_{2i} = {\hat{u}}_{2i-1}$ for all $i$), yielding $F_{2k} \sim e^{{\hat{u}}_1 - {\hat{u}}_{2k-1} - (k-1) {\hat{u}}_*}$. Thus the correlator in this case is a function of a single time separation ${\hat{u}}_{1,2k-1}$, corresponding to a measurement which is only sensitive to the total duration of the experiment. Despite being scrambled in different “channels", the chaotic growth of $F_{2k}$ does not saturate after the scrambling time ${\hat{u}}_*$ and continues unabated until ${\hat{u}}_{1,2k-1}$ reaches the [*$k$-scrambling time*]{} $${\hat{u}}_*^{(k)} \sim (k-1) {\hat{u}}_* \,.$$ The Lyapunov exponent for this growth is still $\lambda_L^{(k)} = 1 = \frac{2\pi}{\beta}$, but the longer time scale is associated with our chosen correlators being sensitive to more fine grained quantum chaos: they start off smaller and continue to grow for a longer time. Let us now discuss briefly configurations with less than maximal OTO-number. For example, proper $(k-1)$-OTO configurations are obtained by swapping the order of a single pair of real times, say ${\hat{u}}_1$ and ${\hat{u}}_3$, giving a correlator which can be represented on a contour with only $k-1$ switchbacks. 
The exponents in (\[eq:F2exp\]) do not quite add up anymore, and we get $F_{2k} \sim e^{2{\hat{u}}_3 - {\hat{u}}_1 - {\hat{u}}_{2k-1} - (k-1) {\hat{u}}_*}$. There is now a two-dimensional space of time dependence on both ${\hat{u}}_{31}$ and ${\hat{u}}_{3,2k-1}$. If, e.g., $1\ll {\hat{u}}_{31}\ll {\hat{u}}_*$, then after a total duration of the experiment ${\hat{u}}_\text{tot} = {\hat{u}}_{3,2k-1} \sim (k-2) {\hat{u}}_*$ the observable $F_{2k}$ already reaches a size of ${\cal O}(1)$. Working recursively, we see that less than maximal OTO configurations can exhibit intermediate time scales and transient behavior. It would be interesting to explore this in more detail.

Discussion
==========

We have argued that there exist new, physically interesting data in higher-point out-of-time-order (OTO) correlation functions. These are qualitatively similar to the OTO four-point function used to diagnose quantum chaos. However, the observables $F_{2k}$ we constructed in (\[eq:F2kdef\]) display an exponential growth for a longer time ${\hat{u}}_*^{(k)} \sim (k-1) \, {\hat{u}}_*$. That is, there exists a hierarchy of timescales associated with scrambling, probed by increasingly fine-grained (OTO) observables. This is reminiscent of similar hierarchies encountered in the context of unitary $k$-designs in quantum circuit complexity [@Roberts:2016hpo; @Cotler:2017jue]. It would be interesting to explore this connection. Similarly, it would be an intriguing task to explore the experimental relevance, or the precise operational meaning, of the hierarchy of $k$-scrambling times (for instance, along the lines of [@Halpern:2016zcm; @Halpern:2017abm]). An interpretation in terms of echo experiments, or more theoretically as quantifying operator growth by the size of repeated commutators, seems possible.

Several other questions immediately spring to mind: It would be interesting to repeat the calculation in the Lorentzian setting, as a variant of the standard shock wave calculation [@Shenker:2013pqa; @Shenker:2013yza; @Stanford:2014jda] (one would have to interpret the maximal braiding in that context). Similarly, one would like to make precise the connection to the formalism of [@Mertens:2017mtv]. Extensions to higher dimensions (e.g. [@Gu:2016oyy]) and exploration of butterfly velocities would be interesting, for example in the context of 2-dimensional CFTs at large central charge [@Roberts:2014ifa]. It is also interesting to explore whether those $k$-OTO correlators obey some bounds along the lines of [@Maldacena:2015waa] (see also [@Tsuji:2017fxs]). Finally, we hope to explore other types of $2k$-point OTO correlators, such as the (suitably regularized) “tremolo” correlator $\langle (W(t) V(0))^k \rangle$. This might shed light on the physical significance of abstract arguments about the structure of OTO correlators [@Haehl:2017qfl; @Haehl:2017eob].

We thank Ahmed Almheiri, Pawel Caputa, Nicole Yunger Halpern, Kristan Jensen, Rob Myers, Dan Roberts, Brian Swingle and Beni Yoshida for helpful discussions. FH is grateful for hospitality at the University of California, Santa Barbara, where part of this work was done. FH is supported through a fellowship by the Simons Collaboration ‘It from Qubit’. MR is supported by a Discovery grant from NSERC.

Technical simplifications {#app:simplifications}
=========================

We collect here some simplifications that make the evaluation of $F_{2k}$ more efficient in practice.
The generic Euclidean $2k$-point function is invariant under permutations of the time arguments, but most terms in it multiply some step functions. Any particular choice of Euclidean time ordering singles out some of these terms. Using the conventions of section \[sec:conventions\], the fully time ordered correlator has no step functions. Any step function signals the exchange of two insertion times with respect to the above defined canonical ordering. A correlator with a single step function would be one with a single pair of neighbouring operators exchanged relative to the time ordered one, e.g., $\langle V_1(u_1) V_2(u_3) V_1(u_2) V_2(u_4) \cdots \rangle$. Based on general arguments, the time ordered correlators reach their thermal value much faster than the scrambling times we are interested in. Similarly, any term without a sufficient number of step functions has time ordered pieces in it, which factorize off and decay in the chaos regime. As an example, observe that $\langle V_1V_2V_1V_2V_3V_3 \rangle \sim \langle V_1V_2V_1V_2\rangle \langle V_3V_3 \rangle$ for relative time differences ${\hat{u}}_{ij} \gg 1$. We are thus interested in the term with the maximal number of step functions. The observable $F_{2k}$ is constructed precisely such that one starts with the maximally braided configuration $\langle V_1(u_1) V_2(u_3) V_1(u_2) V_3(u_5) V_2(u_4) V_4(u_7) V_3(u_6) \cdots V_{k}(u_{2k-1}) V_{k-1}(u_{2k-2}) V_{k}(u_{2k}) \rangle$ (illustrated in Figs. \[general\] and \[contour\]) and then subtracts off all the pieces which contribute to it, but only involve a lower number of step functions. This ensures that we compute the leading term in the chaos regime, but nothing else. We extract the part of the generic Euclidean $2k$-point correlator with precisely $k-1$ step functions of the form $\Theta(u_{32})\Theta(u_{54}) \cdots \Theta(u_{2k-1,2k-2})$, which is nothing but $F_{2k}$. For instance, the computation of $F_6$ at ${\cal O}(\kappa^4)$ can be illustrated as follows: $$\begin{split} F_6 \big{|}_{{\cal O}(\kappa^4)} &= \frac{1}{ \langle V_1V_1 \rangle \langle V_2 V_2 \rangle \langle V_3 V_3 \rangle} \big( \langle V_1 V_2 V_1 V_3 V_2 V_3 \rangle - \langle V_1 V_2 V_1 V_2 V_3 V_3 \rangle - \langle V_1 V_1 V_2 V_3 V_2 V_3 \rangle + \langle V_1V_1V_2V_2V_3V_3 \rangle \big) \big{|}_{{\cal O}(\kappa^4)}\\ &= \begin{gathered}\includegraphics[width=.58\textwidth]{F6.png}\end{gathered} \; \text{other } {\upvarepsilon}\text{-contractions} \end{split} \label{eq:F6diagrams}$$ Of interest to us are coefficients in the full (unordered) Euclidean six-point function of $\Theta(u_{32})\Theta(u_{54})$, $\Theta(u_{32})$, $\Theta(u_{54})$, and $1$ (no step function). All four contribute to the first diagram; the second and fourth contribute to the second diagram; the third and fourth contribute to the third diagram; and only the term with no step function contributes to the last diagram. In total, only the coefficient of $\Theta(u_{32})\Theta(u_{54})$ contributes to the signed sum . This coefficient contains the maximum number of growing modes in the chaos regime. This is summarized in . Some more simplifications are useful for efficiently computing $F_{2k}$ for large values of $k$. In computing this term, we can anticipate that eventually $\tan(\frac{u_{ij}}{2})\approx \frac{\delta_{ij}}{2}$, where $\delta_{ij} \equiv \delta_i - \delta_j$. 
Within $F_{2k}$ we can then write ${{\cal B}^{{\scriptscriptstyle(1)}}}(u_{2i-1},u_{2i}) \rightarrow {{\cal B}^{{\scriptscriptstyle(1)}}}_{odd}(u_{2i-1})+{{\cal B}^{{\scriptscriptstyle(1)}}}_{even}(u_{2i})$, with $$\label{eq:simpl1} \begin{split} {{\cal B}^{{\scriptscriptstyle(1)}}}_{odd}(u_{2i-1})&= {\upvarepsilon}'(u_{2i-1}) - \frac{2{\upvarepsilon}(u_{2i-1})}{\delta_{2i-1,2i}} \,,\qquad {{\cal B}^{{\scriptscriptstyle(1)}}}_{even}(u_{2i})= {\upvarepsilon}'(u_{2i}) + \frac{2{\upvarepsilon}(u_{2i})}{\delta_{2i-1,2i}} \end{split}$$ etc. This source for the soft mode separates into two terms, each depending on either $u_{2i-1}$ or $u_{2i}$ but not both. The second order bilinear $\B[2](u_{2i-1},u_{2i})$ simplifies in a similar way: in that case we only need to keep terms involving both ${\upvarepsilon}(u_{2i-1})$ and ${\upvarepsilon}(u_{2i})$. Any term involving the square of only one of them would not be able to produce the consecutive step functions $\Theta(u_{2i-1,2i-2})$ and $\Theta(u_{2i+1,2i})$. Finally, we wish to extract the term involving $k-1$ factors $\Theta(u_{2i+1,2i})$ for $i=1,\ldots,k-1$. This can arise in our perturbation theory only if subsequent bilinears $\B[p_i](u_{2i-1},u_{2i})$ are connected by propagators as in Fig. \[general\]. We only need to retain the part of the soft mode propagator (\[prop\]) that contains the corresponding step function. We can therefore define a truncated propagator $$\label{eq:simpl3} \langle {\upvarepsilon}(u){\upvarepsilon}(0)\rangle_\text{trunc.} =\Theta(u) \, (u-\sin \, u)$$ which will be sufficient for computing $F_{2k}$ at leading order in $\kappa$.

The Full Six-Point Function {#app:sixpt}
===========================

In the main text, we computed maximally braided $2k$-point functions using an approximation scheme appropriate for extracting the late time growth characteristic of quantum chaos. As a consistency check on our approximations, in this appendix we elaborate on the exact Euclidean six-point function $G_E^{(6)}$ to see how it truncates to (\[eq:F6res\]). The leading order diagram is ${\cal O}(\kappa^0)$ and can be represented as $$G_E^{(6)} \big{|}_{{\cal O}(\kappa^0)} =\; \begin{gathered}\includegraphics[width=.095\textwidth]{SixPt0.png}\end{gathered}$$ The dashed external circle indicates that the Euclidean time ordering has not been fixed and one should sum over permutations of the external insertion points (corresponding to “braiding” the blue lines). The above contribution is, of course, the completely disconnected product of three two-point functions. The next order in perturbation theory involves one $\langle {\upvarepsilon}{\upvarepsilon}\rangle$ propagator: $$G_E^{(6)} \big{|}_{{\cal O}(\kappa^2)} =\; \begin{gathered}\includegraphics[width=.27\textwidth]{SixPt1.png}\end{gathered}$$ These contributions factorize into a product of a two-point and a four-point function. The contribution of interest to us is the ${\cal O}(\kappa^4)$ part since this involves connected pieces for the first time (which are the ones measuring the $k$-scrambling time scales): $$G_E^{(6)} \big{|}_{{\cal O}(\kappa^4)} =\; \begin{gathered}\includegraphics[width=.7\textwidth]{SixPt2.png}\end{gathered}$$ where we are omitting a few more diagrams which are similar. In the main text, we computed only the first type of diagram since this is the only one contributing to the (subtracted) maximally braided six-point function $F_6$. The remaining diagrams are of the same order in perturbation theory, but do not contain enough step functions to contribute to $F_6$.
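As a quick cross-check of the simplifications above, the following sympy sketch (our own illustrative code and symbol names) extracts the coefficient of $\Theta(u_{32})$ from the single leading contraction between two first-order bilinears and reproduces the structure of (\[2OTO\]):

```python
import sympy as sp

u2, u3, d12, d34, kappa = sp.symbols('u2 u3 delta12 delta34 kappa')
Theta32 = sp.Symbol('Theta32')    # stands for Theta(u_3 - u_2), kept symbolic

u23 = u2 - u3
# <eps(u2) eps(u3)> at beta = 2*pi (Eq. prop), with Theta(u23) rewritten as
# 1 - Theta(u32) so that only canonically ordered step functions appear.
prop23 = (1/(2*sp.pi)) * (((2*sp.sin(u23) - (sp.pi + u23))/2)*(sp.pi + u23)
                          + 2*sp.pi*(1 - Theta32)*(u23 - sp.sin(u23)))

# Leading 1/delta pieces of the bilinears (Eq. eq:simpl1): the only contraction
# producing Theta(u32) links +2 eps(u2)/delta12 with -2 eps(u3)/delta34.
F4_leading = kappa**2 * (2/d12) * (-2/d34) * prop23

coeff = sp.expand(F4_leading).coeff(Theta32)
print(sp.simplify(coeff))
# expected: 4*kappa**2*(u2 - u3 - sin(u2 - u3))/(delta12*delta34), i.e. Eq. (2OTO)
```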
If we were interested in the generic (unordered) Euclidean correlator, we would have to consider the full set of diagrams. For any specific ordering, one can identify a subset of diagrams that contributes. We can indeed compute the diagrams shown above for arbitrary Euclidean time ordering.[^4] The result is complicated and unilluminating. But from the above diagrams one can readily see which diagrams contribute given a particular combination of step functions. Higher orders in $\kappa$ are not our concern here, but some such loop calculations for two- and four-point functions have been done in [@Maldacena:2016upp].

[^1]: Throughout this letter, we denote Euclidean times as $u$ and real times as ${\hat{u}}$.

[^2]: Note that $\Theta(u)= \frac{1}{2\pi i} \int d\omega \, \frac{e^{i \omega u}}{\omega- i \epsilon^+}$ depends only on the real part of $u$. In our context this means the step functions are sensitive to the operator ordering, but not to the Lorentzian time ordering.

[^3]: This assumption about separation of time scales is simply to extract a clear exponential time dependence from the trigonometric factors in $F_{2k}$. Dropping it would allow transient effects where the time dependence interpolates between different regimes. Note that this assumption is mild: the time scale $\frac{\beta}{2\pi}$ is sometimes referred to as the [*dissipation time*]{} over which, e.g., two-point functions decay. It is parametrically smaller than the scrambling time scales we are interested in.

[^4]: We have also confirmed this result using the computational method developed in [@Almheiri:2014cka]. That calculation is, of course, isomorphic, but in practice somewhat different to implement.
--- author: - 'A. Arellano Ferro[^1] J. A. Ahumada , I.H. Bustos Fierro, J. H. Calderón' - 'N. I. Morrell' bibliography: - '6362\_biblio.bib' title: | Metallicity and distance of NGC 6362 from its RR Lyrae and\ SX Phoenicis stars[^2] --- Introduction ============ The southern globular cluster NGC 6362 (C1726$-$670 in the IAU nomenclature), located at $\alpha = 17^\mathrm{h}~31^\mathrm{m}~55.0^\mathrm{s}$, $\delta = -67^{\circ}~02^{'}~52^{''}$ (J2000), $l = 325.^{\!\!\circ}55$, $b=-17.^{\!\!\circ}57$, is at the relatively low distance from the Sun of 7.6 kpc, has a low reddening $E(B-V) = 0.09$, and has a low concentration ($c=1.09$, $\rho_0 = 2.29$) [@ha96]. It is also worth mentioning that recently [@Dale14], based on HST observations, have identified two different, spatially mixed populations in the cluster, apparent in the subgiant and red giant branches in the $U$ vs. $(U-B)$ diagram (their Fig. 4). [@Dale14] also claim that NGC 6362 is one of the least massive globulars ($M_\mathrm{tot} \sim 5 \times 10^4 M_\odot$) where multiple populations have been detected so far. Given its distance, the cluster is bright, with a horizontal branch at $V \sim 15$ mag, and consequently the discovery of its variable stars began early. The first 15 variables were found by [@wo19] on plates taken with the 13-inch Boyden telescope at Arequipa, Peru. Much later, [@va61] published the discovery of variables V16 to V31 on plates obtained by P. Th. Oosterhoff from South Africa in 1950 with the 74-inch Radcliffe reflector at Pretoria. [@va61] provided $(x,y)$ coordinates and an ID chart for all V1-V31 variables. [@fou66] found star V32 on plates taken with the 60-inch telescope at Bosque Alegre Astrophysical Station, Córdoba, Argentina, again giving coordinates and charts for all the variables discovered so far. [@vh61] discovered V33 (his VH 11) and another seven variables that had already been independently announced by [@va61]. [@hsh73] in her 3rd catalogue adopted van Agt’s numbering system, although all the epochs, periods, and magnitudes listed for the NGC 6362 variables in that catalogue are from van Hoof’s paper. Much more recently, already in the CCD era, [@maz99] discovered variables V34–52 on images taken between 1991 and 1996 with the 2.5-m (du Pont) and 1-m (Swope) telescopes at Las Campanas, Chile, and gave AR and Dec coordinates with individual finding charts. The non-membership status of the eclipsing EC variables V43, V45, and V52 was confirmed by [@ruc00]. In the Catalogue of Variable Stars in Globular Clusters (CVSGC; @cle01), the periods, magnitudes, amplitudes, and classifications for V1–37 are from [@ole01], and for V38–52 are from [@maz99], while the RA and Dec for most of V1–52 are from [@sam09], with the exception of stars V11, 23, 24, 26, 28, 29, 32, and 38–41 which are from [@maz99]. Finally, [@kal14] reported a search for variable stars carried out between 1995 and 2009 in the field of NGC 6362 also with the du Pont and Swope telescopes at Las Campanas; they found 25 newly detected variable stars (V53–77), including 18 proper-motion cluster members. [@kal14] provide identification charts and RA and Dec coordinates, although M. Rozyczka (private comm.) pointed out that the coordinates of V42, 45, 49, 53–58, 60, 63, and 75–77 are incorrect in Table 1 of their paper. A new set of high quality data obtained between 1999 and 2009 was recently employed by Smolec et al. (2017) (hereinafter Smo17), to discuss the Blazhko and double mode nature of the sample of RRab and RRc stars. 
With the observations reported in the present work, there is a time base of almost a century of variable star studies in NGC 6362. As in most of our recent papers we have employed the DanDIA[^3] implementation of difference image analysis (DIA) (@bra08) to extract high-precision photometry for all of the point sources in the field of NGC 6362. We collected 12245 light curves in the $V$ and $I$ bandpasses with the aim of building up a colour-magnitude diagram (CMD) and discussing the horizontal branch (HB) structure as compared to other Oosterhoff type I (OoI) and Oosterhoff type II (OoII) clusters[^4]. We also Fourier decomposed the light curves of the RR Lyrae stars (RRL) to calculate their metallicity and luminosity in order to provide independent and homogeneous estimates of the cluster mean metallicity and distance. The scheme of the paper is as follows: In $\S$ 2 we describe the observations, data reduction and calibration to the standard system. In $\S$ 3 the periods and phased light curves of RRL stars are displayed and the Fourier light curve decomposition of stable RRL is described and the reddening is estimated. The corresponding individual values of \[Fe/H\] and $M_V$ are reported. $\S$ 4 summarizes some properties of the SX Phe population in the cluster and reports a newly found foreground SX Phe variable. In $\S$ 5 we report the distance to the cluster obtained from the Fourier decomposition of RRL stars light curves and the P-L relation of the SX Phe stars. $\S$ 6 deals with the discussion of the distribution of RRL in the HB. $\S$ 7 offers a summary of the results. Appendix A gives the discussion of a few peculiar stars and in Appendix B we display a detailed finding chart of all variables in the field of NGC 6362. ![The transformation relation for $V$ and $I$ for the three sets of data: from top to bottom; CASLEO, SWOPE and Bosque Alegre. Note that data from Bosque Alegre have the largest colour dependence. The transformation equations are given in each panel.[]{data-label="Trans"}](MOSAICO_TRANS.pdf) Observations ============ The observations of this cluster were performed in three sites. First, the Complejo Astron´omico El Leoncito (CASLEO), San Juan, Argentina, where the 2.15-m telescope was used on March 20–22, 2013. The detector was a Roper Scientific back-illuminated CCD of 2048$\times$2048 pixels with a scale of 0.15 arcsec/pix and a field of view (FoV) of approximately 5.1$\times$5.1 arcmin$^2$. Second,the 1-m Swope Telescope of the Las Campanas Observatory was employed on June 3 and 14, 2014, and the E2V CCD231-84 of 4096x4112 pixels with a scale of 0.435 arcsec/pix and a FoV of approximately 14.5x14.5 arcmin$^2$. Finally, on June 20, August 21 and 22 and September 5 and 6, 2015, we used the 1.54-m telescope of the Bosque Alegre Observatory, Cordoba, Argentina equipped with a CCD Alta U9 of 3072$\times$2048 pixels, with a scale of 0.247 arcsec/pix and a FoV of approximately 12.6$\times$8.4 arcmin$^2$. The log of our observations is given in Table \[tab:observations\]. Difference image analysis ------------------------- As in previous papers we employed the difference image analysis (DIA) technique and the DanDIA pipeline ([@bra08]; [@Bra13]) to extract high-precision photometry of all point sources in the images of NGC 6362. The procedure and its caveats have been described in detail by [@Bra11]. 
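To convey the idea behind difference image analysis in code (this is a schematic sketch only, not the DanDIA algorithm, which solves for a discrete, spatially variable convolution kernel), one could write something like the following; all function and variable names are our own.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_image(target, reference, blur_sigma):
    """Schematic DIA step: degrade the sharper reference frame so that it
    roughly matches the target's PSF, scale it to the same flux level by
    least squares, and subtract. Variable stars then appear as residuals."""
    matched = gaussian_filter(reference, blur_sigma)
    scale = np.sum(target * matched) / np.sum(matched ** 2)
    return target - scale * matched
```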
Photometric calibrations
------------------------

### Relative calibration

Systematic errors in photometric data may be so severe that they mimic bona fide stellar variability. Time-series photometry of a set of non-variable objects may be used to calculate and correct these systematic errors. We have applied the methodology developed by [@BF12] to solve for magnitude offsets $Z_k$ that should be applied to each photometric measurement from image $k$.

### Absolute calibration

The transformation to the standard *VI* system was performed using the local standards in the FoV from the collection of [@Stet00][^5]. The calibrations for the three telescopes are shown in Fig. \[Trans\]. The transformation equations to the standard $V$ and $I$ system for each data set are given in the corresponding panel. In Table \[tab:vi\_phot\] we report the $V$ and $I$ photometry for all RRL stars in our FoV. The full table is published in electronic format, although we include a small portion of it in the printed version.

  ---------- ------ --------- ------------- --------- ------------- ------------
  Date       Site   $N_{V}$   $t_{V}$ (s)   $N_{I}$   $t_{I}$ (s)   seeing (")
  20130321   CAS    15        300           9         240           2.7
  20130322   CAS    8         480           9         220           2.7
  20130323   CAS    8         360           11        240           2.0
  20140504   CAS    22        480           30        180-220       3.1
  20140605   LC     140       4-20          79        2-10          2.1
  20140615   LC     88        4-20          53        2-10          1.7
  20150621   BA     37        180-300       36        120-180       1.8
  20150821   BA     8         300           11        150           3.5
  20150822   BA     11        300           14        150           2.7
  20150906   BA     27        300           27        150           2.5
  20150907   BA     3         300           6         150           1.9
  Total:            367       –             285       –             –
  ---------- ------ --------- ------------- --------- ------------- ------------

  : Observations log of NGC 6362. Data are from three sites; CASLEO (CAS), Las Campanas (LC) and Bosque Alegre (BA). Columns $N_{V}$ and $N_{I}$ give the number of images taken with the $V$ and $I$ filters respectively. Columns $t_{V}$ and $t_{I}$ provide the exposure time, or range of exposure times. In the last column the average seeing is listed.
\[tab:observations\] ---------- -------- --------------- ------------------------------ ------------------------------ -------------- ------------------------------ ----------------------------------- ------------------------------- ------------------------------------ -------- Variable Filter HJD $M_{\mbox{\scriptsize std}}$ $m_{\mbox{\scriptsize ins}}$ $\sigma_{m}$ $f_{\mbox{\scriptsize ref}}$ $\sigma_{\mbox{\scriptsize ref}}$ $f_{\mbox{\scriptsize diff}}$ $\sigma_{\mbox{\scriptsize diff}}$ $p$ Star ID (d) (mag) (mag) (mag) (ADU s$^{-1}$) (ADU s$^{-1}$) (ADU s$^{-1}$) (ADU s$^{-1}$) V1 $V$ 2456372.79007 14.994 16.275 0.001 631.204 1.835 $+$536.163 3.076 3.7921 V1 $V$ 2456372.79452 15.015 16.296 0.001 631.204 1.835 $+$481.905 2.989 3.8046 V1 $I$ 2456372.78347 14.559 16.390 0.003 1406.814 3.984 $-$33.805 7.146 1.9506 V1 $I$ 2456372.80310 14.602 16.434 0.002 1406.814 3.984 $-$140.873 3.975 1.9684 V2 $V$ 2456372.79007 15.040 16.317 0.001 489.291 1.760 $+$963.456 2.956 3.7921 V2 $V$ 2456372.79452 15.060 16.337 0.001 489.291 1.760 $+$914.472 2.881 3.8046 V2 $I$ 2456372.78347 14.583 16.409 0.003 1194.571 4.129 $+$333.684 6.998 1.9506 V2 $I$ 2456372.80310 14.639 16.466 0.002 1194.571 4.129 $+$200.011 3.851 1.9684 ---------- -------- --------------- ------------------------------ ------------------------------ -------------- ------------------------------ ----------------------------------- ------------------------------- ------------------------------------ -------- \[tab:vi\_phot\] The RR Lyrae stars ================== The resulting light curves of the RR Lyrae stars combining the observations from the runs in the three sites are shown in Figs. \[RRab\] and \[RRc\]. The zero points for CASLEO and LC match very well. For BA however we found small drifts that vary from star to star. With the aim of using all data to refine the period, we applied these drifts and then the light curves were phased with the new resulting periods listed in column 9 of Table \[variables\]. We also report itensity-weighted mean $V$ and $I$ magnitudes, amplitudes and equatorial coordinates for all the RR Lyrae stars. For comparison we also include the periods recently reported by Smo17. 
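As a concrete illustration of the phasing step described above, the short Python sketch below folds heliocentric Julian dates with a trial period. The first two epochs are taken from Table \[tab:vi\_phot\] for V2 and the period and epoch of maximum from Table \[variables\]; the last two epochs are hypothetical and are included only to show the wrapping across pulsation cycles.

```python
import numpy as np

def phase_fold(hjd, period, epoch=0.0):
    """Fold (heliocentric) Julian dates with a trial period; phases lie in [0, 1)."""
    return np.mod((np.asarray(hjd) - epoch) / period, 1.0)

# V2: first two epochs from Table [tab:vi_phot], period and HJD_max from Table [variables];
# the last two epochs are hypothetical, added only for illustration.
hjd = [2456372.79007, 2456372.79452, 2456781.80000, 2457194.70000]
print(np.round(phase_fold(hjd, period=0.488972, epoch=2456813.7537), 3))
```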
---------- ------------- -------- -------- ------- ------- ------------- ---------------------- ----------------- ------------- --------------- -- Variable Variable $<V>$ $<I>$ $A_V$ $A_I$ $P$ (Smo17) HJD$_{\mathrm{max}}$ $P$ (this work) RA Dec Star ID Type (mag) (mag) (mag) (mag) (d) ($+2\,450\,000$) (d) (J2000.0) (J2000.0) V1 RRab *Bl* 15.352 14.804 1.292 0.782 0.50479162 6813.9219 0.504814 17:31:54.72 $-$67:02:45.7 V2 RRab 15.368 14.834 1.366 0.856 0.48897301 6813.7537 0.488972 17:31:50.23 $-$67:04:25.4 V3 RRd 15.378 14.866 1.185 0.774 0.44728792 6823.8616 0.447297 17:31:40.91 $-$67:04:15.8 V5 RRab *Bl* 15.348 14.761 1.177 0.716 0.52083783 6813.8986 0.521434 17:32:08.82 $-$67:02:59.4 V6 RRc *Bl* 15.312 14.911 0.47 0.28 0.26270671 6813.8662 0.238084 17:32:03.85 $-$66:59:51.8 V7 RRab *Bl* 15.368 14.783 1.242 0.872 0.521581388 6781.8912 0.521572 17:31:58.52 $-$67:01:01.4 V8 RRc *nr* 15.082 14.620 0.517 0.321 0.38148471 7194.7619 0.381466 17:31:10.05 $-$67:01:01.3 V10 RRc *Bl* 15.257 14.970 0.471 0.254 0.265638816 6813.8947 0.315140 17:32:26.13 $-$66:56:52.6 V11 RRc 15.255 14.869 0.502 0.317 0.288789268 6823.7797 0.288789 17:31:49.90 $-$67:01:57.8 V12 RRab *Bl* 15.227 14.641 1.308 1.103 0.5328814 7194.6017 0.533046 17:31:13.13 $-$67:04:30.9 V13 RRab *Bl* 15.258 14.651 1.251 0.741 0.58002740 6823.8027 0.580010 17:31:15.14 $-$67:04:47.6 V14 RRc 15.374 15.056 0.344 0.213 0.24620647 6823.7091 0.252368 17:32:57.94 $-$67:02:12.8 V15 RRc *nr* 15.263 14.841 0.448 0.290 0.279945707 6823.7574 0.279944 17:32:03.47 $-$67:02:44.3 V16 RRab 15.359 14.774 1.12 0.67 0.525674215 6813.7472 0.525711 17:31:58.10 $-$67:07:12.0 V17 RRc *nr* 15.335 14.861 0.511 0.265 0.31460473 6813.8251 0.314604 17:32:29.46 $-$67:03:51.2 V18 RRab *Bl* 15.073 14.320 0.913 0.461 0.51288484 6813.7955 0.512899 17:32:13.59 $-$67:01:33.1 V19 RRab 15.365 14.730 0.663 0.480 0.59450528 6781.9170 0.594511 17:32:15.96 $-$67:03:09.4 V20 RRab *Bl* 15.273 14.627 0.436 0.329 0.69835898 6823.7125 0.698361 17:32:02.64 $-$67:02:59.4 V21 RRc *nr* 15.321 14.945 0.535 0.362 0.281390043 6813.7955 0.281392 17:32:22.59 $-$67:04:30.6 V22 RRc 15.318 14.939 0.516 0.335 0.26683523 6823.7091 0.253262 17:32:26.74 $-$67:07:55.1 V23 RRc *nr* 15.359 14.980 0.571 0.345 0.275105063 6813.8251 0.275108 17:32:00.12 $-$67:03:07.5 V24 RRc *nr* 15.195 14.795 0.489 0.240 0.32936190 6813.7257 0.329318 17:32:07.16 $-$67:03:20.3 V25 RRab 15.45 14.9 1.23 0.74 0.455890887 6823.6985 0.455824 17:30:54.37 $-$67:06:18.5 V26 RRab 15.351 14.750 0.610 0.403 0.60217449 6781.8739 0.602179 17:31:58.87 $-$67:03:22.2 V27 RRc *nr* 15.253 14.957 0.520 0.303 0.27812399 6813.9520 0.322892 17:31:21.62 $-$66:56:27.7 V28 RRc *Bl?* 15.111 14.624 0.457 0.271 0.3584133 6372.8559 0.358399 17:31:59.16 $-$67:02:07.7 V29 RRab *Bl* 15.272 14.658 0.502 0.321 0.64778329 6823.8064 0.647743 17:31:52.49 $-$67:03:20.2 V30 RRab *Bl* 15.186 14.599 0.954 0.642 0.61340457 6373.8773 0.613415 17:31:39.59 $-$67:01:33.6 V31 RRab *Bl* 15.323 14.714 0.749 0.460 0.60021294 6823.8726 0.600220 17:31:49.20 $-$67:01:20.7 V32 RRab *Bl* 15.385 14.817 1.258 0.765 0.49724171 6813.8468 0.497248 17:32:01.83 $-$67:02:12.7 V33 RRc *nr* 15.319 14.924 0.398 0.231 0.30641758 6813.7889 0.338096 17:32:47.91 $-$66:56:36.2 V34 RRd 15.305 14.779 1.070 0.708 0.49432939 7194.6212 0.494306 17:31:52.81 $-$67:03:34.3 V35 RRc *nr* 15.318 14.883 0.462 0.289 0.29079074 6813.7263 0.290792 17:32:08.14 $-$67:03:03.0 V36 RRc *Bl/nr* 15.173 14.698 0.393 0.248 0.31009148 6813.7294 0.3101 17:31:43.57 $-$67:02:16.8 V37 blRR ? 
15.319 14.974 0.403 0.176 0.25503903 6813.9287 0.254576 17:31:32.12 $-$67:02:03.4 ---------- ------------- -------- -------- ------- ------- ------------- ---------------------- ----------------- ------------- --------------- -- ![image](MOSAICO_RR0.pdf) ![image](MOSAICO_RR1.pdf) The reddening of NGC 6362 from its RRab stars {#RRab_reddening} --------------------------------------------- Using the fact that RRab stars have nearly the same intrinsic colour $(B-V)_0$ at minimum light [@Stu66], one can calculate the individual reddenings of these stars, which can provide a good average of the cluster reddening or to reveal the presence of differential reddening. The $(V-I)_0$ at minimum has been calibrated by [@Gul05] as $(V-I)_{0,min} = 0.58 \pm 0.02$. We have adopted this value and the minimum $(V-I)$ from our observations to estimate $E(V-I)$ for each RRab with a well defined minimum. This was converted to $E(B-V)$ through the ratio $E(V-I)/E(B-V)= 1.259$ derived from [@Sch98]. From 15 RRab stars we find an average of $E(B-V)=0.063 \pm 0.024$. The scatter is small and shows no signs of differential reddening, hence, we shall adopt this value for the determination of the distance to NGC 6362 via the several approaches described in $\S$ \[Distance\]. The above reddening calculation can be compared with previous estimations such as 0.10 [@ole01], 0.06$\pm$0.03 [@Piot99] and 0.08 [@Broc99]. Fourier decomposition and physical parameters of RR Lyrae stars --------------------------------------------------------------- Determination of \[Fe/H\] and $M_V$ of RR Lyrae stars in a given cluster enables the estimation of the mean values of the metallicity and distance of the parental cluster. This can be achieved via the Fourier decomposition of their light curves and the employment of well established calibrations and their zero points of these physical quantities and the corresponding Fourier parameters. By doing it on a homogeneous basis, i.e. using the same semi-empirical calibrations and zero points for a family of globular clusters of both Oosterhoff types, OoI and OoII, an independent insight on the metallicity dependence of the horizontal branch (HB) luminosity, i.e.,the familiar $M_V$-\[Fe/H\] relation can be obtained. Such approach has been applied by [@Are17] to a group of 23 globular clusters. While preliminary results for NGC 6362 were included in that paper, in the present work we publish the specific values of the Fourier parameters and the individual physical parameters for a carefully selected sample of stable RRab stars and RRc stars in the cluster, according to the accurate photometry of Smo17. 
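As a minimal sketch of the minimum-light reddening estimate of $\S$ \[RRab\_reddening\], the following Python fragment applies the adopted intrinsic colour $(V-I)_{0,min}=0.58$ and the ratio $E(V-I)/E(B-V)=1.259$; the observed $(V-I)$ minima used here are illustrative placeholders, not the measured values.

```python
import numpy as np

VI0_MIN = 0.58           # intrinsic (V-I) of RRab stars at minimum light (Guldenschuh et al. 2005)
EVI_OVER_EBV = 1.259     # E(V-I)/E(B-V) adopted from Schlegel et al. (1998)

def ebv_from_minimum(vi_min):
    """E(B-V) of one RRab star from its observed (V-I) at minimum light."""
    return (np.asarray(vi_min) - VI0_MIN) / EVI_OVER_EBV

# Illustrative (V-I) minima for a few RRab stars (placeholders, not the measured values)
vi_min = [0.66, 0.65, 0.67, 0.64, 0.68]
ebv = ebv_from_minimum(vi_min)
print(f"E(B-V) = {ebv.mean():.3f} +/- {ebv.std(ddof=1):.3f}")
```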
[lllllllllrr]{} Variable & $A_{0}$ & $A_{1}$ & $A_{2}$ & $A_{3}$ & $A_{4}$ &$\phi_{21}$ & $\phi_{31}$ & $\phi_{41}$ & $N$ &$D_m$\ ID & ($V$ mag) & ($V$ mag) & ($V$ mag) & ($V$ mag)& ($V$ mag) & & & & &\ \ V2 & 15.368(2) & 0.4145(3) & 0.2088(3) & 0.1458(3) & 0.0974(3) & 3.888(2) & 8.147(3) &6.034(4) & 10 & 1.7\ V16 & 15.359(5) & 0.3442(3) & 0.1796(3) & 0.1212(3) & 0.0816(3) & 3.973(2) & 8.234(4) &6.239(4) & 10 & 0.9\ V19 & 15.365(2) & 0.2080(2) & 0.0945(2) & 0.0542(2) & 0.0235(2) & 4.172(3) & 8.656(4) &7.097(9) & 10 & 1.0\ V25 & 15.263(2) & 0.4195(4) & 0.2018(4) & 0.1510(4) & 0.0966(4) & 3.831(2) & 7.975(4) &5.871(5) & 10 & 1.4\ V26 & 15.351(1) & 0.2131(2) & 0.0973(2) & 0.0555(2) & 0.0245(2) & 4.200(3) & 8.690(5) &7.139(9) & 10 & 1.3\ \ V11 & 15.255(2) & 0.2434(2) & 0.0333(2) & 0.0207(2) & 0.0167(2) & 4.830(8) & 3.410(12) & 2.140(15) & 9 &\ V14 & 15.374(1) & 0.1587(4) & 0.0184(4) & 0.0040(4) & 0.0014(4) & 4.512(22)& 2.905(99)& 1.092(281) & 9 &\ V15 & 15.263(1) & 0.2192(2) & 0.0306(2) & 0.0190(2) & 0.0131(2) & 4.593(6) & 3.098(10) & 2.002(15) & 9 &\ V21 & 15.321(1) & 0.2366(2) & 0.0331(2) & 0.0222(2) & 0.0159(2) & 4.705(6) & 3.237(8) & 2.084(12) & 9 &\ V22 & 15.318(1) & 0.2303(8) & 0.0379(8) & 0.0148(8) & 0.0134(8) & 4.680(21) & 2.850(52)& 1.588(58) & 9 &\ V23 & 15.359(1) & 0.2428(2) & 0.0377(2) & 0.0232(2) & 0.0170(2) & 4.651(6) & 2.907(8) & 1.770(10) & 9 &\ [lccccccc]{} Star&\[Fe/H\]$_{ZW}$ & \[Fe/H\]$_{UVES}$ &$M_V$ & log $T_{\rm eff}$ &log$(L/{\rm L_{\odot}})$ &$M/{\rm M_{\odot}}$&$R/{\rm R_{\odot}}$\ \ V2 &-1.274(3)& -1.157(3)& 0.633(1)& 3.821(7)& 1.647(1)&0.69(6)&5.09(1)\ V16&-0.986(4)& -0.887(4)& 0.789(1)& 3.826(7)& 1.584(1)&0.65(5)&4.63(1)\ V19&-1.194(4)& -1.075(3)& 0.637(1)& 3.806(7)& 1.645(1)&0.62(5)&5.46(1)\ V25&-1.311(4)& -1.195(4)& 0.688(1)& 3.823(7)& 1.625(1)&0.71(6)&4.93(1)\ V26&-1.190(5)& -1.211(32)& 0.610(3)& 3.824(10)& 1.656(1)& 0.51(6)& 5.09(1)\ Weighted mean& $-$1.203(2)&$-$1.066(2)& 0.657(1)& 3.816(3)& 1.637(1)&0.66(2)&5.22(1)\ $\sigma$&$\pm$0.126&$\pm$0.126&$\pm$0.072&$\pm$0.007&$\pm$0.025&$\pm$0.07&$\pm$0.27\ \ V11&-1.15(2)&-1.03(2) &0.566(1) &3.872(1)&1.674(1)&0.56(1)& 4.16(1)\ V14&-0.94(16)&-0.85(11)&0.683(2)&3.878(1)&1.627(1)&0.68(2)& 3.84(4)\ V15&-1.16(2)&-1.11(2) &0.601(1) &3.872(1)&1.660(1)&0.60(1)& 4.09(1)\ V21&-1.16(1)&-1.04(1) &0.582(1) &3.873(1)&1.667(1)&0.58(1)& 4.11(1)\ V22&-1.00(9)&-0.90(6) &0.621(4) &3.877(1)&1.652(1)&0.62(4)& 3.96(2)\ V23&-1.28(14)&-1.16(1)&0.585(1) &3.872(1)&1.666(1)&0.59(1)& 4.12(1)\ Weighted mean& $-$1.21(1)&$-$1.08(1)&0.589(1)&3.872(1)&1.664(1)&0.59(1)&4.10(1)\ $\sigma$&$\pm$0.16&$\pm$0.16&$\pm$0.046&$\pm$0.003&$\pm$0.015&$\pm$0.04&$\pm$0.11\ Although the procedure and the employed calibrations have been described in detail by [@Are17] and in several papers cited there, for completeness we include here the fundamentals. The form of the Fourier representation of a given light curve is: $$\label{eq.Foufit} m(t) = A_0 + \sum_{k=1}^{N}{A_k \cos\ ({2\pi \over P}~k~(t-E) + \phi_k) },$$ where $m(t)$ is the magnitude at time $t$, $P$ is the period, and $E$ is the epoch. A linear minimization routine is used to derive the best fit values of the amplitudes $A_k$ and phases $\phi_k$ of the sinusoidal components. From the amplitudes and phases of the harmonics in Eq. \[eq.Foufit\], the Fourier parameters, defined as $\phi_{ij} = j\phi_{i} - i\phi_{j}$, and $R_{ij} = A_{i}/A_{j}$, are computed. 
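A minimal Python sketch of this linear least-squares fit and of the derived Fourier parameters is given below; the light curve is synthetic and the code is only meant to make the procedure explicit (phases are defined modulo $2\pi$).

```python
import numpy as np

def fourier_fit(t, mag, period, epoch=0.0, order=4):
    """Linear least-squares fit of m(t) = A0 + sum_k A_k cos(2*pi*k*(t-E)/P + phi_k)."""
    x = 2.0 * np.pi * (np.asarray(t) - epoch) / period
    cols = [np.ones_like(x)]
    for k in range(1, order + 1):
        cols += [np.cos(k * x), np.sin(k * x)]       # A_k cos(kx+phi_k) = a_k cos(kx) + b_k sin(kx)
    coeffs, *_ = np.linalg.lstsq(np.column_stack(cols), mag, rcond=None)
    ab = coeffs[1:].reshape(order, 2)
    A = np.hypot(ab[:, 0], ab[:, 1])                 # amplitudes A_1 ... A_order
    phi = np.arctan2(-ab[:, 1], ab[:, 0])            # cosine-series phases phi_1 ... phi_order
    return coeffs[0], A, phi

def fourier_parameters(A, phi, i, j):
    """phi_ij = j*phi_i - i*phi_j and R_ij = A_i/A_j (harmonic indices are 1-based)."""
    return j * phi[i - 1] - i * phi[j - 1], A[i - 1] / A[j - 1]

# Synthetic RRc-like light curve used only to exercise the fit
P = 0.2889
t = np.sort(np.random.default_rng(0).uniform(0.0, 12.0, 150))
x = 2 * np.pi * t / P
mag = 15.26 + 0.24 * np.cos(x + 2.1) + 0.033 * np.cos(2 * x + 0.9) + 0.02 * np.cos(3 * x + 5.5)
A0, A, phi = fourier_fit(t, mag, P)
phi31, R31 = fourier_parameters(A, phi, 3, 1)
print(round(A0, 3), np.round(A[:3], 3), round(float(np.mod(phi31, 2 * np.pi)), 3), round(float(R31), 3))
```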
Since some of the calibrations and zero points employed in this work towards the calculation of stellar physical quantities differ from the ones used in the previous Fourier decomposition analysis for the RR Lyrae stars in NGC 6362 [@ole01], below we explicitly list the calibrations we used: $${\rm [Fe/H]}_{J} = -5.038 ~-~ 5.394~P ~+~ 1.345~\phi^{(s)}_{31}, \label{eq:JK96}$$ $$M_V = ~-1.876~\log~P ~-1.158~A_1 ~+0.821~A_3 + K, \label{eq:ḰW01}$$ given by [@JK96] and [@KW01], respectively. The standard deviations of the above calibrations are 0.14 dex [@Jur98] and 0.04 mag, respectively. In eq. \[eq:ḰW01\] we have used K=0.41 (see the discussion in Section 4.2 of [@AGB10]). Eq. \[eq:JK96\] is applicable to RRab stars with a [*deviation parameter*]{} $D_m$, defined by [@JK96] and [@KK98], not exceeding an upper limit. These authors suggest $D_m \leq 3.0$. The $D_m$ is listed in column 11 of Table \[tab:fourier\_coeffs\], where it is obvious that the five stable RRab stars have light curves consistent with the calibration of eq. \[eq:JK96\]. For the RRc stars we employ the calibrations: $${\rm [Fe/H]}_{ZW} = 52.466~P^2 ~-~ 30.075~P ~+~ 0.131~\phi^{(c)~2}_{31}$$ $$~~~~~~~ ~-~ 0.982 ~ \phi^{(c)}_{31} ~-~ 4.198~\phi^{(c)}_{31}~P ~+~ 2.424, \label{eq:Morgan07}$$ $$M_V = 1.061 ~-~ 0.961~P ~-~ 0.044~\phi^{(s)}_{21} ~-~ 4.447~A_4, \label{eq:K98}$$ given by [@Mor07] and [@Kov98], respectively. For eq. \[eq:K98\] the zero point was reduced to 1.061 mag to make the luminosities of the RRc consistent with the distance modulus of 18.5 mag for the LMC (see discussions by [@Cac05] and [@AGB10]). The original zero point given by [@Kov98] is 1.261. When necessary, the coefficients were transformed from cosine series phases into the sine series via the relation    $\phi^{(s)}_{jk} = \phi^{(c)}_{jk} - (j - k) {\pi \over 2}$ The values of $A_0$, in $V$ and $I$, for all the RRL stars in NGC 6362 are given in Table \[variables\] as the intensity weighted quantities $<V>$ and $<I>$. However, several of these stars have been identified as Blazhko variables or double mode pulsators (Smo17), hence we have limited the Fourier decomposition, for the purpose of physical parameters determination, to those stars proven to be stable in the time scale of the analysis of Smo17. Clearly, the light curves of Smo17 are more dense and much better covered than ours in Figs. \[RRab\] and \[RRc\], since their time-base is much longer. Hence, we decided to Fourier decompose the light curves from Smo17 for the rather stable stars (i.e. the RRc stars V11, V14, V15, V21, V22 and V23, and the RRab stars V2, V16, V19, V25 and V26). The Fourier decomposition parameters for these stars and their corresponding physical parameters are listed in Tables \[tab:fourier\_coeffs\] and \[fisicos\] respectively. The absolute magnitude $M_V$ was converted into luminosity with $\log (L/{\rm L_{\odot}})=-0.4\, (M_V-M^\odot_{\rm bol}+BC$). The bolometric correction was calculated using the formula $BC= 0.06\, {\rm [Fe/H]}_{ZW}+0.06$ given by [@SC90]. We adopted $M^\odot_{\rm bol}=4.75$ mag. For the distance calculation, and given that there are no signs of differential reddening for NGC 6362, we have adopted $E(B-V)=0.063$ (see $\S$ \[RRab\_reddening\]) The weighted average of \[Fe/H\] and distance are considered good mean values for the parental cluster. We found \[Fe/H\]$_{ZW} = -1.203\pm 0.126$ and $-1.21\pm 0.16$ for the RRab and RRc stars respectively, which in the scale of [@Carr09] are \[Fe/H\]$_{UVES}= -1.066\pm 0.126$ and $-1.08\pm 0.16$. 
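To make the use of these calibrations explicit, the following Python sketch evaluates the \[Fe/H\] and $M_V$ relations for RRab stars (with K=0.41) for the star V2, using the period of Table \[variables\] and the Fourier parameters of Table \[tab:fourier\_coeffs\]. The transformation of \[Fe/H\]$_J$ to the ZW scale, \[Fe/H\]$_{ZW}=(\mathrm{[Fe/H]}_J-0.88)/1.431$ (Jurcsik 1995), is not quoted explicitly in the text and is included here as the relation commonly adopted with this calibration.

```python
import numpy as np

def feh_j_rrab(P, phi31_s):
    """[Fe/H]_J from the Jurcsik & Kovacs (1996) calibration (sine-series phi31)."""
    return -5.038 - 5.394 * P + 1.345 * phi31_s

def mv_rrab(P, A1, A3, K=0.41):
    """M_V from the Kovacs & Walker (2001) calibration with the zero point adopted here."""
    return -1.876 * np.log10(P) - 1.158 * A1 + 0.821 * A3 + K

def cos_to_sin(phi_c, j, k):
    """Cosine-series Fourier phase phi_jk converted to the sine series."""
    return phi_c - (j - k) * np.pi / 2.0

def log_luminosity(MV, feh_zw, Mbol_sun=4.75):
    """log(L/Lsun) with BC = 0.06*[Fe/H]_ZW + 0.06 (Sandage & Cacciari 1990)."""
    return -0.4 * (MV + 0.06 * feh_zw + 0.06 - Mbol_sun)

# V2 (RRab): period from Table [variables]; A1, A3 and phi31 from Table [tab:fourier_coeffs]
P, A1, A3, phi31_c = 0.488972, 0.4145, 0.1458, 8.147
feh_j = feh_j_rrab(P, cos_to_sin(phi31_c, 3, 1))
feh_zw = (feh_j - 0.88) / 1.431        # Jurcsik (1995) scale transformation (assumed, see note above)
MV = mv_rrab(P, A1, A3)
print(round(feh_zw, 3), round(MV, 3))  # ~ -1.27 and ~ 0.63, as listed for V2 in Table [fisicos]
print(round(log_luminosity(MV, feh_zw), 3))
```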
The value of \[Fe/H\]$_{UVES}$ obtained above can be compared with independent determinations of the metallicity of the cluster that can be found in the literature. The value given in the catalogue of globular cluster parameters of [@ha96] is \[Fe/H\]$_{UVES}=-0.99$; [@ole01] finds \[Fe/H\]$_{ZW}=-1.08$ or \[Fe/H\]$_{UVES}=-0.97$. On spectroscopic grounds [@CG97] found \[Fe/H\]=$-0.96$ from an analysis of echelle spectra of cluster giants, and [@Rut97] found \[Fe/H\]=$-0.99 \pm0.03$ from Ca II triplet index. More recent spectroscopic determinations from ESO/FLAMES spectra were obtained by [@Mucc16] and [@Mass17] and obtained \[Fe/H\]= $-1.09 \pm 0.01$ and $-1.07 \pm 0.01$ respectively. Thus, our results are in very good agreement with recent spectroscopic estimations. The Color-Magnitude Diagram --------------------------- The color-magnitude diagram (CMD) in Fig. \[CMD\_6362\] was built using the magnitude weighted means of $V$ and $V-I$ for the 12245 stars in the field or our reference image from the run in Las Campanas. Variable stars are labelled and plotted using their intensity-weighted means $<V>$ and $<V>-<I>$ calculated from the light curves in Figs. \[RRab\] and \[RRc\] and listed in Table \[variables\]. All symbols and colours are explained in the caption of the figure. For the age of the cluster we adopted 12.1 Gyr from the differential age determination of [@dAng05]. The isochrone and zahb model are from [@vdB14] and are shifted to the average distance modulus found from the RRL stars. ![image](MOSAICO_DCM.pdf) Bailey diagram and Oosterhoff type {#secBailey} ---------------------------------- The log P vs Amplitude diagram, also known as the Bailey diagram, is a useful tool because it is a good discriminator between OoI and OoII clusters, the RRab and RRc stars are well separated and hence their classification is confirmed, and RRab stars of advanced evolution can be identified. Fig. \[figBailey\] shows the diagram resulting from the data listed in Table \[variables\]. The full amplitudes were measured on the light curves of Figs. \[RRab\] and \[RRc\], except in those cases were we missed the maximum or minimum. In these cases we estimated the full amplitude from the curves of Smo17. When amplitude modulations are evident, the maximum observed amplitude was taken. The distribution of the RR Lyrae stars in the diagram confirms NGC 6362 to be of the OoI type. Some outstanding stars deserve a comment. According to [@Cac05], RRab stars closer to the dashed sequence are more evolved. V12, V13, V20 and V30 are therefore good candidates to be ahead in their evolution towards the AGB. These four stars have Blazhko modulations (Smo17). In the amplification of the HB star in Fig.\[HB\] they are among the most luminous RRab stars. Conversely the luminous V18 and V29 do not show signs of being more evolved. ![Period-Amplitude diagram for NGC 6362. Filled and open symbols represent RRab and RRc stars, respectively. Triangles and squares are used for stars with Blazhko modulations or double mode stars. The continuous and segmented lines in the top panel are the loci for evolved and unevolved stars in M3 according to [@Cac05]. The black parabola was obtained by [@Kun13a] from 14 OoII clusters. The red parabolas were calculated by [@Are15] from a sample of RRc stars in five OoI clusters and avoiding Blazhko variables. In the bottom panel the black segmented locus was found by [@Are11] and [@Are13] for the OoII clusters NGC 5024 and NGC 6333 respectively. The blue loci are from [@Kun13b]. 
See $\S$ \[secBailey\] for a discussion.[]{data-label="figBailey"}](Bailey.pdf)

The SX Phoenicis stars in NGC 6362
==================================

There are six SX Phe stars identified in NGC 6362. They are listed in Table \[SXPhe\] along with their intensity weighted magnitude, periods, pulsating mode, equatorial coordinates and the discovery paper. The pulsating modes were assigned from the position of each star in the log P-$V$ plane and the P-L relation of [@CS12] shifted to the predicted distance.

We have identified a new SX Phe in the field of our images in NGC 6362. In the CMD the star falls in an odd position for a member SX Phe (red cross in Fig. \[CMD\_6362\]). The star is at least 2 magnitudes brighter than the SX Phe in the cluster, thus it must be a foreground object. Therefore, we refrain from assigning a sequential variable number to the star and identify it as SXf1. The light curves in our $V$ and $I$ photometry, phased with a period of 0.07949 d, are shown in Fig. \[SXf1\] in appendix \[individual\].

  ---------- -------- -------- ------------- ----------------- ----------- ------------ --------------- ------
  Variable   $<V>$    $<I>$    $P$ (Kal14)   $P$ (this work)   Pulsating   RA           Dec             Ref.
  Star ID    (mag)    (mag)    (d)           (d)               mode        (J2000.0)    (J2000.0)
  V38        16.926   16.584   0.06661582    0.06661901        F           17:31:43.7   $-$67:02:58.0   1
  V46        17.491   17.103   0.050634688   0.05063447        F           17:32:25.0   $-$67:00:31.4   1
  V47        17.134   16.783   0.052234111   0.05223411        F           17:32:13.0   $-$67:02:38.1   1
  V48        17.048   –        0.047920021   0.04656200        1O          17:31:59.9   $-$67:03:49.8   1
  V64        17.086   16.656   0.050162402   0.05016240        1O          17:31:58.2   $-$67:03:45.8   2
  V72        17.560   17.180   0.0436729     0.0436729         F           17:31:29.0   $-$67:02:33.9   2
  SXf1       15.075   14.673   –             0.07949           –           17:31:09.8   $-$66:59:35.6   3
  ---------- -------- -------- ------------- ----------------- ----------- ------------ --------------- ------

The Distance to NGC 6362 {#Distance}
========================

The distance to NGC 6362 has been estimated from four independent approaches as described in the following sections.

From the RR Lyrae stars
-----------------------

To estimate the distance to NGC 6362 from the RR Lyrae stars we have converted the individual values of $M_V$ in Table \[fisicos\] into individual distances, adopting $E(B-V)=0.063$ derived in $\S$ \[RRab\_reddening\]. Since the values of $M_V$ come from independent calibrations for RRab and RRc stars, we calculated two independent values of the true distance modulus, 14.499$\pm$0.078 and 14.522$\pm$0.037, corresponding to the distances 7.93$\pm$0.32 and 8.02$\pm$0.15 kpc from the RRab and the RRc stars respectively.

An independent estimate comes from the $I$-band RR Lyrae P-L relation derived by Catelan et al. (2004), $M_I = 0.471-1.132~ {\rm log}~P +0.205~ {\rm log}~Z$, with ${\rm log}~Z = [M/H]-1.765$. Applying this relation to the 34 RR Lyrae stars in Table \[variables\], we found a distance of 7.85$\pm$0.37 kpc.

From the SX Phe stars
---------------------

We have calculated the predicted distance to each SX Phe star, using the P-L relation of [@CS12], taking the periods and pulsating modes listed in Table \[SXPhe\] and assuming $E(B-V)$=0.063 for the six cluster members. The average distance is $8.07\pm0.44$ kpc, in excellent agreement with the distance from the RR Lyrae stars. In Fig.\[PLSX\] we show the distribution of SX Phe stars in the log P-$V$ plane, where the P-L relation of [@CS12] for the fundamental mode, shifted to the above distance, is displayed.
The corresponding first overtone relation was placed assuming a first overtone to fundamental period ratio of P1/P0=0.783 [@Santo01].

![The relation between period and luminosity for the SX Phe in NGC 6362. Black and red lines correspond to the P-L relation of [@CS12] for the fundamental and first overtone respectively, shifted to a distance of $8.07\pm0.44$ kpc. Segmented lines correspond to the zero-point uncertainty of the P-L relationship. The pulsating mode assigned to each star corresponds to its position relative to the mode loci.[]{data-label="PLSX"}](PL_SX.pdf)

Comments on the tip of the red giant branch method
--------------------------------------------------

The luminosity of the true tip of the RGB can in principle be used to estimate the distance to globular clusters. The method, originally developed to estimate distances to nearby galaxies (Lee et al. 1993), has been used to corroborate the theoretical constraints that particle physics may impose on the extension of the RGB in globular clusters, by comparing the distance predicted by the most luminous RGB stars with the results rendered by the RR Lyrae and SX Phe stars. Consistency between theoretical predictions and empirical calculations has been found for NGC 6229 [@Are15], for M5 [@Are16] and for NGC 6934 [@Yep17]. In brief, the bolometric absolute magnitude of the tip of the RGB was calibrated by [@SaCa97] as: $$\label{TRGB} M_{bol}^{tip} = -3.949\, -0.178\, [M/H] + 0.008\, [M/H]^2,$$ where $[M/H] = \rm{[Fe/H]} - \rm {log} (0.638~f + 0.362)$ and log f = \[$\alpha$/Fe\] [@Sal93].

[@Via13] argued that the neutrino magnetic dipole moment enhances the plasma decay process, postpones helium ignition in low-mass stars, and extends the red giant branch in globular clusters. Hence, the true TRGB may be a bit brighter than the brightest observed stars by an amount between 0.04 and 0.16 mag. For the case of NGC 6362, applying the largest correction of 0.16 mag, the method produces an absurdly large distance. We note that the separation between the Horizontal Branch (HB) and the brightest stars in the RGB in the above mentioned clusters is about 3.3 and 4.0 mag in $V$ and $I$ respectively, while in NGC 6362 it is only about 2.3 and 3.5 mag (see also the CMD in [@Broc99]). The predicted $M_{bol}^{tip}$ by eq. \[TRGB\] for the metallicity and reddening of NGC 6362 is -3.819 mag, and a correction of between 0.5 and 1.0 mag would be necessary to bring the distance to about 8.0 kpc as calculated from the RR Lyrae and SX Phe stars. Such a correction is far beyond the theoretical expectations. The method fails for NGC 6362. We do not really have an explanation for this, but it should be noted that the population of the RGB in this cluster is meager, that those stars at the tip of the RGB could rather be AGB stars, and that this is a much more metal-rich cluster compared with those tested before.

The horizontal branch of NGC 6362 {#sec:HB}
=================================

![The Horizontal Branch of NGC 6362. The stars V12, V13, V20 and V30 are among the most luminous of the RRab stars and their position in the log P-Amplitude diagram (Fig. \[figBailey\]) suggests they are advanced in their evolution towards the AGB.[]{data-label="HB"}](dcm_6362_HB.pdf)

The neat splitting of RRc and RRab stars is clear in Fig. \[HB\]. This is significant given that, while the mode splitting has been found in all studied OoII type clusters (nine so far) ([@Yep17]), it has been observed in only some OoI clusters (three out of seven so far). Fig.
\[CATELAN\], which is an updated version of Fig. 16 of [@Yep17], shows in the HB structure parameter $\mathcal L$ vs \[Fe/H\] plane, the segregated clusters (black-rimmed symbols) and the non-segregated ones. NGC 6362 is now the third OoI cluster where we find the clean mode segregation, limited by the red edge of the first overtone instability strip. The other examples are NGC 6229 [@Are15] and NGC 6171 (Deras et al. 2018, in preparation). It has been discussed by [@Yep17] that the mode segregation versus the mingled mode distribution in the inter-mode or “either-or” region may be a consequence of the mass loss rates involved during the He flashes, and hence the resulting mass distribution on the ZAHB, which in turn controls the mode population of the inter-mode region (Caputo et al. 1978). It is remarkable that all metal-poor and older OoII type clusters do show the mode segregation. ![The HB structure parameter $\mathcal L$ vs. metallicity. The black-rimmed symbols represent globular clusters where the fundamental and first overtone modes are well segregated around the first overtone red edge of the instability strip, as opposed to filled non-rimmed symbols. Empty symbols are clusters not studied by our group.[]{data-label="CATELAN"}](CATELAN.pdf) Summary ======= In the present paper we publish new time-series CCD *VI* photometry of the variable stars in the globular cluster NGC 6362. Via the Fourier decomposition of the RR Lyrae stars we estimated the mean \[Fe/H\]$_{UVES}$ as $-1.066\pm0.126$ and $-1.08\pm0.16$ and the distance $7.93\pm0.32$ and $8.02\pm0.15$ kpc from the RRab and RRc stars respectively. The reddening $E(B-V)=0.063\pm0.024$ was calculated using the intrinsic colour $(V-I)_0$ at minimum colour level for 15 RRab stars. These results agree very well with previous determinations found in the literature. Our resulting CMD and the above parameters are in excelent agreement with models of the zero-age horizontal branch and the isochrone for 12.1 Gyr from the Victoria-Regina stellar models of [@vdB14]. Employing the P-L relationship of [@CS12] for four fundamental mode and two first overtone SX Phe stars, an average distance of $8.07\pm0.44$ kpc was found, consistent with the results from the RR Lyrae stars. We found a new SX Phe in the field of our images that seems to be a foreground star. The distribution of the RRL stars in the HB and the Period-Amplitude diagram for this cluster reveal the presence of four RRab stars (V12, V13, V20 and V30) likely to be advanced in their evolution towards the AGB. If this is the case, as they evolve towards the red, they should display a positive secular period variation; a study yet to be performed. The fundamental and first overtone RRL stars, independently of whether they have Blazhko modulations and/or two excited pulsation modes, are neatly segregated around the red edge of the first overtone instability strip, which is a characteristic in OoII type clusters but only in some of the OoI clusters studied thus far. We are grateful to Dr. Daniel Bramich for his DanDIA software and for guiding our data reduction process. AAF recognizes and thanks the support of UNAM via the DGAPA project IN104917. JAA wishes to thank the Instituto de Astronomía of the Universidad Nacional Autónoma de Mexico for hospitality. We have made an extensive use of the SIMBAD and ADS services, for which we are thankful. 
Comments on individual variables {#individual} ================================ V18 --- This star presents amplitude modulations and has been considered as a Blazhko variable by [@ole01] and Smo17. It appears as the most luminous and reddest RRab in the CMD of Fig. \[CMD\_6362\]. The light curve in the $I$-band is of low amplitude and anomalously bright, hence the red position of the star, but no suggestion of advanced evolution is found from its position in the log P- Amplitude diagram. V37 --- This variable was identified by [@ole01] as a double mode pulsator with one mode being non-radial. The star has been reconsidered by Smo17 and they identified the two modes, being at least one of them non-radial. On the basis of the light curve shape and amplitudes of the two modes, they postulate that the star is not a typical RRc but a beating variable of a new type. The light curve from our data is shown in Fig. \[V37\] and, although less dense, it is similar and consistent with the light curves shown by [@ole01] and Smo17. We would like to note however, that the star is not isolated but that a very close neighbour is present at only 2.65 arc seconds to the SE from V37. In Fig. \[V37chart\] we show this pair in the reference images of our CASLEO observations, of rather high seeing, and SWOPE with much better seeing conditions. Even under better conditions, in the SWOPE detector it is difficult to isolate and measure the two stars individually. Thus, contamination of V37 from the flux of the neighbour is most likely present in the photometry of previous and present studies. We have examined all differential images by blinking them along with the reference images and found that the star at the NW, i.e. that labelled V37, is variable and got some hint that star at the SE may also exhibit some variations, although this is not a firm conclusion given the blending conditions of the pair. If this proves to be true, then at least part of the observed modulations might be an artifact. A confirmation of the double mode nature and properties of V37 with a wider scale telescope is highly desired. ![V37 light curve. Colors are as in Fig. \[RRab\]. Colours are as in Fig. \[RRab\].[]{data-label="V37"}](V37Vs.pdf) ![V37 charts in the reference images from CASLEO (left) and from SWOPE (right).[]{data-label="V37chart"}](MOSAICO_V37.pdf) In the CMD (e.g. Fig. \[HB\]), V37 is near the hot edge of the first overtone instability strip although its colour has certainly been altered by the presence of the close neighbour. In the Log P -Amplitude diagram it mingles well among the RRc stars and falls, as expected, near the predicted locus for the first overtones. SXf1 ---- The light curve of this star shows clear sinusoidal variations (Fig. \[SXf1\]) of small amplitude and a period of 0.07949 d, typical of SX Phe stars. The star lies at the HB level, i.e. some 2 magnitudes brighter, although of similar colour, than the SX Phe cluster members. Therefore, we classify it as a foreground field SX Phe star. ![The newly identified SX Phe in the field of NGC 6362. The star is not a cluster member but a foreground object. The light curves are phased with a period of 0.07949 d.[]{data-label="SXf1"}](SXf1.pdf) Identification chart {#chart} ==================== All the known variables in the field of our images are identified in charts \[chartA\] and \[chartB\]. [@kal14] provided identification charts and RA and Dec coordinates, for all the variables stars in their study. 
We noted however that some coordinates did not correspond to their identification. M. Rozyczka (private comm.) pointed out that the coordinates of V42, V45, V49, V53–V58, V60, V63, and V75–V77 are in fact incorrect in their Table 1, and kindly provided the correct coordinates. Aiming to clarify the identifications of these stars, in Table \[ARD\_corr\] we list the correct coordinates, which are consistent with the identifications in the charts of Fig. \[chartA\] and \[chartB\].

  ---------- ----------- -------------- ------------ -----------------
  Variable   RA          DEC            RA           DEC
  Star ID    (decimal)   (decimal)      (h m s)      ($^\circ~'~''$)
  V42        262.78754   $-$66.860809   17:31:09.0   –66:51:38.9
  V45        262.71952   $-$66.983002   17:30:52.6   –66:58:58.8
  V49        263.10043   $-$67.066788   17:32:24.1   –67:04:00.4
  V53        263.29060   $-$66.856061   17:33:09.7   –66:51:21.8
  V54        263.22510   $-$67.098769   17:32:54.0   –67:05:55.5
  V55        263.22533   $-$66.926742   17:32:54.0   –66:55:36.2
  V56        263.21986   $-$66.974452   17:32:52.7   –66:58:28.0
  V57        263.19214   $-$66.925905   17:32:46.1   –66:55:33.2
  V58        263.11699   $-$67.145429   17:32:28.0   –67:09:43.5
  V60        263.09729   $-$66.924169   17:32:23.3   –66:55:27.0
  V63        263.01446   $-$67.139542   17:32:03.5   –67:08:22.4
  V75        262.80971   $-$66.924412   17:31:14.3   –66:55:27.8
  V76        262.76813   $-$67.056661   17:31:04.4   –67:03:24.0
  V77        262.71325   $-$66.924622   17:30:51.1   –66:55:28.6
  ---------- ----------- -------------- ------------ -----------------

![image](carta_ext.pdf) ![image](carta_core.pdf)

[^1]: Corresponding author: [armando@astro.unam.mx]{}

[^2]: Data obtained at the CASLEO, Las Campanas and Bosque Alegre Observatories. Complejo Astronómico El Leoncito (CASLEO) is operated under agreement between the Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina, and the National Universities of La Plata, Córdoba, and San Juan, Argentina.

[^3]: DanDIA is built from the DanIDL library of IDL routines available at `http://www.danidl.co.uk`.

[^4]: Oosterhoff (1939) showed that globular clusters can be distinguished according to the mean period of their fundamental mode RR Lyrae stars, or RRab, being $\sim$0.55 days in OoI and $\sim$0.65 days in OoII systems, and noted that the percentage of first overtone RR Lyrae stars, or RRc, is lower in OoI than in OoII clusters.

[^5]: http://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/community/ STETSON/standards/
--- author: - 'S. Longhi' title: 'Robust unidirectional transport in a one-dimensional metacrystal with long-range hopping' ---

Introduction
============

Topological photonic structures, a new class of optical systems inspired by the quantum Hall effect and topological insulators, have attracted huge attention in recent years owing to their rather unique property of permitting robust transport via topologically-protected chiral edge modes [@r1]. Such two-dimensional (2D) or three-dimensional (3D) optical structures are usually realized by breaking time-reversal symmetry, e.g. using magneto-optic media [@r2; @r3; @r4; @r5; @r6; @r7], or by the introduction of synthetic gauge fields [@r8; @r9; @r10; @r11; @r12; @r13; @r14; @r15; @r16; @r17]. Other examples of chiral edge transport in 2D or 3D optical media include photonic Floquet topological insulators in helical waveguide lattices [@r18; @r18bis], gyroid photonic crystals [@r19], bianisotropic metamaterials [@r20; @r21], chiral hyperbolic metamaterials [@r22], and optomechanical lattices [@r23]. In one-dimensional (1D) systems, the possibility of realizing robust one-way transport has received less attention so far, mainly because topological protection is generally unlikely in 1D. Proposals include ’Thouless pumping’ in quasicrystals [@r24; @r25], Landau-Zener transport in binary lattices [@r26], the use of ’synthetic’ dimensions in addition to the physical spatial dimension [@r27; @r28; @r28bis], and non-Hermitian transport [@r29]. In 1D lattices with short-range hopping, the action of synthetic gauge fields is generally trivial, as any loop encloses zero flux, and thus topological protection can arise from the adiabatic change of some parameter (Thouless pumping) or by adding a ’synthetic’ dimension. However, 2D lattices can be mapped into 1D chains with long-range hoppings [@r30], so that loops with nonvanishing magnetic fluxes and quantum Hall physics become possible even in 1D systems without additional synthetic dimensions [@r31]. An implementation of 1D lattices with long-range hopping and synthetic gauge fields, based on periodically-driven spin chains with special driving protocols, has been recently suggested in Ref.[@r31]. However, its practical realization remains challenging.

In this Letter it is shown that in a wide class of 1D metacrystals, described by an effective Hermitian Hamiltonian with long-range hopping and broken time reversal symmetry, one can realize unidirectional and robust transport which is not assisted by topological protection. A simple physical implementation of such metacrystals in optics, based on transverse light dynamics in a self-imaging optical resonator with phase gratings, is suggested.

Robust unidirectional transport in a one-dimensional metacrystal
================================================================

Let us consider the motion of a quantum particle on a 1D lattice, subjected to an external potential $U(x)$ which varies slowly over the lattice period $a$. In our analysis, the potential $U(x)$ accounts for lattice defects or disorder of site energies. In the single band approximation and after expanding the wave function $\psi(x,t)$ of the particle on the basis of displaced Wannier functions $W(x-na)$, i.e.
after setting $\psi(x,t)=\sum_n f(n,t)W(x-na)$, it is well known that the envelope function $f(n,t)$ is obtained as $f(n,t)=\phi(x=na,t)$, where $\phi(x,t)$ satisfies the Schrödinger equation (taking $\hbar=1$) $$i \partial_t \phi= \hat{H}_{eff} \phi$$ with an effective Hamiltonian $\hat{H}_{eff}=\hat{H}_{0}+U(x)$ [@r32], where $$\hat{H}_0=E(-i \partial_x),$$ $E(k)=E(k+ 2 \pi/a)$ is the energy dispersion curve of the lattice band, and $-\pi/a\leq k < \pi/a$ is the Bloch wave number (quasi-momentum). After introduction of the Fourier coefficients $J_n$ of $E(k)$, $E(k)=\sum_n J_n \exp(inak)$, the Schrödinger equation (1) reads explicitly $$i \frac{\partial \phi (x,t)}{\partial t} =\sum_n \ J_n \phi(x+na,t)+U(x) \phi(x,t).$$ Note that $J_n$ corresponds to the hopping amplitude between two sites in the lattice spaced by $n$. For a Hermitian lattice with time-reversal symmetry, the energy $E(k)$ is real and has the even symmetry $E(-k)=E(k)$, which implies $J_n$ real and $J_{-n}=J_n$. For example, in the nearest-neighbor tight-binding approximation (short-range hopping), $E(k) =-J \cos(ka)$, where $J/2=-J_1$ is the hopping amplitude between adjacent lattice sites. The even symmetry of the dispersion curve $E(k)$ is responsible for backscattering of a particle wave packet, propagating along the lattice, in the presence of defects or disorder. In fact, for a forward-propagating wave packet with carrier quasi-momentum $k_0$, moving with a group velocity $v_g=(dE/dk)_{k_0}>0$, the scattering potential can excite the energy-degenerate state with quasi-momentum $-k_0$, corresponding to a backward propagating wave $(dE/dk)_{-k_0}<0$; see Figs.1(a) and (b).

By breaking time-reversal symmetry, one can in principle synthesize a lattice band with a dispersion curve $E(k)$ which is an increasing (or decreasing) function of $k$ over the entire Brillouin zone $-\pi/a<k<\pi/a$, with a rapid (abrupt) change at the Brillouin zone edges $k= \pm \pi/a$; see Fig.1(c). In this way, back reflections are forbidden, since at any quasi-momentum $k_0$ the group velocity has the same sign (apart from the Brillouin zone edges of negligible measure). In such a metacrystal, long-range hopping is necessary, together with a proper engineering of the phases of hopping amplitudes to break time reversal symmetry. For example, in a metacrystal with a sawtooth-shaped dispersion curve of bandwidth $2J$, $$E(k) =J a k/ \pi, \; \; \; -\pi/a <k < \pi/a,$$ the hopping amplitudes $J_n$, as obtained from the Fourier series expansion $E(k)=2J \sum_{n=1}^{\infty} [(-1)^{n+1}/n \pi] \sin (nka)$, are given by $$\begin{aligned} J_n= \left\{ \begin{array}{cc} 0 & n=0 \\ (-1)^{n+1} J/(\pi i n) & n \neq 0 \end{array} \right.\end{aligned}$$ Note that, while $J_{-n} =J_n^*$ (Hermitian lattice), $J_n$ is imaginary, indicating that time reversal symmetry is broken. An interesting property of the sawtooth metacrystal is that, besides ensuring one-way propagative states, the group velocity $v_g=Ja/ \pi$ is uniform, corresponding to vanishing group velocity dispersion and distortionless wave packet propagation.
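As a quick numerical check of Eq. (5), the short Python fragment below evaluates the Fourier coefficients of the sawtooth band of Eq. (4) by direct integration over the Brillouin zone (here with $J=a=1$); this is only a verification sketch, not part of the analytical derivation.

```python
import numpy as np

J, a, N = 1.0, 1.0, 20000
dk = 2 * np.pi / (a * N)
k = -np.pi / a + (np.arange(N) + 0.5) * dk          # midpoint grid over one Brillouin zone
E = J * a * k / np.pi                               # sawtooth band, Eq. (4)

def hopping(n):
    """J_n = (a/2pi) * int_BZ E(k) exp(-i n a k) dk, from E(k) = sum_n J_n exp(i n a k)."""
    return a / (2 * np.pi) * np.sum(E * np.exp(-1j * n * a * k)) * dk

for n in (1, 2, 3, -1):
    analytic = (-1) ** (n + 1) * J / (np.pi * 1j * n)   # Eq. (5)
    print(n, np.round(hopping(n), 4), np.round(analytic, 4))
```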
To highlight the robust propagation properties of the sawtooth metacrystal as compared to an ordinary tight-binding crystal with short-range hopping and time reversal symmetry, described by a sinusoidal band $E(k)=- J \cos (ka)$, we numerically computed the evolution of a Gaussian wave packet in the two lattices in the presence of either a potential site defect \[Fig.2(a)\] or site-energy disorder \[Fig.2(b)\], assuming the same bandwidth $2J$ and lattice period $a$. The figure clearly shows that, while back reflections and deceleration of motion are observed in the sinusoidal lattice band, wave packet propagation turns out to be robust in the sawtooth metacrystal.

![ (Color online) (a) Schematic of back-reflection induced by defects or disorder in a single-band tight-binding lattice. (b) Band dispersion curve $E(k)$ in a crystal with time-reversal symmetry. A forward-propagating wave packet with carrier wave number $k_0$ can be scattered off by the defects into a backward propagating wave packet with carrier wave number $-k_0$. (c) Band dispersion curve $E(k)$ in a metacrystal with broken time reversal symmetry and with $dE/dk>0$ over the entire Brillouin zone ($ k \neq \pm \pi/a)$.](Fig1.eps){width="8.4cm"}

![ (Color online) Propagation of a Gaussian wave packet in a tight-binding crystal with unbroken time reversal symmetry (sinusoidal band, left column) and with broken time reversal symmetry (sawtooth band, right column). The panels show the numerically computed temporal evolution of $|f(n,t)|^2$ (on a pseudocolor map) and of the wave packet center of mass $\langle n \rangle$. The initial condition is $f(n,0) \propto \exp[-(n+20)^2/16+i \pi n/2]$, corresponding to a group velocity $v_g=Ja$ in the sinusoidal crystal, and $v_g=Ja/ \pi$ in the sawtooth metacrystal. In (a) a potential defect at site $n=0$ is introduced, namely $U(x=na)=U_0 \delta_{n,0}$ with $U_0=2J$. In (b) on-site potential energy disorder is introduced, corresponding to $U(x=na)$ being a random variable uniformly distributed in the range $(-J/2,J/2)$.](Fig2.eps){width="9.1cm"}

![ (Color online) Schematic of the optical ring resonator in the self-imaging regime that realizes a metacrystal. The resonator is composed of four lenses (focal length $f$), a phase grating and a phase mask, with transmission $t_1(x)=\exp[-i \varphi_1(x)]$ and $t_2(x)=\exp[-i \varphi_2(x)]$, respectively.](Fig3.eps){width="8.4cm"}

Resonator optics realization of a metacrystal
=============================================

A main challenge is the physical implementation of a 1D metacrystal, which requires long-range hopping and breaking of time reversal symmetry. A possible platform is provided, at least in principle, by spin chains and trapped ions with synthetic gauge fields [@r31]. However, the precise tailoring of hopping rates in amplitude and phase remains a rather challenging task. Here we suggest a rather simple optical implementation of a 1D metacrystal, which is based on transverse beam dynamics in a self-imaging optical resonator with a phase grating. In a few recent works, it has been suggested that light waves propagating back and forth in an optical resonator can emulate synthetic magnetism [@r16; @r17] and can realize diffraction management [@r33; @r34], i.e. the optical analogue of kinetic energy operator management. Here we show that a self-imaging ring resonator with an intracavity phase grating can emulate for light waves the effective Schrödinger equation (3) of a metacrystal.
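The comparison of Fig. 2 can be reproduced qualitatively with the minimal Python sketch below, which builds the single-particle Hamiltonian of Eq. (3) on a finite chain for the sinusoidal and sawtooth bands and propagates a tilted Gaussian wave packet through on-site disorder. The chain length, the truncation of the long-range hoppings, the evolution time and the shifted initial position are assumptions of this sketch, not parameters of the original calculation.

```python
import numpy as np
from scipy.linalg import expm

def band_hamiltonian(N, band, J=1.0, n_max=30):
    """H[m, m+n] = J_n on an open chain of N sites (long-range hoppings truncated at n_max)."""
    H = np.zeros((N, N), dtype=complex)
    if band == "sinusoidal":                          # E(k) = -J cos(ka): nearest neighbours only
        hop = {1: -J / 2, -1: -J / 2}
    else:                                             # sawtooth band: hoppings of Eq. (5)
        hop = {n: (-1) ** (n + 1) * J / (np.pi * 1j * n)
               for n in range(-n_max, n_max + 1) if n != 0}
    for d, Jd in hop.items():
        H += Jd * np.eye(N, k=d)
    return H

N, J, t = 200, 1.0, 60.0
n = np.arange(N)
U = np.diag(np.random.default_rng(1).uniform(-J / 2, J / 2, N))   # disorder as in Fig. 2(b)
psi0 = np.exp(-(n - 60) ** 2 / 16 + 1j * np.pi * n / 2)           # tilted Gaussian wave packet
psi0 = psi0 / np.linalg.norm(psi0)

for band in ("sinusoidal", "sawtooth"):
    psi = expm(-1j * (band_hamiltonian(N, band, J) + U) * t) @ psi0
    print(band, "center of mass:", round(float(n @ np.abs(psi) ** 2), 1))
```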
A schematic of the passive optical resonator is shown in Fig.3. It consists of a four-lens ring cavity of total length $L=8f$ in the so-called 4-$f$ self-imaging configuration [@r34]. Planes $\gamma$ and $\gamma_F$ shown in Fig.3 are Fourier conjugate planes. A thin phase grating with spatial period $A$ and transmission function $t_{1}(x)=\exp[-i \varphi_1(x)]$, with $\varphi_1(x+A)=\varphi_1(x)$, is placed at the Fourier plane $\gamma_F$, whereas a second phase mask with transmission function $t_{2}(x)=\exp[-i \varphi_2(x)]$ is placed at the plane $\gamma$. Light propagation inside the optical ring can be readily obtained by application of the generalized Huygens-Fresnel integral. Assuming one transverse spatial dimension $x$ and disregarding, at this stage of the analysis, cavity losses and external beam injection, the evolution of the intracavity field envelope $\psi_m(x)$ at plane $\gamma$ in the cavity and at the $m$-th round trip is governed by the following map $$\psi_{m+1}(x)= t_2(x) t_1\left( \frac{i \lambda f}{2 \pi} \frac{\partial}{\partial x} \right) \psi_m(x)$$ where $\lambda$ is the wavelength of the circulating optical field. The recurrence relation (6) can be transformed into a Schrödinger-like wave equation using a rather standard method [@r34]. In the limit $|\varphi_{1,2}(x) | \ll 1$, after first-order expansion $t_{1,2}(x) \simeq 1-i \varphi_{1,2}(x)$ and continuation of the round trip number ($m \rightarrow t$), from Eq.(6) one can derive the following evolution equation for the intracavity field $\psi(x,t)$ at plane $\gamma$ $$i \frac{\partial \psi}{ \partial t}= E(-i \partial_x) \psi+ U(x) \psi$$ where $t$ is the temporal variable in units of the cavity round trip time, and where we have set $$U(x)=\varphi_2(x) \; , \;\;\; E(k)= \varphi_1 \left( -\frac{\lambda f k}{2 \pi} \right).$$

![ (Color online) Beam evolution at successive round trips (color maps of normalized intensity distribution at plane $\gamma$) in the ring resonator with an injected Gaussian pulsed beam for a sinusoidal phase grating (left panels) and for a sawtooth phase grating (right panels) at the Fourier plane $\gamma_F$. In (a) there is no phase mask in $\gamma$ (homogeneous metacrystal), whereas in (b) and (c) a phase mask is introduced to emulate a localized defect \[in (b)\] and site-energy disorder \[in (c)\]. Parameter values are given in the text. Panel (d) shows the behavior of the normalized optical power of the intracavity field (solid line) and the temporal amplitude $F(t)$ of the Gaussian excitation beam (dashed line).](Fig4.eps){width="9cm"}

Note that Eq.(7) is precisely the Schrödinger equation (1) of a metacrystal with band dispersion curve $E(k)$ and external potential $U(x)$, defined by Eq.(8). Hence, the transverse beam motion along the spatial coordinate $x$ at the resonator plane $\gamma$ emulates the motion of a quantum particle in an arbitrary 1D crystal under an external potential. The profile of the phase grating in the Fourier plane $\gamma_F$ defines the dispersion curve $E(k)$ of the lattice band, and can thus be tailored to realize a metacrystal, i.e. a crystal with long-range hopping and broken time reversal symmetry. Remarkably, long-range hopping and breaking of time reversal symmetry do not require here the introduction of synthetic gauge fields or special modulation of parameters [@r31], making the method rather simple.
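A minimal numerical sketch of the map (6) is given below: the grating phase is applied in the Fourier domain (where the argument of $t_1$ reduces to $-\lambda f k/2\pi$) and the mask phase in real space. The sawtooth grating profile, grid size and number of round trips are choices made for this illustration; with $U(x)=0$ the packet should drift by roughly $v_g=Ja/\pi$ per round trip, as expected from Eq. (8).

```python
import numpy as np

lam, f, A, J = 633e-9, 2e-2, 30e-6, 0.5          # wavelength, focal length, grating period, amplitude
a = lam * f / A                                  # equivalent metacrystal period at plane gamma

x = np.linspace(-0.02, 0.02, 4096, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])

def phi1_sawtooth(u):
    """One possible sawtooth grating profile of period A, chosen so that E(k) = J a k / pi."""
    return -J * (2.0 * np.mod(u / A + 0.5, 1.0) - 1.0)

phi2 = np.zeros_like(x)                          # no intracavity phase mask: U(x) = 0

def round_trip(psi):
    """psi_{m+1}(x) = t2(x) t1(i lam f/(2 pi) d/dx) psi_m(x), Eq. (6), via FFT."""
    psi_k = np.fft.fft(psi) * np.exp(-1j * phi1_sawtooth(-lam * f * k / (2 * np.pi)))
    return np.exp(-1j * phi2) * np.fft.ifft(psi_k)

psi = np.exp(-x**2 / (800e-6) ** 2 + 0.5j * np.pi * x / a)   # tilted Gaussian beam at plane gamma
for _ in range(200):
    psi = round_trip(psi)
centroid = np.sum(x * np.abs(psi) ** 2) / np.sum(np.abs(psi) ** 2)
print(f"beam centroid after 200 round trips: {centroid * 1e3:.1f} mm")
```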
The profile of the phase mask at the plane $\gamma$ determines the external potential $U(x)$, and can be designed to emulate lattice defects or disorder. Note that the equivalent spatial period $a$ of the metacrystal in real space $x$ is given by $a= \lambda f /A$. Since the cavity operates in a self-imaging condition and the phase mask and grating act on the $x$ spatial coordinate solely, in the orthogonal $y$ transverse coordinate the beam profile is not affected by propagation inside the resonator.\ In an optical experiment, wave packet dynamics and robustness against back reflections can be observed by considering the freely-decaying beam dynamics in the passive resonator initially loaded with a pulsed Gaussian beam $E(x,t)=F(t)G(x)$, which is injected through one of the four cavity mirrors. Assuming that the carrier frequency of the injected beam is in resonance with one of the cavity axial modes, the map (6) is replaced by the following one $$\begin{aligned} \psi_{m+1}(x) & = & t_2(x) t_1\left( \frac{i \lambda f}{2 \pi} \frac{\partial}{\partial x} \right) \psi_m(x)+\sqrt{T} E_m(x) \nonumber \\ & - & \frac{T}{2} \psi_m(x), \end{aligned}$$ where $T \ll1$ is the transmittance of the coupling mirror, and $E_m(x)=F(m)G(x)$ is the spatial profile of the injected beam at plane $\gamma$ and at the $m$-th round trip. The free-decay of light in the cavity, following the pulse excitation with the external beam, basically emulates wave packet evolution in a metacrystal. In an experiment, transverse light evolution at successive transits in the cavity can be detected by time-resolved beam profile measurements using a gated camera, as demonstrated e.g. in Refs.[@r35; @r36]. As an example, Fig.4 shows the beam evolution of the intracavity field at plane $\gamma$, as obtained by numerical integration of the map (9), assuming either a sinusoidal grating profile $\varphi_1(x)=-J \cos(2 \pi x/A)$, or a sawtooth grating profile with the same period $A=30 \; \mu$m and amplitude $J=0.5$. Parameter values used in the simulations are $\lambda=633$ nm, $f=2$ cm, and $T=2 \%$, corresponding to a spatial period $a= \lambda f /A = 422 \; \mu$m of the metacrystal. The injected field is a pulsed and tilted Gaussian beam with transverse profile (at plane $\gamma$) $G(x)=\exp(-x^2/w^2+0.5 i \pi x/a)$ ($w=800 \; \mu$m) and pulse envelope $F(t)=\exp[-(t-t_0)^2/\tau^2]$ ($t_0=20$, $\tau=10$ in units of the round trip time $T_R=8f/c \simeq 0.53 $ ns). The external potential $U(x)$ is assumed to be either a localized defect of the lattice ($U(x)=-U_0 \exp[-(x-d)^2/s^2]$, $U_0=0.2$, $d=1600 \; \mu$m, $s=600 \; \mu$m), Fig.4(b); or a random potential ($U(x)$ random variable with uniform distribution in the range $ (-0.5,0.5)$), Fig.4(c). The freely-evolving optical beam, in the absence of lattice defects and disorder, is shown for comparison in Fig.4(a). The behavior of the intracavity power before and after injection with the external beam is also shown in Fig.4(d). The decay of the optical power in the cavity after initial pulse excitation is due to cavity losses at the output coupler. The numerical results clearly indicate that, after excitation of the passive cavity with the tilted external pulsed beam, transverse beam propagation is robust in the case of the sawtooth phase grating, while back reflections are well visible in the case of the sinusoidal phase grating according to the scenario of Fig.2. 
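For reference, the quoted derived quantities follow directly from the stated parameters; the photon-lifetime estimate in the last line is an added inference from the loss term $T/2$ in Eq. (9), not a number given in the text.

```python
c = 299_792_458.0                    # speed of light, m/s
lam, f, A, T = 633e-9, 2e-2, 30e-6, 0.02

a = lam * f / A                      # equivalent metacrystal period at plane gamma (~422 um)
T_R = 8 * f / c                      # round-trip time of the ring of length L = 8f (~0.53 ns)
tau = T_R / T                        # 1/e intracavity power decay time (estimate)

print(f"a = {a * 1e6:.0f} um, T_R = {T_R * 1e9:.2f} ns, tau ~ {tau * 1e9:.0f} ns")
```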
Conclusions
===========

Robust unidirectional transport can occur in a wide class of 1D Hermitian metacrystals with an engineered lattice band. However, long-range hopping and broken time reversal symmetry are needed to implement such metacrystals. While 1D matter-wave systems, such as spin chains or trapped ions, could be a potential platform to implement long-range hopping and synthetic gauge fields [@r31], their experimental realization remains challenging. Here we have shown that transverse beam dynamics in a self-imaging optical resonator with a phase grating provides a rather simple and experimentally-accessible system in optics to implement a metacrystal, in which time reversal symmetry breaking and long-range hopping are readily realized without the need for synthetic gauge fields or special modulation of parameters. The present results disclose an important strategy to realize robust transport in 1D lattices, without resorting to adiabatic (Thouless) pumping [@r24; @r25] or non-Hermitian transport [@r29], and suggest resonator optics as a suitable platform to implement a metacrystal.
--- abstract: 'We propose a scheme to create universal Dicke states of n largely detuned atoms through detecting the leaky photons from an optical cavity. The generation of entangled states in our scheme has quasi-unit success probability, so it has potential practicability based on current or near coming laboratory cavity QED technology.' author: - 'Yun-Feng Xiao' - 'Zheng-Fu Han' - Jie Gao - 'Guang-Can Guo' title: 'Generation of Multi-atom Dicke States with Quasi-unit Probability through the Detection of Cavity Decay' --- Entanglement plays an important role in the fields of quantum theory and quantum information processing (QIP). It is not only used to test quantum mechanics against local hidden variable theory [@bell], but also holds the keys of the applications of QIP including computation [@grover], communication [@nielsen], and cryptography [@ekert]. There are many theoretical and practical schemes to entangle two particles or multi-particles, such as spontaneous parametric down converters [@bouwmeester], linear ion trap [@sackett], atomic ensembles [@duan], and cavity QED [@rauschenbeutel]. Among them, the schemes based on cavity QED attract persistent interest in the experimental realization because long-lived states in high-$Q$ cavities provide a promising tool for creating entanglement and superposition, and also possible for implementation of quantum computing [@xiao]. Entangled states for two-level atoms in cavity QED have been observed for [@rauschenbeutel; @hagley] in experiment to date. One of the main obstacles for the implementation of quantum information in cavity QED, including preparation of multi-atom entanglement, is the decoherence of the atoms and cavity fields. As potential solutions, largely detuned atoms in the cavities [@zheng] and dissipation-assisted conditional quantum evolution via the detection of cavity decay [@many] have been proposed to date. On the one hand, when the atoms have large detuning with the cavity modes, the atomic populations in the excited states are very small and thus atomic spontaneous emission (described by the rate $\gamma_{s}$) can be neglected. In other words, the atomic excited states can be eliminated due to large detuning and atoms evolve only in their ground state space. On the other hand, cavity decay is also considered as a useful ingredient, not a destructive factor, since the idea was proposed by Plenio and Knight [@knight]. Recently, Hong and Lee proposed a scheme to generate two-atom entangled state in a cavity quasideterministicly [@hong]. In their paper, two three-level atoms should couple resonantly with two different polarized cavity modes. Therefore, the atomic spontaneous emission is a significant reason to decrease the success probability, especially when the trials have been done for several times. Here, we propose an extended and improved scheme to generate Dicke states [@mandel] of arbitrary n among N trapped largely detuned atoms with quasi-unit probability. Furthermore, we extend our conclusion to generate a non-trivial subset of Dicke states of the n atoms step by step. The Dicke states are defined as $\left\vert n,m\right\rangle _{Dicke}=C\left( n,m\right) \left( s_{1}\right) ^{m}\left( s_{0}\right) ^{n-m}\otimes_{j=1}^{n}\left\vert e\right\rangle _{j}$ [@mandel]. 
Here, the collective operators $s_{k}$ ($k=0,1$) are defined as $s_{k}={\displaystyle\sum\nolimits_{j=1}^{n}} \left\vert k\right\rangle _{jj}\left\langle e\right\vert $, and the normalization coefficient is $C\left( n,m\right) =1/\sqrt{n!m!\left( n-m\right) !}$. Multi-atom Dicke states and GHZ states in general belong to different classes of entangled states, and Dicke states are relatively more immune to the influence of noise [@dur]. Dicke states have many interesting applications in QIP and in high-precision measurements [@cabello]. In current practical cavity QED systems, an optical cavity always has a finite quality factor, and the coherent coupling rate $g\left( r\right) $ between an atom and the cavity mode changes rapidly in time if the atoms pass through the cavity instead of being trapped. We therefore consider N atoms trapped in a bimodal cavity, as shown in Fig. 1a; trapping times of up to several seconds have been demonstrated [@rempe; @kimble]. The two cavity modes have different polarizations, $L$ (left-circularly polarized) and $R$ (right-circularly polarized). The cavity photons that leak out through the mirror can be detected without failure by two single-photon detectors $D_{0}$ and $D_{1}$, where $D_{0}$ ($D_{1}$) is triggered by an $L$ ($R$) photon due to the quarter-wave plate (QWP). The trapped atoms have an identical five-level configuration, as depicted in Fig. 1b, with the ground states $\left\vert 0\right\rangle $, $\left\vert 1\right\rangle $, $\left\vert 2\right\rangle $ and the excited states $\left\vert e\right\rangle $ and $\left\vert r\right\rangle $. This configuration can be obtained in $^{87}$Rb. For example, the states $\left\vert 0\right\rangle $, $\left\vert 1\right\rangle $, and $\left\vert 2\right\rangle $ are, respectively, $\left\vert F=2,m=-1\right\rangle $, $\left\vert F=2,m=1\right\rangle $, and $\left\vert F=1,m=-1\right\rangle $ of $5^{2}S_{1/2}$; $\left\vert e\right\rangle $ and $\left\vert r\right\rangle $ are, respectively, $\left\vert F=1,m=0\right\rangle $ and $\left\vert F=1,m=-1\right\rangle $ of $5^{2}P_{1/2}$. The atomic transition $\left\vert 0\right\rangle \longleftrightarrow\left\vert e\right\rangle $ ($\left\vert 1\right\rangle \longleftrightarrow\left\vert e\right\rangle $) is coupled to the cavity mode $a_{L}$ ($a_{R}$). The two $\pi$-polarized classical laser pulses $\varepsilon_{1}$ and $\varepsilon_{2}$ are used to transfer the population of the state $\left\vert 0\right\rangle $ to $\left\vert 2\right\rangle $, with Rabi frequencies $\Omega_{1}\left( t\right) $ and $\Omega_{2}\left( t\right) $, respectively. ![(a) Schematic setup to realize a multi-atom entangled state in a leaky optical cavity. QWP is a quarter-wave plate, PBS is a polarization beamsplitter, and $D_{0}$ and $D_{1}$ are two single-photon detectors. (b) The energy-level diagram of the atoms. The states $\left\vert 0\right\rangle $, $\left\vert 1\right\rangle $ and $\left\vert 2\right\rangle $ are hyperfine states in the ground-state manifold; $\left\vert e\right\rangle $ and $\left\vert r\right\rangle $ are excited states. The atomic transition $\left\vert 0\right\rangle \longleftrightarrow\left\vert e\right\rangle $ ($\left\vert 1\right\rangle \longleftrightarrow\left\vert e\right\rangle $) is coupled to the cavity mode $a_{L}$ ($a_{R}$).](fig1.eps) The n atoms to be entangled can be chosen arbitrarily from the N trapped atoms.
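Before turning to the preparation protocol, the Dicke-state definition above can be checked directly on a small case. The following sketch is purely illustrative and not part of the scheme itself (plain NumPy, with n = 3 atoms modeled as three-level systems $\{\left\vert 0\right\rangle ,\left\vert 1\right\rangle ,\left\vert e\right\rangle \}$): it applies $\left( s_{1}\right) ^{m}\left( s_{0}\right) ^{n-m}$ to $\left\vert e\cdots e\right\rangle $ and verifies that $C\left( n,m\right) $ normalizes the result; for $m=1$ this reproduces the W state.

```python
# Illustrative sketch only: build |n,m>_Dicke = C(n,m) (s_1)^m (s_0)^(n-m) |e...e>
# for n = 3 atoms with local basis {|0>, |1>, |e>} and check the normalization.
import numpy as np
from math import factorial
from functools import reduce

n, m = 3, 1                                   # three atoms, one excitation (W state)
d = 3                                         # local levels |0>, |1>, |e>
ket = {s: np.eye(d)[i] for i, s in enumerate("01e")}

def embed(j, op):
    """Single-atom operator `op` acting on atom j, identity on the others."""
    return reduce(np.kron, [op if k == j else np.eye(d) for k in range(n)])

def s_coll(k):
    """Collective operator s_k = sum_j |k>_j <e|_j."""
    return sum(embed(j, np.outer(ket[str(k)], ket["e"])) for j in range(n))

psi_e = reduce(np.kron, [ket["e"]] * n)       # |e e e>
unnormalized = (np.linalg.matrix_power(s_coll(1), m)
                @ np.linalg.matrix_power(s_coll(0), n - m) @ psi_e)
C = 1.0 / np.sqrt(factorial(n) * factorial(m) * factorial(n - m))
dicke = C * unnormalized

print(np.isclose(np.linalg.norm(dicke), 1.0))  # True: C(n,m) is the correct normalization
```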
We first completely transfer the population of the state $\left\vert 0\right\rangle $ of the other (N-n) atoms to the state $\left\vert 2\right\rangle $. These (N-n) atoms then lie outside the state space in which the system (cavity + n atoms) evolves. Before studying the evolution dynamics of the system, we state our assumptions. The separation between any two atoms is large compared with the wavelength of the fields of interest, so that the dipole-dipole interaction among the atoms can be neglected. This assumption is reasonable because the cavity length is always much larger than the wavelength of the cavity modes. We also neglect spontaneous emission of the atoms. Subject to no decay being recorded in the detectors, and under the rotating-wave approximation, the system conditionally evolves according to a non-Hermitian Hamiltonian given by (in units of $\hbar=1$) $$\begin{aligned} H_{1} & =\sum_{i=1}^{n}\left( \omega_{e}\left\vert e\right\rangle _{ii}\left\langle e\right\vert +\omega_{0}\left\vert 0\right\rangle _{ii}\left\langle 0\right\vert +\omega_{1}\left\vert 1\right\rangle _{ii}\left\langle 1\right\vert \right) +\left( \omega_{L}-\frac{i\kappa_{L}}{2}\right) a_{L}^{\dagger}a_{L}\nonumber\\ & +\left( \omega_{R}-\frac{i\kappa_{R}}{2}\right) a_{R}^{\dagger}a_{R}+\sum_{i=1}^{n}\left( g_{L}\left\vert e\right\rangle _{ii}\left\langle 0\right\vert a_{L}+g_{R}\left\vert e\right\rangle _{ii}\left\langle 1\right\vert a_{R}+H.c.\right) , \label{1th}\end{aligned}$$ where $\omega_{e}$ ($\omega_{0}$, $\omega_{1}$) is the energy of the atomic state $\left\vert e\right\rangle $ ($\left\vert 0\right\rangle $, $\left\vert 1\right\rangle $); $\omega_{\mu=L,R}$ is the frequency of the cavity mode $a_{\mu}$; $g_{\mu}$ (assumed real) is the single-photon coherent coupling rate to the cavity mode $a_{\mu}$; $\kappa_{L}=\omega/Q_{L}$ and $\kappa_{R}=\omega/Q_{R}$ denote the decay rates of the cavity mode fields $a_{L}$ and $a_{R}$, respectively; and $H.c.$ stands for the Hermitian conjugate. The interaction Hamiltonian in the interaction picture can then be written as $$H_{2}=\sum_{i=1}^{n}\left( g_{L}\left\vert e\right\rangle _{ii}\left\langle 0\right\vert a_{L}+g_{R}\left\vert e\right\rangle _{ii}\left\langle 1\right\vert a_{R}+H.c.\right) -\left( \Delta_{L}+\frac{i\kappa_{L}}{2}\right) a_{L}^{\dagger}a_{L}-\left( \Delta_{R}+\frac{i\kappa_{R}}{2}\right) a_{R}^{\dagger}a_{R}. \label{2th}$$ Here, the detuning $\Delta_{L}$ ($\Delta_{R}$) is defined as $\omega _{e}-\omega_{0}-\omega_{L}$ ($\omega_{e}-\omega_{1}-\omega_{R}$), and our scheme works in the strongly detuned limit $\Delta_{\mu}\gg g_{\mu^{\prime}}$. Expressing the state of the total system in the form $\left\vert \text{atoms}\right\rangle \left\vert \text{cavity}\right\rangle $, the initial state of the system is prepared as $\left\vert 0_{1}0_{2}\cdots0_{n}\right\rangle \left\vert L\right\rangle $, which means that all atoms are in the ground state $\left\vert 0\right\rangle $ and the cavity field contains one $L$-polarized photon. The temporal evolution of the system is spanned by the $2n+1$ basis states: $\{\left\vert 0_{1}0_{2}\cdots0_{n}\right\rangle \left\vert L\right\rangle $, $\left\vert e_{1}0_{2}\cdots0_{n}\right\rangle \left\vert vac\right\rangle $, $\cdots$, $\left\vert 0_{1}0_{2}\cdots e_{n}\right\rangle \left\vert vac\right\rangle $, $\left\vert 1_{1}0_{2}\cdots0_{n}\right\rangle \left\vert R\right\rangle $, $\cdots$, $\left\vert 0_{1}0_{2}\cdots1_{n}\right\rangle \left\vert R\right\rangle \}$.
Since the initial atomic state is symmetric and the atoms are indistinguishable in the evolution state space, the basis states reduce to $\{\left\vert \phi _{0}\right\rangle =\left\vert 0_{1}0_{2}\cdots0_{n}\right\rangle \left\vert L\right\rangle $, $\left\vert \phi_{1}\right\rangle =\frac{1}{\sqrt{n}}\left( \left\vert 1_{1}0_{2}\cdots0_{n}\right\rangle +\cdots+\left\vert 0_{1}0_{2}\cdots1_{n}\right\rangle \right) \left\vert R\right\rangle $, $\left\vert \phi_{2}\right\rangle =\frac{1}{\sqrt{n}}\left( \left\vert e_{1}0_{2}\cdots0_{n}\right\rangle +\cdots+\left\vert 0_{1}0_{2}\cdots e_{n}\right\rangle \right) \left\vert vac\right\rangle \}$. The state of the system at an arbitrary time is described by $$\left\vert \Psi\left( t\right) \right\rangle =c_{0}\left( t\right) \left\vert \phi_{0}\right\rangle +c_{1}\left( t\right) \left\vert \phi _{1}\right\rangle +c_{2}\left( t\right) \left\vert \phi_{2}\right\rangle . \label{3th}$$ According to the Schrödinger equation $i\partial_{t}\left\vert \Psi\left( t\right) \right\rangle =H_{2}\left\vert \Psi\left( t\right) \right\rangle $, we have$$\begin{aligned} idc_{0}\left( t\right) /dt & =-\left( \Delta_{L}+\frac{i\kappa_{L}}{2}\right) c_{0}\left( t\right) +\sqrt{n}g_{L}c_{2}\left( t\right) ,\nonumber\\ idc_{1}\left( t\right) /dt & =-\left( \Delta_{R}+\frac{i\kappa_{R}}{2}\right) c_{1}\left( t\right) +g_{R}c_{2}\left( t\right) ,\label{4th}\\ idc_{2}\left( t\right) /dt & =\sqrt{n}g_{L}c_{0}\left( t\right) +g_{R}c_{1}\left( t\right) .\nonumber\end{aligned}$$ We then apply the transformation $\lambda_{\nu=0,1,2}\left( t\right) =c_{\nu}\left( t\right) e^{-i\Delta_{L}t}$, so that eqs. $\left( 4\right) $ can be written as$$\begin{aligned} d\lambda_{0}\left( t\right) /dt & =-\frac{\kappa_{L}}{2}\lambda_{0}-i\sqrt{n}g_{L}\lambda_{2}\left( t\right) ,\nonumber\\ d\lambda_{1}\left( t\right) /dt & =-i\left[ \left( \Delta_{L}-\Delta _{R}\right) -\frac{i}{2}\kappa_{R}\right] \lambda_{1}\left( t\right) -ig_{R}\lambda_{2}\left( t\right) ,\label{5th}\\ d\lambda_{2}\left( t\right) /dt & =-i\sqrt{n}g_{L}\lambda_{0}\left( t\right) -ig_{R}\lambda_{1}\left( t\right) -i\Delta_{L}\lambda_{2}\left( t\right) .\nonumber\end{aligned}$$ Assuming $g_{R}=g_{L}=g$, $\kappa_{L}=\kappa_{R}=\kappa$, $\Delta_{L},\Delta_{R}\gg g$, and $\left\vert \Delta_{L}-\Delta_{R}\right\vert \ll g$, the atomic population in the excited state $\left\vert e\right\rangle $ is very small, and we can set $d\lambda_{2}\left( t\right) /dt=0$ [@biswas], so that eqs. $\left( 3\right) $ and $\left( 5\right) $ reduce to$$\begin{aligned} \left\vert \Psi\left( t\right) \right\rangle & =\lambda_{0}\left( t\right) \left\vert \phi_{0}\right\rangle +\lambda_{1}\left( t\right) \left\vert \phi _{1}\right\rangle ,\nonumber\\ d\lambda_{0}\left( t\right) /dt & =\left( \frac{ing^{2}}{\Delta_{L}}-\frac{\kappa}{2}\right) \lambda_{0}\left( t\right) +i\frac{\sqrt{n}g^{2}}{\Delta_{L}}\lambda_{1}\left( t\right) ,\label{6th}\\ d\lambda_{1}\left( t\right) /dt & =\frac{i\sqrt{n}g^{2}}{\Delta_{L}}\lambda_{0}\left( t\right) +\left[ \frac{ig^{2}}{\Delta_{L}}-i\left( \Delta_{L}-\Delta_{R}\right) -\frac{\kappa}{2}\right] \lambda_{1}\left( t\right) .\nonumber\end{aligned}$$ Here we neglect the common phase $e^{i\Delta_{L}t}$. The solutions of eqs.
$\left( 6\right) $ are$$\begin{aligned} \lambda_{0}\left( t\right) & =e^{-\kappa t/2}e^{\frac{i\Omega_{0}t}{2}}\left[ \cos\left( \frac{\Omega_{1}t}{2}\right) +i\frac{2ng^{2}-\Delta _{L}\Omega_{0}}{\Delta_{L}\Omega_{1}}\sin\left( \frac{\Omega_{1}t}{2}\right) \right] ,\nonumber\\ \lambda_{1}\left( t\right) & =ie^{-\kappa t/2}e^{\frac{i\Omega_{0}t}{2}}\frac{\Delta_{L}^{2}\Omega_{1}^{2}-\left( \Delta_{L}\Omega_{0}-2ng^{2}\right) ^{2}}{2\sqrt{n}g^{2}\Delta_{L}\Omega_{1}}\sin\left( \frac{\Omega_{1}t}{2}\right) , \label{7th}\end{aligned}$$ where $\Omega_{0}=\frac{\left( n+1\right) g^{2}}{\Delta_{L}}-\left( \Delta_{L}-\Delta_{R}\right) $ and $\Omega_{1}=\sqrt{\left[ \frac{\left( n+1\right) g^{2}}{\Delta_{L}}\right] ^{2}+\left( \Delta_{L}-\Delta _{R}\right) ^{2}+2\left( n-1\right) \left( \Delta_{L}-\Delta_{R}\right) \frac{g^{2}}{\Delta_{L}}}$. For $n=1$ and $\kappa=0$, this reproduces the result of Ref. [@biswas]. In order to obtain a multi-atom entangled state, we consider the case $n>1$. If we impose$$\Delta_{L}-\Delta_{R}=\left( 1-n\right) g^{2}/\Delta_{L}, \label{8th}$$ then$$\begin{aligned} \Omega_{0} & =2ng^{2}/\Delta_{L},\nonumber\\ \Omega_{1} & =2\sqrt{n}g^{2}/\Delta_{L}, \label{9th}\end{aligned}$$ and eqs. $\left( 7\right) $ reduce to the compact expressions$$\begin{aligned} \lambda_{0}\left( t\right) & =e^{-\kappa t/2}e^{\frac{i\Omega_{0}t}{2}}\cos\left( \frac{\Omega_{1}t}{2}\right) ,\nonumber\\ \lambda_{1}\left( t\right) & =e^{-\kappa t/2}ie^{\frac{i\Omega_{0}t}{2}}\sin\left( \frac{\Omega_{1}t}{2}\right) . \label{10th}\end{aligned}$$ The factor $e^{-\kappa t/2}$ describes the leakage of cavity photons. Assume that the detector $D_{0}$ or $D_{1}$ responds at a time $t=t_{r}\leq T$ (here, $T$ is the waiting time of the two detectors); this coherent time evolution $\left\vert \Psi\left( t\right) \right\rangle $ governed by $H_{2}$ is then immediately interrupted by the corresponding quantum jump operator [@jump], $b_{0}\left( =a_{L}\right) $ or $b_{1}\left( =a_{R}\right) $, respectively. After the detector $D_{k}$ responds, the state of the system can be written as$$\left\vert \Psi\left( t_{r}\right) \right\rangle ^{D_{k}}=\frac {b_{k}\left\vert \Psi\left( t_{r}\right) \right\rangle }{\left\Vert b_{k}\left\vert \Psi\left( t_{r}\right) \right\rangle \right\Vert }. \label{11th}$$ In the case that $D_{1}$ responds, after tracing out the cavity-mode part, the n atoms are left in the entangled state $\left\vert \varphi\right\rangle _{ent}=\frac{1}{\sqrt{n}}\left( \left\vert 1_{1}0_{2}\cdots0_{n}\right\rangle +\cdots+\left\vert 0_{1}0_{2}\cdots 1_{n}\right\rangle \right) $. $\left\vert \varphi\right\rangle _{ent}$ is precisely a W state [@dur] of n particles, i.e., the special case ($m=1$) of the Dicke states $\left\vert n,m\right\rangle _{Dicke}$. Notice that if we let $\sqrt{n}g_{L}=g_{R}$ in eqs. $\left( 4\right) $ and $\left( 5\right) $, the original multi-atom model reduces to the symmetric three-level system of Ref. [@biswas]. In that case, the two-photon resonance condition $\Delta _{L}=\Delta_{R}$ would have to be met in order to generate $\left\vert \varphi \right\rangle _{ent}$. However, the condition $\sqrt{n}g_{L}=g_{R}$ is not always satisfied in an actual system, because the ratio $g_{L}/g_{R}$ is fixed for two given atomic transitions. Obviously, the n atoms return to their initial state $\left\vert 0_{1}0_{2}\cdots0_{n}\right\rangle $ if $D_{0}$ is triggered. With the reinjection of a new left-circularly polarized photon into the cavity, the same process is repeated until the entangled state is prepared.
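As a consistency check of the adiabatic elimination leading to eqs. $\left( 10\right) $, one can propagate the full linear system of eqs. $\left( 5\right) $ exactly and compare. The sketch below is purely illustrative (it is not the numerics of this paper): it works in units of $g=1$, uses a $\kappa/g$ ratio of the same order as the experimental values quoted further below, imposes condition $\left( 8\right) $, and shows that $\left\vert \lambda_{1}\left( t\right) \right\vert $ closely follows $e^{-\kappa t/2}\left\vert \sin\left( \Omega_{1}t/2\right) \right\vert $ when $\Delta_{L}\gg g$.

```python
# Illustrative check only: propagate Eqs. (5) exactly via a matrix exponential
# and compare |lambda_1(t)| with the reduced closed form of Eqs. (10).
import numpy as np
from scipy.linalg import expm

n = 3
g = 1.0                        # work in units of g
kappa = 0.0875 * g             # kappa/g of the order of the cited cavity parameters
DL = 20.0 * g                  # Delta_L = 20 g
DR = DL + (n - 1) * g**2 / DL  # condition (8): Delta_L - Delta_R = (1 - n) g^2 / Delta_L

# d(lambda)/dt = M lambda for lambda = (lambda_0, lambda_1, lambda_2), Eqs. (5)
M = np.array([
    [-kappa / 2,            0.0,                                 -1j * np.sqrt(n) * g],
    [0.0,                   -1j * ((DL - DR) - 1j * kappa / 2),  -1j * g],
    [-1j * np.sqrt(n) * g,  -1j * g,                             -1j * DL],
], dtype=complex)

Omega1 = 2 * np.sqrt(n) * g**2 / DL            # Eq. (9)
lam_init = np.array([1.0, 0.0, 0.0], dtype=complex)

for t in np.linspace(0.0, np.pi / Omega1, 6):  # up to half a Rabi period of Eqs. (10)
    lam = expm(M * t) @ lam_init
    reduced = np.exp(-kappa * t / 2) * abs(np.sin(Omega1 * t / 2))
    print(f"t = {t:6.1f}   exact |lambda_1| = {abs(lam[1]):.4f}   reduced = {reduced:.4f}")
```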
Similarly to Ref. [@hong], we can make the experiment repeat itself automatically by replacing the detector $D_{0}$ with a path directed back to the cavity, so that the left-circularly polarized photon is automatically fed back into the cavity. Therefore, $\left\vert \varphi\right\rangle _{ent}$ can be produced with near-unit probability. Furthermore, if we reinject a new left-circularly polarized photon into the cavity after a W state of n atoms has been achieved, we can prepare the Dicke state $\left\vert n,2\right\rangle _{Dicke}$. Moreover, if the same process is repeated, universal n-qubit Dicke states $\left\vert n,m\right\rangle _{Dicke}$ can be prepared step by step [@yu]. We now turn to the efficiency of the scheme. In the absence of spontaneous emission of the atoms, eqs. $\left( 10\right) $ show that the probability that a photon has decayed from the cavity is $P_{decay}\left( t\right) =1-\left\vert e^{-\kappa t/2}\right\vert ^{2}$, and in the time interval $dt_{r}$ the probability that the detector $D_{1}$ is triggered is $\left\vert \sin\left( \frac{\Omega_{1}t_{r}}{2}\right) \right\vert ^{2}\cdot\left( \frac{dP_{decay}\left( t_{r}\right) }{dt_{r}}\right) dt_{r}$. Notably, a left- or right-circularly polarized photon will eventually leak out, owing to the factor $e^{-\kappa t/2}$ in eqs. $\left( 10\right) $. In fact, for $\kappa T\gg1$ one of the detectors responds within the time $T$, so the probability that the detector $D_{1}$ is triggered, i.e., the success probability $p_{suc}$ of preparing the entangled state $\left\vert \varphi\right\rangle _{ent}$, can be roughly calculated as$$\begin{aligned} p_{suc} & \approx{\displaystyle\int\nolimits_{0}^{T\gg1/\kappa}} \left\vert \sin\left( \frac{\Omega_{1}t_{r}}{2}\right) \right\vert ^{2}\cdot\left( \frac{dP_{decay}\left( t_{r}\right) }{dt_{r}}\right) dt_{r}\nonumber\\ & \approx\frac{2ng^{4}}{\Delta_{L}^{2}\left( \kappa^{2}+\Omega_{1}^{2}\right) }. \label{12th}\end{aligned}$$ To maximize $p_{suc}$ we must minimize $\Omega_{1}$, which occurs for $\left( \Delta_{L}-\Delta_{R}\right) =-\frac{\left( n-1\right) g^{2}}{\Delta_{L}}$. Note that this condition coincides with eqs. $\left( 8,9\right) $. So we have$$p_{suc}=\frac{2ng^{4}}{\Delta_{L}^{2}\kappa^{2}+4ng^{4}}. \label{13th}$$ As shown in fig. 2, for a given $\Delta_{L}$, $p_{suc}\left( g,\kappa\right) $ approaches $50\%$ when $\kappa\ll g$. ![The success probability $p_{suc}$ computed numerically as a function of $g/\kappa$ and $n$. Other parameter: $\Delta_{L}=20g$.](fig2.eps) We now consider a set of practical parameters. $g=2\pi\times16$ $\operatorname{MHz}$ and $\kappa=2\pi\times1.4$ $\operatorname{MHz}$ are given in Rempe’s group [@rempe]. We also choose $T=0.5$ $\operatorname{\mu s}$ (commonly, the coherence time of atomic internal states in a high-$Q$ cavity is much larger than the order of $T$ [@kuhr]), $\Delta_{L}=20g$, and $n=3$. After a single trial, $p_{suc}=\frac{2ng^{4}}{\Delta_{L}^{2}\kappa^{2}+4ng^{4}}\sim0.36$. This probability is not very high, but it increases to quasi-unity after several trials. For instance, the total success probability is more than $99\%$ after ten trials.
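The arithmetic behind this estimate is easy to reproduce. The short sketch below is only an illustrative cross-check (not the simulation referred to in the text): it evaluates eq. $\left( 13\right) $ at the quoted parameters, which gives a single-trial probability of roughly $0.4$, of the same order as the $\sim0.36$ quoted above, and a ten-trial probability above $99\%$.

```python
# Illustrative arithmetic check of Eq. (13) and of repeated trials.
import numpy as np

n = 3
g = 2 * np.pi * 16e6          # coupling rate (Hz), value quoted from Rempe's group
kappa = 2 * np.pi * 1.4e6     # cavity decay rate (Hz)
DL = 20 * g                   # Delta_L = 20 g

p_suc = 2 * n * g**4 / (DL**2 * kappa**2 + 4 * n * g**4)   # Eq. (13)
p_ten = 1 - (1 - p_suc)**10                                # after ten independent trials

print(f"single-trial p_suc = {p_suc:.3f}")   # approx. 0.4 for these parameters
print(f"after ten trials   = {p_ten:.4f}")   # > 0.99, consistent with the text
```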
In addition, the atoms always undergo spontaneous emission when they are in the excited states. However, the occupation of the excited states is very small, since the atomic transitions are largely detuned from the cavity modes in our scheme. Our numerical simulation shows that this occupation is less than $3\%$ for the above practical parameters. Therefore, atomic spontaneous emission plays a negligible role in our scheme. A Fabry-Pérot cavity has a general mode function described by $\chi\left( \overrightarrow{r}\right) =\sin\left( kz\right) \exp[-\left( x^{2}+y^{2}\right) /w_{0}^{2}]$ [@duan2], and $g\left( r\right) =g_{0}\chi\left( \overrightarrow{r}\right) $, where $w_{0}$ and $k=2\pi/\lambda$ are, respectively, the width and the wave vector of the Gaussian cavity mode, and $\overrightarrow{r}\left( x,y,z\right) $ describes the atomic location; $z$ is taken along the axis of the cavity. In the above description we have assumed the coupling rate $g_{L,R}^{i}\left( r\right) =g$, which means that all atoms are located at certain specific positions. The obvious best case is $\sin\left( k_{L,R}z\right) =1$ and $x=y=0$. These conditions still remain a challenge for current cavity QED technology. We also note that our scheme requires an efficient source of single photons [@sps] and their injection into an optical cavity. However, as a theoretical design, our scheme is potentially feasible with near-future technology, since all of these obstacles are under active investigation [@kimble; @mckeever] and may be overcome in the near future. In conclusion, we have presented a scheme to prepare multi-atom entangled states in an optical cavity. Based on current and near-future cavity QED technology, and exploiting the dissipation of the system, we have designed a measurement-based scheme, relying on the detection of leaky photons, to generate universal Dicke states $\left\vert n,m\right\rangle _{Dicke}$ of n atoms. Although the preparation is probabilistic in a single trial, the success probability approaches unity after several trials. Moreover, our scheme has ultra-high fidelity, since non-ideal single-photon detectors, absorption by the cavity mirrors, and atomic spontaneous emission only decrease the success probability, not the fidelity of the prepared states. We thank Professor Y.-S. Zhang and X.-M. Lin for their helpful advice. This work was funded by the National Fundamental Research Program of China (Grant No. 2001CB309300) and the Innovation Funds of the Chinese Academy of Sciences. [99]{} J. S. Bell, Physics (Long Island City, NY) **1**, 195 (1965). L. K. Grover, Phys. Rev. Lett. **79**, 325 (1997); P. W. Shor, SIAM J. Comput. **26**, 1484 (1997). M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, England, 2000). A. K. Ekert, Phys. Rev. Lett. **67**, 661 (1991). D. Bouwmeester, J.-W. Pan, M. Daniell, H. Weinfurter, and A. Zeilinger, Phys. Rev. Lett. **82**, 1345 (1999). C. A. Sackett *et al.*, Nature (London) **404**, 256 (2000). L. M. Duan, Phys. Rev. Lett. **88**, 170402 (2002). A. Rauschenbeutel *et al.*, Science **288**, 2024 (2000). M. S. Zubairy, M. Kim, and M. O. Scully, Phys. Rev. A **68**, 033820 (2003). P. Lougovski, E. Solano, and H. Walther, eprint: quant-ph/0308059. A. Beige, D. Braun, B. Tregenna, and P. L. Knight, Phys. Rev. Lett. **85**, 1762 (2000); J. Pachos and H. Walther, *ibid.* **89**, 187903 (2002); Y.-F. Xiao *et al.*, Phys. Rev. A **70**, 042314 (2004). E. Hagley *et al.*, Phys. Rev. Lett. **79**, 1 (1997); S. Osnaghi *et al.*, *ibid.* **87**, 037902 (2001). S.-B. Zheng and G.-C. Guo, Phys. Rev. Lett. **85**, 2392 (2000). A. S. Sørensen and K. Mølmer, Phys. Rev. Lett. **91**, 097905 (2003); L.-M. Duan and H. J.
Kimble, *ibid.* **90**, 253601 (2003); C. Simon and W. T. M. Irvine, *ibid.* **91**, 110405 (2003); D. E. Browne, M. B. Plenio, and S. F. Huelga, *ibid.* **91**, 067901 (2003). M. B. Plenio and P. L. Knight, Rev. Mod. Phys. **70**, 101 (1998). J. Hong and H.-W. Lee, Phys. Rev. Lett. **89**, 237901 (2002). L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge University Press, Cambridge, England, 1995). W. Dür, G. Vidal, and J. I. Cirac, Phys. Rev. A **62**, 062314 (2000). A. Cabello, Phys. Rev. A **65**, 032108 (2002); M. Koashi, V. Buzek, and N. Imoto, *ibid.* **62**, 050302(R) (2000). P. Maunz *et al.*, Nature (London) **428**, 50 (2004). J. McKeever *et al.*, Science **303**, 1992 (2004). A. Biswas and G. S. Agarwal, Phys. Rev. A **69**, 062306 (2004). M. O. Scully and M. S. Zubairy, Quantum Optics (Cambridge University Press, Cambridge, England, 1997). B. Yu, Z.-W. Zhou, and G.-C. Guo, J. Opt. B **6**, 86 (2004). S. Kuhr *et al.*, Phys. Rev. Lett. **91**, 213002 (2003). L.-M. Duan, A. Kuzmich, and H. J. Kimble, Phys. Rev. A **67**, 032305 (2003). B. Lounis and W. E. Moerner, Nature (London) **407**, 491 (2000); M. Pelton *et al.*, Phys. Rev. Lett. **89**, 233602 (2002). J. McKeever, J. R. Buck, A. D. Boozer, and H. J. Kimble, Phys. Rev. Lett. **93**, 143601 (2004).
--- author: - | [**Collaboration**]{}\ Karl Jansen$^{a}$, Andrea Shindler$^{a}$,\ Carsten Urbach$^{a,b}$ and Ines Wetzorke$^{a}$\ \ [$^a$ NIC/DESY Zeuthen]{}\ [Platanenallee 6, D-15738 Zeuthen, Germany]{}\ [$^b$ Institut für Theoretische Physik, Freie Universität Berlin]{}\ [Arnimallee 14, D-14195 Berlin, Germany]{}\ bibliography: - 'bibliography.bib' title: 'Scaling test for Wilson twisted mass QCD' ---
--- abstract: 'We classify morphisms from proper varieties to Brauer–Severi varieties, which generalizes the classical correspondence between morphisms to projective space and globally generated invertible sheaves. As an application, we study del Pezzo surfaces of large degree with a view towards Brauer–Severi varieties, and recover classical results on rational points, the Hasse principle, and weak approximation.' address: 'TU München, Zentrum Mathematik - M11, Boltzmannstr. 3, D-85748 Garching bei München, Germany' author: - Christian Liedtke date: 'May 4, 2016' title: 'Morphisms to Brauer–Severi varieties, with applications to del Pezzo surfaces' --- Introduction ============ Overview -------- The goal of this article is the study of morphisms $X\to P$ from a proper variety $X$ over a field $k$ to a Brauer–Severi variety $P$ over $k$, i.e., $P$ is isomorphic to projective space over the algebraic closure $\overline{k}$ of $k$, but not necessarily over $k$. If $X$ has a $k$-rational point, then so has $P$, and then, $P$ is isomorphic to projective space already over $k$. In this case, there exists a well-known description of morphisms $X\to P$ in terms of globally generated invertible sheaves on $X$. However, if $X$ has no $k$-rational point, then we establish in this article a correspondence between globally generated classes of ${{\rm Pic }}_{(X/k)({\rm fppf})}(k)$, whose obstruction to coming from an invertible sheaf on $X$ is measured by some class $\beta$ in the Brauer group ${{\rm Br}}(k)$, and morphisms to Brauer–Severi varieties of class $\beta$ over $k$. As an application of this correspondence, we study del Pezzo surfaces over $k$ in terms of Brauer–Severi varieties, and recover many known results about their geometry and their arithmetic. If $k$ is a global field, then we obtain applications concerning the Hasse principle and weak approximation. Our approach has the advantage of being elementary, self-contained, and that we sometimes obtain natural reasons for the existence of $k$-rational points. Morphisms to Brauer–Severi varieties ------------------------------------ Let $X$ be a proper variety over a field $k$, and let $\overline{k}$ be the algebraic closure of $k$. When studying invertible sheaves on $X$, there are inclusions and equalities of abelian groups $${{\rm Pic }}(X) \,\subseteq\, {{\rm Pic }}_{(X/k)({{\rm \acute{e}t}})}(k) \,=\, {{\rm Pic }}_{(X/k)({\rm fppf})}(k)\,\subseteq\, {{\rm Pic }}(X_{\overline{k}}).$$ On the left (resp. right), we have invertible sheaves on $X$ (resp. $X_{\overline{k}}$) up to isomorphism, whereas in the middle, we have sections of the sheafified relative Picard functor over $k$ (with respect to the étale and fppf topology, respectively). Moreover, the first inclusion is part of an exact sequence $$0\,\to\,{{\rm Pic }}(X) \,\to\, {{\rm Pic }}_{(X/k)({{\rm \acute{e}t}})}(k)\,\stackrel{\delta}{\longrightarrow}\,{{\rm Br}}(k),$$ where ${{\rm Br}}(k)$ denotes the Brauer group of the field $k$, and we refer to Remark \[rem: explain delta\] for explicit descriptions of $\delta$.
If $X$ has a $k$-rational point, then $\delta$ is the zero map, i.e., the first inclusion is a bijection. By definition, a [*Brauer–Severi variety*]{} is a variety $P$ over $k$, such that $P_{\overline{k}}{\cong}{{{{\mathbb P}}}}_{\overline{k}}^N$ for some $N$, i.e., $P$ is a twisted form of projective space. Associated to $P$, there exists a Brauer class $[P]\in{{\rm Br}}(k)$ and by a theorem of Châtelet, $P$ is trivial, i.e., isomorphic to projective space over $k$, if and only if $[P]=0$. This is also equivalent to $P$ having a $k$-rational point. In any case, we have a class ${{{\mathcal}O}}_P(1)\in{{\rm Pic }}_{(P/k)({\rm fppf})}(k)$, in general not arising from an invertible sheaf on $P$, which becomes isomorphic to ${{{\mathcal}O}}_{{{{{\mathbb P}}}}^N}(1)$ over $\overline{k}$, see Definition \[def: O(1) for BS\]. In this article, we extend the notion of a [*linear system*]{} to classes in ${{\rm Pic }}_{(X/k)({\rm fppf})}(k)$ that do not necessarily come from invertible sheaves. More precisely, we extend the notions of being [*globally generated*]{}, [*ample*]{}, and [*very ample*]{} to such classes, see Definition \[def: globally generated\]. Then, we set up a dictionary between globally generated classes in ${{\rm Pic }}_{(X/k)({\rm fppf})}(k)$ and morphisms from $X$ to Brauer–Severi varieties over $k$. In case $X$ has a $k$-rational point, then we recover the well-known correspondence between globally generated invertible sheaves and morphisms to projective space. Here is an easy version of our correspondence and we refer to Theorem \[thm: main\] and Remark \[rem: trivial case\] for details. \[theorem1\] Let $X$ be a proper variety over a field $k$. 1. Let $\varphi:X\to P$ be a morphism to a Brauer–Severi variety $P$ over $k$. If we set ${{\mathcal}L}:=\varphi^*{{{\mathcal}O}}_P(1)\in{{\rm Pic }}_{(X/k)({\rm fppf})}(k)$, then ${{\mathcal}L}$ is a globally generated class and $$\delta({{\mathcal}L}) \,=\, [P] \,\in{{\rm Br}}(k).$$ 2. If ${{\mathcal}L}\in{{\rm Pic }}_{(X/k)({\rm fppf})}(k)$ is globally generated, then ${{{\mathcal}{L}}}\otimes_k\overline{k}$ corresponds to a unique invertible sheaf ${{\mathcal}M}$ on $X_{\overline{k}}$ and the morphism associated to the complete linear system $|{{\mathcal}M}|$ descends to a morphism over $k$ $$|{{\mathcal}L}|\,:\,X \,\to\, P,$$ where $P$ is a Brauer–Severi variety over $k$ with $\delta({{\mathcal}L})=[P]$. We note that our result is inspired by a geometric construction of Brauer–Severi varieties of Grothendieck, see [@Grothendieck; @Brauer Section (5.4)], and it seems that it is known to the experts. As immediate corollaries, we recover two classical theorems about Brauer–Severi varieties due to Châtelet and Kang, see Corollary \[cor: kang\] and Corollary \[cor: chatelet\]. Del Pezzo surfaces ------------------ In the second part, we apply this machinery to the geometry and arithmetic of del Pezzo surfaces over arbitrary ground fields. I would like to stress that most, if not all, of the results of this second part are well-known. To the best of my knowledge, I have tried to give the original references. However, my organization of the material and the hopefully more geometric approach to del Pezzo surfaces via morphisms to Brauer–Severi varieties is new. By definition, a [*del Pezzo surface*]{} is a smooth and proper surface $X$ over a field $k$, whose anti-canonical invertible sheaf $\omega_X^{-1}$ is ample. The [*degree*]{} of a del Pezzo surface is the self-intersection number of $\omega_X$. 
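As a quick illustration of the degree (a standard computation, recorded here only for orientation and not needed in what follows): we have $\omega_{{{{{\mathbb P}}}}^2}{\cong}{{{\mathcal}O}}_{{{{{\mathbb P}}}}^2}(-3)$ and $\omega_{{{{{\mathbb P}}}}^1\times{{{{\mathbb P}}}}^1}{\cong}{{{\mathcal}O}}(-2,-2)$, whose self-intersection numbers are $$(-3)^2\,=\,9 \mbox{ \quad and \quad } 2\cdot(-2)\cdot(-2)\,=\,8,$$ so that ${{{{\mathbb P}}}}^2$ and ${{{{\mathbb P}}}}^1\times{{{{\mathbb P}}}}^1$ are del Pezzo surfaces of degree $9$ and $8$, respectively.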
The classification of del Pezzo surfaces over $\overline{k}$ is well-known: The degree $d$ satisfies $1\le d\le 9$, and they are isomorphic either to ${{{{\mathbb P}}}}^1\times{{{{\mathbb P}}}}^1$ or to the blow-up of ${{{{\mathbb P}}}}^2$ in $(9-d)$ points in general position. As an application of Theorem \[theorem1\], we obtain the following. 1. If $d=8$ and $X_{\overline{k}}{\cong}{{{{\mathbb P}}}}^1_{\overline{k}}\times{{{{\mathbb P}}}}^1_{\overline{k}}$, then there exists an embedding $$\begin{array}{ccccc} |-\frac{1}{2}K_X| &:& X &{{\hookrightarrow}}& P \end{array}$$ into a Brauer–Severi threefold $P$. Moreover, $X$ is either isomorphic to a product of two Brauer–Severi curves or to a quadratic twist of the self-product of a Brauer–Severi curve. We refer to Theorem \[thm: product type\] and Proposition \[prop: product type classification\] for details. 2. If $d\geq7$ and $X_{\overline{k}}\not{\cong}{{{{\mathbb P}}}}^1_{\overline{k}}\times{{{{\mathbb P}}}}^1_{\overline{k}}$, then there exists a birational morphism $$\begin{array}{ccccc} f&:&X&\to&P \end{array}$$ to a Brauer–Severi surface $P$ over $k$ that is the blow-up in a closed and zero-dimensional subscheme of length $(9-d)$ over $k$. We refer to Theorem \[thm: del Pezzo descent\] for details. 3. If $d=6$, then there exist two finite field extensions $k\subseteq K$ and $k\subseteq L$ with $[K:k]|2$ and $[L:k]|3$ such that there exists a birational morphism $f:X\to P$ to a Brauer–Severi surface $P$ over $k$ that is the blow-up in a closed and zero-dimensional subscheme of length $3$ over $k$ if and only if $k=K$. On the other hand, there exists a birational morphism $X\to Y$ onto a degree $8$ del Pezzo surface $Y$ of product type if and only if $k=L$. We refer to Theorem \[thm: degree 6\] for details. 4. For partial results if $d\leq5$, as well as birationality criteria for when a del Pezzo surface is birationally equivalent to a Brauer–Severi surface, we refer to Section \[sec: small degree\]. As further applications, we recover well-known results about rationality, unirationality, existence of $k$-rational points, Galois cohomology, the Hasse principle, and weak approximation for del Pezzo surfaces. It is a pleasure for me to thank Jörg Jahnel, Andrew Kresch, Raphael Riedl, Ronald van Luijk, and Anthony Várilly-Alvarado for comments and discussions. I especially thank Jean-Louis Colliot-Thélène and Alexei Skorobogatov for providing me with references, discussions, and pointing out mistakes, as well as correcting some of my too naive ideas. Last, but not least, I thank the referee for careful proof-reading and the many useful suggestions. Notations and Conventions {#notations-and-conventions .unnumbered} ========================= In this article, $k$ denotes an arbitrary field, $\overline{k}$ (resp. $k^{\rm sep}$) its algebraic (resp. separable) closure, and $G_k={\rm Gal}(k^{\rm sep}/k)$ its absolute Galois group. By a variety over $k$ we mean a scheme $X$ that is of finite type, separated, and geometrically integral over $k$. If $K$ is a field extension of $k$, then we define $X_K:=X\times_{{{\rm Spec}\:}k}{{\rm Spec}\:}K$. Picard functors and Brauer groups {#sec:brauer} ================================= In this section, we recall a couple of definitions and general results about the various relative Picard functors, about Brauer groups of fields and schemes, as well as Brauer–Severi varieties. Relative Picard functors ------------------------ Let us first recall a couple of generalities about the several Picard functors.
Our main references are [@Grothendieck; @Picard1], [@Grothendieck; @Picard2], as well as the surveys [@BLR Chapter 8] and [@Kleiman; @Picard]. For a scheme $X$, we define its [*Picard group*]{} ${{\rm Pic }}(X)$ to be the abelian group of invertible sheaves on $X$ modulo isomorphism. If $f:X\to S$ is a separated morphism of finite type over a Noetherian base scheme $S$, then we define the [*absolute Picard functor*]{} to be the functor that associates to each Noetherian $T\to S$ the abelian group ${{\rm Pic }}_X(T):={{\rm Pic }}(X_T)$, where $X_T:=X\times_ST$. Now, as explained, for example in [@Kleiman; @Picard Section 9.2], the absolute Picard functor is a separated presheaf for the Zariski, étale, and the fppf topologies, but it is never a sheaf for the Zariski topology. In particular, the absolute Picard functor is never representable by a scheme or by an algebraic space. This leads to the introduction of the [*relative Picard functor*]{} ${{\rm Pic }}_{X/S}$ by setting ${{\rm Pic }}_{X/S}(T):={{\rm Pic }}(X_T)/{{\rm Pic }}(T)$, and then, we have the associated sheaves for the Zariski, étale, and fppf topologies $${{\rm Pic }}_{(X/S)({\rm zar})},\mbox{ \quad } {{\rm Pic }}_{(X/S)({\rm \acute{e}t})},\mbox{ \quad and \quad } {{\rm Pic }}_{(X/S)({\rm fppf})}.$$ In many important cases, these sheaves are representable by schemes or algebraic spaces over $S$. For our purposes, it suffices to work with the sheaves so that we will not address representability questions here, but refer the interested reader to [@BLR Chapter 8.2] and [@Kleiman; @Picard Chapter 9.4] instead. Having introduced these sheaves, let us recall the following easy facts, see, for example, [@Kleiman; @Picard Exercise 9.2.3]. \[easy picard facts\] Let $X\to S$ be a scheme that is separated and of finite type over a Noetherian scheme $S$. Let $L$ be a field with a morphism ${{\rm Spec}\:}L\to S$. 1. Then, the following natural maps are isomorphisms: $${{\rm Pic }}_X(L)\,\stackrel{{\cong}}{\longrightarrow}\,{{\rm Pic }}_{X/S}(L)\,\stackrel{{\cong}}{\longrightarrow}\,{{\rm Pic }}_{(X/S)({\rm zar})}(L).$$ 2. If $L$ is algebraically closed, then also the following natural maps are isomorphisms: $${{\rm Pic }}_X(L)\,\stackrel{{\cong}}{\longrightarrow}\,{{\rm Pic }}_{(X/S)({\rm \acute{e}t})}(L)\,\stackrel{{\cong}}{\longrightarrow}\,{{\rm Pic }}_{(X/S)({\rm fppf})}(L).$$ It is important to note that if $L$ is not algebraically closed, then the natural map ${{\rm Pic }}_X(L)\to{{\rm Pic }}_{(X/S)({\rm \acute{e}t})}(L)$ is usually not an isomorphism, i.e., not every section of ${{\rm Pic }}_{(X/S)({\rm \acute{e}t})}$ over $L$ arises from an invertible sheaf on $X_L$. The following example, taken from [@Kleiman; @Picard Exercise 9.2.4], is crucial to everything that follows and illustrates this. \[ex: failure picard\] Let $X$ be the smooth plane conic over ${{{{\mathbb R}}}}$ defined by $$X\,:=\,\{\, x_0^2+x_1^2+x_2^2=0 \,\} \,\subset\,{{{{\mathbb P}}}}^2_{{{{{\mathbb R}}}}}.$$ Then, $X$ is not isomorphic to ${{{{\mathbb P}}}}^1_{{{{\mathbb R}}}}$ since $X({{{{\mathbb R}}}})=\emptyset$, but there exists an isomorphism $X_{{{{\mathbb C}}}}\to{{{{\mathbb P}}}}^1_{{{{\mathbb C}}}}$. In particular, $X$ is an example of a non-trivial Brauer–Severi variety (see Definition \[def: brauer severi\]). Next, if $x\in X$ is a closed point, then $\kappa(x){\cong}{{{{\mathbb C}}}}$, that is, $x$ is a zero-cycle of degree $2$. 
Moreover, ${{{\mathcal}O}}_X(x)$ generates ${{\rm Pic }}_X({{{{\mathbb R}}}})$, for if there was an invertible sheaf of odd degree on $X$, then there would exist an invertible sheaf of degree $1$ on $X$ and then, Riemann–Roch would imply $X({{{{\mathbb R}}}})\neq\emptyset$, a contradiction. On the other hand, $x$ splits on $X_{{{{\mathbb C}}}}$ into two closed points, say $x_1$ and $x_2$. Since ${{{\mathcal}O}}_{X_{{{{\mathbb C}}}}}(x_1)$ and ${{{\mathcal}O}}_{X_{{{{\mathbb C}}}}}(x_2)$ are isomorphic as invertible sheaves on $X_{{{{\mathbb C}}}}$, it follows that ${{{\mathcal}O}}_{X_{{{{\mathbb C}}}}}(x_1)$ descends from a class in ${{\rm Pic }}_{(X/{{{{\mathbb R}}}})({\rm \acute{e}t})}({{{{\mathbb C}}}})$ to a class in ${{\rm Pic }}_{(X/{{{{\mathbb R}}}})({\rm \acute{e}t})}({{{{\mathbb R}}}})$. These observations show that the natural map ${{\rm Pic }}_X({{{{\mathbb R}}}})\to{{\rm Pic }}_{(X/{{{{\mathbb R}}}})({\rm \acute{e}t})}({{{{\mathbb R}}}})$ is not surjective. In this example, we have $X({{{{\mathbb R}}}})=\emptyset$, i.e., the structure morphism $X\to{{\rm Spec}\:}{{{{\mathbb R}}}}$ has no section. Quite generally, we have the following comparison theorem for the several relative Picard functors, and refer, for example, to [@Kleiman; @Picard Theorem 9.2.5] for details and proofs. \[thm: comparison\] Let $f:X\to S$ be a scheme that is separated and of finite type over a Noetherian scheme $S$, and assume that ${{{\mathcal}O}}_S\stackrel{{\cong}}{\longrightarrow}f_\ast{{{\mathcal}O}}_X$ holds universally. 1. Then, the natural maps $${{\rm Pic }}_{X/S} \,{{\hookrightarrow}}\, {{\rm Pic }}_{(X/S)({\rm zar})} \,{{\hookrightarrow}}\, {{\rm Pic }}_{(X/S)({\rm \acute{e}t})} \,{{\hookrightarrow}}\, {{\rm Pic }}_{(X/S)({\rm fppf})}$$ are injections. 2. If $f$ has a section, then all three maps are isomorphisms. If $f$ has a section locally in the Zariski topology, then the latter two maps are isomorphisms, and if $f$ has a section locally in the étale topology, then the last map is an isomorphism. To understand the obstruction to realizing a section of ${{\rm Pic }}_{(X/S)({{\rm \acute{e}t}})}$ or ${{\rm Pic }}_{(X/S)({\rm fppf})}$ over $S$ by an invertible sheaf on $X$ in case there is no section of $X\to S$, we recall the following definition. \[def: brauer\] For a scheme $T$, the étale cohomology group ${{H_{\rm \acute{e}t}^{{2}}}}(T,{{{{\mathbb G}}}}_m)$ is called the [*cohomological Brauer group*]{}, and is denoted ${{\rm Br}}'(T)$. The set of sheaves of Azumaya algebras on $T$ modulo Brauer equivalence also forms a group, the [*Brauer group*]{} of $T$, and is denoted ${{\rm Br}}(T)$. We will not discuss sheaves of Azumaya algebras on schemes in the sequel, but only remark that these generalize central simple algebras over fields (see Section \[subsec: BS\] for the latter), and refer the interested reader to [@Grothendieck; @Brauer1] and [@Milne Chapter IV] for details and references, as well as to [@Poonen] for a survey. Using that ${{{{\mathbb G}}}}_m$ is a smooth group scheme, Grothendieck [@Grothendieck; @Brauer] showed that the natural map ${{H_{\rm \acute{e}t}^{{2}}}}(T,{{{{\mathbb G}}}}_m)\to H^2_{{\rm fppf}}(T,{{{{\mathbb G}}}}_m)$ is an isomorphism, i.e., it does not matter whether the cohomological Brauer group ${{\rm Br}}'(T)$ is defined with respect to the étale or the fppf topology. Next, there exists a natural injective group homomorphism ${{\rm Br}}(T)\to{{\rm Br}}'(T)$, whose image is contained in the torsion subgroup of ${{\rm Br}}'(T)$. 
If $T$ is the spectrum of a field $k$, then this injection is even an isomorphism, i.e., ${{\rm Br}}(k)={{\rm Br}}'(k)$, see, for example, [@Grothendieck; @Brauer], [@Gille; @Szamuely], and [@Milne Chapter IV] for details and references. The connection between Brauer groups, Proposition \[easy picard facts\], and Theorem \[thm: comparison\] is as follows, see, for example [@BLR Chapter 8.1] or [@Kleiman; @Picard Section 9.2]. \[prop: delta\] Let $f:X\to S$ be a scheme that is separated and of finite type over a Noetherian scheme $S$, and assume that ${{{\mathcal}O}}_S\stackrel{{\cong}}{\longrightarrow}f_\ast{{{\mathcal}O}}_X$ holds universally. Then, for each $S$-scheme $T$ there exists a canonical exact sequence $$0\,\to\,{{\rm Pic }}(T)\,\to\,{{\rm Pic }}(X_T)\,\to\,{{\rm Pic }}_{(X/S)({\rm fppf})}(T)\,\stackrel{\delta}{\longrightarrow}\,{{\rm Br}}'(T)\,\to\,{{\rm Br}}'(X_T)\,.$$ If $f$ has a section, then $\delta$ is the zero-map. Varieties and the Amitsur subgroup ---------------------------------- By our conventions above, a variety over a field $k$ is a scheme $X$ that is of finite type, separated, and geometrically integral over $k$. In this situation, the conditions of Proposition \[prop: delta\] are fulfilled, as the following remark shows. \[rem: geometry\] If $X$ is a proper variety over a field $k$, then 1. the structure morphism $f:X\to {{\rm Spec}\:}k$ is separated, of finite type, and ${{{\mathcal}O}}_{{{\rm Spec}\:}k}{\cong}f_\ast{{{\mathcal}O}}_X$ holds universally. 2. The morphism $f$ has sections locally in the étale topology (see, for example, [@Gille; @Szamuely Appendix A]). 3. Since the base scheme is a field $k$, we have ${{\rm Br}}(k)={{\rm Br}}'(k)$. In Remark \[rem: explain delta\], we will give an explicit description of $\delta$ in this case. In Example \[ex: failure picard\], the obstruction to representing the class of ${{\mathcal}L}:=\varphi^*{{{\mathcal}O}}_{{{{{\mathbb P}}}}^1_{{{{\mathbb C}}}}}(1)$ in ${{\rm Pic }}_{(X/{{{{\mathbb R}}}})({\rm fppf})}({{{{\mathbb R}}}})$ by an invertible sheaf on $X$ can be explained via $\delta$, which maps ${\mathcal}L$ to the non-zero element of ${{\rm Br}}({{{{\mathbb R}}}}){\cong}{{{{\mathbb Z}}}}/2{{{{\mathbb Z}}}}$. In terms of Azumaya algebras (since the base is ${{\rm Spec}\:}{{{{\mathbb R}}}}$, these are central simple ${{{{\mathbb R}}}}$-algebras), this Brauer class corresponds to the ${{{{\mathbb R}}}}$-algebra ${{{{\mathbb H}}}}$ of quaternions, but we will not pursue this point of view in the sequel. \[prop: picard in geometry\] Let $X$ be a proper variety over a field $k$. Then, there exist natural isomorphisms of abelian groups $${{\rm Pic }}_{X/k}(k^{\rm sep})^{G_k} \,\stackrel{{\cong}}{\longrightarrow}\, {{\rm Pic }}_{(X/k)({{\rm \acute{e}t}})}(k) \,\stackrel{{\cong}}{\longrightarrow}\, {{\rm Pic }}_{(X/k)({\rm fppf})}(k),$$ where the ${-}^{G_k}$ denotes Galois invariants. [Proof.]{} The first isomorphism follows from Galois theory and sheaf axioms, and the second isomorphism follows from Theorem \[thm: comparison\] and Remark \[rem: geometry\]. The Brauer group ${{\rm Br}}(k)$ of a field $k$ is an abelian torsion group, see, for example, [@Gille; @Szamuely Corollary 4.4.8]. Motivated by Proposition \[prop: delta\], we introduce the following subgroup of ${{\rm Br}}(k)$ that measures the deviation between ${{\rm Pic }}_{(X/k)({\rm fppf})}(k)$ and ${{\rm Pic }}(X)$. \[def: amitsur group\] Let $X$ be a proper variety over a field $k$.
Then, the [*Amitsur subgroup*]{} of $X$ in ${{\rm Br}}(k)$ is the subgroup $${\rm Am}(X) \,:=\, \delta({{\rm Pic }}_{(X/k)({\rm fppf})}(k))\,\subseteq\,{{\rm Br}}(k).$$ By the previous remarks, it is an abelian torsion group. The following lemma gives bounds for the order of torsion in ${\rm Am}(X)$. \[lem: order amitsur\] Let $X$ be a proper variety over a field $k$. If there exists a closed point on $X$, whose residue field is of degree $n$ over $k$, then every element of ${\rm Am}(X)$ has an order dividing $n$. [Proof.]{} Let $x\in X$ be a closed point, say, with residue field $K/k$ that is of degree $n$ over $k$. Since $X_K$ has a $K$-rational point, the map $\delta$ of $X_K$ is identically zero by Proposition \[prop: delta\]. Thus, we have an inclusion ${\rm Am}(X) \subseteq{{\rm Br}}(K|k):=\ker({{\rm Br}}(k)\to{{\rm Br}}(K))$, where ${{\rm Br}}(k)\to{{\rm Br}}(K)$ is the restriction homomorphism. If $K$ is separable over $k$, then ${{\rm Br}}(K|k)$ is contained in the $n$-torsion of ${{\rm Br}}(k)$, which follows from the fact that the composition of restriction and corestriction is multiplication by $n$, see [@Gille; @Szamuely Proposition 4.2.10]. If $K$ is a purely inseparable extension of $k$, generated by $p^r$-th roots, then ${{\rm Br}}(K|k)$ is $p^r$-torsion (which yields even stronger bounds on the torsion than claimed), see for example, Hochschild’s Theorem [@Gille; @Szamuely Theorem 9.1.1] for an explicit description for this group. In general, we can factor the extension $K/k$ into a separable and a purely inseparable extension, and by combining the previous two special cases, the statement follows. Using Proposition \[prop: delta\], we can give two alternative definitions of ${\rm Am}(X)$. In fact, the birational invariance of this group for Brauer–Severi varieties is a classical result of Amitsur, probably known to Châtelet and Witt in some form or another, see also Theorem \[thm: amitsur birational\] below. \[prop: amitsur is birational invariant\] Let $X$ be a smooth and proper variety over $k$. Then, $${\rm Am}(X)\,=\, \ker\left({{\rm Br}}(k)\,\to\,{{\rm Br}}'(X)\right) \,=\,\ker\left({{\rm Br}}(k)\,\to\,{{\rm Br}}(k(X))\right).$$ In particular, ${\rm Am}(X)$ is a birational invariant of smooth and proper varieties over $k$. [Proof.]{} The first equality follows from the exact sequence of Proposition \[prop: delta\]. Since $X$ is smooth over $k$, the natural map ${{\rm Br}}'(X)\to{{\rm Br}}(k(X))$ is injective, see, for example, [@Milne Example III.2.22], and then, the second equality follows. From this last description, it is clear that ${\rm Am}(X)$ is a birational invariant. In [@CTKMdP6 Section 5], the kernel of ${{\rm Br}}(k)\to{{\rm Br}}(k(X))$ was denoted ${{\rm Br}}(k(X)/k)$. Thus, if $X$ is smooth and proper over $k$, then this latter group coincides with ${\rm Am}(X)$. However, this group should not be confused with ${{\rm Br}}(k(X))/{{\rm Br}}(k)$, which is related to another important birational invariant that we will introduce in Section \[subsec: dP arithmetic\]. If $X$ has a $k$-rational point, then ${\rm Am}(X)=0$ by Proposition \[prop: delta\]. On the other hand, there exist proper varieties $X$ with trivial Amitsur subgroup without $k$-rational points (some degree $8$ del Pezzo surfaces of product type with $\rho=1$ provide examples, see Proposition \[cor: h1 product type\]). Let us recall that a [*zero-cycle*]{} on $X$ is a formal finite sum $\sum_i n_i Z_i$, where the $n_i\in{{{{\mathbb Z}}}}$ and where the $Z_i$ are closed points of $X$. 
It is called [*effective*]{} if $n_i\geq0$ for all $i$. The [*degree*]{} is defined to be $\deg(Z):=\sum_i n_i[\kappa(Z_i):k]$, where $\kappa(Z_i)$ denotes the residue field of the point $Z_i$. \[cor: amitsur trivial\] Let $X$ be a proper variety over a field $k$. If there exists a zero cycle of degree $1$ on $X$, then ${\rm Am}(X)=0$. If $X$ is a projective variety over $k$, then ${{\rm Pic }}_{(X/k)({{\rm \acute{e}t}})}$ and ${{\rm Pic }}_{(X/k)({\rm fppf})}$ are representable by a group scheme ${{\rm Pic }}_{X/k}$ over $k$, the [*Picard scheme*]{}. The connected component of the identity is denoted ${{\rm Pic }}_{X/k}^0$, and the quotient $${{\rm NS}}_{X/k}(\overline{k}) \,:=\, {{\rm Pic }}_{X_{\overline{k}}/\overline{k}}(\overline{k}) \,/\, {{\rm Pic }}^0_{X_{\overline{k}}/\overline{k}}(\overline{k}),$$ the [*Néron–Severi group*]{}, is a finitely generated abelian group, whose rank is denoted $\rho(X_{\overline{k}})$. We refer to [@BLR Section 8.4] for further discussion. Moreover, if $X$ is smooth over $k$, then ${{\rm Pic }}_{X/k}^0$ is of dimension $\frac{1}{2}b_1(X)$, where $b_1$ denotes the first $\ell$-adic Betti number. \[lem: picard rank\] Let $X$ be a smooth and projective variety over a field $k$ with $b_1(X)=0$. Then, ${{\rm Pic }}_{(X/k)({\rm fppf})}(k)$ is a finitely generated abelian group, $${\rm rank}\,{{\rm Pic }}(X)\,=\,{\rm rank}\,{{\rm Pic }}_{(X/k)({\rm fppf})}(k) \,\leq\,\rho(X_{\overline{k}}),$$ and ${\rm Am}(X)$ is a finite abelian group. [Proof.]{} If $b_1(X)=0$, then, by the previous discussion, ${{\rm Pic }}(X_{\overline{k}})$ is a finitely generated abelian group of rank $\rho(X_{\overline{k}})$. Since ${{\rm Pic }}(X)$ and ${{\rm Pic }}_{(X/k)({\rm fppf})}(k)$ are contained in ${{\rm Pic }}(X_{\overline{k}})$, they are also finitely generated of rank at most $\rho(X_{\overline{k}})$. Since ${\rm Am}(X)=\delta({{\rm Pic }}_{(X/k)({\rm fppf})}(k))$ is a torsion subgroup of ${{\rm Br}}(k)$, Proposition \[prop: delta\] implies the stated equality of ranks. Moreover, being torsion and a finitely generated abelian group, ${\rm Am}(X)$ is finite. Brauer–Severi varieties {#subsec: BS} ----------------------- Next, we recall a couple of results about Brauer–Severi varieties, and refer the interested reader to [@Gille; @Szamuely Chapter 5] and the surveys [@Jahnel], [@Poonen] for details, proofs, and further references. \[def: brauer severi\] A [*Brauer–Severi variety*]{} over a field $k$ is a proper variety $P$ over $k$, such that there exists a finite field extension $K$ of $k$ and an isomorphism $P_K{\cong}{{{{\mathbb P}}}}_K^n$ over $K$. In case $P$ is of dimension one (resp. two, resp. three), we will also refer to it as a Brauer–Severi curve (resp. Brauer–Severi surface, resp. Brauer–Severi threefold). Any field extension $K$ of $k$ such that $P_K$ is isomorphic to projective space over $K$ is called a [*splitting field*]{} for $P$, and $P$ is said to [*split*]{} over $K$. By a theorem of Châtelet, a Brauer–Severi variety $P$ over $k$ is [*trivial*]{}, i.e., splits over $k$, i.e., is $k$-isomorphic to projective space over $k$, if and only if it possesses a $k$-rational point. Since a geometrically integral variety over a field $k$ always has points over $k^{\rm sep}$, it follows that a Brauer–Severi variety can be split over a finite and separable extension of $k$, which we may also assume to be Galois if we want. 
For a finite field extension $K$ of $k$ that is Galois with Galois group $G$, the set of all Brauer–Severi varieties of dimension $n$ over $k$ that split over $K$, can be interpreted as the set of all $G$-twisted forms of ${{{{\mathbb P}}}}^n_K$, which is in bijection to the cohomology group $H^1(G,{\rm Aut}({{{{\mathbb P}}}}^n_K))$. Using ${\rm Aut}({{{{\mathbb P}}}}^n){\cong}{\rm PGL}_{n+1}$, and taking cohomology in the short exact sequence $$1\,\to\,{{{{\mathbb G}}}}_m\,\to\,{\rm GL}_{n+1}\,\to\,{\rm PGL}_{n+1}\,\to\,1,$$ the boundary map associates to the class of a Brauer–Severi variety $P$ of dimension $n$ in $H^1(G,{\rm PGL}_{n+1}(K))$ a class in $${{\rm Br}}(K|k) \,:=\, \ker\left({{\rm Br}}(k)\to{{\rm Br}}(K)\right) \,=\,\ker\left( {{H_{\rm \acute{e}t}^{{2}}}}(k,{{{{\mathbb G}}}}_m) \to {{H_{\rm \acute{e}t}^{{2}}}}(K,{{{{\mathbb G}}}}_m) \right).$$ Taking the limit over all finite Galois extensions of $k$, we obtain for every Brauer–Severi variety $P$ over $k$ a class $[P]\in{{\rm Br}}(k)$. This cohomology class is torsion and its order is called the [*period*]{} of $P$, denoted ${\rm per}(P)$. By a theorem of Châtelet, a Brauer–Severi variety is trivial if and only if the class $[P]\in{{\rm Br}}(k)$ is zero, i.e., if and only if ${\rm per}(P)=1$. We will say that two Brauer–Severi varieties over $k$ are [*Brauer equivalent*]{} if their associated classes in ${{\rm Br}}(k)$ are the same. To say more about Brauer classes associated to Brauer–Severi varieties, we will shortly digress on non-commutative $k$-algebras, and refer to [@Gille; @Szamuely Section 2] and [@Jacobson] for details: We recall that a [*central simple $k$-algebra*]{} is a $k$-algebra $A$, whose center is equal to $k$ (i.e., $A$ is central), and whose only two-sided ideals are $(0)$ and $A$ (i.e., $A$ is simple). If $A$ is moreover finite-dimensional over $k$, then by theorems of Noether, Köthe, and Wedderburn, there exists a finite and separable field extension $k\subseteq K$ that [*splits*]{} $A$, i.e., $A\otimes_kK{\cong}{\rm Mat}_{n\times n}(K)$. In particular, the dimension of $A$ over $k$ is always a square, and we set the [*degree*]{} of $A$ to be ${\rm deg}(A):=\sqrt{\dim_k(A)}$. Two central simple $k$-algebras $A_1$ and $A_2$ are said to be [*Brauer equivalent*]{} if there exist integers $a_1,a_2\geq1$ such that $A_1\otimes_k{\rm Mat}_{a_1\times a_1}(k){\cong}A_2\otimes_k{\rm Mat}_{a_2\times a_2}(k)$. The connection between central simple algebras and Brauer–Severi varieties is the following dictionary, see [@Gille; @Szamuely Theorem 2.4.3]. \[thm: central simple algebras\] Let $k\subseteq K$ be a field extension that is Galois with Galois group $G$. Then, there is a natural bijection of sets between 1. Brauer–Severi varieties of dimension $n$ over $k$ that split over $K$, 2. $H^1(G, {\rm PGL}_{n+1}(K))$, and 3. central simple $k$-algebras of degree $n+1$ over $k$ that split over $K$. Under this bijection, Brauer equivalence of (1) and (3) coincide. We also recall that a [*division algebra*]{} is a $k$-algebra in which every non-zero element has a two-sided multiplicative inverse. For example, field extensions of $k$ are division algebras, and a non-commutative example is provided by the quaternions over ${{{{\mathbb R}}}}$. Given a simple and finite-dimensional $k$-algebra $A$, a theorem of Wedderburn states that there exists a unique division algebra $D$ over $k$ and a unique integer $m\geq1$ and an isomorphism of $k$-algebras $A{\cong}{\rm Mat}_{m\times m}(D)$, see [@Gille; @Szamuely Theorem 2.1.3]. 
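As a minimal worked instance of this dictionary (recorded purely for illustration, using only the conic from Example \[ex: failure picard\] and the standard facts about quaternions recalled above), take $k={{{{\mathbb R}}}}$, $K={{{{\mathbb C}}}}$, and $G={\rm Gal}({{{{\mathbb C}}}}/{{{{\mathbb R}}}}){\cong}{{{{\mathbb Z}}}}/2{{{{\mathbb Z}}}}$. Then, Theorem \[thm: central simple algebras\] matches $$\{\, x_0^2+x_1^2+x_2^2=0 \,\}\,\subset\,{{{{\mathbb P}}}}^2_{{{{{\mathbb R}}}}} \quad\longleftrightarrow\quad \mbox{a class in } H^1\left(G,\,{\rm PGL}_2({{{{\mathbb C}}}})\right) \quad\longleftrightarrow\quad {{{{\mathbb H}}}},$$ that is, the non-trivial Brauer–Severi curve of Example \[ex: failure picard\] corresponds to the quaternions ${{{{\mathbb H}}}}$, a central simple ${{{{\mathbb R}}}}$-algebra of degree $2$ that splits over ${{{{\mathbb C}}}}$. Its class is the non-zero element of ${{\rm Br}}({{{{\mathbb R}}}}){\cong}{{{{\mathbb Z}}}}/2{{{{\mathbb Z}}}}$, so this curve has period $2$.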
\[cor: isomorphic BS\] If two Brauer–Severi varieties over $k$ of the same dimension are Brauer equivalent, then they are isomorphic as schemes over $k$. [Proof.]{} By Theorem \[thm: central simple algebras\], it suffices to show that two Brauer equivalent central simple $k$-algebras $A_1$, $A_2$ of the same dimension are isomorphic. By Wedderburn’s theorem, there exist division algebras $D_i$ and integers $m_i\geq1$ such that $A_i{\cong}{\rm Mat}_{m_i\times m_i}(D_i)$ for $i=1,2$. By definition of Brauer equivalence, there exist integers $a_1,a_2\geq1$ and an isomorphism of $k$-algebras $$A_1\otimes_k{\rm Mat}_{a_1\times a_1}(k) \,{\cong}\, A_2\otimes_k{\rm Mat}_{a_2\times a_2}(k).$$ Together with the $k$-algebra isomorphisms $$\begin{array}{lcl} A_i\otimes_k{\rm Mat}_{a_i\times a_i}(k) &{\cong}& {\rm Mat}_{m_i\times m_i}(D_i)\otimes_k{\rm Mat}_{a_i\times a_i}(k) \\ &{\cong}& {\rm Mat}_{a_im_i\times a_im_i}(D_i) \end{array}$$ and the uniqueness part in Wedderburn’s theorem, we conclude $D_1{\cong}D_2$, as well as $a_1=a_2$, whence $A_1{\cong}A_2$, see also [@Gille; @Szamuely Remark 2.4.7]. For Brauer–Severi varieties over $k$ that are of different dimension, we refer to Châtelet’s theorem (Corollary \[cor: chatelet\]) below. On the other hand, for Brauer–Severi varieties over $k$ that are of the same dimension, Amitsur conjectured that they are birationally equivalent if and only if their classes generate the same cyclic subgroup of ${{\rm Br}}(k)$, see also Remark \[rem: the amitsur conjecture\]. For projective space, the degree map $\deg:{{\rm Pic }}({{{{\mathbb P}}}}^n_k)\to{{{{\mathbb Z}}}}$, which sends ${{{\mathcal}O}}_{{{{{\mathbb P}}}}^n_k}(1)$ to $1$, is an isomorphism. Thus, if $P$ is a Brauer–Severi variety over $k$ and $G_k:={\rm Gal}(k^{\rm sep}/k)$, then there are isomorphisms $$\begin{array}{lclcl} {{\rm Pic }}_{(P/k)({\rm fppf})}(k) &{\cong}& {{\rm Pic }}_{(P/k)}(k^{\rm sep})^{G_k} &{\cong}&{{\rm Pic }}_{(P/k)}(k^{\rm sep}) \\ &{\cong}& {{\rm Pic }}({{{{\mathbb P}}}}^{\dim(P)}_{k^{\rm sep}}) &\stackrel{{\rm deg}}{\longrightarrow}& {{{{\mathbb Z}}}}. \end{array}$$ The first isomorphism is Proposition \[prop: picard in geometry\], and the second follows from the fact that the $G_k$-action must send the unique ample generator of ${{\rm Pic }}_{(P/k)}(k^{\rm sep})$ to an ample generator, showing that $G_k$ acts trivially. The third isomorphism follows from the fact that $P$ splits over a separable extension. \[def: O(1) for BS\] For a Brauer–Severi variety $P$ over $k$, we denote the unique ample generator of ${{\rm Pic }}_{(P/k)({\rm fppf})}(k)$ by ${{{\mathcal}O}}_P(1)$. We stress that ${{{\mathcal}O}}_P(1)$ is a class in ${{\rm Pic }}_{(P/k)({\rm fppf})}(k)$ that usually does not come from an invertible sheaf on $P$; in fact, this happens if and only if $P$ is a trivial Brauer–Severi variety, i.e., split over $k$. For a Brauer–Severi variety, the short exact sequence from Proposition \[prop: delta\] becomes the following. \[thm: brauer severi picard\] Let $P$ be a Brauer–Severi variety over $k$. Then, there exists an exact sequence $$0\,\to\, {{\rm Pic }}(P) \,\to\, \underbrace{{{\rm Pic }}_{(P/k)({\rm fppf})}(k)}_{{\cong}\,{{{{\mathbb Z}}}}} \,\stackrel{\delta}{\longrightarrow}\, {{\rm Br}}(k) \,\to\,{{\rm Br}}(k(P))\,.$$ More precisely, we have $$\begin{aligned} \delta({{{\mathcal}O}}_P(1)) &=& [P], \mbox{ \quad and} \\ {{\rm Pic }}(P) &=& {{{\mathcal}O}}_P({\rm per}(P))\cdot{{{{\mathbb Z}}}}.
\end{aligned}$$ Since $\omega_P{\cong}{{{\mathcal}O}}_P(-\dim(P)-1)$, the period ${\rm per}(P)$ divides $\dim(P)+1$. Again, we refer to [@Gille; @Szamuely Theorem 5.4.5] for details and proofs. Using Proposition \[prop: amitsur is birational invariant\], we immediately obtain the following classical result of Amitsur [@Amitsur] as a corollary. \[thm: amitsur birational\] If $P$ is a Brauer–Severi variety over $k$, then ${\rm Am}(P){\cong}{{{{\mathbb Z}}}}/{\rm per}(P){{{{\mathbb Z}}}}$. If two Brauer–Severi varieties are birationally equivalent over $k$, then they have the same Amitsur subgroups inside ${{\rm Br}}(k)$ and, in particular, the same period. \[rem: the amitsur conjecture\] In general, it is not true that two Brauer–Severi varieties of the same dimension and the same Amitsur subgroup are isomorphic. We refer to Remark \[rem: amitsur remark\] for an example arising from a Cremona transformation of Brauer–Severi surfaces. However, Amitsur asked whether two Brauer–Severi varieties of the same dimension with the same Amitsur subgroup are birationally equivalent. In our applications to del Pezzo surfaces below, we will only need the following easy and probably well-known corollary. \[cor: zero cycle\] Let $P$ be a Brauer–Severi variety over $k$. If there exists a zero-cycle on $P$ whose degree is prime to $(\dim(P)+1)$, then $P$ is trivial. [Proof.]{} Since ${\rm Am}(P){\cong}{{{{\mathbb Z}}}}/{\rm per}(P){{{{\mathbb Z}}}}$ and its order divides $(\dim(P)+1)$, Lemma \[lem: order amitsur\] and the assumptions imply ${\rm Am}(P)=0$. Thus, ${\rm per}(P)=1$, and then, $P$ is trivial. We end this section by mentioning another important invariant of a Brauer–Severi variety $P$ over $k$, namely, its [*index*]{}, denoted ${\rm ind}(P)$. We refer to [@Gille; @Szamuely Chapter 4.5] for the precise definition and note that it is equal to the smallest degree of a finite separable field extension $K/k$ such that $P_K$ is trivial, as well as to the greatest common divisor of the degrees of all finite separable field extensions $K/k$ such that $P_K$ is trivial. By a theorem of Brauer, the period divides the index, and they have the same prime factors, see [@Gille; @Szamuely Proposition 4.5.13].
very ample) as an invertible sheaf on $X_K$. From the short exact sequence in Proposition \[prop: delta\], it follows that if $K$ is a splitting field for the class ${{\mathcal}L}$, then there exists precisely one invertible sheaf on $X_K$ up to isomorphism that corresponds to this class. The following lemma shows that these notions are independent of the choice of a splitting field of the class ${\mathcal}L$. \[well defined lemma\] Let $X$ be a proper variety over $k$ and ${{\mathcal}L}\in{{\rm Pic }}_{(X/k)({\rm fppf})}(k)$. 1. There exists a splitting field for ${\mathcal}L$ that is a finite and separable extension of $k$, and it can also be chosen to be Galois over $k$. 2. Let $K$ and $K'$ be splitting fields for ${{\mathcal}L}$. Then ${{\mathcal}L}\otimes_k K\in{{\rm Pic }}(X_K)$ is globally generated (resp. ample, resp. very ample) if and only if ${{\mathcal}L}\otimes_k K'\in{{\rm Pic }}(X_{K'})$ is globally generated (resp. ample, resp. very ample). [Proof.]{} To simplify notation in this proof, we set ${{\mathcal}L}_K:={{\mathcal}L}\otimes_k K$. Let $K$ be a finite and separable extension of $k$, such that $\delta({{\mathcal}L})\in{{\rm Br}}(k)$ lies in ${{\rm Br}}(K|k)$, where $\delta$ is as in Proposition \[prop: delta\]. Then, $\delta({{\mathcal}L}_K)=0$, i.e., ${{\mathcal}L}_K$ comes from an invertible sheaf on $X_K$. In particular, $K$ is a splitting field for ${\mathcal}L$, which is a finite and separable extension of $k$. Passing to the Galois closure of $K/k$, we obtain a splitting field for ${\mathcal}L$ that is a finite Galois extension of $k$. This establishes claim (1). Claim (2) is a well-known application of flat base change, but let us recall the arguments for the reader’s convenience: By choosing a field extension of $k$ that contains both $K$ and $K'$, we reduce to the case $k\subseteq K\subseteq K'$. We have $H^0(X_K,{{\mathcal}L}_K)\otimes_K K'{\cong}H^0(X_{K'},{{\mathcal}L}_{K'})$ by flat base change for cohomology, from which it is easy to see that ${{\mathcal}L}_K$ is globally generated if and only if ${{\mathcal}L}_{K'}$ is so. Next, if ${{\mathcal}L}_K$ is very ample, then its global sections give rise to a closed immersion $X_K\to{{{{\mathbb P}}}}^n_K$ for some $n$. After base change to $K'$, we obtain a closed embedding $X_{K'}\to{{{{\mathbb P}}}}^n_{K'}$ which corresponds to the global sections of ${{\mathcal}L}_{K'}$, and so, also ${{\mathcal}L}_{K'}$ is very ample. Conversely, if ${{\mathcal}L}_{K'}$ is very ample, then it is globally generated, and thus, ${{\mathcal}L}_K$ is globally generated by what we just established, and thus, gives rise to a morphism $\varphi_K:X_K\to{{{{\mathbb P}}}}^n_K$. By assumption and flat base change, $\varphi_{K'}$ is a closed embedding, and thus, $\varphi_K$ is a closed embedding, and ${{\mathcal}L}_K$ is very ample. From this, it also follows that ${{\mathcal}L}_K$ is ample if and only if ${{\mathcal}L}_{K'}$ is. \[rem: explain delta\] Let $X$ be a proper variety over $k$ and let $$\begin{array}{ccccc} \delta &:& {{\rm Pic }}_{(X/k)({\rm fppf})}(k) &{\longrightarrow}& {{\rm Br}}(k) \end{array}$$ be as in Proposition \[prop: delta\]. We are now in a position to describe $\delta$ explicitly. 1. First, and more abstractly: given a class ${{\mathcal}L}\in{{\rm Pic }}_{(X/k)({\rm fppf})}(k)$, we can choose a splitting field $K$ that is a finite extension of $k$.
Thus, ${{\rm Spec}\:}K\to {{\rm Spec}\:}k$ is an fppf cover, the class ${{\mathcal}L}\otimes_k K$ comes with an fppf descent datum, and it arises from an invertible sheaf ${{\mathcal}M}$ on $X_K$. The crucial point is that the descent datum is for a class in ${{\rm Pic }}(X_K)$, where isomorphism classes of invertible sheaves are identified. In order to turn this into a descent datum for the invertible sheaf ${\mathcal}M$, we have to choose isomorphisms, which are only unique up to a ${{{{\mathbb G}}}}_m={\rm Aut}({{\mathcal}M})$-action, and we obtain a ${{{{\mathbb G}}}}_m$-gerbe that is of class $\delta({{\mathcal}L})\in H^2_{{\rm fppf}}({{\rm Spec}\:}k,{{{{\mathbb G}}}}_m)={{\rm Br}}(k)$. This gerbe is neutral if and only if $\delta({{\mathcal}L})=0$. This is equivalent to being able to extend the descent datum for the class ${{\mathcal}L}\otimes_k K$ to a descent datum for the invertible sheaf ${{\mathcal}M}$. 2. Second, and more concretely: given a class ${{\mathcal}L}\in{{\rm Pic }}_{(X/k)({\rm fppf})}(k)$, we can choose a splitting field $K$ that is a finite Galois extension of $k$, say with Galois group $G$. Thus, the class ${{{{\mathcal}{L}}}}\otimes_k K$ arises from an invertible sheaf ${\mathcal}M$ on $X_K$ and lies in ${{\rm Pic }}_X(K)^G$, and we can choose isomorphisms $$\imath_g \,:\, g^\ast {{\mathcal}M} \,\stackrel{{\cong}}{\longrightarrow}\, {{\mathcal}M},$$ which are unique up to a ${{{{\mathbb G}}}}_m$-action. In particular, they may fail to form a Galois descent datum for ${\mathcal}M$, and the failure of turning $\{\imath_g\}_{g\in G}$ into a Galois descent datum for ${\mathcal}M$ gives rise to a cohomology class $\delta({{\mathcal}L})\in {{H_{\rm \acute{e}t}^{{2}}}}({{\rm Spec}\:}k,{{{{\mathbb G}}}}_m)={{\rm Br}}(k)$. More precisely, this class lies in the subgroup ${{\rm Br}}(K|k)$ of ${{\rm Br}}(k)$. The following is an analog for Brauer–Severi varieties of the classical correspondence between morphisms to projective space and globally generated invertible sheaves as explained, for example, in [@Hartshorne Theorem II.7.1], see also Remark \[rem: trivial case\] below. \[thm: main\] Let $X$ be a proper variety over a field $k$. 1. Let $\varphi:X\to P$ be a morphism to a Brauer–Severi variety $P$ over $k$, and consider the induced homomorphism of abelian groups $$\begin{array}{ccccc} \varphi^* &:& {{\rm Pic }}_{(P/k)({\rm fppf})}(k) &\to& {{\rm Pic }}_{(X/k)({\rm fppf})}(k). \end{array}$$ Then, ${{\mathcal}L}:=\varphi^*{{{\mathcal}O}}_P(1)$ is a globally generated class with $$\begin{array}{ccc} \delta({{\mathcal}L}) \,=\, [P] &\in& {{\rm Br}}(k), \end{array}$$ where $\delta$ is as in Proposition \[prop: delta\]. If $\varphi$ is a closed immersion, then ${\mathcal}L$ is very ample. 2. Let ${{\mathcal}L}\in{{\rm Pic }}_{(X/k)({\rm fppf})}(k)$ be a globally generated class. If $K$ is a splitting field, then the morphism to projective space over $K$ associated to the complete linear system $|{{\mathcal}L}\otimes_kK|$ descends to a morphism over $k$ $$\begin{array}{ccccc} |{{\mathcal}L}| &:& X &\to& P, \end{array}$$ where $P$ is a Brauer–Severi variety over $k$ with $\delta({{\mathcal}L})=[P]$. If ${{\mathcal}L}$ is very ample, then $|{{\mathcal}L}|$ is a closed immersion. [Proof.]{} Let $\varphi:X\to P$ and ${\mathcal}L$ be as in (1).
Then, we have $\delta({{\mathcal}L})=\delta({{{\mathcal}O}}_P(1))=[P]\in{{\rm Br}}(k)$, where the first equality follows from functoriality of the exact sequence in Proposition \[prop: delta\], and the second from Theorem \[thm: brauer severi picard\]. Let $K$ be a splitting field for ${\mathcal}L$, and let ${\mathcal}M$ be the invertible sheaf corresponding to ${{\mathcal}L}\otimes_k K$ on $X_K$. Being an invertible sheaf, we have $\delta({{\mathcal}M})=0\in{{\rm Br}}(K)$, which implies that the morphism $\varphi_K:X_K\to P_K$ maps to a Brauer–Severi variety of class $[P_K]=\delta({{\mathcal}M})=0$, i.e., $P_K{\cong}{{{{\mathbb P}}}}_K^n$. By definition and base change, we obtain ${{\mathcal}M}{\cong}\varphi_K^*({{{\mathcal}O}}_{{{{{\mathbb P}}}}_K^n}(1))$. Thus, ${\mathcal}M$ is globally generated (as an invertible sheaf), which implies that ${{\mathcal}L}\in{{\rm Pic }}_{(X/k)({\rm fppf})}(k)$ is globally generated in the sense of Definition \[def: globally generated\]. Moreover, if $\varphi$ is a closed immersion, then so is $\varphi_K$, which implies that ${{\mathcal}M}\in{{\rm Pic }}(X_K)$ is very ample (as an invertible sheaf), and thus, ${{\mathcal}L}\in{{\rm Pic }}_{(X/k)({\rm fppf})}(k)$ is very ample in the sense of Definition \[def: globally generated\]. This establishes claim (1). To establish claim (2), let ${{\mathcal}L}\in{{\rm Pic }}_{(X/k)({\rm fppf})}(k)$ be globally generated. By Lemma \[well defined lemma\], there exists a splitting field $K'$ for ${{\mathcal}L}$ that is a finite Galois extension of $k$, say with Galois group $G$. Thus, ${{\mathcal}L}\otimes_kK'$ corresponds to an invertible sheaf ${{\mathcal}M}$ on $X_{K'}$, whose isomorphism class lies in ${{\rm Pic }}_{X}(K')^G$, see Proposition \[prop: picard in geometry\]. If $f:X\to{{\rm Spec}\:}k$ is the structure morphism, then $(f_{K'})_\ast{{\mathcal}M}$ is a finite-dimensional $K'$-vector space. By our assumptions on global generation, we obtain a morphism over $K'$ $$|{{\mathcal}M}| \,:\,X_{K'} \,\to\, {{{{\mathbb P}}}}((f_{K'})_\ast{{\mathcal}M}).$$ As explained in Remark \[rem: explain delta\].(2), there exist isomorphisms $\{\imath_g:g^*{{\mathcal}M}\to{{\mathcal}M}\}_{g\in G}$ that are unique up to a ${{{{\mathbb G}}}}_m$-action. In particular, we obtain a well-defined $G$-action on ${{{{\mathbb P}}}}((f_{K'})_\ast{{\mathcal}M})$, and the morphism defined by $|{{\mathcal}M}|$ is $G$-equivariant. Taking the quotient by $G$, we obtain a morphism over $k$ $$|{{\mathcal}L}|\,:\,X\to P.$$ Since $P_{K'}$ is isomorphic to ${{{{\mathbb P}}}}((f_{K'})_\ast{{\mathcal}M})$, we see that $P$ is a Brauer–Severi variety over $k$ and, as observed by Grothendieck in [@Grothendieck; @Brauer Section (5.4)], we have $\delta({{\mathcal}L})=[P]$ in ${{\rm Br}}(k)$. Finally, let $K$ be an arbitrary splitting field for ${{\mathcal}L}$. Let $\varphi:X\to P$ be the previously constructed morphism and choose an extension field $\Omega$ of $k$ that contains $K$ and $K'$. Then, ${{\mathcal}L}\otimes_k\Omega$ is an invertible sheaf on $X_\Omega$, globally generated by Lemma \[well defined lemma\], and, since $k\subseteq K'\subseteq\Omega$, the morphism associated to $|{{\mathcal}L}\otimes_k\Omega|$ is equal to $\varphi_\Omega=(\varphi_{K'})_\Omega:X_\Omega\to P_\Omega$. Since $K$ is a splitting field for ${{\mathcal}L}$, it is also a splitting field for $P$ (see the argument in the proof of claim (1)), and in particular, $P_{K}$ is a trivial Brauer–Severi variety.
We have ${{\mathcal}L}\otimes_k\Omega{\cong}\varphi_\Omega^*{{{\mathcal}O}}_{P_\Omega}(1)$, from which we deduce ${{\mathcal}L}\otimes_k K{\cong}\varphi_{K}^*{{{\mathcal}O}}_{P_{K}}(1)$, as well as that $\varphi_{K}$ is the morphism associated to $|{{\mathcal}L}\otimes_k K|$. In particular, the morphism associated to $|{{\mathcal}L}\otimes_k K|$ descends to $\varphi:X\to P$, where $P$ is a Brauer–Severi variety of class $\delta({{\mathcal}L})$. This establishes claim (2). \[rem: trivial case\] Let us note the following. 1. The construction of a Brauer–Severi variety over $k$ from a globally generated class in ${{\rm Pic }}_{(X/k)({\rm fppf})}(k)$ (in our terminology) is due to Grothendieck in [@Grothendieck; @Brauer Section (5.4)]. 2. In Theorem \[thm: main\].(2), we only considered complete linear systems. We leave it to the reader to show the following generalization: Given a class ${{\mathcal}L}\in{{\rm Pic }}_{(X/k)({\rm fppf})}(k)$, a splitting field $K$ that is finite and Galois over $k$ with Galois group $G$, and $V\subseteq H^0(X_K,{{\mathcal}L}\otimes_kK)$ a $G$-stable $K$-linear subspace, whose global sections generate ${{\mathcal}L}\otimes_kK$, we can descend the morphism $X_K\to{{{{\mathbb P}}}}(V)$ to a morphism $X\to P'$, where $P'$ is a Brauer–Severi variety over $k$ of class $[P']=\delta({{\mathcal}L})\in{{\rm Br}}(k)$. 3. If $X$ in Theorem \[thm: main\] has a $k$-rational point, i.e., $X(k)\neq\emptyset$, then we recover the well-known correspondence between morphisms to projective space and globally generated invertible sheaves: 1. in this case, $\delta\equiv0$, so that every class in ${{\rm Pic }}_{(X/k)({\rm fppf})}(k)$ comes from an invertible sheaf on $X$ by Proposition \[prop: delta\], 2. and every morphism $\varphi:X\to P$ gives rise to a $k$-rational point on $P$, so that $P$ is a trivial Brauer–Severi variety.

Two classical results on Brauer–Severi varieties
------------------------------------------------

As our first corollary and application, we recover the following theorem of Kang [@Kang], see also [@Gille; @Szamuely Theorem 5.2.2], which is a Brauer–Severi variety analog of Veronese embeddings of projective spaces. \[cor: kang\] Let $P$ be a Brauer–Severi variety of period ${\rm per}(P)$ over $k$. Then, the class of ${{{\mathcal}O}}_{P}({\rm per}(P))$ arises from a very ample invertible sheaf on $P$ and gives rise to an embedding $$|{{{\mathcal}O}}_P({\rm per}(P))| \,:\, P \,\to\, {{{{\mathbb P}}}}^N_k, \mbox{ \quad where \quad } N\,=\, \binom{\dim(P)+{\rm per}(P)}{{\rm per}(P)} - 1 .$$ After base change to a splitting field $K$ of $P$, this embedding becomes the ${\rm per}(P)$-uple Veronese embedding of ${{{{\mathbb P}}}}^{\dim(P)}_K$ into ${{{{\mathbb P}}}}^N_K$. If $n\geq1$, then ${{{\mathcal}O}}_P(n)$ is very ample in the sense of Definition \[def: globally generated\], and thus, defines an embedding into a Brauer–Severi variety $P'$ over $k$. Over a splitting field of $P$, this embedding becomes the $n$-uple Veronese embedding. Since $\delta({{{\mathcal}O}}_P(1))=[P]\in{{\rm Br}}(k)$ and this element is of order ${\rm per}(P)$, we see that if ${\rm per}(P)$ divides $n$, then ${{{\mathcal}O}}_P(n)$ is an invertible sheaf on $P$ and $P'$ is a trivial Brauer–Severi variety. \[example: BS curve\] Let $X$ be a smooth and proper variety of dimension one over $k$. If $\omega_X^{-1}$ is ample, then it is a curve of genus $g(X)=h^0(X,\omega_X)=0$. Thus, $X$ is isomorphic to ${{{{\mathbb P}}}}^1$ over $\overline{k}$, i.e., $X$ is a Brauer–Severi curve.
There exists a unique class ${{{\mathcal}{L}}}\in{{\rm Pic }}_{(X/k)({\rm fppf})}(k)$ with ${{{\mathcal}{L}}}^{\otimes2}{\cong}\omega_X^{-1}$, and it gives rise to an isomorphism $|{{{\mathcal}{L}}}|:X\to P$, where $P$ is a Brauer–Severi curve with $\delta({{{\mathcal}{L}}})=[P]\in{{\rm Br}}(k)$. Moreover, ${{{\mathcal}{L}}}^{\otimes2}{\cong}\omega_X^{-1}$ is an invertible sheaf on $X$ that defines an embedding $|\omega_X^{-1}|:X\to{{{{\mathbb P}}}}^2_k$ as a plane conic. A subvariety $X\subseteq P$ of a Brauer–Severi variety $P$ over $k$ is called [*twisted linear*]{} if $X_{\overline{k}}$ is a linear subspace of $P_{\overline{k}}$. As second application, we recover the following theorem of Châtelet, see [@Gille; @Szamuely Section 5.3], and it follows from a Brauer–Severi variety analog of Segre embeddings of products of projective spaces. \[cor: chatelet\] Let $P_1$ and $P_2$ be two Brauer–Severi varieties over $k$ of dimension $d_1$ and $d_2$, respectively. 1. If $P_1$ is a twisted linear subvariety of $P_2$, then $[P_1]=[P_2]\in{{\rm Br}}(k)$. 2. If $[P_1]=[P_2]\in{{\rm Br}}(k)$, then there exists a Brauer–Severi variety $P$ over $k$, such that $P_1$ and $P_2$ can be embedded as twisted-linear subvarieties into $P$. [Proof.]{} If $\varphi:P_1{{\hookrightarrow}}P_2$ is a twisted-linear subvariety, then $\varphi^*{{{\mathcal}O}}_{P_2}(1)={{{\mathcal}O}}_{P_1}(1)\in{{\rm Pic }}_{(P_1/k)({\rm fppf})}(k)$. We find $[P_1]=\delta({{{\mathcal}O}}_{P_1}(1))=\delta({{{\mathcal}O}}_{P_2}(1))=[P_2]$ by functoriality of the exact sequence of Proposition \[prop: delta\], and (1) follows. Next, we show (2). By Theorem \[thm: main\], there exists an embedding $\varphi$ of $P_1\times{{{{\mathbb P}}}}_k^{d_2}$ into a Brauer–Severi variety $P$ of dimension $N:=(d_1+1)(d_2+1)-1=d_1d_2+d_1+d_2$ over $k$ associated to the class ${{{\mathcal}O}}_{P_1}(1)\boxtimes{{{\mathcal}O}}_{{{{{\mathbb P}}}}_k^{d_2}}(1)$. Over a splitting field of $P_1$, this embedding becomes the Segre embedding of ${{{{\mathbb P}}}}^{d_1}\times{{{{\mathbb P}}}}^{d_2}$ into ${{{{\mathbb P}}}}^N$. If $x$ is a $k$-rational point of ${{{{\mathbb P}}}}_k^{d_2}$, then $\varphi(P_1\times\{x\})$ realizes $P_1$ as twisted-linear subvariety of $P$ and we have $[P]=[P_1]\in{{\rm Br}}(k)$ by claim (1). Similarly, we obtain an embedding of $P_2$ as twisted-linear subvariety into a Brauer–Severi variety $P'$ of dimension $N$ over $k$ of class $[P']=[P_2]\in{{\rm Br}}(k)$. Since $[P]=[P']\in{{\rm Br}}(k)$ and $\dim(P)=\dim(P')$, we find $P{\cong}P'$ by Corollary \[cor: isomorphic BS\] and (2) follows. Del Pezzo surfaces ================== For the remainder of this article, we study del Pezzo surfaces with a view towards Brauer–Severi varieties. Most, if not all, results of these sections are known in some form or another to the experts. However, our more geometric approach, as well as some of the proofs, are new. Let us first recall some classical results about del Pezzo surfaces, and refer the reader to [@Manin Chapter IV] or the surveys [@CT; @survey], [@Varilly], [@Poonen] for details, proofs, and references. For more results about the classification of geometrically rational surfaces, see [@Manin; @surfaces] and [@Iskovskih]. A [*del Pezzo surface*]{} is a smooth and proper variety $X$ of dimension two over a field $k$ such that $\omega_X^{-1}$ is ample. The [*degree*]{} of a del Pezzo surface is the self-intersection number of $\omega_X$. 
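To illustrate the notion of degree with a computation that is used implicitly below (recalled here for convenience; the notation $H$, $E_i$ anticipates the blow-up model of the next sections): one has $\omega_{{{{{\mathbb P}}}}^2_k}{\cong}{{{\mathcal}O}}_{{{{{\mathbb P}}}}^2_k}(-3)$, so that ${{{{\mathbb P}}}}^2_k$ is a del Pezzo surface of degree $9$, and for a blow-up $\overline{f}:\overline{X}\to{{{{\mathbb P}}}}^2_{\overline{k}}$ in $(9-d)$ points, with $H:=\overline{f}^*{{{\mathcal}O}}_{{{{{\mathbb P}}}}^2_{\overline{k}}}(1)$ and exceptional curves $E_i$, one has $$K_{\overline{X}}\,=\,-3H+\sum_{i=1}^{9-d}E_i, \mbox{ \quad } H^2=1,\quad H\cdot E_i=0,\quad E_i\cdot E_j=-\delta_{ij}, \mbox{ \quad whence \quad } K_{\overline{X}}^2\,=\,9-(9-d)\,=\,d.$$ Similarly, $\omega_{{{{{\mathbb P}}}}^1_k\times{{{{\mathbb P}}}}^1_k}{\cong}{{{\mathcal}O}}(-2,-2)$ has self-intersection number $8$.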
In arbitrary dimension, smooth and proper varieties $X$ over $k$ with ample $\omega_X^{-1}$ are called [*Fano varieties*]{}. As discussed in Example \[example: BS curve\], Fano varieties of dimension one over $k$ are the same as Brauer–Severi curves over $k$.

Geometry
--------

The degree $d$ of a del Pezzo surface $X$ over a field $k$ satisfies $1\leq d\leq 9$. Set $\overline{X}:=X_{\overline{k}}$. We will say that $X$ is [*of product type*]{} if $$\begin{array}{ccc} \overline{X} &{\cong}& {{{{\mathbb P}}}}^1_{\overline{k}}\times{{{{\mathbb P}}}}^1_{\overline{k}}, \end{array}$$ in which case we have $d=8$. If $X$ is not of product type, then there exists a birational morphism $$\begin{array}{ccccc} \overline{f} &:&\overline{X}&\to&{{{{\mathbb P}}}}^2_{\overline{k}} \end{array}$$ that is a blow-up of $(9-d)$ closed points $P_1,...,P_{9-d}$ in general position, i.e., no $3$ of them lie on a line, no $6$ of them lie on a conic, and there is no cubic through all these points having a double point in one of them. In particular, if $d=9$, then $\overline{f}$ is an isomorphism and $X$ is a Brauer–Severi surface over $k$.

Arithmetic {#subsec: dP arithmetic}
----------

By the previous discussion and Lemma \[lem: picard rank\], the [*Néron–Severi rank*]{} of a del Pezzo surface $X$ of degree $d$ over $k$ satisfies $$1\,\leq\,\rho(X)\,:=\, {\rm rank}\,{{\rm Pic }}(X)\,=\,{\rm rank}\,{{\rm Pic }}_{(X/k)({\rm fppf})}(k) \,\leq\,10-d,$$ and $\rho(X_{\overline{k}})=10-d$. The following result about geometrically rational surfaces allows using methods from Galois theory even if the ground field $k$ is not perfect. This result is particularly useful in proofs, see also the discussion in [@Varilly Section 1.4]. In particular, it applies to del Pezzo surfaces. \[thm: Coombes\] Let $X$ be a smooth and proper variety over $k$ such that $X_{\overline{k}}$ is birational to ${{{{\mathbb P}}}}^2_{\overline{k}}$. Then, 1. $X_{k^{\rm sep}}$ is birationally equivalent to ${{{{\mathbb P}}}}^2_{k^{\rm sep}}$ via a sequence of blow-ups in $k^{\rm sep}$-rational points and their inverses. 2. The natural map ${{\rm Pic }}_X(k^{\rm sep})\to{{\rm Pic }}_X({\overline{k}})$ is an isomorphism. [Proof.]{} Assertion (1) is the main result of [@Coombes]. Clearly, assertion (2) holds for projective space over any field. Next, let $Y$ be a variety that is smooth and proper over $k^{\rm sep}$, $\widetilde{Y}\to Y$ be the blow-up of a $k^{\rm sep}$-rational point, and let $E\subset\widetilde{Y}$ be the exceptional divisor. Then, ${{\rm Pic }}_{\widetilde{Y}}(K)={{\rm Pic }}_Y(K)\oplus{{{{\mathbb Z}}}}\cdot E$ for $K=k^{\rm sep}$, as well as for $K=\overline{k}$. Using (1) and these two observations, assertion (2) follows. We will also need the following useful observation, due to Lang [@Lang] and Nishimura [@Nishimura], which implies that having a $k$-rational point is a birational invariant of smooth and proper varieties over $k$. We refer to [@Varilly Section 1.2] for details and proof. \[lem: Lang\] Let $X\dashrightarrow Y$ be a rational map of varieties over $k$, such that $X$ is smooth over $k$, and such that $Y$ is proper over $k$. If $X$ has a $k$-rational point, then so has $Y$. Moreover, we have already seen that a Brauer–Severi variety $P$ over $k$ is isomorphic to projective space over $k$ if and only if $P$ has a $k$-rational point, and we refer the interested reader to [@BS; @algorithm] for an algorithm to decide whether a Brauer–Severi surface has a $k$-rational point.
In Definition \[def: amitsur group\], we defined the Amitsur group and showed its birational invariance in Proposition \[prop: amitsur is birational invariant\]. Using Iskovskih’s classification [@Iskovskih] of geometrically rational surfaces, we obtain the following list and refer to [@CTKMdP6 Proposition 5.2] for details and proof. Let $X$ be a smooth and proper variety over a perfect field $k$ such that $X_{\overline{k}}$ is birationally equivalent to ${{{{\mathbb P}}}}^2_{\overline{k}}$. Then, ${\rm Am}(X)$ is one of the following groups $$0,\mbox{ \quad }{{{{\mathbb Z}}}}/2{{{{\mathbb Z}}}},\mbox{ \quad } ({{{{\mathbb Z}}}}/2{{{{\mathbb Z}}}})^2,\mbox{ \quad and \quad }{{{{\mathbb Z}}}}/3{{{{\mathbb Z}}}}.$$ We will see explicit examples of all these groups arising as Amitsur groups of del Pezzo surfaces in the next sections. We now introduce another important invariant. Namely, if $G_k$ denotes the absolute Galois group of $k$, and $H\subseteq G_k$ is a closed subgroup, then we consider for a smooth and projective variety $X$ over $k$ the group cohomology $$H^1\left(H,\, {{\rm Pic }}_{X/k}({k^{\rm sep}})\right),$$ which is an abelian torsion group. If $b_1(X)=0$, then ${{\rm Pic }}_{X/k}(k^{\rm sep})$ is finitely generated by Lemma \[lem: picard rank\] and then, $H^1(H,\, {{\rm Pic }}_{X/k}({k^{\rm sep}}))$ is a finite abelian group. Moreover, if $X_{k^{\rm sep}}$ is a rational surface, then ${{\rm Br}}'(X_{k^{\rm sep}})=0$ (see, for example, [@Manin Theorem 42.8] or [@Milne; @Brauer]) and an appropriate Hochschild–Serre spectral sequence yields an exact sequence $$0\,\to\,{{\rm Br}}'(X)/{{\rm Br}}(k)\,\stackrel{\alpha}{\longrightarrow}\, H^1\left(G_k,\, {{\rm Pic }}_{X/k}({k^{\rm sep}})\right) \,\to\, H^3(G_k,(k^{\rm sep})^\times).$$ Moreover, if $k$ is a global field, then the term on the right is zero by a theorem of Tate (see, for example, [@Neukirch Chapter VIII.3]), thus, $\alpha$ is an isomorphism, and we obtain an interpretation of this cohomology group in terms of Brauer groups, see [@Varilly Section 3.4]. \[lem: h1 brauer-severi\] If $P$ is a Brauer–Severi variety over $k$, then $$H^1\left(H,\, {{\rm Pic }}_{P/k}({k^{\rm sep}})\right)\,=\,0$$ for all closed subgroups $H\subseteq G_k$. [Proof.]{} Since ${{\rm Pic }}_{P/k}({k^{\rm sep}}){\cong}{{{{\mathbb Z}}}}\cdot {{{\mathcal}O}}_P(1)$ and since $G_k$ acts trivially on the class ${{{\mathcal}O}}_P(1)$, the desired $H^1$ is isomorphic to ${\rm Hom}(H,{{{{\mathbb Z}}}})$, see [@Brown Chapter III.1, Exercise 2], for example. This is zero since $H$ is a profinite group and the homomorphisms to ${{{{\mathbb Z}}}}$ are required to be continuous. In Proposition \[prop: amitsur is birational invariant\], we established birational invariance of ${\rm Am}(X)$. The following result of Manin [@Manin Section 1 of the Appendix] shows that the above group cohomology groups are also birational invariants. \[thm: birational invariance of h1\] For every closed subgroup $H\subseteq G_k$, the group $$H^1\left(H,\, {{\rm Pic }}_{X/k}({k^{\rm sep}})\right)$$ is a birational invariant of smooth and projective varieties over $k$. Every birational map between smooth and projective surfaces can be factored into a sequence of blow-ups in closed points, see [@Manin Chapter III]. Using this, one can give very explicit proofs of Proposition \[prop: amitsur is birational invariant\] and Theorem \[thm: birational invariance of h1\] in dimension $2$.
(For such a proof of Theorem \[thm: birational invariance of h1\] in dimension $2$, see the proof of [@Manin Theorem 29.1].)

Hasse principle and weak approximation {#subsec: Hasse}
--------------------------------------

For a global field $K$, i.e., a finite extension of ${{{{\mathbb Q}}}}$ or of ${{{{\mathbb F}}}}_p(t)$, we denote by $\Omega_K$ the set of its places, including the infinite ones if $K$ is of characteristic zero. A class ${{{\mathcal}{C}}}$ of varieties over $K$ satisfies 1. the [*Hasse principle*]{}, if for every $X\in{{{\mathcal}{C}}}$ we have $X(K)\neq\emptyset$ if and only if $X(K_\nu)\neq\emptyset$ for all $\nu\in\Omega_K$. Moreover, ${{{\mathcal}{C}}}$ satisfies 2. [*weak approximation*]{}, if the diagonal embedding $$X(K) \to \prod_{\nu\in\Omega_K} X(K_\nu)$$ is dense for the product of the $\nu$-adic topologies. If ${{{\mathcal}{C}}}$ satisfies weak approximation, then it obviously also satisfies the Hasse principle, but the converse need not hold. For example, Brauer–Severi varieties over $K$ satisfy the Hasse principle by a theorem of Châtelet [@Chatelet], as well as weak approximation. However, both properties may fail for del Pezzo surfaces over $K$, and we refer to [@Varilly] for an introduction to this topic. We end this section by noting that the obstruction to a class in ${{\rm Pic }}_{(X/K)({\rm fppf})}(K)$ coming from ${{\rm Pic }}_X(K)$ satisfies the Hasse principle. Let $X$ be a proper variety over a global field $K$ and let ${{{\mathcal}{L}}}\in{{\rm Pic }}_{(X/K)({\rm fppf})}(K)$. Then, the following are equivalent 1. $0=\delta({{\mathcal}L})\in{{\rm Br}}(K)$, and 2. $0=\delta({{{\mathcal}{L}}}\otimes_K K_\nu)\in{{\rm Br}}(K_\nu)\mbox{ for all }\nu\in\Omega_K$. [Proof.]{} A class in ${{\rm Br}}(K)$ is zero if and only if its image in ${{\rm Br}}(K_\nu)$ is zero for all $\nu\in\Omega_K$ by the Hasse principle for the Brauer group. From this, and functoriality of the exact sequence from Proposition \[prop: delta\], the assertion follows. For example, if $X(K_\nu)\neq\emptyset$ for all $\nu\in\Omega_K$, then $\delta$ is the zero map by Proposition \[prop: delta\] and this lemma. In this case, every class in ${{\rm Pic }}_{(X/K)({\rm fppf})}(K)$ comes from an invertible sheaf on $X$.

Del Pezzo surfaces of product type
==================================

In this section, we classify degree $8$ del Pezzo surfaces of product type over $k$, i.e., surfaces $X$ over $k$ with $X_{\overline{k}}{\cong}{{{{\mathbb P}}}}^1_{\overline{k}}\times{{{{\mathbb P}}}}^1_{\overline{k}}$, in terms of Brauer–Severi varieties. First, for ${{{{\mathbb P}}}}^1_k\times{{{{\mathbb P}}}}^1_k$, the anti-canonical embedding can be written as a composition of Veronese and Segre maps as follows $$\begin{array}{ccccccc} |-K_{{{{{\mathbb P}}}}^1_k\times{{{{\mathbb P}}}}^1_k}| &:& {{{{\mathbb P}}}}^1_k\times{{{{\mathbb P}}}}^1_k &\stackrel{\nu_2\times\nu_2}{\longrightarrow}& {{{{\mathbb P}}}}^2_k\times{{{{\mathbb P}}}}^2_k &\stackrel{\sigma}{\longrightarrow}& {{{{\mathbb P}}}}^8_k\,. \end{array}$$ Next, the invertible sheaf $\omega_{{{{{\mathbb P}}}}^1_k\times{{{{\mathbb P}}}}^1_k}^{-1}$ is uniquely $2$-divisible in the Picard group, and we obtain an embedding as a smooth quadric $$\begin{array}{ccccc} |{\scriptstyle -\frac{1}{2}}K_{{{{{\mathbb P}}}}^1_k\times{{{{\mathbb P}}}}^1_k}| &:& {{{{\mathbb P}}}}^1_k\times{{{{\mathbb P}}}}^1_k &\stackrel{\sigma}{\longrightarrow}& {{{{\mathbb P}}}}^3_k\,. \end{array}$$
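In coordinates (spelled out here only for concreteness; any choice of homogeneous coordinates will do), the Segre map $\sigma$ in the second display is given by $$\sigma\,:\,([s:t],[u:v])\,\longmapsto\,[su:sv:tu:tv]\,\in\,{{{{\mathbb P}}}}^3_k,$$ and its image is the smooth quadric $\{x_0x_3=x_1x_2\}\subset{{{{\mathbb P}}}}^3_k$, which recovers the description of ${{{{\mathbb P}}}}^1_k\times{{{{\mathbb P}}}}^1_k$ as a smooth quadric surface in ${{{{\mathbb P}}}}^3_k$.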
Then, the anti-canonical linear system yields an embedding of $X$ as a surface of degree $8$ into ${{{{\mathbb P}}}}^8_k$. However, the “half-anti-canonical linear system” exists in general only as a morphism to a Brauer–Severi threefold as the following result shows. \[thm: product type\] Let $X$ be a degree $8$ del Pezzo surface of product type over a field $k$. Then, there exist a unique class ${{\mathcal}L}\in{{\rm Pic }}_{(X/k)({\rm fppf})}(k)$ with ${{\mathcal}L}^{\otimes 2}{\cong}\omega_X^{-1}$ and an embedding $$\begin{array}{ccccc} |{{{\mathcal}{L}}}| &:& X &{{\hookrightarrow}}&P \end{array}$$ into a Brauer–Severi threefold $P$ over $k$ with Brauer class $$\begin{array}{ccc} \delta({{\mathcal}L}) \,=\, [P] &\in&{{\rm Br}}(k), \end{array}$$ and such that $X_{\overline{k}}$ is a smooth quadric in $P_{\overline{k}}{\cong}{{{{\mathbb P}}}}^3_{\overline{k}}$. Moreover, $X$ is rational if and only if $X$ has a $k$-rational point. In this case, we have $P{\cong}{{{{\mathbb P}}}}^3_k$. [Proof.]{} To simplify notation, set $L:=k^{\rm sep}$. We have $X(L)\neq\emptyset$, for example, by [@Gille; @Szamuely Proposition A.1.1], as well as ${{\rm Pic }}(X_L){\cong}{{\rm Pic }}(X_{\overline{k}}){\cong}{{{{\mathbb Z}}}}^2$ by Theorem \[thm: Coombes\]. The classes $(1,0)$ and $(0,1)$ of ${{\rm Pic }}(X_L)$ give rise to two morphisms $X_L\to{{{{\mathbb P}}}}^1_L$, and we obtain an isomorphism $X_L{\cong}{{{{\mathbb P}}}}^1_L\times{{{{\mathbb P}}}}^1_L$. By abuse of notation, we re-define $\overline{X}$ to be $X_L$. Next, the absolute Galois group $G_k$ acts trivially on the canonical class $(-2,-2)$, and thus, the $G_k$-action on ${{{{\mathbb Z}}}}(1,1)\subset{{{{\mathbb Z}}}}^2$ is trivial. By Proposition \[prop: picard in geometry\], we have ${{\rm Pic }}_{X/k}(K)^{G_k}{\cong}{{\rm Pic }}_{(X/k)({\rm fppf})}(k)$, and, since $(1,1)\in{{{{\mathbb Z}}}}^2$ is $G_k$-invariant, the unique invertible sheaf ${{{\mathcal}{L}}}$ on $\overline{X}$ with ${{{\mathcal}{L}}}^{\otimes 2}{\cong}\omega_{\overline{X}}^{-1}$ descends to a class in ${{\rm Pic }}_{(X/k)({\rm fppf})}(k)$. Over $L$, the class ${{{\mathcal}{L}}}$ is very ample and defines an embedding of $\overline{X}$ as smooth quadric surface into ${{{{\mathbb P}}}}^3_L$. Thus, by Theorem \[thm: main\], we obtain an embedding $|{{{\mathcal}{L}}}|:X{{\hookrightarrow}}P$, where $P$ is a Brauer–Severi threefold over $k$ with $\delta({{{\mathcal}{L}}})=[P]\in{{\rm Br}}(k)$. Finally, if $X$ is rational, then it has a $k$-rational point, and then, also $P$ has a $k$-rational point, i.e., $P{\cong}{{{{\mathbb P}}}}^3_k$. Conversely, if there exists a $k$-rational point $x\in X$, then $X$ is a quadric in ${{{{\mathbb P}}}}^3_k$, and projection away from $x$ induces a birational map $X\dashrightarrow{{{{\mathbb P}}}}^2_k$. Next, we establish an explicit classification of degree $8$ del Pezzo surfaces of product type in terms of the Néron–Severi rank $\rho$ and Brauer–Severi curves. To simplify notation in the sequel, let us recall the definition of contracted products. If a finite group $G$ acts on a scheme $X$ from the right and it acts on a scheme $Y$ from the left and all schemes and actions are over ${{\rm Spec}\:}k$ for some field $k$, then we denote the quotient of $X\times_{{{\rm Spec}\:}k} Y$ by the diagonal $G$-action defined by $(x,y)\mapsto (xg,g^{-1}y)$ for all $g\in G$ by $$X\wedge^G Y \,:=\, (X\times_{{{\rm Spec}\:}k} Y)/G.$$ We refer to [@Giraud Chapter III.1.3] for details and applications. 
\[prop: product type classification\] Let $X$ and $X\subset P$ be as in Theorem \[thm: product type\]. 1. if $\rho(X)=2$, then $$\begin{array}{ccc} X &{\cong}& P'\times P'', \end{array}$$ where $P'$ and $P''$ are Brauer–Severi curves over $k$, whose Brauer classes satisfy $[P]=[P']+[P'']\in{{\rm Br}}(k)$. In particular, $P{\cong}{{{{\mathbb P}}}}^3_k$ if and only if $P'{\cong}P''$. 2. If $\rho(X)=1$, then there exist a Brauer–Severi curve $P'$ over $k$ and a finite Galois extension $K/k$ with Galois group $H:={{{{\mathbb Z}}}}/2{{{{\mathbb Z}}}}$, such that $X$ arises as twisted self-product $$\begin{array}{ccccc} X&{\cong}& (P'\times P')_K/H & = &{{\rm Spec}\:}K\wedge^H (P'\times P'), \end{array}$$ where the $H$-action permutes the factors of $P'_K\times P'_K$. Moreover, $P{\cong}{{{{\mathbb P}}}}^3_k$ and $P'$ is a hyperplane section of $X\subset {{{{\mathbb P}}}}^3_k$. We keep the notations and assumptions from the proof of Theorem \[thm: product type\]. The $G_k$-action fixes the class $(1,1)$. Since the $G_k$-action preserves the intersection pairing on ${{\rm Pic }}_{X/k}(k^{\rm sep})$, it follows that $G_k$ acts on ${{{{\mathbb Z}}}}(1,-1)$ either trivially, or by sign changes. We have $\rho(X)=2$ in the first case, and $\rho(X)=1$ in the latter. First, assume that $\rho(X)=2$. By Theorem \[thm: main\], the classes $(1,0)$ and $(0,1)$ give rise to morphisms to Brauer–Severi curves $X\to P'$ and $X\to P''$ of class $[P']=\delta((1,0))$ and $[P'']=\delta((0,1))$ in ${{\rm Br}}(k)$, respectively. Thus, we obtain a morphism $X\to P'\times P''$, which is an isomorphism because it is an isomorphism over $k^{\rm sep}$. Since $\delta$ is a homomorphism, we find $[P]=\delta({{{\mathcal}{L}}})=\delta((1,1))=\delta((1,0))+\delta((0,1))=[P']+[P'']$. Using that $P'$ and $P''$ are of period $2$, we find that $P{\cong}{{{{\mathbb P}}}}^3_k$ if and only if $[P]=0$, i.e., if and only if $[P']=[P'']$. By Corollary \[cor: isomorphic BS\], the latter is equivalent to $P'{\cong}P''$. Second, assume that $\rho(X)=1$. Then, the $G_k$-action permutes $(0,1)$ and $(1,0)$, i.e., it permutes the factors of ${{{{\mathbb P}}}}^1_{k^{\rm sep}}\times{{{{\mathbb P}}}}^1_{k^{\rm sep}}$. Thus, there exists a unique quadratic Galois extension $K/k$, such that ${\rm Gal}(k^{\rm sep}/K)$ acts trivially on ${{\rm Pic }}_{X/k}(k^{\rm sep})$ and by the previous analysis we have $X_K:=Q''\times Q'''$ for two Brauer–Severi curves $Q''$, $Q'''$ over $K$. Using these and the $H:={\rm Gal}(K/k)$-action, we obtain a $H$-stable diagonal embedding $Q'\subset X_K$ of a Brauer–Severi curve over $K$, and then, the two projections induce isomorphisms $Q'{\cong}Q''$ and $Q'{\cong}Q'''$ over $K$. Taking the quotient by $H$, we obtain a Brauer–Severi curve $P':=Q'/H\subset X$ over $k$. Clearly, $P'_K{\cong}Q'$ and we obtain the description of $X$ as twisted self-product. On $X$, the curve $P'$ is a section of the class $(1,1)$, which implies that this class comes from an invertible sheaf, and thus, $0=\delta((1,1))\in{{\rm Br}}(k)$ by Proposition \[prop: delta\]. Since $\delta((1,1))=[P]$, we conclude $P{\cong}{{{{\mathbb P}}}}^3_k$. In the case of quadrics in ${{{{\mathbb P}}}}^3$, similar results were already established in [@CTSk]. 
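For an explicit example of case (1) of the proposition above (this example is standard and only added as an illustration): over $k={{{{\mathbb R}}}}$, let $P'$ be the conic without real points and let $P''={{{{\mathbb P}}}}^1_{{{{{\mathbb R}}}}}$. Then $$X\,=\,P'\times{{{{\mathbb P}}}}^1_{{{{{\mathbb R}}}}}$$ is a degree $8$ del Pezzo surface of product type with $\rho(X)=2$ and $$[P]\,=\,[P']+[P'']\,=\,[P']\,\neq\,0\,\in\,{{\rm Br}}({{{{\mathbb R}}}}),$$ so that the embedding of Theorem \[thm: product type\] maps $X$ into a non-trivial Brauer–Severi threefold $P$ over ${{{{\mathbb R}}}}$.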
A related, but somewhat different view on degree $8$ del Pezzo surfaces of product type was taken in (the proof of) [@CTKMdP6 Proposition 5.2]: If $X$ is such a surface, then there exists a quadratic Galois extension $K/k$ and a Brauer–Severi curve $C$ over $K$, such that $X{\cong}{\rm Res}_{K/k}C$, where ${\rm Res}_{K/k}$ denotes Weil restriction, see also [@Poonen]. \[cor: h1 product type\] Let $X$ be as in Theorem \[thm: product type\]. Then, $$H^1\left(H,\, {{\rm Pic }}_{X/k}({k^{\rm sep}})\right) \,=\,0$$ for all closed subgroups $H\subseteq G_k$, and $${\rm Am}(X) \,{\cong}\, \left\{ \begin{array}{ll} 0 & \mbox{ if $\rho=1$ or if $X{\cong}{{{{\mathbb P}}}}^1_k\times{{{{\mathbb P}}}}^1_k$,}\\ ({{{{\mathbb Z}}}}/2{{{{\mathbb Z}}}})^2 & \mbox{ if $\rho=2$ and ${{{{\mathbb P}}}}^1_k\not{\cong}P'\not{\cong}P''\not{\cong}{{{{\mathbb P}}}}^1_k$,}\\ ({{{{\mathbb Z}}}}/2{{{{\mathbb Z}}}}) & \mbox{ in the remaining $\rho=2$-cases.} \end{array} \right.$$ [Proof.]{} Set $H^1(H):=H^1(H,\, {{\rm Pic }}_{X/k}({k^{\rm sep}}))$. If $\rho=2$, then the $G_k$-action on ${{\rm Pic }}_{X/k}(k^{\rm sep})$ is trivial, and we find $H^1(H)=0$ as in the proof of Lemma \[lem: h1 brauer-severi\]. Moreover, ${\rm Am}(X)$ is generated by $\delta((0,1))$ and $\delta((1,0))$, i.e., by $[P']$ and $[P'']$ in ${{\rm Br}}(k)$. From this, the assertions on ${\rm Am}(X)$ follow in case $\rho=2$. If $\rho=1$, then there exists an isomorphism ${{\rm Pic }}_{X/k}(k^{\rm sep}){\cong}{{{{\mathbb Z}}}}^2$, such that the $G_k$-action factors through a surjective homomorphism $G_k\to{{{{\mathbb Z}}}}/2{{{{\mathbb Z}}}}$ and acts on ${{{{\mathbb Z}}}}^2$ via $(a,b)\mapsto(b,a)$. In particular, we find $H^1({{{{\mathbb Z}}}}/2{{{{\mathbb Z}}}},{{{{\mathbb Z}}}}^2)=0$ with respect to this action, see, for example, [@Brown Chapter III.1, Example 2]. From this, we deduce $H^1(H)=0$ using inflation maps. Moreover, ${\rm Am}(X)$ is generated by $\delta((1,1))$, which is zero, since $(1,1)$ is the class of an invertible sheaf. \[cor: product case point\] If $X$ is as in Theorem \[thm: product type\], then the following are equivalent 1. $X$ is birationally equivalent to a Brauer–Severi surface, 2. $X$ is rational, 3. $X$ has a $k$-rational point, and 4. $X$ is isomorphic to $$X\,{\cong}\,{{{{\mathbb P}}}}^1_k\times {{{{\mathbb P}}}}^1_k \mbox{ \quad or to \quad }X\,{\cong}\,{{\rm Spec}\:}K\wedge({{{{\mathbb P}}}}^1_k\times{{{{\mathbb P}}}}^1_k).$$ [Proof.]{} The implications $(2)\Rightarrow(1)$ and $(2)\Rightarrow(3)$ are trivial, and we established $(3)\Rightarrow(2)$ in Theorem \[thm: product type\]. Moreover, if $X$ is birationally equivalent to a Brauer–Severi surface $P$, then ${\rm Am}(P)={\rm Am}(X)$ is cyclic of order $1$ or $3$ by Lemma \[lem: h1 brauer-severi\] and Theorem \[thm: birational invariance of h1\]. Together with Corollary \[cor: h1 product type\], we conclude ${\rm Am}(P)={\rm Am}(X)=0$, i.e., $P{\cong}{{{{\mathbb P}}}}^2_k$, which establishes $(1)\Rightarrow(2)$. Since $(4)\Rightarrow(3)$ is trivial, it remains to establish $(3)\Rightarrow(4)$. Thus, we assume $X(k)\neq\emptyset$. If $\rho=2$, then $X{\cong}P'\times P''$ and both Brauer–Severi curves $P'$ and $P''$ have $k$-rational points, i.e., $X{\cong}{{{{\mathbb P}}}}^1_k\times{{{{\mathbb P}}}}^1_k$. If $\rho=1$, we have an embedding $X\subset{{{{\mathbb P}}}}^3_k$ and $X{\cong}{{\rm Spec}\:}K\wedge (P'\times P')$. Since $X(k)\neq\emptyset$, we have $X(K)\neq\emptyset$, which yields $P'(K)\neq\emptyset$, and thus $P'_K{\cong}{{{{\mathbb P}}}}^1_K$.
A $k$-rational point on $X$ gives rise to a $K$-rational and ${\rm Gal}(K/k)$-stable point on $X_K{\cong}{{{{\mathbb P}}}}^1_K\times{{{{\mathbb P}}}}^1_K$. In particular, this point lies on some diagonal ${{{{\mathbb P}}}}^1_K\subset X_K$, and thus, lies on some diagonal $P''\subseteq X$ with $X{\cong}{{\rm Spec}\:}K\wedge(P''\times P'')$. Since $P''(k)\neq\emptyset$, we find $P''{\cong}{{{{\mathbb P}}}}^1_k$. We refer to Section \[sec: del Pezzo application\] for more applications of these results to the arithmetic and geometry of these surfaces.

Del Pezzo surfaces of large degree {#sec: dP large degree}
==================================

Let $X$ be a del Pezzo surface of degree $d$ over a field $k$ that is not of product type. Then, there exists a birational morphism $$\begin{array}{ccccc} \overline{f} &:& \overline{X} &\to&{{{{\mathbb P}}}}^2_{\overline{k}} \end{array}$$ that is a blow-up in $(9-d)$ closed points $P_1,...,P_{9-d}$ in general position. We set $H:=\overline{f}^*{{{\mathcal}O}}_{{{{{\mathbb P}}}}^2_{\overline{k}}}(1)$ and let $E_i:=\overline{f}^{-1}(P_i)$ be the exceptional divisors of $\overline{f}$. Then, there exists an isomorphism of abelian groups $$\begin{array}{ccc} {{\rm Pic }}(\overline{X}) &\cong& {{{{\mathbb Z}}}}H\,\oplus\,\bigoplus_{i=1}^{9-d}\,{{{{\mathbb Z}}}}E_i. \end{array}$$ The $(-1)$-curves of $\overline{X}$ consist of the $E_i$, of preimages under $\overline{f}$ of lines through two distinct points $P_i$, of preimages under $\overline{f}$ of conics through five distinct points $P_i$, etc., and we refer to [@Manin Theorem 26.2] for details. Let $K_{\overline{X}}$ be the canonical divisor class of $\overline{X}$, and let $\widetilde{E}$ be the sum of all $(-1)$-curves on $\overline{X}$. We leave it to the reader to verify the following table.

$$\begin{array}{c|l|rcl}
d & \widetilde{E} & & & \\
\hline
9 & 0 & 3H &=& -K_{\overline{X}} \\
8 & E_1 & 3H &=& -K_{\overline{X}} + \widetilde{E} \\
7 & H & H &=& \widetilde{E} \\
6 & 3H-\sum_{i=1}^{3}E_i & 0 &=& -K_{\overline{X}} - \widetilde{E} \\
5 & 6H-2\sum_{i=1}^{4}E_i & 0 &=& -2K_{\overline{X}} - \widetilde{E} \\
4 & 12H-4\sum_{i=1}^{5}E_i & 0 &=& -4K_{\overline{X}} - \widetilde{E} \\
3 & 27H-9\sum_{i=1}^{6}E_i & 0 &=& -9K_{\overline{X}} - \widetilde{E} \\
2 & 84H-28\sum_{i=1}^{7}E_i & 0 &=& -28K_{\overline{X}} - \widetilde{E} \\
1 & 720H-240\sum_{i=1}^{8}E_i & 0 &=& -240K_{\overline{X}} - \widetilde{E}
\end{array}$$

Together with Theorem \[thm: main\], we obtain the following result. \[thm: del Pezzo descent\] Let $X$ be a del Pezzo surface of degree $d\geq7$ over a field $k$ that is not of product type. Then, $\overline{f}$ descends to a birational morphism $$f : X\to P$$ to a Brauer–Severi surface $P$ over $k$, where $$\delta(H) \,=\, [P] \,\in\,{{\rm Br}}(k) \mbox{ \quad and \quad }{\rm Am}(X)\,{\cong}\,{{{{\mathbb Z}}}}/{\rm per}(P){{{{\mathbb Z}}}}.$$ Moreover, $X$ is rational if and only if $P{\cong}{{{{\mathbb P}}}}^2_k$. This is equivalent to $X$ having a $k$-rational point. [Proof.]{} By Theorem \[thm: Coombes\], the invertible sheaf $H$ on $X_{\overline{k}}$ defining $\overline{f}$ already lies in ${{\rm Pic }}_X({k^{\rm sep}})$, i.e., $\overline{f}$ descends to $k^{\rm sep}$, and by abuse of notation, we re-define $\overline{X}$ to be $X_{k^{\rm sep}}$.
Clearly, the canonical divisor class $K_{\overline{X}}$ is $G_k$-invariant, and since $G_k$ permutes the $(-1)$-curves of $\overline{X}$, also the class of $\widetilde{E}$ is $G_k$-invariant. In particular, $K_{\overline{X}}$ and $\widetilde{E}$ define classes in ${{\rm Pic }}_{X/k}(k^{\rm sep})^{G_k}{\cong}{{\rm Pic }}_{(X/k)({\rm fppf})}(k)$. If $d\geq7$, then the above table shows that there exist positive multiples of $H$ that are integral linear combinations of $K_{\overline{X}}$ and $\widetilde{E}$. Thus, $H\in{{\rm Pic }}_{X/k}(k^{\rm sep})$ descends to a class in ${{\rm Pic }}_{(X/k)({\rm fppf})}(k)$. By Theorem \[thm: main\], $\overline{f}$ descends to a birational morphism $f:X\to P$, where $P$ is a Brauer–Severi surface of class $\delta(H)\in{{\rm Br}}(k)$. The assertion on ${\rm Am}(X)$ follows from Proposition \[prop: amitsur is birational invariant\] and Theorem \[thm: amitsur birational\]. If $X$ has a $k$-rational point, then so has $P$, and then $P{\cong}{{{{\mathbb P}}}}^2_k$. Since $f$ is a birational morphism, $P{\cong}{{{{\mathbb P}}}}^2_k$ implies that $X$ is rational. And if $X$ is rational, then it has a $k$-rational point by Lemma \[lem: Lang\]. As an immediate consequence, we obtain rationality and the existence of $k$-rational points in some cases. \[cor: has rational point\] Let $X$ be as in Theorem \[thm: del Pezzo descent\]. If $d\in\{7,8\}$, then $X$ has a $k$-rational point and $\overline{f}$ descends to a birational morphism $f:X\to{{{{\mathbb P}}}}^2_k$. By Theorem \[thm: del Pezzo descent\], there exists a birational morphism $X\to P$ that is a blow-up in a closed subscheme $Z\subset P$ of length $(9-d)$. By Corollary \[cor: zero cycle\], we have $P{\cong}{{{{\mathbb P}}}}^2_k$ if $3$ and $(9-d)$ are coprime. In particular, we have $X(k)\neq\emptyset$ in these cases by Theorem \[thm: del Pezzo descent\] and Lemma \[lem: Lang\]. Since a del Pezzo surface of degree $9$ is a Brauer–Severi surface, it has rational points if and only if it is trivial. In particular, Corollary \[cor: has rational point\] does not hold for $d=9$. Applications to arithmetic geometry {#sec: del Pezzo application} ----------------------------------- We now give a couple of applications of the just established results. Again, we stress that most if not all of these applications are well-known, and merely illustrate the usefulness of studying varieties via Brauer–Severi varieties. \[cor: h1 for non product\] If $X$ is a del Pezzo surface of degree $\geq7$ over $k$, then $$\begin{array}{ccc} H^1\left(H,\, {{\rm Pic }}_{X/k}({k^{\rm sep}})\right) &=&0. \end{array}$$ for all closed subgroups $H\subseteq G_k$ [Proof.]{} If $X$ is not of product type, then it is birationally equivalent to a Brauer–Severi surface $P$ by Theorem \[thm: del Pezzo descent\], and then the statement follows from Theorem \[thm: birational invariance of h1\] and Lemma \[lem: h1 brauer-severi\]. If $X$ is of product type, then this is Corollary \[cor: h1 product type\]. For the next application, let us recall that a surface is called [*rational*]{} if it is birationally equivalent to ${{{{\mathbb P}}}}^2_k$, and that it is called [*unirational*]{} if there exists a dominant and rational map from ${{{{\mathbb P}}}}^2_k$ onto it. The following result is a special case of [@Manin Theorem 29.4]. \[cor: birational to p2\] Let $X$ be a del Pezzo surface of degree $\geq7$ over a field $k$. Then, the following are equivalent: 1. $X$ is rational, 2. $X$ is unirational, and 3. $X$ has a $k$-rational point. 
[Proof.]{} Clearly, we have $(1)\Rightarrow(2)\Rightarrow(3)$, whereas $(3)\Rightarrow(1)$ follows from Corollary \[cor: product case point\] and Theorem \[thm: del Pezzo descent\]. This leads us to the question whether a del Pezzo surface necessarily has a $k$-rational point. Over finite fields, this is true and follows from the Weil conjectures, which we will recall in Theorem \[thm: weil\] below. By a theorem of Wedderburn, finite fields have trivial Brauer groups, and thus, the following corollary gives existence of $k$-rational points for more general fields. Let $X$ be a del Pezzo surface of degree $\geq7$ over a field $k$ with ${{\rm Br}}(k)=0$. Then, $X$ has a $k$-rational point, and thus, is rational. If $X$ is not of product type, then there exists a birational morphism $f:X\to P$ to a Brauer–Severi surface by Theorem \[thm: del Pezzo descent\]. Since ${{\rm Br}}(k)=0$, we have $P{\cong}{{{{\mathbb P}}}}^2_k$, and Theorem \[thm: del Pezzo descent\] gives $X(k)\neq\emptyset$. Thus, let $X$ be of product type. By Proposition \[prop: product type classification\], $X$ is a product of Brauer–Severi curves ($\rho=2$), or contains at least a Brauer–Severi curve ($\rho=1$). Since ${{\rm Br}}(k)=0$, all Brauer–Severi curves are isomorphic to ${{{{\mathbb P}}}}^1_k$, and thus, contain $k$-rational points. In particular, we find $X(k)\neq\emptyset$. In Section \[subsec: Hasse\], we discussed the Hasse principle and weak approximation for varieties over global fields. Here, we establish the following. \[cor:hasse principle\] Del Pezzo surfaces of degree $\geq7$ over global fields satisfy weak approximation and the Hasse principle. If $X$ is not of product type, then it is birationally equivalent to a Brauer–Severi surface by Theorem \[thm: del Pezzo descent\], and since the two claimed properties are preserved under birational maps and hold for Brauer–Severi varieties, the assertion follows in this case. If $X$ is of product type, then there are two cases by Proposition \[prop: product type classification\]. If $\rho=2$, then $X$ is a product of two Brauer–Severi curves, and we conclude as before. Thus, we may assume $\rho=1$. Let us first establish the Hasse principle: there exists a quadratic Galois extension $L/K$, such that $\rho(X_L)=2$. From $X(K_\nu)\neq\emptyset$ for all $\nu\in\Omega_K$, we find $X_{L_\mu}{\cong}{{{{\mathbb P}}}}^1_{L_\mu}\times{{{{\mathbb P}}}}^1_{L_\mu}$ for all $\mu\in\Omega_L$, and thus, $X_L{\cong}{{{{\mathbb P}}}}^1_L\times{{{{\mathbb P}}}}^1_L$ by the Hasse principle for Brauer–Severi curves. As in the proof of Corollary \[cor: product case point\], we exhibit $X$ as twisted self-product of ${{{{\mathbb P}}}}^1_k$, which has a $k$-rational point and establishes the Hasse principle. Thus, to establish weak approximation, we may assume that $X$ has a $k$-rational point. But then, $X$ is rational by Corollary \[cor: product case point\], and since weak approximation is a birational invariant, the assertion follows. Del Pezzo surfaces of degree 6 ============================== In the previous sections, we have seen a close connection between Brauer–Severi varieties and del Pezzo surfaces of degree $\geq7$. In this section, we discuss del Pezzo surfaces of degree $6$, which are not so directly linked to Brauer–Severi varieties. For the geometry and the arithmetic of these surfaces, we refer the interested reader to [@CTdP6], [@Manin], and the survey [@Varilly Section 2.4]. 
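Before setting up the notation, it may help to keep the split case in mind (added here only for orientation): the blow-up of ${{{{\mathbb P}}}}^2_k$ in the three coordinate points $$P_1=[1:0:0],\quad P_2=[0:1:0],\quad P_3=[0:0:1]$$ is a del Pezzo surface of degree $6$, its six $(-1)$-curves are the three exceptional curves together with the strict transforms of the three coordinate lines, the Galois action on these curves is trivial, and one has $\rho(X)=4$ and ${\rm Am}(X)=0$; in the notation introduced below, this is the case $k=K=L=M$.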
We keep the notation introduced in Section \[sec: dP large degree\]: If $X$ is a degree $6$ del Pezzo surface over a field $k$, then there exists a blow-up $\overline{f}:\overline{X}\to{{{{\mathbb P}}}}^2_{\overline{k}}$ in three points in general position with exceptional $(-1)$-curves $E_1$, $E_2$, and $E_3$. Then, there are six $(-1)$-curves on $\overline{X}$, namely the three exceptional curves $E_i$, $i=1,2,3$ of $\overline{f}$, as well as the three curves $E_i':=H-E_j-E_k$, $i=1,2,3$ where $\{j,k\}=\{1,2,3\}\backslash\{i\}$ and where $H=\overline{f}^*{{{\mathcal}O}}_{{{{{\mathbb P}}}}^2}(1)$ as in Section \[sec: dP large degree\]. These curves intersect in a hexagon as follows. $$\xymatrix{ & \ar@{-}^{E_1}[r] & \ar@{-}^{E_2'}[dr] &\\ \ar@{-}^{E_3'}[ur]\ar@{-}_{E_2}[dr] & & & \\ & \ar@{-}_{E_1'}[r] & \ar@{-}_{E_3}[ur] & }$$ The absolute Galois group $G_k$ acts on these six $(-1)$-curves on $X_{k^{\rm sep}}$, and associated to this action, we have the following field extensions of $k$. 1. Since $G_k$ acts on the two sets $\{E_1,E_2,E_3\}$ and $\{E_1',E_2',E_3'\}$, there is a group homomorphism $$\varphi_1\,:\,G_k\,\to\,S_2\,{\cong}\,{{{{\mathbb Z}}}}/2{{{{\mathbb Z}}}}.$$ The fixed field of either of the two sets is a finite separable extension $k\subseteq K$ with $[K:k]|2$, and $k\neq K$ if and only if $\varphi_1$ is surjective. 2. Since $G_k$ acts on the three sets $\{E_i,E_i'\}$, $i=1,2,3$, there is a group homomorphism $$\varphi_2\,:\,G_k\,\to\,S_3.$$ There exists a finite separable extension $k\subseteq L$ with $[L:k]|3$, unique up to conjugation in $k^{\rm sep}$, over which at least one of these three sets is defined. We have $k\neq L$ if and only if $3$ divides the order of $\varphi_2(G_k)$. Next, there exists a finite and separable extension $L\subseteq M$ with $[M:L]|2$, over which all three sets are defined. Combining $\varphi_1$ and $\varphi_2$, we obtain a group homomorphism $$G_k\,\stackrel{\varphi_1\times\varphi_2}{\longrightarrow}\, {{{{\mathbb Z}}}}/2{{{{\mathbb Z}}}}\times S_3\,{\cong}\,D_{2\cdot 6},$$ where $D_{2\cdot 6}$ denotes the dihedral group of order $12$, i.e., the automorphism group of the hexagon. Using these field extensions, we obtain the following classification, which uses and slightly extends a classical result of Manin from [@Manin] in case (3). \[thm: degree 6\] Let $X$ be a del Pezzo surface of degree $6$ over a field $k$. 1. The morphism $\overline{f}$ descends to a birational morphism $$f\,:\,X\,\to\,P$$ to a Brauer–Severi surface $P$ if and only if $k=K$. In this case, $\rho(X)\geq2$ and ${\rm Am}(X)={\rm Am}(P)$. 2. There exists a birational morphism $X\to Y$ onto a degree $8$ del Pezzo surface $Y$ of product type if and only if $k=L$. In this case, $$\begin{array}{c|cc} & \rho(X) & Y \\ \hline k\neq M & 3 & {{\rm Spec}\:}M\wedge({{{{\mathbb P}}}}^1_k\times{{{{\mathbb P}}}}^1_k) \\ k=M & 4 & {{{{\mathbb P}}}}^1_k\times{{{{\mathbb P}}}}^1_k \end{array}$$ $X$ has a $k$-rational point, and ${\rm Am}(X)=0$. 3. If $k\neq K$ and $k\neq L$, then $\rho(X)=1$, ${\rm Am}(X)=0$, and the following are equivalent. 1. $X$ is birationally equivalent to a Brauer–Severi surface, 2. $X$ is birationally equivalent to a product of two Brauer–Severi curves, 3. $X$ is rational, and 4. $X$ has a $k$-rational point. [Proof.]{} Let us first show (1). If $k=K$, then $F:=E_1+E_2+E_3$ descends to a class in ${{\rm Pic }}(X_{k^{\rm sep}})^{G_k}={{\rm Pic }}_{(X/k)({\rm fppf})}(k)$ and we find $\rho(X)\geq2$.
Thus, also $H=\frac{1}{3}(-K_X+F)$ descends to a class in ${{\rm Pic }}_{(X/k)({\rm fppf})}(k)$, and by Theorem \[thm: main\], we obtain a birational morphism $|H|:X\to P$ to a Brauer–Severi surface, which coincides with $\overline{f}$ over $\overline{k}$. Conversely, if $\overline{f}$ descends to a birational morphism $f:X\to P$, then the exceptional divisor of $f$ is of class $F$ or $E'_1+E_2'+E_3'$, and we find $k=K$. Moreover, we have ${\rm Am}(X)={\rm Am}(P)$ by Theorem \[thm: birational invariance of h1\]. If $k=L$, then, say $E_1+E_1'$, descends to a class in ${{\rm Pic }}(X_{k^{\rm sep}})^{G_k}$. Moreover, we find that the classes $\frac{1}{2}(-K_X+E_1+E_1')=\,2H-E_2-E_3$ as well as $\frac{1}{2}(-K_X-E_1-E_1')\,=\, H-E_1$ are $G_k$-invariant, and thus, the classes $H$, $E_1$, and $E_1'=H-E_2-E_3$ lie in ${{\rm Pic }}(X_{k^{\rm sep}})^{G_k}$. The $G_k$-action is trivial on $H$ and $E_1$, whereas it is either trivial on the set $\{E_2,E_3\}$ (if $k=M$) or permutes the two (if $k\neq M$). Since the class of $E_1$ is $G_k$-invariant and there is a unique effective divisor in this linear system, we find that ${{{{\mathbb P}}}}^1_k{\cong}E_1\subset X$. In particular, $X$ has a $k$-rational point and ${\rm Am}(X)=0$. Using Theorem \[thm: main\] and the fact that $X$ has a $k$-rational point, we obtain a birational morphism $$|\frac{1}{2}(-K_X+E_1+E_1')|\,:\, X\to Y\subset {{{{\mathbb P}}}}^3_k$$ onto a smooth quadric $Y$ with a $k$-rational point. In particular, $Y$ is a degree $8$ del Pezzo surface of product type. Over $k^{\rm sep}$, this morphism contracts $E_1$ and $E_1'$ and thus, we find $${{\rm Pic }}(Y_{k^{\rm sep}}) \,{\cong}\, \left( {{{{\mathbb Z}}}}H\oplus\bigoplus_{i=1}^3{{{{\mathbb Z}}}}E_i\right)/ \langle E_1, E_1' \rangle \,{\cong}\,{{{{\mathbb Z}}}}\overline{E}_2 \oplus {{{{\mathbb Z}}}}\overline{E}_3.$$ The $G_k$-action on it is either trivial ($k=M$) or permutes the two summands ($k\neq M$). Using $Y(k)\neq\emptyset$ and Corollary \[cor: product case point\], we find $\rho(X)=4$ and $Y{\cong}{{{{\mathbb P}}}}^1_k\times{{{{\mathbb P}}}}^1_k$ in the first case, and $\rho(X)=3$ and $Y{\cong}{{\rm Spec}\:}M\wedge({{{{\mathbb P}}}}^1_k\times{{{{\mathbb P}}}}^1_k)$ in the latter. Conversely, if there exists a birational morphism $X\to Y$ onto a degree $8$ del Pezzo surface of product type, then the exceptional divisor is of class $E_i+E_i'$ for some $i$, and thus, $k=L$. This establishes (2). Finally, assume that $k\neq K$ and $k\neq L$. Then, $\varphi_1$ is surjective, and $\varphi_2(G_k)$ contains all $3$-cycles of $S_3$. From this, it is not difficult to see that ${{\rm Pic }}(X_{\overline{k}})^{G_k}$ is of rank $1$ and generated by the class of $K_X$. Since this latter class is an invertible sheaf, we find ${\rm Am}(X)=0$. Thus, if $X$ is birationally equivalent to a Brauer–Severi surface $P$, then ${\rm Am}(X)=0$ together with Lemma \[lem: h1 brauer-severi\] and Theorem \[thm: birational invariance of h1\] implies that $P{\cong}{{{{\mathbb P}}}}^2_k$. Similarly, if $X$ is birationally equivalent to the product $P'\times P''$ of two Brauer–Severi curves, then $P'{\cong}P''{\cong}{{{{\mathbb P}}}}^1_k$. From this, we obtain the implications $(a)\Leftrightarrow (b)\Leftrightarrow (c)\Rightarrow (d)$. The implication $(d)\Rightarrow (c)$ is due to Manin [@Manin Theorem 29.4]. \[rem: amitsur remark\] In case (1) of the above Theorem it is important to note that $P$ need not be unique, but that ${\rm Am}(P)$ is well-defined.
More precisely, if we set $F:=E_1+E_2+E_3$ and $F'=E_1'+E_2'+E_3'$, then Theorem \[thm: main\] provides us with two morphisms to Brauer–Severi surfaces $P_1$ and $P_2$ $$\begin{array}{lclcc} |H| &=& |\frac{1}{3}(-K_X+F)| &:& X \,\to\, P_1 \\ |H'| &:=&|\frac{1}{3}(-K_X+F')| &:& X\,\to\, P_2\\ \end{array}$$ Since $H+H'=-K_X$ and $\delta(K_X)=0$, we find $$[P_1]\,=\,\delta(H)\,=\,\delta(-K_X-H')\,=\,-\delta(H')\,=\,-[P_2]\,\in\,{{\rm Br}}(k),$$ and thus, $P_1{\cong}P_2$ if and only if both are isomorphic to ${{{{\mathbb P}}}}^2_k$. On the other hand, $P_1$ and $P_2$ are birationally equivalent, since we have birational morphisms $$P_1\,\stackrel{|H|}{\longleftarrow}\,X\,\stackrel{|H'|}{\longrightarrow} P_2\,.$$ Over $\overline{k}$, this becomes the blow-up of three closed points $Z$ followed by the blow-down of the three $(-1)$-curves that are the strict transforms of lines through any two of the points in $Z$. This is an example of a [*Cremona transformation*]{}. We remark that a surface of case (3) and without $k$-rational points is neither birationally equivalent to a Brauer–Severi surface nor to the product of two Brauer–Severi curves. For finer and more detailed classification results for degree $6$ del Pezzo surfaces, we refer the interested reader to [@Corn], [@Blunk], and [@CTKMdP6]. Finally, the sum $\widetilde{E}$ of all $(-1)$-curves on $X_{k^{\rm sep}}$ is a $G_k$-invariant divisor, and thus, descends to a curve on $X$. By [@Manin Theorem 30.3.1], the complement $X\backslash\widetilde{E}$ is isomorphic to a torsor under a two-dimensional torus over $k$, which can be used to study the arithmetic and geometry of these surfaces, see also [@Skorobogatov; @book]. Del Pezzo surfaces of small degree {#sec: small degree} ================================== For the remainder of this article, our results will be less complete and less self-contained. We will circle around questions of birationality of a del Pezzo surface $X$ of degree $\leq5$ to Brauer–Severi surfaces, and about descending the morphism $\overline{f}:\overline{X}\to{{{{\mathbb P}}}}^2_{\overline{k}}$ to $k$. Birationality to Brauer–Severi surfaces --------------------------------------- Let $k={{{{\mathbb F}}}}_q$ be a finite field of characteristic $p$, and let $X$ be a smooth and projective surface over $k$ such that $X_{\overline{k}}$ is birationally equivalent to ${{{{\mathbb P}}}}^2$. Then, it follows from the Weil conjectures (in this case already a theorem of Weil himself) that the number of $k$-rational points is congruent to $1$ modulo $q$, see [@Manin Chapter IV.27]. In particular, we obtain that \[thm: weil\] If $X$ is a del Pezzo surface over a finite field ${{{{\mathbb F}}}}_q$, then $X$ has a ${{{{\mathbb F}}}}_q$-rational point. Since ${\rm Br}({{{{\mathbb F}}}}_q)=0$ by a theorem of Wedderburn, there are no non-trivial Brauer–Severi varieties over ${{{{\mathbb F}}}}_q$. \[rem: M-CT-S\] Let $X$ be a del Pezzo surface of degree $\geq5$ over a field $k$. Manin [@Manin Theorem 29.4] showed that $X$ is rational if and only if it contains a $k$-rational point. Even if $X$ has no $k$-rational point, Manin [@Manin Theorem 29.3] showed that $$H^1\left(H,\,{{\rm Pic }}_{(X/k)({\rm fppf})}(k^{\rm sep})\right)\,=\,0$$ for all closed subgroups $H\subseteq G_k$. We refer to [@CTS Théorème 2.B.1] for a general principle explaining this vanishing of cohomology. In this section, we give a partial generalization to birational maps to Brauer–Severi surfaces. 
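As a quick computational illustration of Theorem \[thm: weil\] (this snippet is our addition and not part of the argument above), one can verify the congruence on a concrete example by brute force. The Fermat cubic surface $x_0^3+x_1^3+x_2^3+x_3^3=0$ in ${{{{\mathbb P}}}}^3$ is a smooth cubic surface, hence a del Pezzo surface of degree $3$, whenever the characteristic is different from $3$; we restrict to prime $q$ so that arithmetic modulo $q$ is field arithmetic.

```python
# Illustrative only: brute-force check of #X(F_q) = 1 (mod q) for the Fermat cubic
# surface X : x0^3 + x1^3 + x2^3 + x3^3 = 0 in P^3 over small prime fields F_q.
from itertools import product

def projective_points(q, n=3):
    """Normalised representatives of P^n(F_q): first non-zero coordinate equal to 1."""
    for v in product(range(q), repeat=n + 1):
        nz = next((i for i, c in enumerate(v) if c), None)
        if nz is not None and v[nz] == 1:
            yield v

def count_points(q):
    return sum(1 for v in projective_points(q) if sum(c**3 for c in v) % q == 0)

for q in (5, 7, 11):                # primes different from 3, so the surface is smooth
    n_pts = count_points(q)
    print(q, n_pts, n_pts % q)      # the last entry equals 1 in every case
```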
\[lem: del Pezzo Amitsur\] Let $X$ be a degree $d$ del Pezzo surface over $k$. Then, 1. There exists an effective zero-cycle $Z$ of degree $d$ on $X$. If $d\neq2$ or if ${\rm char}(k)\neq2$, then there exists such a zero-cycle $Z$, whose closed points have residue fields that are separable over $k$. 2. The abelian group ${\rm Am}(X)$ is finite and every element has an order dividing $d$. [Proof.]{} If $d\geq3$, then $\omega_X^{-1}$ is very ample, and $|\omega_X^{-1}|$ embeds $X$ as a surface of degree $d$ into ${{{{\mathbb P}}}}^d_k$. Intersecting $X$ with a linear subspace of codimension $2$, we obtain an effective zero-cycle $Z$ of degree $d$ on $X$. The closed points of $Z$ have automatically separable residue fields if $k$ is finite. Otherwise, $k$ is infinite, and then, the intersection with a generic linear subspace of codimension $2$ yields a $Z$ that is smooth over $k$ by [@Bertini; @theorems Théorème I.6.3]. Thus, in any case, we obtain a $Z$, whose closed points have residue fields that are separable over $k$. If $d=2$, then $|\omega_X^{-1}|$ defines a double cover $X\to{{{{\mathbb P}}}}^2_k$, and the pre-image of a $k$-rational point yields an effective zero-cycle $Z$ of degree $2$ on $X$. If ${\rm char}(k)\neq2$, then residue fields of closed points of $Z$ are separable over $k$. If $d=1$, then $|-K_X|$ has a unique-base point, and in particular, $X$ has a $k$-rational point. This establishes (1). Since $b_1(X)=0$, the group ${\rm Am}(X)$ is finite by Lemma \[lem: picard rank\]. Then, assertion (2) follows from Corollary \[lem: order amitsur\]. \[cor: dP has a rational point\] Let $X$ be a del Pezzo surface of degree $d$ over a field $k$. 1. If $d\in\{1,2,4,5,7,8\}$ and $X$ is birationally equivalent to a Brauer–Severi surface $P$, then $P{\cong}{{{{\mathbb P}}}}^2_k$ and $X$ has a $k$-rational point. 2. If $d\in\{1,3,5,7,9\}$ and $X$ is birationally equivalent to a product $P'\times P''$ of two Brauer–Severi curves, then $P'{\cong}P''{\cong}{{{{\mathbb P}}}}^1_k$ and $X$ has a $k$-rational point. [Proof.]{} Let $X$ and $d$ be as in (1). Then, every element of ${\rm Am}(X)$ is of order dividing $d$ by Lemma \[lem: del Pezzo Amitsur\], but also of order dividing $3$ by Theorem \[thm: birational invariance of h1\] and Theorem \[thm: brauer severi picard\]. By our assumptions on $d$, we find ${\rm Am}(P)=0$, and thus, $P{\cong}{{{{\mathbb P}}}}^2_k$. Since the latter has a $k$-rational point, so has $X$ by Lemma \[lem: Lang\]. This shows (1). The proof of (2) is similar and we leave it to the reader. Combining this with a result of Coray [@Coray], we obtain the following. \[thm: birational to BS surface\] Let $X$ be a del Pezzo surface of degree $d\in\{5,7,8\}$ over a perfect field $k$. Then, the following are equivalent 1. There exists a dominant and rational map $P\dashrightarrow X$ from a Brauer–Severi surface $P$ over $k$, 2. $X$ is birationally equivalent to a Brauer–Severi surface, 3. $X$ is rational, and 4. $X$ has a $k$-rational point. [Proof.]{} The implications $(3)\Rightarrow(2)\Rightarrow(1)$ are trivial. Let $\varphi:P\dashrightarrow X$ be as in (1). By Lemma \[lem: del Pezzo Amitsur\], there exists a zero-cycle of degree $9$ on $P$, and another one of degree $d$ on $X$. Using $\varphi$, we obtain a zero-cycle of degree dividing $9$ on $X$. By assumption, $d$ is coprime to $9$, and thus, there exists a zero-cycle of degree $1$ on $X$. By [@Coray], this implies that $X$ has a $k$-rational point and establishes $(1)\Rightarrow(4)$. 
The implication $(4)\Rightarrow(3)$ is a result of Manin [@Manin Theorem 29.4] . Now, if a del Pezzo surface $X$ over a field $k$ is birationally equivalent to a Brauer–Severi surface, then $H^1(H,{{\rm Pic }}_{X/k}(k^{\rm sep}))=0$ for all closed subgroups $H\subseteq G_k$ by Theorem \[thm: birational invariance of h1\]. Moreover, this vanishing holds for all del Pezzo surfaces of degree $\geq5$, see Remark \[rem: M-CT-S\]. However, for del Pezzo surfaces of degree $\leq4$, these cohomology groups may be non-zero, see [@Manin Section 31], [@Sir; @Peter], [@Kresch; @Tschinkel], and [@Varilly; @weak]. In particular, del Pezzo surfaces of degree $\leq4$ are in general [*not*]{} birationally equivalent to Brauer–Severi surfaces. For further information concerning geometrically rational surfaces, unirationality, central simple algebras, and connections with cohomological dimension, we refer the interested reader to [@CTKMdP6]. Del Pezzo surfaces of degree 5 ------------------------------ In order to decide whether a birational map $f_{\overline{k}}:X_{\overline{k}}\to{{{{\mathbb P}}}}^2_{\overline{k}}$ as in Section \[sec: dP large degree\] descends to $k$ for a degree $5$ del Pezzo surface $X$ over $k$, we introduce the following notion. \[def: conic\] Let $X$ be a del Pezzo surface over a field $k$. A [*conic*]{} on $X$ is a geometrically integral curve $C$ on $X$ with $C^2=0$ and $-K_X\cdot C=2$. An element ${{{\mathcal}{L}}}\in{{\rm Pic }}_{(X/k)({\rm fppf})}(k)$ is called a [*conic class*]{} if ${{{\mathcal}{L}}}\otimes_k\overline{k}{\cong}{{{\mathcal}O}}_{X_{\overline{k}}}(\overline{C})$ for some conic $\overline{C}$ on $X_{\overline{k}}$. The following is an analogue of Theorem \[thm: del Pezzo descent\] for degree $5$ del Pezzo surfaces. \[thm: dP5 descent\] Let $X$ be a del Pezzo surface of degree $5$ over a field $k$. Then, the following are equivalent: 1. There exists a birational morphism $f:X\to P$ to a Brauer–Severi surface, such that $f_{\overline{k}}$ is the blow-up of $4$ points in general position. 2. There exists a birational morphism $f:X\to{{{{\mathbb P}}}}^2_k$, such that $f_{\overline{k}}$ is the blow-up of $4$ points in general position. 3. There exists a class $F\in{{\rm Pic }}_{(X/k)({\rm fppf})}(k)$ such that $$F_{\overline{k}} \,{\cong}\, {{{\mathcal}O}}_{\overline{X}}(E_1+E_2+E_3+E_4),$$ where the $E_i$ are disjoint $(-1)$-curves on $\overline{X}$. 4. There exists a conic class in ${{\rm Pic }}_{(X/k)({\rm fppf})}(k)$. If these equivalent conditions hold, then $X$ has a $k$-rational point. [Proof.]{} If $f$ is as in (1), then $X$ has a $k$-rational point by Corollary \[cor: dP has a rational point\]. Thus, $P{\cong}{{{{\mathbb P}}}}^2_k$, and we obtain $(1)\Rightarrow(2)$. If $f$ is as in $(2)$, then the exceptional divisor of $f$ is a class $F$ as stated in $(3)$, and we obtain $(2)\Rightarrow(3)$. If $f$ is as in $(3)$, then, using Theorem \[thm: main\], there exists a birational morphism $|\frac{1}{3}(-K_X-F)|:X\to P$ to a Brauer–Severi surface $P$ as in $(1)$, which establishes $(3)\Rightarrow(1)$. If $f$ is as in $(2)$, let $Z\subset{{{{\mathbb P}}}}^2_k$ be the degree $4$ cycle blown up by $f$. Then $f^*({{{\mathcal}O}}_{{{{{\mathbb P}}}}^2_k}(2)(-Z))$, i.e., the pullback of the pencil of conics through $Z$, is a conic class on $X$ and establishes $(2)\Rightarrow(4)$. 
Finally, if $C$ is a conic class on $X$, then, using Theorem \[thm: main\], there exists a birational morphism $|-K_X+C|:X\to P$ to a Brauer–Severi surface $P$ as in $(1)$, which establishes $(4)\Rightarrow(1)$. By theorems of Enriques, Swinnerton-Dyer, Skorobogatov, Shepherd-Barron, Kollár, and Hassett (see [@Varilly Theorem 2.5] for precise references and an overview), a degree $5$ del Pezzo surface $X$ over a field $k$ always has a $k$-rational point. Thus, $X$ is rational by [@Manin Theorem 29.4], and we have $${\rm Am}(X)=0,\mbox{ \quad as well as \quad } H^1(H,{{\rm Pic }}_{X/k}(k^{\rm sep}))=0$$ for every closed subgroup $H\subseteq G_k$ by Corollary \[cor: amitsur trivial\], Theorem \[thm: birational invariance of h1\], and Lemma \[lem: h1 brauer-severi\]. Del Pezzo surfaces of degree 4 ------------------------------ A classical theorem of Manin [@Manin Theorem 29.4] states that a del Pezzo surface of degree $4$ over a sufficiently large field $k$ is unirational if and only if it contains a $k$-rational point. Here, we have the following analogue in our setting. Let $X$ be a del Pezzo surface of degree $4$ over a perfect field $k$. Then, the following are equivalent 1. There exists a dominant rational map $P\dashrightarrow X$ from a Brauer–Severi surface $P$ over $k$, 2. $X$ is unirational, and 3. $X$ has a $k$-rational point. [Proof.]{} The implication $(2)\Rightarrow(1)$ is trivial and $(2)\Rightarrow(3)$ is Lemma \[lem: Lang\]. The implication $(3)\Rightarrow(2)$ is shown in [@Manin Theorem 29.4] and [@Manin Theorem 30.1] if $k$ has at least 23 elements and in [@Knecht Theorem 2.1] and [@Pieropan Proposition 5.19] in the remaining cases. To show $(1)\Rightarrow(3)$, we argue as in the proof of the implication $(1)\Rightarrow(4)$ of Theorem \[thm: birational to BS surface\] by first exhibiting a degree $1$ zero-cycle on $X$, and then using [@Coray] to deduce the existence of a $k$-rational point on $X$. We leave the details to the reader. If a field $k$ is finite or perfect of characteristic $2$, then a degree $4$ del Pezzo surface over $k$ always has a $k$-rational point, see [@Manin Theorem 27.1] and [@Duncan]. In this case, we also have ${\rm Am}(X)=0$. From Lemma \[lem: del Pezzo Amitsur\], we infer that ${\rm Am}(X)$ is at most $4$-torsion for degree $4$ del Pezzo surfaces. For the possibilities of $H^1(G_k,{{\rm Pic }}_{X/k}(k^{\rm sep}))$, see [@Sir; @Peter]. The following is an analogue of Theorem \[thm: del Pezzo descent\] for degree $4$ del Pezzo surfaces. \[thm: dP4 quasi-split\] Let $X$ be a del Pezzo surface of degree $4$ over a field $k$. Then, the following are equivalent: 1. There exists a birational morphism $f:X\to P$ to a Brauer–Severi surface, such that $f_{\overline{k}}$ is the blow-up of $5$ points in general position. 2. There exists a birational morphism $f:X\to{{{{\mathbb P}}}}^2_k$, such that $f_{\overline{k}}$ is the blow-up of $5$ points in general position. 3. There exists a curve ${{{{\mathbb P}}}}^1_k{\cong}E\subset X$ with $E^2=-1$. 4. There exists a class $E\in{{\rm Pic }}_{(X/k)({\rm fppf})}(k)$ with $E^2=K_X\cdot E=-1$. If these equivalent conditions hold, then $X$ has a $k$-rational point. [Proof.]{} The implication $(2)\Rightarrow(1)$ is trivial. If $f$ is as in (1), then $X$ has a $k$-rational point by Corollary \[cor: dP has a rational point\]. Thus, $P{\cong}{{{{\mathbb P}}}}^2_k$, and we obtain $(1)\Rightarrow(2)$. If $f$ is as in $(2)$, let $Z\subset{{{{\mathbb P}}}}^2_k$ be the degree $5$ cycle blown up by $f$.
Then $f^*({{{\mathcal}O}}_{{{{{\mathbb P}}}}^2_k}(2)(-Z))$, i.e., the pullback of the class of the unique conic through $Z$, is a class $E$ as stated in $(4)$ on $X$ and establishes $(2)\Rightarrow(4)$. If $E$ is a class as in $(4)$, then, using Theorem \[thm: main\], there exists a birational morphism $|-K_X-E|:X\to P$ to a Brauer–Severi surface $P$ as in $(1)$, which establishes $(4)\Rightarrow(1)$. The implication $(3)\Rightarrow(4)$ is trivial, and if $E$ is a class as in $(4)$, then there exists a unique section of the associated invertible sheaf on $k^{\rm sep}$. This is necessarily $G_k$-invariant, thus, descends to a curve on $X$, and establishes $(4)\Rightarrow(3)$. In [@Skorobogatov], Skorobogatov called del Pezzo surfaces of degree $4$ that satisfy condition (3) above [*quasi-split*]{}. Before proceeding, let us recall a couple of classical results on the geometry of degree $4$ del Pezzo surfaces, and refer the interested reader to [@Skorobogatov] and [@Dolgachev Chapter 8.6] for details. The anti-canonical linear system embeds $X$ as a complete intersection of two quadrics in ${{{{\mathbb P}}}}^4_k$, i.e., $X$ is given by $Q_0=Q_1=0$, where $Q_0$ and $Q_1$ are two quadratic forms in five variables over $k$. The [*degeneracy locus*]{} of this pencil of quadrics $${\rm Deg}_X \,:=\, \left\{\,\det(t_0Q_0+t_1Q_1)\,=\,0 \,\right\} \,\subset\, {{{{\mathbb P}}}}^1_k\,=\,{{\rm Proj}\:}k[t_0,t_1]$$ is a zero-dimensional subscheme, which is étale and of length $5$ over $k$. Over $\overline{k}$, its points correspond to the singular quadrics containing $X$, all of which are cones over smooth quadric surfaces. Let $\nu_2:{{{{\mathbb P}}}}^1_k\to{{{{\mathbb P}}}}^2_k$ be the $2$-uple Veronese embedding and set $$Z := \nu_2({\rm Deg}_X)\,\subset\,C:=\nu_2({{{{\mathbb P}}}}^1_k)\,\subset\,{{{{\mathbb P}}}}^2_k.$$ If $X$ contains a $k$-rational $(-1)$-curve, i.e., if $X$ is quasi-split, then $X$ is the blow-up of ${{{{\mathbb P}}}}^2_k$ in $Z$, see Theorem \[thm: dP4 quasi-split\] and [@Skorobogatov Theorem 2.3]. \[prop: dP4 to dP8\] Let $X$ be a del Pezzo surface of degree $4$ over a field $k$ of characteristic $\neq2$ with at least $5$ elements. Then, the following are equivalent: 1. The degeneracy scheme ${\rm Deg}_X$ has a $k$-rational point. 2. There exists a finite morphism $\psi:X\to S$ of degree $2$, where $S$ is a del Pezzo surface of degree $8$ of product type. Moreover, if $\psi$ is as in $(2)$, then $S$ is isomorphic to a quadric in ${{{{\mathbb P}}}}^3_k$. [Proof.]{} To show $(1)\Rightarrow(2)$, assume that ${\rm Deg}_X$ has a $k$-rational point. Thus, there exists degenerate quadric $Q$ with $X\subset Q\subset{{{{\mathbb P}}}}^4_k$. As explained in the proof of [@Dolgachev Theorem 8.6.8], $Q$ is a cone over a smooth quadric surface, and the projection away from its vertex ${{{{\mathbb P}}}}^4_k\dashrightarrow{{{{\mathbb P}}}}^3_k$ induces a morphism $X\to {{{{\mathbb P}}}}^3_k$ that is finite of degree $2$ onto a smooth quadric surface $S$. In particular, $S$ is a del Pezzo surface of degree $8$ of product type. To show $(2)\Rightarrow(1)$, let $\psi:X\to S$ be as in the statement. Then, we have a short exact sequence (which even splits since ${\rm char}(k)\neq2$) $$0\,\to\,{{{\mathcal}O}}_S\,\to\,\psi_\ast{{{\mathcal}O}}_X\,\to\,{{{{\mathcal}{L}}}}^{-1}\,\to\,0,$$ where ${{{\mathcal}{L}}}$ is an invertible sheaf on $S$, which is of type $(1,1)$ on $S_{\overline{k}}{\cong}{{{{\mathbb P}}}}^1_{\overline{k}}\times{{{{\mathbb P}}}}^1_{\overline{k}}$. 
In particular, $|{{{\mathcal}{L}}}|$ defines an embedding $\imath:S\to{{{{\mathbb P}}}}^3_k$ as a quadric, and establishes the final assertion. Now, $\imath\circ\psi$ arises from a $4$-dimensional subspace $V$ inside the linear system $(\imath\circ\psi)^*{{{\mathcal}O}}_{{{{{\mathbb P}}}}^3_k}(1){\cong}\omega_X^{-1}$. Thus, $\imath\circ\psi$ is the composition of the anti-canonical embedding $X\to{{{{\mathbb P}}}}^4_k$, followed by a projection ${{{{\mathbb P}}}}^4_k\dashrightarrow{{{{\mathbb P}}}}^3_k$. As explained in the proof of [@Dolgachev Theorem 8.6.8], such a projection induces a degree $2$ morphism onto a quadric if and only if the point of projection is the vertex of a singular quadric in ${{{{\mathbb P}}}}^4_k$ containing $X$. In particular, this vertex and the corresponding quadric are defined over $k$, giving rise to a $k$-rational point of ${\rm Deg}_X$. In order to refine Proposition \[prop: dP4 to dP8\], we will use conic classes as introduced in Definition \[def: conic\]. Let $X$ be a del Pezzo surface of degree $4$ over a field $k$. Then, the following are equivalent: 1. There exists a conic class in ${{\rm Pic }}_{(X/k)({\rm fppf})}(k)$. 2. There exists a finite morphism $\psi:X\to P'\times P''$ of degree $2$, where $P'$ and $P''$ are a Brauer–Severi curves over $k$. Moreover, if $\psi$ is as in $(2)$, then $P'{\cong}P''$. [Proof.]{} Let ${{{\mathcal}{L}}}\in{{\rm Pic }}_{(X/k)({\rm fppf})}(k)$ be a conic class. By Theorem \[thm: main\], there exist morphisms $|{{{{\mathcal}{L}}}}|:X\to P'$ and $|\omega_X^{-1}\otimes{{{\mathcal}{L}}}^{-1}|:X\to P''$, where $P'$ and $P''$ are Brauer–Severi curves over $k$. Combining them, we obtain a finite morphism $X\to P'\times P''$ of degree $2$. As in the proof of $(2)\Rightarrow(1)$ of Proposition \[prop: dP4 to dP8\] we find that $P'\times P''$ embeds into ${{{{\mathbb P}}}}^3$, and thus, $0=[{{{{\mathbb P}}}}^3_k]=[P']+[P'']\in{{\rm Br}}(k)$ by Proposition \[prop: product type classification\]. This implies $[P']=[P'']$ since these classes are $2$-torsion, and thus, $P'{\cong}P''$ by Corollary \[cor: isomorphic BS\]. This establishes $(1)\Rightarrow(2)$. Conversely, let $\psi:X\to P'\times P''$ be as in (2). Then, $\psi^*({{{\mathcal}O}}_{P'}(1)\boxtimes{{{\mathcal}O}}_{P''}(1))$ is a conic class, and (1) follows. Del Pezzo surfaces of degree 3 ------------------------------ For these surfaces, we have the following analogue of Theorem \[thm: del Pezzo descent\]. Let $X$ be a del Pezzo surface of degree $3$ over a field $k$. Then, the following are equivalent: 1. There exists a birational morphism $f:X\to P$ to a Brauer–Severi surface, such that $f_{\overline{k}}$ is the blow-up of $6$ points in general position. 2. There exists a class $F\in{{\rm Pic }}_{(X/k)({\rm fppf})}(k)$ such that $$F_{\overline{k}} \,{\cong}\, {{{\mathcal}O}}_{\overline{X}}(E_1+E_2+E_3+E_4+E_5+E_6),$$ where the $E_i$ are disjoint $(-1)$-curves on $\overline{X}$. [Proof.]{} The proof is analogous to that of Theorem \[thm: dP5 descent\], and we leave the details to the reader. Note that if the equivalent conditions of this theorem are fulfilled, then $X$ is not minimal. But the converse does not hold in general: If $Y$ is a unirational, but not rational del Pezzo surface of degree $4$ over $k$, and $y\in Y$ is a $k$-rational point not lying on an exceptional curve, then the blow-up $X\to Y$ in $y$ is a non-minimal degree $3$ del Pezzo surface over $k$ with $k$-rational points that is not birationally equivalent to a Brauer–Severi surface over $k$. 
By [@Manin Theorem 28.1], a degree $3$ del Pezzo surface $X$ is minimal if and only if $\rho(X)=1$, i.e., ${{\rm Pic }}_{(X/k)({\rm fppf})}(k)={{{{\mathbb Z}}}}\cdot\omega_X$. In this case, we have ${\rm Am}(X)=0$. In particular, if such a surface is birationally equivalent to a Brauer–Severi surface $P$, then $P{\cong}{{{{\mathbb P}}}}^2_k$ by Proposition \[prop: amitsur is birational invariant\] and Theorem \[thm: amitsur birational\]. In particular, $X$ is rational and has a $k$-rational point in this case. Del Pezzo surfaces of degree 2 ------------------------------ Arguing as in the proof of Theorem \[thm: birational to BS surface\], it follows that if there exists a dominant and rational map $P\dashrightarrow X$ from a Brauer–Severi surface $P$ onto a degree $2$ del Pezzo surface over a perfect field $k$, then $X$ has a $k$-rational point, and thus ${\rm Am}(X)=0$. In particular, if $X$ is birationally equivalent to a Brauer–Severi surface, then it is rational, see also Corollary \[cor: dP has a rational point\]. By work of Manin [@Manin Theorem 29.4], a degree $2$ del Pezzo surface over a field $k$ is unirational if it has a $k$-rational point not lying on an exceptional curve. Together with non-trivial refinements of [@Salgado] and [@Festi], such surfaces over finite fields are always unirational. By Lemma \[lem: del Pezzo Amitsur\], we have that ${\rm Am}(X)$ is at most $2$-torsion for degree $2$ del Pezzo surfaces. For the possibilities of $H^1(G_k,{{\rm Pic }}_{X/k}(k^{\rm sep}))$, as well as further information concerning arithmetic questions, we refer to [@Kresch; @Tschinkel]. Del Pezzo surfaces of degree 1 ------------------------------ If $X$ is a del Pezzo surface of degree $1$, then it has a $k$-rational point, namely the unique base point of $|-K_X|$. Thus, we have ${\rm Am}(X)=0$, and there are no morphisms or rational maps to non-trivial Brauer–Severi varieties. [xxxxx]{} S. A. Amitsur, [*Generic splitting fields of central simple algebras*]{}, Ann. of Math. (2) 62, (1955), 8–43. M. Blunk, [*Del Pezzo surfaces of degree 6 over an arbitrary field*]{}, J. Algebra 323, 42–58 (2010). S. Bosch, W. Lütkebohmert, M. Raynaud, [*Néron models*]{}, Ergebnisse der Mathematik und ihrer Grenzgebiete (3) 21, Springer (1990). K. S. Brown, [*Cohomology of groups*]{}, Graduate Texts in Mathematics 87, Springer (1982). F. Châtelet, [*Variations sur un thème de H. Poincaré*]{}, Ann. Sci. École Norm. Sup. (3) 61, (1944), 249–300. J.-L. Colliot-Thélène, [*Surfaces de Del Pezzo de degré 6*]{}, C. R. Acad. Sci. Paris Sér. A-B 275 (1972), A109–A111. J.-L. Colliot-Thélène, J.-J. Sansuc, [*La descente sur les variétés rationnelles. II*]{}, Duke Math. J. 54 (1987), no. 2, 375–492. J.-L. Colliot-Thélène, A. Skorobogatov, [*Groupe de Chow des zéro-cycles sur les fibrés en quadriques*]{}, K-Theory 7 (1993), no. 5, 477-500. J.-L. Colliot-Thélène, [*Points rationnels sur les variétés non de type général*]{}, Course de J.-L. Colliot-Thélène á Orsay/IHP (1999), available from the author’s webpage. J.-L. Colliot-Thélène, N. A. Karpenko, A. S. Merkurjev, [*Rational surfaces and the canonical dimension of the group ${\rm PGL}_6$*]{}, Algebra i Analiz 19 (2007), no. 5, 159–178, translation in St. Petersburg Math. J. 19 (2008), no. 5, 793–804. K. R. Coombes, [*Every rational surface is separably split*]{}, Comment. Math. Helv. 63 (1988), no. 2, 305–311. D. Coray, [*Points algébriques sur les surfaces de del Pezzo*]{}, C. R. Acad. Sci. Paris Sér. A-B 284 (1977). P. 
Corn, [*Del Pezzo surfaces of degree 6*]{}, Math. Res. Lett. 12 (2005), no. 1, 75–84. W. A. de Graaf, M. Harrison, J. Pílniková, J. Schicho, [ *A Lie algebra method for rational parametrization of Severi-Brauer surfaces*]{}, J. Algebra 303 (2006), no. 2, 514–529. I. Dolgachev, [*Classical algebraic geometry. A modern view*]{}, Cambridge University Press (2012). I. Dolgachev, A. Duncan, [*Regular pairs of quadratic forms on odd-dimensional spaces in characteristic 2*]{}, arXiv:1510.06803 (2015). D. Festi, R. van Luijk, [*Unirationality of del Pezzo surfaces of degree two over finite fields*]{}, Bull. Lond. Math. Soc. 48 (2016), 135–140. P. Gille, T. Szamuely, [*Central simple algebras and Galois cohomology*]{}, Cambridge Studies in Advanced Mathematics 101, Cambridge University Press (2006). J. Giraud, [*Cohomologie non abélienne*]{}, Grundlehren der mathematischen Wissenschaften 179, Springer (1971). A. Grothendieck, [*Le groupe de Brauer. I. Algèbres d’Azumaya et interprétations diverses*]{}, Dix Exposés sur la Cohomologie des Schémas, 46–66, North-Holland, Amsterdam (1968). A. Grothendieck, [*Le groupe de Brauer. III. Exemples et compléments*]{}, Dix Exposés sur la Cohomologie des Schémas, 88–188, North-Holland, Amsterdam (1968). A. Grothendieck, [*Technique de descente et théorèmes d’existence en géométrie algébrique. V. Les schémas de Picard: théorèmes d’existence*]{}, Séminaire Bourbaki 7, Exp. No. 232, 143–161, Soc. Math. France (1995). A. Grothendieck, [*Technique de descente et théorèmes d’existence en géométrie algébrique. VI. Les schémas de Picard: propriétés générales*]{}, Séminaire Bourbaki 7, Exp. No. 236, 221–243, Soc. Math. France (1995). R. Hartshorne, [*Algbraic Geometry*]{}, GTM 52, Springer (1977). V. A. Iskovskih, [*Minimal models of rational surfaces over arbitrary fields*]{}, Izv. Akad. Nauk SSSR Ser. Mat. 43 (1979), no. 1, 19–43. N. Jacobson, [*Finite-Dimensional Division Algebras over Fields*]{}, Springer (1996). J. Jahnel, [*The Brauer–Severi variety associated with a central simple algebra: a survey*]{}, Linear Algebraic Groups and Related Structures 52, 1–60 (2000), available from the author’s webpage. J.-P. Jouanolou, [*Théorèmes de Bertini et applications*]{}, Progress in Mathematics 42, Birkhäuser (1983). M. C.  Kang, [*Constructions of Brauer-Severi varieties and norm hypersurfaces*]{}, Canad. J. Math. 42 (1990), no. 2, 230–238. S. L. Kleiman, [*The Picard scheme*]{}, Fundamental algebraic geometry, 235-321, Math. Surveys Monogr. 123, AMS 2005. A. Knecht, [*Degree of unirationality for del Pezzo surfaces over finite fields*]{}, arXiv:1307.3215 (2013). A. Kresch, Y. Tschinkel, [*On the arithmetic of del Pezzo surfaces of degree 2*]{}, Proc. London Math. Soc. (3) 89 (2004), no. 3, 545–569. S. Lang, [*Some applications of the local uniformization theorem*]{}, Amer. J. Math. 76, (1954), 362–374. Y. I. Manin, [*Rational surfaces over perfect fields*]{}, Inst. Hautes Études Sci. Publ. Math. No. 30 (1966), 55–113. Y. I. Manin, [*Cubic forms. Algebra, geometry, arithmetic*]{}, Second edition, North-Holland Mathematical Library, 4. North-Holland Publishing (1986). J. S. Milne, [*The Brauer group of a rational surface*]{}, Invent. Math. 11 (1970), 304–307. J. S. Milne, [*Étale cohomology*]{}, Princeton Mathematical Series 33, Princeton University Press (1980). J. Neukirch, A. Schmidt, K. Wingberg, [*Cohomology of number fields*]{}, second edition, Grundlehren der mathematischen Wissenschaften 323, Springer (2008). H. 
Nishimura, [*Some remarks on rational points*]{}, Mem. Coll. Sci. Univ. Kyoto. Ser. A. Math. 29 (1955), 189–192. M. Pieropan, [*On the unirationality of del Pezzo surface over an arbitrary field*]{}, Master Thesis (2012). B. Poonen, [*Rational points on varieties*]{}, available from the author’s webpage. C. Salgado, D. Testa, A. Várilly-Alvarado, [*On the unirationality of del Pezzo surfaces of degree 2*]{}, J. Lond. Math. Soc. (2) 90 (2014), no. 1, 121–139. A. Skorobogatov, [*Torsors and rational points*]{}, Cambridge Tracts in Mathematics 144, Cambridge University Press (2001). A. Skorobogatov, [*Del Pezzo surfaces of degree 4 and their relation to Kummer surfaces*]{}, Enseign. Math. (2) 56, 73–85 (2010). P. Swinnerton-Dyer, [*The Brauer group of cubic surfaces*]{}, Math. Proc. Cambridge Philos. Soc. 113, 449–460 (1993). A. Várilly-Alvarado, [*Weak approximation on del Pezzo surfaces of degree 1*]{}, Adv. Math. 219 (2008), no. 6, 2123–2145. A. Várilly-Alvarado, [*Arithmetic of del Pezzo surfaces. Birational geometry*]{}, rational curves, and arithmetic, 293–319, Springer (2013).
--- abstract: 'Braided convolutional codes (BCCs) are a class of spatially coupled turbo-like codes (SC-TCs) with excellent belief propagation (BP) thresholds. In this paper we analyze the performance of BCCs in the finite block-length regime. We derive the average weight enumerator function (WEF) and compute the union bound on the performance for the uncoupled BCC ensemble. Our results suggest that the union bound is affected by poor distance properties of a small fraction of codes. By computing the union bound for the expurgated ensemble, we show that the floor improves substantially and very low error rates can be achieved for moderate permutation sizes. Based on the WEF, we also obtain a bound on the minimum distance which indicates that it grows linearly with the permutation size. Finally, we show that the estimated error floor for the uncoupled BCC ensemble is also valid for the coupled ensemble by proving that the minimum distance of the coupled ensemble is lower bounded by the minimum distance of the uncoupled ensemble.' author: - | \ [^1] title: Finite Length Weight Enumerator Analysis of Braided Convolutional Codes --- Introduction ============ Low-density parity-check (LDPC) convolutional codes [@JimenezLDPCCC], also known as spatially coupled LDPC (SC-LDPC) codes [@Kudekar_ThresholdSaturation], have attracted a lot of attention because they exhibit a threshold saturation phenomenon: the belief propagation (BP) decoder can achieve the threshold of the optimal maximum-a-posteriori (MAP) decoder. Spatial coupling is a general concept that is not limited to LDPC codes. Spatially coupled turbo-like codes (SC-TCs) are proposed in [@Moloudi_SCTurbo], where some block-wise spatially coupled ensembles of parallel concatenated codes (SC-PCCs) and serially concatenated codes (SC-SCCs) are introduced. Braided convolutional codes (BCCs) [@ZhangBCC] are another class of SC-TCs. The original BCC ensemble has an inherent spatially coupled structure with coupling memory $m=1$. Two extensions of BCCs to higher coupling memory, referred to as Type-I and Type-II BCCs, are proposed in [@Moloudi_SPCOM14]. The asymptotic behavior of BCCs, SC-PCCs and SC-SCCs is analyzed in [@JournalMLD] where the exact density evolution (DE) equations are derived for the binary erasure channel (BEC). Using DE, the thresholds of the BP decoder are computed for both uncoupled and coupled ensembles and compared with the corresponding MAP thresholds. The obtained numerical results demonstrate that threshold saturation occurs for all considered SC-TC ensembles if the coupling memory is large enough. Moreover, the occurrence of threshold saturation is proved analytically for SC-TCs over the BEC in [@JournalMLD; @Moloudi_ITW15]. While the uncoupled BCC ensemble suffers from a poor BP threshold, the BP threshold of the coupled ensemble improves significantly even for coupling memory $m=1$. Comparing the BP thresholds of SC-TCs in [@JournalMLD] indicates that for a given coupling memory, the Type-II BCC ensemble has the best BP threshold for almost all code rates. Motivated by the good asymptotic performance and the excellent BP thresholds of BCCs, our aim in this paper is analyzing the performance of BCCs in the finite block-length regime by means of the ensemble weight enumerator. As a first step, we derive the finite block-length ensemble weight enumerator function (WEF) of the uncoupled ensemble by considering uniform random permutations. Then we compute the union bound for uncoupled BCCs. 
The unexpectedly high error floor predicted by the bound suggests that the bound is affected by the bad performance of codes with poor distance properties. We therefore compute the union bound on the performance of the expurgated ensemble by excluding the codes with poor distance properties. The expurgated bound demonstrates very low error floors for moderate permutation sizes. We also obtain a bound on the minimum distance of the BCC ensemble which reveals that the minimum distance grows linearly with the permutation size. Finally, we prove that the codeword weights of the coupled ensemble are lower bounded by those of the uncoupled ensemble. Thus, the minimum distance of the coupled BCC ensemble is larger than the minimum distance of the uncoupled BCC ensemble. From this, we conclude that the estimated error floor of the uncoupled ensemble is also valid for the coupled ensemble. Compact Graph Representation of\ Turbo-Like Codes ================================ In this section, we describe three ensembles of turbo-like codes, namely PCCs, uncoupled BCCs, and coupled BCCs, using the compact graph representation introduced in [@JournalMLD]. Parallel Concatenated Codes --------------------------- Fig. \[CGPCCBCC\](a) shows the compact graph representation of a PCC ensemble with rate $R=\frac{N}{3N}=\frac{1}{3}$, where $N$ is the permutation size. These codes are built of two rate-$1/2$ recursive systematic convolutional encoders, referred to as the upper and lower component encoders. The corresponding trellises are denoted by $\text{T}^{\text{U}}$ and $\text{T}^{\text{L}}$, respectively. In the graph, factor nodes, represented by squares, correspond to trellises. All information and parity sequences are shown by black circles, called variable nodes. The information sequence, ${\ensuremath{\boldsymbol{u}}}$, is connected to factor node $\text{T}^{\text{U}}$ to produce the upper parity sequence ${\ensuremath{\boldsymbol{v}}}^{\text{U}}$. Similarly, a reordered copy of ${\ensuremath{\boldsymbol{u}}}$ is connected to $\text{T}^{\text{L}}$ to produce the lower parity sequence ${\ensuremath{\boldsymbol{v}}}^{\text{L}}$. In order to emphasize that a reordered copy of ${\ensuremath{\boldsymbol{u}}}$ is used in $\text{T}^{\text{L}}$, the permutation is depicted by a line that crosses the edge which connects ${\ensuremath{\boldsymbol{u}}}$ to $\text{T}^{\text{L}}$. Braided Convolutional Codes --------------------------- ### Uncoupled BCCs The original BCCs are inherently a class of SC-TCs [@Moloudi_SCTurbo; @ZhangBCC; @Moloudi_SPCOM14]. An uncoupled BCC ensemble can be obtained by tailbiting a BCC ensemble with coupling length $L=1$. The compact graph representation of this ensemble is shown in Fig. \[CGPCCBCC\](b). The BCCs of rate $R=\frac{1}{3}$ are built of two rate-$2/3$ recursive systematic convolutional encoders. The corresponding trellises are denoted by $\text{T}^{\text{U}}$ and $\text{T}^{\text{L}}$, and referred to as the upper and lower trellises, respectively. The information sequence ${\ensuremath{\boldsymbol{u}}}$ and a reordered version of the lower parity sequence ${\ensuremath{\boldsymbol{v}}}^{\text{L}}$ are connected to $\text{T}^{\text{U}}$ to produce the upper parity sequence ${\ensuremath{\boldsymbol{v}}}^{\text{U}}$. Likewise, a reordered version of ${\ensuremath{\boldsymbol{u}}}$ and a reordered version of ${\ensuremath{\boldsymbol{v}}}^{\text{U}}$ are connected to $\text{T}^{\text{L}}$ to produce ${\ensuremath{\boldsymbol{v}}}^{\text{L}}$.
![Compact graph representation of (a) PCCs (b) BCCs.](CG.pdf){width="0.5\linewidth"} \[CGPCCBCC\] ### Coupled BCCs, Type-I Fig. \[SCBCC\](a) shows the compact graph representation of the original BCC ensemble, which can be classified as Type-I BCC ensemble [@Moloudi_SPCOM14] with coupling memory $m=1$. As depicted in Fig. \[SCBCC\](a), at time $t$, the information sequence ${\ensuremath{\boldsymbol{u}}}_{t}$ and a reordered version of the lower parity sequence at time $t-1$, ${\ensuremath{\boldsymbol{v}}}_{t-1}^{\text{L}}$, are connected to $\text{T}_{t}^{\text{U}}$ to produce the current upper parity sequence ${\ensuremath{\boldsymbol{v}}}_{t}^{\text{U}}$. Likewise, a reordered version of ${\ensuremath{\boldsymbol{u}}}_t$ and ${\ensuremath{\boldsymbol{v}}}_{t-1}^{\text{U}}$ are connected to $\text{T}_{t}^{\text{L}}$ to produce ${\ensuremath{\boldsymbol{v}}}_{t}^{\text{L}}$. At time $t$, the inputs of the encoders come only from times $t$ and $t-1$, hence the coupling memory is $m=1$. ### Coupled BCCs, Type-II Fig. \[SCBCC\](b) shows the compact graph representation of Type-II BCCs with coupling memory $m=1$. As depicted in the figure, in addition to the coupling of the parity sequences, the information sequence is also coupled. At time $t$, the information sequence ${\ensuremath{\boldsymbol{u}}}_t$ is divided into two sequences ${\ensuremath{\boldsymbol{u}}}_{t,0}$ and ${\ensuremath{\boldsymbol{u}}}_{t,1}$. Likewise, a reordered copy of the information sequence, ${\ensuremath{\boldsymbol{\tilde{u}}}}_t$, is divided into two sequences $\tilde{{\ensuremath{\boldsymbol{u}}}}_{t,0}$ and $\tilde{{\ensuremath{\boldsymbol{u}}}}_{t,1}$. At time $t$, the first inputs of the upper and lower encoders are reordered versions of the sequences $({\ensuremath{\boldsymbol{u}}}_{t,0},{\ensuremath{\boldsymbol{u}}}_{t-1,1})$ and $(\tilde{{\ensuremath{\boldsymbol{u}}}}_{t,0},\tilde{{\ensuremath{\boldsymbol{u}}}}_{t-1,1})$, respectively. Input-Parity Weight Enumerator ============================== Input-Parity Weight Enumerator for Convolutional Codes ------------------------------------------------------ Consider a rate-$2/3$ recursive systematic convolutional encoder. The input-parity weight enumerator function (IP-WEF), $A(I_1,I_2,P)$, can be written as $$A(I_1,I_2,P)=\sum_{i_1} \sum_{i_2}\sum_{p} A_{i_1,i_2,p}I_1^{i_1}I_2^{i_2}P^p,$$ where $A_{i_1,i_2,p}$ is the number of codewords with weights $i_1$, $i_2$, and $p$ for the first input, the second input, and the parity sequence, respectively. To compute the IP-WEF, we can define a transition matrix between trellis sections denoted by ${\ensuremath{\boldsymbol{M}}}$. This matrix is a square matrix whose element in the $r$th row and the $c$th column, $[{\ensuremath{\boldsymbol{M}}}]_{r,c}$, corresponds to the trellis branch which starts from the $r$th state and ends up at the $c$th state. More precisely, $[{\ensuremath{\boldsymbol{M}}}]_{r,c}$ is a monomial $I_1^{i_1}I_2^{i_2}P^{p}$, where $i_1$, $i_2$, and $p$ can be zero or one depending on the branch weights. ![Compact graph representation of coupled BCCs with coupling memory $m=1$ (a) Type-I (b) Type-II.
[]{data-label="SCBCC"}](BCCSCT1T2.pdf){width="\linewidth"} For a rate-$2/3$ convolutional encoder with generator matrix $$\label{BCCG} \boldsymbol{G}= \left( \begin{array}{ccc}1&0&1/7\\0&1&5/7\end{array}\right),$$ in octal notation, the matrix ${\ensuremath{\boldsymbol{M}}}$ is $$\label{eq:AR23} \boldsymbol{M}(I_1,I_2,P)=\left( \begin{array}{cccc}1&I_2P&I_1I_2&I_1P\\I_1&I_1I_2P&I_2&P\\I_2P&1&I_1P&I_1I_2\\I_1I_2P&I_1&P&I_2 \end{array}\right). $$ Assume termination of the encoder after $N$ trellis sections. The IP-WEF can be obtained by computing ${\ensuremath{\boldsymbol{M}}}^{N}$. The element $[{\ensuremath{\boldsymbol{M}}}^{N}]_{1,1}$ of the resulting matrix is the corresponding IP-WEF. The WEF of the encoder is defined as $$A(W)=\sum_{w=1}^{N}A_{w}W^w=A(I_1,I_2,P)\vert_{I_1=I_2=P=W},$$ where $A_w$ is the number of codewords of weight $w$. In a similar way, we can obtain the matrix ${\ensuremath{\boldsymbol{M}}}$ for a rate-$1/2$ convolutional encoder. The IP-WEF of the encoder is $[{\ensuremath{\boldsymbol{M}}}^{N}]_{1,1}$ and is given by $$A(I,P)=\sum_{i} \sum_{p}A_{i,p} I^iP^p,$$ where $A_{i,p}$ is the number of codewords of input weight $i$ and parity weight $p$. Parallel Concatenated Codes --------------------------- For the PCC ensemble in Fig. \[CGPCCBCC\](a), the IP-WEFs of the upper and lower encoders are defined by $A^{\text{T}_{\text{U}}}(I,P)$ and $A^{\text{T}_{\text{L}}}(I,P)$, respectively. The IP-WEF of the overall encoder, $A^{\text{PCC}}(I,P)$, depends on the permutation that is used, but we can compute the average IP-WEF for the ensemble. The coefficients of the IP-WEF of the PCC ensemble [@UnveilingTC] can be written as $$\bar{A}_{i,p}^{\text{PCC}}=\frac{\sum_{p_1}A^{\text{T}_{\text{U}}}_{i,p_1}\cdot A^{\text{T}_{\text{L}}}_{i,p-p_1}}{\binom{N}{i}}.$$ Braided Convolutional Codes --------------------------- For the uncoupled BCC ensemble depicted in Fig. \[CGPCCBCC\](b), the IP-WEFs of the upper and lower encoders are denoted by $A^{\text{T}_{\text{U}}}(I_1,I_2,P)$ and $A^{\text{T}_{\text{L}}}(I_1,I_2,P)$, respectively. To derive the average WEF, we have to average over all possible combinations of permutations. The coefficients of the IP-WEF of the uncoupled BCC ensemble can be written as $$\label{IPWEBCC} \bar{A}_{i,p}^{\text{BCC}}=\frac{\sum_{p_1}A^{\text{T}_{\text{U}}}_{i,p_1,p-p_1}\cdot A^{\text{T}_{\text{L}}}_{i,p-p_1,p_1}}{\binom{N}{i}\binom{N}{p_1}\binom{N}{p-p_1}}.$$ [*[Remark:]{}*]{} It is possible to interpret the BCCs in Fig. \[CGPCCBCC\](b) as protograph-based generalized LDPC codes with trellis constraints. As a consequence, the IP-WEF of the ensemble can also be computed by the method presented in [@AbuSurrahGlobeCom07; @AbuSurra2011]. Performance Bounds for Braided Convolutional Codes ================================================== Bounds on the Error Probability ------------------------------- Consider the PCC and BCC ensembles in Fig. \[CGPCCBCC\] with permutation size $N$. For transmission over an additive white Gaussian noise (AWGN) channel, the bit error rate (BER) of the code is upper bounded by $$\label{BER} P_b\leq \sum_{i=1}^{N} \sum_{p=1}^{2N} \frac{i}{N} \bar{A}_{i,p} Q\left ( \sqrt{2(i+p)R\frac{E_b}{N_0}}\right),$$ and the frame error rate (FER) is upper bounded by $$\label{FER} P_F\leq \sum_{i=1}^{N} \sum_{w=1}^{2N} \bar{A}_{i,p} Q\left( \sqrt{2(i+p)R\frac{E_b}{N_0}}\right),$$ where $Q(.)$ is the $Q$-function and $\frac{E_b}{N_0}$ is the signal-to-noise ratio. 
![Simulation results and bound on performance of the PCC and BCC ensembles.[]{data-label="PCCBCC"}](PCCBCC512.pdf){width="0.9\linewidth"} The truncated union bounds on the BER and FER of the PCC ensemble in Fig. \[CGPCCBCC\](a) are shown in Fig. \[PCCBCC\]. We have considered identical component encoders with generator matrix ${\ensuremath{\boldsymbol{G}}}=(1,5/7)$ in octal notation and permutation size $N=512$. We also plot the bounds for the uncoupled BCC ensemble with identical component encoders with generator matrix given in \eqref{BCCG}. The bounds are truncated at a value greater than the corresponding Gilbert-Varshamov bound. ![Simulation results for BCC with uniformly random permutations and fixed permutations.[]{data-label="BCCFP"}](BCCFP.pdf){width="0.9\linewidth"} Simulation results for the PCC and the uncoupled BCC ensemble are also provided in Fig. \[PCCBCC\]. To simulate the average performance, we have randomly selected new permutations for each simulated block. The simulation results are in agreement with the bounds for both ensembles. It is interesting to see that the error floor for the BCC ensemble is quite high and the slope of the floor is even worse than that of the PCC ensemble. Fig. \[BCCFP\] shows simulation results for BCCs with randomly selected but fixed permutations. According to the figure, for the BCC with fixed permutations, the performance improves and no error floor is observed. For example, at $\frac{E_b}{N_0}=2.5\;\text{dB}$, the FER improves from $9.5\cdot 10^{-5}$ to $6.8\cdot 10^{-7}$. Comparing the simulation results for permutations selected uniformly at random with those for fixed permutations suggests that the bad performance of the BCC ensemble is caused by a fraction of codes with poor distance properties. In the next subsection, we demonstrate that the performance of BCCs improves significantly if we use expurgation. Bound on the Minimum Distance and Expurgated Union Bound -------------------------------------------------------- Using the average WEF, we can derive a bound on the minimum distance. We assume that all codes in the ensemble are selected with equal probability. Therefore, the total number of codewords of weight $w$ over all codes in the ensemble is $N_{c}\cdot \bar{A}_w$, where $N_c$ is the number of possible codes. As an example, $N_c$ is equal to $(N!)^3$ for the BCC ensemble. Assume that $$\label{EX} \sum^{\hat{d}-1}_{w=1}\bar{A}_w<1-\alpha,$$ for some integer value $\hat{d}>1$ and a given $\alpha$, $0\leq\alpha<1$. Then at least a fraction $\alpha$ of the codes cannot contain codewords of weight $w<\hat{d}$. If we exclude the remaining fraction of at most $1-\alpha$ of codes with poor distance properties, the minimum distance of the remaining codes is lower bounded by $d_{\min}\geq \hat{d}$. The best bound can be obtained by computing the largest $\hat{d}$ that satisfies the condition in \eqref{EX}. Considering $\bar{A}_w$ for different permutation sizes, this bound is shown in Fig. \[MinDis\] for $\alpha=0$, $\alpha=0.5$, and $\alpha=0.95$. According to the figure, the minimum distance of the BCC ensemble grows linearly with the permutation size. The bound corresponding to $\alpha=0.95$, which is obtained by excluding only $5\%$ of the codes, is very close to the existence bound for $\alpha=0$. This means that only a small fraction of the permutations leads to poor distance properties.
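The expurgation argument is equally simple to turn into a computation. The sketch below (ours, shown with toy numbers rather than the actual BCC coefficients) returns the largest $\hat{d}$ satisfying \eqref{EX} for a given $\alpha$.

```python
# Sketch of the minimum-distance bound of \eqref{EX}: given the average weight enumerator
# coefficients Abar[w], return the largest d_hat with sum_{w=1}^{d_hat-1} Abar[w] < 1 - alpha.
def distance_bound(Abar, alpha):
    partial, d_hat = 0.0, 1
    for w in range(1, max(Abar) + 1):
        partial += Abar.get(w, 0.0)
        if partial >= 1.0 - alpha:
            break
        d_hat = w + 1
    return d_hat

# Toy example (illustrative numbers only, not the BCC ensemble): with alpha = 0.5 at least
# half of the codes have minimum distance >= d_hat.
Abar_toy = {10: 1e-4, 12: 3e-3, 14: 0.02, 16: 0.9, 18: 4.0}
print(distance_bound(Abar_toy, alpha=0.5))   # -> 16
```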
![Bound on the minimum distance for the BCC ensemble.[]{data-label="MinDis"}](BCCMinDis.pdf){width="0.9\linewidth"} Excluding the codes with $d_{\min}<\hat{d}$, the BER of the expurgated ensemble is upper bounded by $$P_b\leq \frac{1}{\alpha}\mathop{\sum_{i=1}^{kN} \sum_{p=1}^{(n-k)N}}_{i+p\geq\hat{d}}\frac{i}{N} \bar{A}_{i,p} Q\left ( \sqrt{2(i+p)R\frac{E_b}{N_0}}\right).$$ ![Expurgated union bound on the performance of PCC and BCC.[]{data-label="EXbounds"}](BCCBounds.pdf){width="\linewidth"} For the BCC ensemble, the expurgated bounds on the BER are shown in Fig. \[EXbounds\] for $\alpha=0.5$ and permutation sizes $N=128, 256$, and $512$. The error floors estimated by the expurgated bounds are much steeper than those given by the unexpurgated bounds. The expurgated bounds on the BER are also shown in Fig. \[EXbounds\] for the PCC ensemble. These bounds demonstrate that expurgation does not improve the performance of the PCC ensemble significantly. Spatially Coupled\ Braided Convolutional Codes =========================== The performance of BCCs in the waterfall region can be significantly improved by spatial coupling. To demonstrate this, we provide simulation results for the uncoupled BCCs and Type-II BCCs for $N=1000$ and $5000$. For coupled BCCs, we consider coupling length $L=100$ and a sliding window decoder with window size $W=5$ [@BCCWinDec]. For all cases, the permutations are selected randomly but fixed. Simulation results are shown in Fig. \[SCBCCSim\]. According to the figure, for a given permutation size, Type-II BCCs perform better than uncoupled BCCs. As an example, for $N=5000$, the performance improves by almost $1.5\;\text{dB}$. We also compare the uncoupled and coupled BCCs with equal decoding latency. In this case, we consider $N=5000$ and $N=1000$ for the uncoupled and coupled BCCs, respectively. Considering equal decoding latency, the performance of the coupled Type-II BCCs is still significantly better than that of the uncoupled BCCs. The coupled BCCs have good performance in the waterfall region and their error floor is so low that it cannot be observed. It is possible to generalize equation \eqref{IPWEBCC} to the coupled BCC ensembles in Fig. \[SCBCC\], but the computational complexity is significantly increased. In the following theorem, we establish a connection between the WEF of the uncoupled BCC ensemble and that of the coupled ensemble. More specifically, we show that the weights of codewords cannot decrease by spatial coupling. A similar property is shown for LDPC codes in [@TruhachevDistBoundsTBLDPCCCs; @MitchellMinDisTrapSet2013; @PseudocodLDPC]. ![Simulation results for uncoupled and coupled BCCs with fixed permutations, $N=1000$ and $N=5000$.[]{data-label="SCBCCSim"}](SCBCCSIM.pdf){width="\linewidth"} Consider an uncoupled BCC $\tilde{\mathcal{C}}$ with permutations $\Pi$, $\Pi^{\text{U}}$ and $\Pi^{\text{L}}$. This code can be obtained by means of tailbiting an original (coupled) BCC $\mathcal{C}$ with time-invariant permutations $\Pi_t=\Pi$, $\Pi_t^{\text{U}}=\Pi^{\text{U}}$ and $\Pi_t^{\text{L}}=\Pi^{\text{L}}$. Let ${\ensuremath{\boldsymbol{v}}}=({\ensuremath{\boldsymbol{v}}}_1, \dots,{\ensuremath{\boldsymbol{v}}}_t, \dots, {\ensuremath{\boldsymbol{v}}}_L)$, ${\ensuremath{\boldsymbol{v}}}_t=({\ensuremath{\boldsymbol{u}}}_t,{\ensuremath{\boldsymbol{v}}}_t^{\text{U}},{\ensuremath{\boldsymbol{v}}}_t^{\text{L}})$, be an arbitrary code sequence of $\mathcal{C}$.
Then there exists a codeword $\tilde{{\ensuremath{\boldsymbol{v}}}} \in \tilde{\mathcal{C}}$ that satisfies $$w_{\text{H}}(\tilde{{\ensuremath{\boldsymbol{v}}}}) \leq w_{\text{H}}({{\ensuremath{\boldsymbol{v}}}}) \ ,$$ i.e., the coupling does either preserve or increase the Hamming weight of valid code sequences. A valid code sequence of $\mathcal{C}$ has to satisfy the local constraints $$\begin{aligned} \begin{pmatrix} {\ensuremath{\boldsymbol{u}}}_t & {\ensuremath{\boldsymbol{v}}}_{t-1}^{\text{L}} \cdot \Pi_t^{\text{U}} & {\ensuremath{\boldsymbol{v}}}_{t}^{\text{U}} \end{pmatrix} \cdot {\ensuremath{\boldsymbol{H}}}_{\text{U}}^T & = {\ensuremath{\boldsymbol{0}}} \label{eq:coupledUpper} \\ \begin{pmatrix} {\ensuremath{\boldsymbol{u}}}_t \cdot \Pi_t & {\ensuremath{\boldsymbol{v}}}_{t-1}^{\text{U}} \cdot \Pi_t^{\text{L}} & {\ensuremath{\boldsymbol{v}}}_{t}^{\text{L}} \end{pmatrix} \cdot {\ensuremath{\boldsymbol{H}}}_{\text{L}}^T & = {\ensuremath{\boldsymbol{0}}} \label{eq:coupledLower} \end{aligned}$$ for all $t=1,\dots,L$, where ${\ensuremath{\boldsymbol{H}}}_{\text{U}}$ and ${\ensuremath{\boldsymbol{H}}}_{\text{L}}$ are the parity-check matrices that represent the constraints imposed by the trellises of the upper and lower component encoders, respectively. Since these constraints are linear and time-invariant, it follows that any superposition of vectors ${\ensuremath{\boldsymbol{v}}}_t=({\ensuremath{\boldsymbol{u}}}_t,{\ensuremath{\boldsymbol{v}}}_t^{\text{U}},{\ensuremath{\boldsymbol{v}}}_t^{\text{L}})$ from different time instants $t \in \{1,\dots,L\}$ will also satisfy \eqref{eq:coupledUpper} and \eqref{eq:coupledLower}. In particular, if we let $$\tilde{{\ensuremath{\boldsymbol{u}}}}=\sum_{t=1}^{L} {\ensuremath{\boldsymbol{u}}}_t \ , \quad \tilde{{\ensuremath{\boldsymbol{v}}}}^{\text{L}}=\sum_{t=1}^{L} {\ensuremath{\boldsymbol{v}}}_t^{\text{L}} \ , \quad \tilde{{\ensuremath{\boldsymbol{v}}}}^{\text{U}}=\sum_{t=1}^{L} {\ensuremath{\boldsymbol{v}}}_t^{\text{U}},$$ then $$\begin{aligned} \begin{pmatrix} \tilde{{\ensuremath{\boldsymbol{u}}}} & \tilde{{\ensuremath{\boldsymbol{v}}}}^{\text{L}} \cdot \Pi^{\text{U}} & \tilde{{\ensuremath{\boldsymbol{v}}}}^{\text{U}} \end{pmatrix} \cdot {\ensuremath{\boldsymbol{H}}}_{\text{U}}^T & = {\ensuremath{\boldsymbol{0}}} \label{eq:uncoupledUpper} \\ \begin{pmatrix} \tilde{{\ensuremath{\boldsymbol{u}}}} \cdot \Pi & \tilde{{\ensuremath{\boldsymbol{v}}}}^{\text{U}} \cdot \Pi^{\text{L}} & \tilde{{\ensuremath{\boldsymbol{v}}}}^{\text{L}} \end{pmatrix} \cdot {\ensuremath{\boldsymbol{H}}}_{\text{L}}^T & = {\ensuremath{\boldsymbol{0}}} \label{eq:uncoupledLower} \enspace .\end{aligned}$$ Here we have implicitly made use of the fact that ${\ensuremath{\boldsymbol{v}}}_t={\ensuremath{\boldsymbol{0}}}$ for $t<1$ and $t>L$. But now it follows from \eqref{eq:uncoupledUpper} and \eqref{eq:uncoupledLower} that $\tilde{{\ensuremath{\boldsymbol{v}}}}=(\tilde{{\ensuremath{\boldsymbol{u}}}}, \tilde{{\ensuremath{\boldsymbol{v}}}}^{\text{U}}, \tilde{{\ensuremath{\boldsymbol{v}}}}^{\text{L}}) \in \tilde{\mathcal{C}}$, i.e., we obtain a codeword of the uncoupled code. If all non-zero symbols within ${\ensuremath{\boldsymbol{v}}}_t$ occur at different positions for $t=1,\dots,L$, then $w_{\text{H}}(\tilde{{\ensuremath{\boldsymbol{v}}}}) = w_{\text{H}}({{\ensuremath{\boldsymbol{v}}}})$. If, on the other hand, the support of non-zero symbols overlaps, the weight of $\tilde{{\ensuremath{\boldsymbol{v}}}}$ is reduced accordingly and $w_{\text{H}}(\tilde{{\ensuremath{\boldsymbol{v}}}}) < w_{\text{H}}({{\ensuremath{\boldsymbol{v}}}})$.
The minimum distance of the coupled BCC $\mathcal{C}$ is larger than or equal to the minimum distance of the uncoupled BCC $\tilde{\mathcal{C}}$, $$d_{\text{min}}(\mathcal{C}) \geq d_{\text{min}}(\tilde{\mathcal{C}}).$$ $\square$ From [*[Corollary 1]{}*]{}, we can conclude that the estimated floor for the uncoupled BCC ensemble is also valid for the coupled BCC ensemble. Conclusion ========== The finite block length analysis of BCCs performed in this paper, together with the DE analysis in [@JournalMLD], show that BCCs are a very promising class of codes. They provide both close-to-capacity thresholds and very low error floors even for moderate block lengths. However, we would like to remark that the bounds on the error floor in this paper assume a maximum likelihood decoder. In practice, the error floor of the BP decoder may be determined by absorbing sets. This can be observed, for example, for some ensembles of SC-LDPC codes [@Mitchell_AS2014]. Therefore, it would be interesting to analyze the absorbing sets of BCCs. [^1]: This work was supported in part by the Swedish Research Council (VR) under grant \#621-2013-5477.
--- abstract: 'This paper outlines the results of investigations into the effects of radiation damage in the mini-MALTA prototype. Measurements were carried out at Diamond Light Source using a micro-focus X-ray beam, which scanned across the surface of the device in 2 $\mathrm{\mu m}$ steps. This allowed the in-pixel photon response to be measured directly with high statistics. Three pixel design variations were considered: one with the standard continuous $\mathrm{n^-}$ layer layout and front-end, and extra deep p-well and $\mathrm{n^-}$ gap designs with a modified front-end. Five chips were measured: one unirradiated, one neutron irradiated, and three proton irradiated.' address: - 'University of Oxford (UK)' - 'University of Birmingham (UK)' - 'CERN (CH)' - 'University of Zagreb (HR)' - 'University of Glasgow (UK)' - 'University of Oslo (NO)' author: - 'M. Mironova' - 'K. Metodiev' - 'P. Allport' - 'I. Berdalovic' - 'D. Bortoletto' - 'C. Buttar' - 'R. Cardella' - 'V. Dao' - 'M. Dyndal' - 'P. Freeman' - 'L. Flores Sanz de Acedo' - 'L. Gonella' - 'T. Kugathasan' - 'H. Pernegger' - 'F. Piro' - 'R. Plackett' - 'P. Riedler' - 'A. Sharma' - 'E.J. Schioppa' - 'I. Shipsey' - 'C. Solans Sanchez' - 'W. Snoeys' - 'H. Wennlöf' - 'D. Weatherill' - 'D. Wood' - 'S. Worm' bibliography: - 'bibliography.bib' date: September 2019 title: 'Measurement of the relative response of TowerJazz Mini-MALTA CMOS prototypes at Diamond Light Source' --- Monolithic active pixel sensors; CMOS sensors; Radiation-hard detectors; Synchrotron light source Introduction ============ The mini-MALTA device is a depleted monolithic pixel sensor prototype made in TowerJazz 180 nm CIS technology, with 64x16 pixels and 36.4 x 36.4 $\mathrm{\mu m^2}$ pixel pitch. The device, laboratory and particle testbeam measurements are described in [@MiniMALTA2019]. It is based on the design of the MALTA chip, which was a full-sized CMOS demonstrator [@TowerJazz2; @TowerJazzATLAS]. Mini-MALTA has eight sectors which differ in their front-end design, reset mechanism and process. The standard design, similar to the one of the MALTA chip, is referred to as the continuous $\mathrm{n^-}$ layer design. Additionally, there are two process modifications which have been introduced in the mini-MALTA design and are shown in Figure \[fig:processes\]. The aim of these two modifications in the substrate is to improve charge collection at the pixel edges. The first variant has an extra deep implant of p-type silicon, which should shape the electric field such that the produced charge carriers are steered more directly towards the collection electrode in the centre of the pixel. The second design has a gap in the $\mathrm{n^-}$ layer, which is expected to have a similar effect on the electric field. Both of the designs have been shown to perform well in TCAD simulations [@TCAD]. Each of the designs is implemented with a standard front-end and with a modified front-end featuring enlarged transistors to reduce random telegraph signal (RTS) noise. For the continuous $\mathrm{n^-}$ layer design the sector with the standard front-end was tested, and for the $\mathrm{n^-} $ gap and extra deep p-well designs, the sectors with the enlarged transistors were measured. ![Process modification in mini-MALTA. 
Figure from [@miniMALTA].[]{data-label="fig:processes"}](figures/processes.png){width="100.00000%"} The main goal of the testbeam at Diamond Light Source was to study the performance of the two new designs in comparison to the standard design for different levels of irradiation. One unirradiated chip was tested (W2R11). One sample (W2R1) was irradiated in Ljubljana with 1 MeV neutrons up to 1e15 $\mathrm{n_{eq}/cm^2}$. The remaining three chips were irradiated with 23 MeV protons at the MC40 cyclotron at Birmingham, to fluences between 7e13 and 5.6e14 $\mathrm{n_{eq}/cm^2}$ [@Birmingham]. Here the delivered fluence was estimated through nickel dosimetry and the total ionising dose (TID) was calculated using the Bethe model for 23 MeV protons. Table \[table:irradiations\] summarises the properties and irradiation doses of the different chips. The devices were tested three weeks after irradiation with noise and threshold scans, as well as data acquisition with a Sr-90 source. One of the samples (W5R9) was annealed at low temperatures to obtain reasonable results for these tests, as its noise was too high after irradiation. [|X|X|X|X|X|X|]{} Sample & Epitaxial Thickness $(\mu\mathrm{m})$ & Irradiation & Fluence $(\mathrm{n_{eq}/cm}^2)$ & TID (MRad) & annealing\ \[0.5ex\] W2R11 & 30 & none & 0 & 0 & none\ \[1ex\] W2R9 & 30 & 23 MeV protons & 5.0e14 & 66 & none\ \[1ex\] W2R1 & 30 & 1 MeV neutrons & 1e15 & & none\ \[1ex\] W5R9 & 25 & 23 MeV protons & 5.6e14 & 74 & 120 min at 35 $^\circ$C\ \[1ex\] W4R9 & 25 & 23 MeV protons & 7e13 & 9 & none\ \[1ex\] Measurements and techniques =========================== The measurements were performed at the B16 beamline at Diamond Light Source. The setup in the beamline is shown in Figure \[fig:setup\]. The mini-MALTA chip was mounted on a PCB, which was attached to the cooling system. The cooling system was developed in Oxford and is shown in Figure \[fig:cooling\]. It consisted of a water-cooled metal chuck, which reached a temperature of $\mathrm{10\;\degree C}$, coupled to the warm side of a Peltier element, which in turn was connected to a cold adapter plate onto which the mini-MALTA PCB was attached. This cooling system consistently kept the sensor at $\mathrm{-13\;\degree C}$, with the temperature being monitored with a temperature sensor. The X-ray beam was focused using a Compound Reflective Lens (CRL) X-ray mirror arrangement to a beam spot of 2 $\mathrm{\mu m}$ (FWHM), as measured using a knife edge technique. The mini-MALTA chip was mounted on a motion stage, which was placed at the location of the focal point of the X-ray beam. The device was read out using a Kintex KC705 FPGA board. The choice of beam energy was made based on an estimate of how much energy a minimum ionising particle (MIP) would deposit in the depletion region of mini-MALTA. The stopping power for a MIP in silicon is [@PDG]: $$\label{eq:mip} \Big\langle \frac{dE}{dx}\Big\rangle=3.88\;\mathrm{\frac{MeV}{cm}}.$$ Mini-MALTA is expected to have a depletion depth of between 20 and 25 $\mathrm{\mu m}$, depending on chip properties, biasing voltages and levels of irradiation. This means that a MIP would deposit between 7.7 and 9.7 keV in the device. X-rays interact primarily through the photoelectric effect and deposit all of their energy in one location. Based on this, a beam energy of 8 keV was chosen. The beam was attenuated by 0.5 mm of aluminium, in order to decrease the hit rate to a level that can easily be read out by the mini-MALTA.
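As a quick numerical cross-check of the estimate above, the quoted stopping power and depletion depths reproduce the stated energy-deposit range. The short Python sketch below is purely illustrative and assumes only the numbers given in the text; it is not part of any original analysis code.

```python
# Illustrative cross-check of the MIP energy-deposit estimate quoted above.
# Assumes only the values stated in the text: <dE/dx> = 3.88 MeV/cm and a
# depletion depth of 20-25 um.
DEDX_MEV_PER_CM = 3.88            # mean stopping power for a MIP in silicon
DEPTHS_UM = (20.0, 25.0)          # assumed depletion depths in micrometres

for depth_um in DEPTHS_UM:
    deposit_kev = DEDX_MEV_PER_CM * (depth_um * 1e-4) * 1e3  # MeV/cm * cm -> keV
    print(f"depletion depth {depth_um:.0f} um -> ~{deposit_kev:.1f} keV deposited")
# Gives ~7.8 keV and ~9.7 keV, matching the 7.7-9.7 keV range quoted above and
# motivating the choice of an 8 keV photon beam.
```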
During the measurements, the detector was moved in the two directions orthogonal to the beam in 2 $\mathrm{\mu m}$ steps with a precision of 400 nm. Data was acquired for 1 s at each position to reach high statistics. The scans covered a total area of 100x100 $\mathrm{\mu m^2}$, which fully contained at least four pixels. The measurements on the different devices were done at different thresholds, which are summarised in Table \[table:thresholds\]. The thresholds were chosen based on the amount of noise in the chip, set with the discriminator bias current (“IDB”) and measured with a threshold scan on each device. The value of the threshold does not influence the results of an X-ray testbeam as much as it does for a MIP testbeam, because the photon always deposits all of its energy when it interacts. For an 8 keV photon one would expect 2200 $\mathrm{e^-}$ to be produced, which is significantly higher than the thresholds noted in Table \[table:thresholds\], even if the charge is split between multiple pixels. [|X|X|X|X|X|X|]{} Sample & Fluence $\mathrm{n_{eq}/cm^2}$ & TID (MRad)& MALTA threshold (e) & extra deep p-well threshold (e) & $\mathrm{n^-} $ gap threshold (e)\ \[0.5ex\] W2R11 & 0 & 0 & 368 & 197 & 190\ \[1ex\] W2R9 & 5e14 (p) & 66 & 533 & 303 & 274\ W2R1 & 1e15 (n) & & 330 & 181 & 187\ W5R9 & 5e14 (p) & 70 & 681 & 379 & 374\ \[1ex\] W4R9 & 7e13 (p) & 9 & 437 & 234 & 232\ Results ======= Single Pixel Response & Analysis -------------------------------- The number of hits in a particular pixel was considered at each stage position. This was then stored in a 2D map, an example of which is shown in Figure \[fig:principle\]. The pixel shape is clearly visible. Next, the centre of each pixel was identified by applying a Gaussian filter to the image and finding the location with the maximum value. Then, the average number of hits was calculated in a small region around the pixel centre, which is outlined with a green square. The number of hits in each pixel was then normalised to that average. This removes pixel to pixel variations. Taking the average of a larger region around the pixel centre also accounts for variations in the number of hits per step of the motion stage. Subsequently, the average normalised number of hits was calculated within a 36.4 x 36.4 $\mathrm{\mu m^2}$ square, which is shown in red in Figure \[fig:principle\] and corresponds to the theoretical size of the pixel. This average is defined as the relative photon pixel response. The uncertainty on the pixel response is defined as the standard deviation of the number of hits in the normalisation area, i.e. the green square. The calculation was repeated for each pixel in the scan and the pixel responses are averaged across the pixels which are fully visible in the scan. The single pixel maps are then added together, while removing everything from the pixel maps which is below 7% normalised response, as those hits are likely to stem from the X-ray halo or noise in the detector. The pixel response calculations are done before this cut, so the normalisation and calculations are not affected by its choice; rather, it is implemented to make sure that the summed pixel response maps only show hits from the beam and not the X-ray halo. An example of the summed pixel response map is shown in Figure \[fig:fullmap\]. At the pixel edges charge can be picked up by two pixels at the same time, which leads to a response above 1. This charge sharing is discussed in more detail in section \[subsec:chargesharing\].
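The normalisation procedure described above can be summarised compactly. The following Python sketch is a simplified illustration rather than the analysis code used for these results; it assumes that the hit counts of a single pixel have already been filled into a 2D array indexed by stage position, and the window sizes used below are indicative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pixel_response(hit_map, step_um=2.0, pitch_um=36.4, norm_half_um=6.0):
    """Simplified sketch of the in-pixel response calculation described above.

    hit_map: 2D array of hit counts of one pixel versus stage position
    (2 um steps). Returns the relative photon pixel response and its
    uncertainty, following the normalisation described in the text.
    """
    # Locate the pixel centre: Gaussian filter, then position of the maximum.
    smooth = gaussian_filter(hit_map.astype(float), sigma=2)
    cy, cx = np.unravel_index(np.argmax(smooth), smooth.shape)

    # Average the hits in a small region around the centre (the "green square")
    # and normalise the map to it, removing pixel-to-pixel variations.
    r = int(round(norm_half_um / step_um))
    centre_region = hit_map[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(float)
    norm_map = hit_map / centre_region.mean()

    # Average the normalised hits within the nominal pixel area (the "red
    # square", 36.4 x 36.4 um^2): this is the relative photon pixel response.
    half = int(round(0.5 * pitch_um / step_um))
    pixel_area = norm_map[cy - half:cy + half + 1, cx - half:cx + half + 1]
    response = pixel_area.mean()

    # Uncertainty: spread of the normalised hits in the normalisation region.
    uncertainty = (centre_region / centre_region.mean()).std()
    return response, uncertainty
```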
An interesting feature of the mini-MALTA is that the pixel response is asymmetric and extended in one direction. There is also a double column structure visible, i.e. two adjacent columns of pixels appear mirrored. This is in agreement with what was observed in previous testbeams of the MALTA chip. The reason for the asymmetric pixel shape is the shape of the deep p-well cut-out. This is the region between the collection electrode and the p-well which does not contain p-type silicon. The shape of the deep p-well cut-out is shown as an overlay in Figure \[fig:overlay\], with the n-type collection electrode as the pink dot in the centre. In regions with less p-well coverage, i.e. a larger gap, one can observe a higher pixel response. This is caused by the fact that a larger gap causes a larger potential difference with respect to the collection electrode. The layouts of the p-well are mirrored in double-columns, which explains the double-column structure of the pixels. ![Overlay of the scan of the continuous $\mathrm{n^-}$ sector in W2R1 (irradiated to 1e15 $\mathrm{n_{eq}/cm^2}$) and the shape of the deep p-well cutout and collection electrodes. The cutout of the deep p-well influences the charge collection shape.[]{data-label="fig:overlay"}](figures/overlay.png){width="50.00000%"} Pixel response as a function of dose ------------------------------------ The results of the normalised pixel response analysis for the different samples are shown in Table \[table:efficiencies\] and the response maps are shown in Figures \[fig:W2R11\_MALTA\] - \[fig:W2R1\_ngap\]. In order to extract a dependence on levels of irradiation, the measurements were performed at the same applied biasing voltage. The most accurate comparison between different designs and radiation levels is provided by comparing the samples W2R11, W2R9, and W2R1. These samples are from the same wafer (W2), which means they must have the same resistivity. The continuous $\mathrm{n^-}$ sectors for the unirradiated (W2R11) and neutron irradiated samples (W2R1) are shown in Figures \[fig:W2R11\_MALTA\] and \[fig:W2R1\_MALTA\] respectively. For the unirradiated sample, almost all of the pixel is fully responsive, except for some of the corners. For the neutron irradiated sample the pixel response decreases significantly around the corners and pixel edges. Overall the average pixel response in the continuous $\mathrm{n^-}$ sector decreases with neutron equivalent radiation dose, by more than 10% between W2R11 and W2R1. The photon response which is found here is not directly comparable to the efficiency which is determined at proton testbeams, because of differences in the testbeam method and the different energy deposition mechanism for protons and photons. However, the proton efficiency determined at a previous MALTA testbeam at the SPS [@MALTA_OLD] shows a similar trend with irradiation. The same behaviour was also observed at the mini-MALTA testbeam performed at ELSA [@MiniMALTA2019]. Compared to the continuous $\mathrm{n^-}$ design, the extra deep p-well and $\mathrm{n^-} $ gap designs perform better. For the unirradiated sample there is only a small improvement numerically, but in the pixel response maps (Figures \[fig:W2R11\_pwell\] and \[fig:W2R11\_ngap\]) there is no longer a loss in response at the pixel edges. Moreover, for the irradiated samples, there is almost no decrease in relative response observed with radiation dose.
Most of the pixel remains fully responsive, which means the new designs significantly improve the detector performance after irradiation. For the other samples (W5R9 and W4R9) the new designs show a similar improvement of the pixel response, compared to the MALTA design. [|X|X|X|X|X|X|]{} Sample & Fluence $\mathrm{n_{eq}/cm^2}$ & TID (MRad)& continuous $\mathrm{n^-}$ response (%) & extra deep p-well response (%) & $\mathrm{n^-} $ gap response (%)\ \[0.5ex\] W2R11 & 0 & 0 & $88.3\pm2.4$ & $90.5\pm2.2$ & $90.9\pm2.2$\ \[1ex\] W2R9 & 5e14 (p) & 66 & $81.2\pm2.8$ & $87.6\pm4.2$ & $88.4\pm3.8$\ W2R1 & 1e15 (n) & & $75.4\pm3.8$ & $90.5\pm2.8$ & $89.0\pm3.1$\ W5R9 & 5e14 (p) & 70 & $80.4\pm2.8$ & $89.0\pm2.5$ & $89.3\pm2.2$\ \[1ex\] W4R9 & 7e13 (p) & 9 & $78.7\pm2.6$ & $89.8\pm2.3$ & $89.9\pm2.3$\ Pixel response as a function of biasing voltage ----------------------------------------------- An additional measurement was performed to compare the pixel response at different biasing voltages. Each of the sectors was scanned once at -6 V and -20 V. The resultant response maps for the continuous $\mathrm{n^-}$ sector at each voltage are shown in Figure \[fig:20V\]. The average pixel response decreases from 76.7% to 72.2% with increasing bias voltage, which means that the pixel edges become less responsive with respect to the pixel centre and appear sharper. The same effect is seen for the new designs, as shown for the $\mathrm{n^-}$ gap design in Figure \[fig:biasingNgap\], and the response results summarised in Table \[table:bias20\]. Simulations suggest that there are two effects caused by a higher applied voltage: a longer drift path and faster drift along the sensor depth [@TCAD]. In the pixel center the faster drift causes better charge collection. At the pixel edges however, the charge deposited below the deep p-well implant is pushed towards the deep p-well quickly and has to travel almost parallel to the surface. This longer drift path leads to a higher probability for the charge to be trapped and not to reach the collection electrode. And as the drift velocity perpendicular to the surface does not increase with higher voltage, more charge is lost and the normalised response around the pixel edges decreases. [|X|X|X|X|X|X|]{} Sample & Bias (V) & continuous $\mathrm{n^-}$ & extra deep p-well & $\mathrm{n^-}$ gap\ \[0.5ex\] W2R1 & -6 & $76.7\pm3.8$ & $91.1\pm3.0$ & $90.0\pm3.1$\ & -20 & $72.2\pm3.3$ & $86.6\pm3.9$ & $86.4\pm2.9$\ Charge Sharing {#subsec:chargesharing} -------------- Charge sharing occurs when a particle hit occurs in the region between two pixels and the charge is collected and registered as a signal by both of those pixels. To quantify the amount of charge sharing, the pixel response outside of the nominal pixel area was considered. First, the sum of the normalised number of hits outside of the pixel area, i.e. the red square in Figure \[fig:principle\], was calculated. Then the total normalized number of hits was found for each pixel and the ratio of the two was defined as the charge sharing percentage. Additionally, the extent of the charge sharing region was found. This was defined as the distance from the nominal pixel edge where one could still see hits in the pixel. The average of that distance was found for each pixel edge and averaged among all fully visible pixels. The uncertainty on the charge sharing extents is defined as the standard deviation of the averaged values, and as the pixel shape is asymmetric it is thus relatively large. 
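For clarity, the two charge-sharing quantities defined above can be written down schematically. The sketch below reuses the normalised single-pixel map introduced in the previous section; the function names, the boolean pixel mask and the treatment of a single 1D profile are illustrative assumptions, and the averaging over the four edges and over all fully visible pixels is omitted.

```python
import numpy as np

def charge_sharing_percentage(norm_map, pixel_mask):
    """Sketch of the charge-sharing percentage defined above: the sum of the
    normalised hits outside the nominal pixel area divided by the total
    normalised hits of that pixel, expressed in percent.

    norm_map:   normalised single-pixel response map on the stage-position grid.
    pixel_mask: boolean array, True inside the nominal 36.4 x 36.4 um^2 area.
    """
    outside = norm_map[~pixel_mask].sum()
    return 100.0 * outside / norm_map.sum()

def charge_sharing_extent_1d(profile, edge_index, step_um=2.0, threshold=0.07):
    """Sketch of the charge-sharing extent along one direction: the distance
    beyond the nominal pixel edge (array index `edge_index`) over which the
    normalised response stays above the 7% cut."""
    n_steps = 0
    for value in profile[edge_index + 1:]:
        if value <= threshold:
            break
        n_steps += 1
    return n_steps * step_um  # extent in micrometres beyond the pixel edge
```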
The results for the charge sharing analysis are shown in Tables \[table:chargepercentage\] and \[table:chargeextent\], summarising the charge sharing percentages and extents respectively. For the percentages no error on the individual values was calculated, but due to the observed variation in hits from step to step, a systematic uncertainty of around 2% is expected. Additionally, the difference between the thresholds in different sectors introduces a 0.3% uncertainty on the response calculated at the pixel edges, and thus a small systematic uncertainty on the charge sharing. Both results are consistent with each other, i.e. when the percentage decreases so does the average extent of the region. For the samples from the same wafer, the charge sharing decreases with increased irradiation in the continuous $\mathrm{n^-}$ sector, which is in agreement with what can be seen on the pixel response maps, i.e. the response loss around the pixel edges. For the extra deep p-well and $\mathrm{n^-}$ gap sectors the charge sharing does not decrease and there might even be a marginal increase in charge sharing extents after irradiation. [|X|X|X|X|X|X|]{} Sample & Fluence $\mathrm{n_{eq}/cm^2}$ & TID (Mrad) & continuous $\mathrm{n^-}$ charge sharing (%) & extra deep p-well charge sharing (%) & $\mathrm{n^-}$ gap charge sharing (%)\ \[0.5ex\] W2R11 & 0 & 0 & $15.8$ & $20.3$ & $16.9$\ \[1ex\] W2R9 & 5e14 (p) & 66 & $7.7$ & $17.9$ & $21.2$\ W2R1 & 1e15 (n) & & $7.1$ & $23.6$ & $21.3$\ W5R9 & 5e14 (p) & 70 & $7.3$ & $14.1$ & $15.4$\ \[1ex\] W4R9 & 7e13 (p) & 9 & $8.4$ & $14.9$ & $15.7$\ [|X|X|X|X|X|X|]{} Sample & Fluence $\mathrm{n_{eq}/cm^2}$ & TID (Mrad) & continuous $\mathrm{n^-}$ charge sharing ($\mathrm{\mu m}$) & extra deep p-well charge sharing ($\mathrm{\mu m}$) & $\mathrm{n^-}$ gap charge sharing ($\mathrm{\mu m}$)\ \[0.5ex\] W2R11 & 0 & 0 & $4.5\pm1.5$ & $5.4\pm1.3$ & $4.5\pm1.3$\ \[1ex\] W2R9 & 5e14 (p) & 66 & $2.6\pm1.8$ & $5.4\pm2.2$ & $5.5\pm2.0$\ W2R1 & 1e15 (n) & & $3.2\pm2.0$ & $6.1\pm2.0$ & $6.5\pm2.6$\ W5R9 & 5e14 (p) & 70 & $2.4\pm2.0$ & $4.5\pm1.8$ & $4.5\pm2.7$\ \[1ex\] W4R9 & 7e13 (p) & 9 & $2.4\pm1.9$ & $4.1\pm1.9$ & $4.1\pm1.0$\ As the final step of the charge sharing analysis, the asymmetry of the charge sharing was quantified. The asymmetric pixel shape and double column structure of mini-MALTA lead to an asymmetry in the charge sharing. In particular, this can be seen in Figures \[fig:W2R11\_pwell\] and \[fig:W2R11\_ngap\], where only every second column has charge sharing regions with values above 2. To quantify this asymmetry, the normalised pixel response outside of the nominal pixel area was considered. This is shown as a map in Figure \[fig:asymmetry\]. The average response due to charge sharing was calculated within 10 $\mathrm{\mu m}$ wide columns around the pixel edges, shown in grey. Clearly the central bin in Figure \[fig:asymmetry\] has a higher response. The asymmetry of the charge sharing is then defined as the ratio of the response of the double-columns with these higher values and those with lower values. The error on the asymmetry is calculated from the variation of the response in the different columns of the same type (i.e. high or low charge sharing) and the normalisation error from the previous pixel response calculation. The results for the charge sharing asymmetry are shown in Table \[table:chargeasymmetry\]. The asymmetry is smaller for the extra deep p-well and $\mathrm{n^-}$ gap designs compared to the continuous $\mathrm{n^-}$ layer one.
There is a decrease of charge sharing asymmetry with irradiation, which could be explained by radiation damage broadening the charge sharing regions, so that this broadening becomes more important than the asymmetric pixel shape. ![Illustration of the charge sharing asymmetry analysis. The response due to charge sharing is calculated in the grey columns and their ratio is defined as the charge sharing asymmetry.[]{data-label="fig:asymmetry"}](figures/charge_plot.png){width="50.00000%"} [|X|X|X|X|X|X|]{} Sample & Fluence $\mathrm{n_{eq}/cm^2}$ & TID (Mrad) & continuous $\mathrm{n^-}$ & extra deep p-well & $\mathrm{n^-}$ gap\ \[0.5ex\] W2R11 & 0 & 0 & $1.80\pm0.11$ & $1.49\pm0.07$ & $1.43\pm0.07$\ \[1ex\] W2R9 & 5e14 (p) & 66 & $1.49\pm0.12$ & $1.11\pm0.10$ & $1.17\pm0.09$\ W2R1 & 1e15 (n) & & $1.16\pm0.10$ & $1.07\pm0.06$ & $1.15\pm0.09$\ W5R9 & 5e14 (p) & 70 & $1.39\pm0.09$ & $1.02\pm0.06$ & $1.12\pm0.07$\ \[1ex\] W4R9 & 7e13 (p) & 9 & $1.22\pm0.08$ & $1.02\pm0.05$ & $1.09\pm0.07$\ Clustering Analysis ------------------- The photon response maps show the superposition of two different effects. They firstly contain information about the shape and depth of the depletion region and the charge collection, which explain the response loss in the corners of the continuous $\mathrm{n^-}$ sector. Secondly, they provide information about the cluster sizes in the chip, i.e. how many pixels see a photon deposited at a particular location. This is then related to the charge sharing, which was discussed in section \[subsec:chargesharing\]. To separate these two effects a cluster analysis was performed on the data. The mini-MALTA chip records events in 25 ns windows. In the clustering analysis each of these events was considered individually and the cluster size was found, i.e. the number of neighbouring pixels which show hits. Then for each stage position the average cluster size was calculated. This was done only for the visible pixels in a scan, to reduce hits from the X-ray halo. There was also a cut applied on the total number of hits at each stage position: only the pixels with at least 1% of the maximum number of hits were considered in the calculation of the average cluster size. The resulting average cluster size was plotted as a function of stage position, with the results shown in Figure \[fig:ClusterResults\]. A two-dimensional map is shown, as well as a profile of the cluster size at a particular x position. As expected, the cluster size is 1 in the centre of each pixel and then increases at the edges and corners. For the extra deep p-well and $\mathrm{n^-}$ gap sectors the resulting clustering maps look very similar to the pixel response maps, as the latter are dominated by charge sharing for these designs. The increase in charge sharing extents with irradiation can also be seen in the clustering maps. For the continuous $\mathrm{n^-}$ sector, average cluster sizes above 1 are also observed at the pixel edges, though the values are lower compared to the extra deep p-well and $\mathrm{n^-}$ gap sectors. This effect is not prominent in the photon response maps, as these are dominated by the loss in pixel response at the pixel edges due to depletion depth and charge collection. This analysis shows that there is also charge sharing present in the continuous $\mathrm{n^-}$ sector. The width of the charge sharing regions also increases with irradiation, but the average cluster size decreases.
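The clustering procedure described above amounts to grouping neighbouring fired pixels within each 25 ns event and averaging the group size at every stage position. The Python sketch below illustrates this using connected-component labelling; the use of scipy and the data layout are assumptions made for the illustration, not the original analysis implementation, and the 1% cut on the total number of hits per pixel is taken to have been applied upstream.

```python
import numpy as np
from scipy.ndimage import label

def mean_cluster_size(event_hit_maps):
    """Sketch of the average cluster size at one stage position.

    event_hit_maps: list of 2D boolean arrays, one per 25 ns readout window,
    True where a (visible, preselected) pixel fired in that event.
    """
    sizes = []
    for hits in event_hit_maps:
        labels, n_clusters = label(hits)   # group neighbouring fired pixels
        for i in range(1, n_clusters + 1):
            sizes.append(int((labels == i).sum()))  # number of pixels in cluster
    return float(np.mean(sizes)) if sizes else 0.0
```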
Conclusions =========== The mini-MALTA prototype was tested using an 8 keV X-ray beam at Diamond Light Source. A beam spot with a size of 2 $\mathrm{\mu m}$ was scanned across the surface of the chip in 2 $\mathrm{\mu m}$ steps. From the number of hits in each pixel at each stage position it was possible to determine the in-pixel photon response. Devices with different levels of irradiation were compared and a decrease in pixel response with irradiation was observed for the standard continuous $\mathrm{n^-}$ layer design. The two new mini-MALTA designs with an extra deep p-well implant or a gap in the $\mathrm{n^-}$ layer performed better than the standard design and showed almost no decrease in pixel response with irradiation. The dependence of pixel response on substrate voltage was studied and a decrease in pixel response at high voltages was found. The amount of charge sharing was quantified and found to be consistent with the response results and theoretical expectations. Acknowledgements ================ This project has received funding from the European Union’s Horizon 2020 Research and Innovation programme under Grant Agreement no. 654168 (IJS, Ljubljana, Slovenia). This research project has been supported by the Marie Sklodowska-Curie Innovative Training Network of the European Commission Horizon 2020 Programme under contract number 675587 “STREAM”. We thank Dr. Ben Phoenix, Prof. David Parker, Amelia Hunter, and the operators at the MC40 cyclotron in Birmingham (UK). We acknowledge Diamond Light Source for time on Beamline B16 under Proposal MM2206-1.
--- abstract: 'In this paper we present the results of an exploratory study examining the potential of voice assistants (VA) for some groups of older adults in the context of Smart Home Technology (SHT). To research the aspect of older adults’ interaction with voice user interfaces (VUI) we organized two workshops and gathered insights concerning possible benefits and barriers to the use of VA combined with SHT by older adults. Apart from evaluating the participants’ interaction with the devices during the two workshops we also discuss some improvements to the VA interaction paradigm.' author: - 'Jarosław Kowalski' - 'Anna Jaskulska' - 'Kinga Skorupska' - 'Katarzyna Abramczuk' - 'Cezary Biele' - 'Wiesław Kopeć' - 'Krzysztof Marasek' bibliography: - 'bibliography.bib' title: 'Older Adults and Voice Interaction: A Pilot Study with Google Home' --- CCS concepts: Human-centered computing (Natural language interfaces; Interaction design process and methods; Accessibility technologies; Auditory feedback; Empirical studies in interaction design); Social and professional topics (Seniors); Applied computing (Consumer products); Security and privacy (Human and societal aspects of security and privacy). ![image](images/diagram){width="170px"} Introduction ============ Advancements in AI and ML which drive improvements in NLP are making Voice User Interfaces (VUI) increasingly user-friendly and accessible to users with little prior ICT training. This may prove exceptionally useful for some groups of older adults, as it can empower them to actively and comfortably use ICT-enabled solutions on their own. To this end, it is key to discover potential barriers and ways to build on the strengths of this mode of interaction.
Such insights could further inform the design of commercial devices, such as SHT with VUIs, to take advantage of the Silver Economy. To research older adults’ interaction with Voice User Interfaces combined with Smart Home Technology we conducted two focus-group studies, following the same scenario, the results of which are presented in this abstract. First, we discuss the state of the art. Second, we describe the setup of the study, the characteristics of our participants and the outline of our scenario. Third, we present the results of our study showcasing the functions our participants discovered and expected. This section is followed by our insights into the benefits and barriers of this technology. Finally, we present our conclusions and discuss potential future work in this field. Related work ============ Multiple studies on HCI and aging, instead of exploring the aging process and opportunities, focus on health, socialization and technology stereotypes [@articleageoldproblem]. Positive studies are still rare, and one shared feature is that they use older adults’ strengths, such as their insights from life experience [@Kopeć2018] or language proficiency [@skorupska2018smarttv] which increase with age. Moreover, older adults realize the benefits that come with an increased ICT proficiency [@aula_learning_2004] especially if it helps them achieve personal goals [@djoub_ict_2013], such as communicating with their loved ones, managing finance, engaging in e-commerce or pursuing their interests [@boulton2007ageing; @naumanen_practices_2008; @von2018influence]. For this reason, some older adults are enthusiastic about joining LivingLab ICT initiatives [@kopec2017living] despite some barriers to their involvement in ICT [@sandhu2013ict; @kopec2017spiral]. Although there are previous studies on VA usability [@Pyae:2018:IUU:3236112.3236130], also with older adults in the context of SH [@Vacher:2015:ECV:2785580.2738047], it is worth investigating this problem further in multiple cultural contexts in order to suggest how to make Voice Interfaces more accessible [@Corbett:2016:ISA:2935334.2935386] in general and to facilitate different groups of older adults’ interaction with ICT-enhanced solutions. Methods ======= Voice interaction has been implemented in mobile phones and tablets for several years now, but only recently have standalone devices like Google Home, Amazon Alexa or Apple HomePod become available. With their help it is possible to ask queries, perform tasks and control a number of Smart Home devices - and we addressed all of these aspects in our study setup. [|X|X|]{} **Type** & **Specs**\ Speaker with VA & Google Home\ Smartphone & Xiaomi Mi Max 2 running Android 7.1.1\ TV set & 42" Samsung LCD\ Lights & Three Philips Hue lightbulbs mounted in desk lamps, connected to a Philips Hue bridge\ WiFi Relay + Fan & Sonoff Basic WiFi smart switch connected to a 40 cm floor fan\ We used a system consisting of a speaker with a voice assistant agent, a smartphone, and additional peripherals such as a TV set, an Android TV Set-Top Box, light controllers, Wi-Fi relays and others.
For the purpose of the study all the necessary online accounts were created and preconnected in the Google Home application. The used equipment is listed in Table \[equipment\], and the connections are depicted in Figure \[fig:connections\]. We invited seven older adults: three female participants and four male participants from our LivingLabs to take part in our study. They were all retired, but remained active in various ways, e.g. as citizens engaged in their local area. All of them lived in Warsaw, Poland, and they were native Polish speakers with varying English language proficiency. There was a 25 year age span: the youngest participant was 64 years old and the oldest one was 89, with a mean age of 73.14 (SD=8.64). Based on previous activities in our Living Lab, i.e. introductory interviews and DigComp surveys, we can describe this group as very active users with above-basic ICT skills. We chose to divide the participants into two small affinity groups to give them time to discuss and test various functionalities freely; the scenario for both sessions is presented in Sidebar \[bar:sidebar\]. The research was based on a semi-structured mixed-method scenario of group interviews with direct involvement of the participants, and we recorded and transcribed the sessions to create an affinity diagram of key themes and quotes. The study scenario consisted of 6 parts: 1. discussing situations in which technology can help users at home, 2. presentation of the Google Home system and voice control capabilities of devices, 3. independent use of the Google Home system by participants, 4. collecting opinions about the system, with particular emphasis on advantages, disadvantages and insights on limitations and opportunities, 5. brainstorming to generate potential new, unaddressed system applications, 6. collecting opinions about potential threats. Results ======= Overall, we observed that our group of older adults was impressed by the range of possibilities of Voice Assistants and how convenient this technology could be for them. One of them said: “This technology is for older adults, 60+, young people have no time to listen to such information: rush, work, they have a phone with everything in it” (P7). Another noted: “it is much simpler, I don’t have to devise anything, I just sit there, bored or tired, and say things (...) I just say them and it (VA) does the job” (P5). For our group of older adults four key needs emerged: 1. understanding technology and receiving feedback ((P3) “technology should give us hints: you did something wrong, do this and that” and “younger people already know how everything works and don’t understand that for us everything has to be coherent, while they can omit things, even in explaining them”); 2. accessible design with low barrier of entry, unlike regular computers (“using a mouse and searching for information is very difficult, as you need multiple repetitions to become a proficient user” and “screens are cluttered, and such user can not focus on one piece of information” (P7)); **Benefits** **Improvements** ------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------- **Intuitive interaction** (via natural language) with no/little prior training required. More feedback allowing to create a step-by-step guide.
**Voice control** (no motor functions involved on the part of the user). Understanding more natural utterances, including context and metaphors, as well as tackling and explaining the problem of voice priority to prevent conflicts that may arise. **Friendly manner** (friendly voice and patience). Building on the voice recognition functions to initiate friendly conversations with reminders. **No handling of devices** (so, no device that has to be found and turned on). Solving concerns about the range of effective voice interaction in their home. **Granting independence** (the VA can do some things which, especially for people with disabilities, may require assistance). Ensuring the existence of fail-safes to resolve concerns about the reliability of VUIs in executing commands. 3. seamless incorporation into everyday life, as our participants liked the idea of being able to accomplish certain tasks using only speech, not only because they might have “difficulties moving” (P1) but also because it does not disturb their process. They mentioned the advantages of being able to ask for something when they are “just busy doing something else” (P1) or “when my hands are dirty” (P4). They think this could “save a lot of time, walking and searching” (P3). Hence, they think that voice commands “are the future” (P2); 4. control and assurance of security (“the computer does not do what you want it to do, but what you ordered it to do and you never know if they are the same thing” (P2). Another participant expressed the fear that they “are not in control of the autonomous device” (P5). He wondered whether asking technology to turn off the stove “would really work”). In light of the aforementioned needs, conversational interaction seems like a very good solution for older adults. First, it allows the user to proceed at their own pace. Such a conversational agent will not “complain that it has no time” (P6). Second, it allows the users to renounce the reliance on screens and input devices and manages the information flow. Third, it is based on the use of natural language, making it easy, as conversation is the default mode of interaction and it does not require a significant change of habits. Finally, a person can say a command with a specific task in mind without the fear of being distracted or sidetracked. In connection with Smart Home devices, our group of older adults saw Google Home as a “central unit” (P1) that could make it easier to “operate other devices” (P4), with the TV set being mentioned by several of them, manage various “intelligent home” settings (P1) and even save energy (P6). They especially wanted IoT devices to assist them with tiring household chores, to free up their time for other activities. In terms of barriers, we could notice the concern about context awareness (“Will it tell me if I should take an umbrella with me, or do I have to ask about the weather first?” by P4) and the fear of losing the diversity of devices, as not all of them would be compatible (P2). While the issue of privacy was mentioned (P5), it was not a prominent concern. Moreover, we found that our group of older adults could naturally identify various already available applications of VA with SHT and accept them as generally useful and empowering. They expected VA to act as an assistant that enables searching for information hands-free or as a memory aid that does not require handling any devices. Other functions which were mentioned consistently were a translator or even a teacher.
With little training and encouragement our participants could start using VAs to their satisfaction and empowerment. As one of our participants put it: “if there were training sessions (...) a lot of older people would warm up to it” (P6). Discussion ========== **Barriers** **Solutions and comments** ------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------- **Time consuming**. Although this was mentioned, it was not a drawback for older adults as they say that they have the time. **Lacks sensors and cameras** (which would allow it to better assist with some tasks). Connecting a camera to give the user hints (while cooking with a recipe, or to measure things) as well as sensors. **Lack of a screen to give feedback and context**. Introduction of companion screens to see context, status or key information searched for, as it is hard to store it all in memory. **Need to have compatible devices** (fear of losing diversity and individuality). Working towards compatibility between manufacturers. **Fear of malfunction** (something may not turn off, even if the signal was sent). Making clear what backup security measures are in place. This may go away with more exposure. **Fear of too much reliance** (afraid of a possible loss of creativity, and lack of mental and physical exercise). VA could also serve as an assistant, verifying their cognitive health and reminding them about some elements of a healthy lifestyle. **Danger of entering a “search bubble”** (without the text interaction with a lot of context sometimes it is hard to find exactly what we are looking for, or to remember what it was). A companion screen could mitigate this effect, if the user could glance at it and request a specific result to be read out. In our research we identified the main benefits our group of older adults saw in the Voice User Interface and Smart Home solutions, as well as the key barriers to their implementation and use. At the same time we would like to point out the limitations of our study, as the participants were technology-conscious older adults in Poland. However, as one of them (P6) put it, “there are some categories of older people who are interested in smartphones, but there are some who spend their time with a crochet, just as there are different categories among young people.” As such, some of our insights may extend to other groups; therefore, to inform the design of VUI-based solutions, we present our preliminary findings regarding the benefits in Table \[benefits\] and barriers in Table \[barriers\]. Moreover, apart from identifying key barriers and benefits, we have analyzed our participants’ queries, comments and interaction with the VA in order to explore its nature and introduce some preliminary categories to take into account when designing VA functionalities, such as their role as everyday duties assistants, helping with setting alarms, shopping lists, weather and traffic information (P5), as well as leisure assistants, including such specific applications as listening to audio books (P1), language learning (P6) or telling jokes (P7), and SH safety assistants or caretakers able to turn off the light or the stove (P3).
Conclusions and future work =========================== Our exploratory qualitative study allowed us to draw important preliminary conclusions relating to the needs of older adults, the benefits and barriers of using VA technology as well as the different possible applications of VA combined with SHT. First, we identified a number of reasons for which VA interfaces combined with IoT are well adjusted to the needs of many older adults, both cognitive, including their need to understand the technology and take control of it, and physical, accounting for accessibility and convenience. We concluded that this technology is very promising as it has the potential to empower some groups of older adults. Second, we enumerated the main benefits our group of older adults saw in the Voice User Interface and Smart Home solutions, as well as the key barriers to their implementation and use. These we then matched with relevant improvement suggestions and solutions or comments which ought to be further discussed and explored. Third, we find that our group of older adults could naturally identify various already available applications of VA with SHT and accept them as generally useful, as well as list multiple additional applications that could spare them considerable effort and free up their time. Therefore, we think that this study is an important voice in the debate on the various applications of voice-powered interaction to meet the needs of some older adults. At the same time we would like to point out the limitations to our study, as the participants were technology-conscious older adults in Poland. Thus, further research is required to verify the identified preliminary barriers and benefits as well as to explore the insights gathered and to investigate this solution with different potential user groups to verify which insights may be group specific and which are general. Acknowledgments =============== We would like to thank older adults from our Living Lab, those affiliated with Kobo Association who participated in this study.
IITAP-95-01\ INR-0891/95\ May 1995 [**BFKL QCD Pomeron in High Energy Hadron Collisions:\ Inclusive Dijet Production**]{}\ \ \ [**Abstract**]{} We calculate the inclusive dijet production cross section in high energy hadron collisions within the BFKL resummation formalism for the QCD Pomeron. Unlike the previous calculations with the Pomeron developing only between tagging jets, we include also the Pomerons which are adjacent to the hadrons. With these adjacent Pomerons we define a new object — the BFKL structure function of a hadron — which enables one to calculate the inclusive dijet production for any rapidity intervals. We present predictions for the K-factor and the azimuthal angle decorrelation in the inclusive dijet production for Fermilab-Tevatron and CERN-LHC energies. PACS number(s): 13.87.Ce,12.38.Cy,13.85.Hd At present, much attention is being paid to the perturbative QCD Pomeron obtained by Balitsky, Fadin, Kuraev and Lipatov (BFKL) [@Lip76]. One of the reasons is that it relates hard processes ($ -t = Q^2 \gg {\Lambda^2_{QCD}}$) and semi-hard ones ($s \gg -t = Q^2 \gg {\Lambda^2_{QCD}}$): It sums up leading energy logarithms of perturbative QCD into a singularity in the complex angular momentum plane. Several proposals to find direct manifestations of the BFKL Pomeron are available in the literature, see, e.g., \[2-9\], but it is still difficult to get the necessary experimental data. Among those proposals the first one was made by Mueller and Navelet [@Mue87]. They pointed out that the inclusive dijet production in high energy hadron collisions may serve as a probe for the BFKL Pomeron. Namely, a specific exponential growth of the cross section K-factor with the rapidity interval of tagged jets was predicted. This idea was further developed in [@Del94; @Sti94], where it was found that the relevant object may be the azimuthal angle correlation of jets. Note that these studies were restricted to a consideration of some special configuration of the inclusive dijet production. Namely, in [@Mue87; @Del94; @Sti94] only the production cross section of the most forward and most backward jets is considered (Fig. 1(a)). However, unlike the usual hard QCD processes with strong $k_\perp$-ordering, in BFKL Pomeron kinematics [@Gri83] there are strong rapidity ordering and weak $k_\perp$-diffusion. This means, in particular, that the most forward/backward jets do not need to have the hardest $k_\perp$. Therefore, there is no guarantee that one can tag the most forward/backward jets without a dedicated full-acceptance detector [@Bjo92]. Unfortunately, the available detectors (CDF and D$\emptyset$) at the Fermilab Tevatron as well as forthcoming detectors at the CERN Large Hadron Collider (LHC) have limited acceptance in $k_\perp$ (${k_\perp}_{min} \sim$ tens GeV/c) and (pseudo) rapidity for tagging jets. So, one cannot compare the results of [@Mue87; @Del94; @Sti94] with the preliminary D$\emptyset$ data on large rapidity interval dijets [@Heu94] without an analysis of jet radiation beyond the acceptance of the detectors. In this paper we study, within the BFKL approach, the inclusive dijet cross section in high energy hadron collisions without any restrictions on untagging jets. Our results provide an opportunity to confront BFKL Pomeron predictions on the inclusive dijet production with data which could be extracted from the existing CDF and D$\emptyset$ jet event samples after a modification of the jet analysis algorithms.
This may be decisive in checking applicability of the factorization hypothesis for high energy hadron collisions (arguments against it see in [@Bra93]) and the leading logarithm approximation. Removing the restriction on tagging jets to be most forward/backward one should take into account additional contributions to the cross section with jets more rapid than the tagging ones. There are three such contributions: two with a couple of Pomerons (Figs. 1(b),1(c)) and one with three (Fig. 1(d)). In this paper, we will call the Pomerons developing between colliding hadrons and their descendant jets the adjacent Pomerons and the Pomeron developing between the tagging jets the inner Pomeron. These additional contributions contain extra power of $\alpha_S$ per extra Pomeron but hardly could they be regarded as corrections since they are also proportional to a kinematically dependent factor which one can loosely treat as the number of partons in the hadron moving faster than the descendant tagging jet. Contribution to the cross section of Fig. 1(a) considered by Mueller and Navelet [@Mue87] is $$\frac{x_1x_2d\sigma_{\{ P \}}}{dx_1dx_2d^{2}k_{1\perp}d^{2}k_{2\perp}} = \frac{\alpha_{S} C_A}{k^2_{1\perp}}\frac{\alpha_{S} C_A}{k^2_{2\perp}} x_1F_A(x_1,\mu^2_1)x_2F_B(x_2,\mu^2_2) f^{BFKL}(k_{1\perp},k_{2\perp},y), \label{mueller}$$ where the subscript on $\sigma_{\{P\}}$ labels the contribution to the cross section as a single-Pomeron; $C_A=3$ is a color group factor; $x_i$ are the longitudinal momentum fractions of the tagging jets; $k_{i\perp}$ are the transverse momenta; $F_{A,B}$ are the effective structure functions of colliding hadrons; $y = \ln(x_1x_2s/k_{1\perp}k_{2\perp})$ is the relative rapidity of tagging jets and, finally, $f^{BFKL}$ is the solution for the BFKL equation. If one retains in $f^{BFKL}(k_{1\perp},k_{2\perp},y)$ of Eq. (\[mueller\]) only the leading $\alpha_S$-independent contribution to its $\alpha_S$-expansion, one gets the result of [@Com84]: $$\frac{x_1x_2d\sigma_{\{P\}}}{dx_1dx_2d^{2}k_{1\perp}d^{2}k_{2\perp}} = \frac{\alpha_{S} C_A}{k^2_{1\perp}} \frac{\alpha_{S} C_A}{k^2_{2\perp}} x_1F_A(x_1,\mu^2_1)x_2F_B(x_2,\mu^2_2) \frac{1}{2}\delta^{(2)}(k_{1\perp}-k_{2\perp}), \label{com}$$ (An interesting feature of this result is that it does not contain specific contributions from gluons and quarks — they turn out to be the same up to a simple group factor in the high energy limit. This provides the possibility to hide all the nonperturbative physics in the pair of effective structure functions $F_{A,B}= G_{A,B}+\frac{C_F}{C_A}Q_{A,B}+\frac{C_F}{C_A}\overline Q_{A,B}$.) What distinguishes the cross section of Eq. (\[mueller\]) from the analogous one of Eq. (\[com\]) is a systematic resummation of $(\alpha_S y)$-corrections to the hard subprocess cross section, which is necessary when the relative rapidity of jets, $y$, is not small. The solution for the BFKL equation has the following integral representation [@Lip76]: $$f^{BFKL}(k_{1 \perp},k_{2 \perp},y)= \sum_{n=-\infty}^{\infty}\int d\nu \chi_{n,\nu}(k_{1 \perp})e^{y\omega(n,\nu)}\chi_{n,\nu}^*(k_{2 \perp}), \label{lipatov}$$ where the star means complex conjugation; $$\chi_{n,\nu}(k_{\perp})=\frac{(k_{\perp}^2)^{-\frac{1}{2}+i\nu} e^{in\phi}}{2\pi} \label{eigenfunction}$$ are Lipatov’s eigenfunctions and $$\omega(n,\nu) = \frac{2 \alpha_{S} C_A}{\pi} \biggl[ \psi(1) - Re \, \psi \biggl( \frac{|n|+1}{2} + i\nu \biggr) \biggr]$$ are Lipatov’s eigenvalues. Here $\psi$ is the logarithmic derivative of Euler Gamma-function. 
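For orientation, the eigenvalues and the $\nu$-integral of Eq. (\[lipatov\]) are easy to evaluate numerically. The Python sketch below is only an illustration of Eqs. (\[lipatov\])–(\[eigenfunction\]); the fixed value of $\alpha_S$, the truncation of the $n$-sum and the integration limits are assumptions made for the example and are not the settings used for the results of this paper.

```python
import numpy as np
from scipy.special import digamma
from scipy.integrate import quad

ALPHA_S = 0.2   # assumed fixed coupling, for illustration only
C_A = 3.0

def omega(n, nu, alpha_s=ALPHA_S):
    """Lipatov eigenvalue omega(n, nu)."""
    return (2.0 * alpha_s * C_A / np.pi) * (
        digamma(1.0) - np.real(digamma((abs(n) + 1) / 2.0 + 1j * nu)))

def f_bfkl(k1, k2, phi, y, n_max=10, nu_max=20.0):
    """Numerical sketch of the integral representation above: sum over the
    conformal spins n and integral over nu of chi_{n,nu}(k1) exp(y omega)
    chi*_{n,nu}(k2), with phi the azimuthal angle between the two momenta."""
    total = 0.0
    for n in range(-n_max, n_max + 1):
        def integrand(nu, n=n):
            radial = (k1 * k1) ** (-0.5 + 1j * nu) * (k2 * k2) ** (-0.5 - 1j * nu)
            return np.real(radial * np.exp(1j * n * phi) * np.exp(y * omega(n, nu)))
        val, _ = quad(integrand, -nu_max, nu_max, limit=200)
        total += val / (2.0 * np.pi) ** 2
    return total
```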
Making use of the above-introduced objects we rewrite Eq.(\[mueller\]) as follows: $$\frac{x_1x_2d\sigma_{\{P\}}}{dx_1dx_2d^{2}k_{1 \perp}d^{2}k_{2 \perp}} = \frac{\alpha_{S} C_A}{k^2_{1 \perp}}\frac{\alpha_{S} C_A}{k^2_{2 \perp}} x_1x_2 \sum_n\int d\nu F_A(x_1,\mu^2_1) \left[ \chi_{n,\nu}(k_{1 \perp}) e^{y\omega(n,\nu)} \chi_{n,\nu}^*(k_{2 \perp}) \right] F_B(x_2,\mu^2_2). \label{cast}$$ As one can guess, subprocesses of Fig. 1(b)-1(d) with the adjacent Pomerons contribute to the effective structure functions, i.e., one can account for them by just adding some “radiation corrections” to the structure functions of Eq.(\[cast\]): $$\begin{aligned} F_A(x_1,\mu^2_1) & \Rightarrow & \Phi_{A}(x_1,\mu^2_1,n,\nu,k_{1\perp}) \equiv F_A(x_1,\mu^2_1)+D_{A}(x_1,\mu^2_1,n,\nu,k_{1\perp}), \\ F_B(x_2,\mu^2_2)&\Rightarrow&\Phi_{B}^{\ast}(x_2,\mu^2_2,n,\nu,k_{2\perp}) \equiv F_B(x_2,\mu^2_2)+D_{B}^{\ast}(x_2,\mu^2_2,n,\nu,k_{2\perp}). \label{subs}\end{aligned}$$ The complex conjugation on $\Phi_B$ could be understood if one look at rhs of Eq.(\[cast\]) as a matrix element of a $t$-chanel evolution operator with the relative rapidity, $y$, as an evolution parameter and $F_B$ as a final state; $(n,\nu)$ are then “good quantum numbers” conserved under the evolution—this makes room for $(n,\nu)$-dependence of the corrected structure functions. We note also that the corrected structure functions may depend on the transverse momenta of the tagging jets. To get an explicit expression for the radiation correction to the effective hadron structure functions, let us consider, for example, the contribution of Fig. 1(b). In terms of the radiation correction, $D_A$, it is $$\begin{aligned} \frac{x_1x_2d\sigma_{\{PP\}A}}{dx_1dx_2d^{2}k_{1 \perp}d^{2}k_{2 \perp}} = \frac{\alpha_{S} C_A}{k^2_{1 \perp}}\frac{\alpha_{S} C_A}{k^2_{2 \perp}} x_1x_2 \times \nonumber \\ \sum_n\int d\nu D_A(x_1,\mu^2_1,n,\nu,k_{1\perp}) \left[ \chi_{n,\nu}(k_{1 \perp}) e^{y\omega(n,\nu)} \chi_{n,\nu}^*(k_{2 \perp}) \right] F_B(x_2,\mu^2_2). \label{dcomp}\end{aligned}$$ On the other hand, BFKL summation of the leading energy logarithms yields $$\begin{aligned} \label{bfklcomp} \frac{x_1x_2d\sigma_{\{PP\}A}}{dx_1dx_2d^{2}k_{1\perp}d^{2}k_{2\perp}} = \frac{\alpha_{S} C_A}{k^2_{1\perp}}\frac{\alpha_{S} C_A}{k^2_{2\perp}} x_1x_2 \frac{2\alpha_{S} C_A}{\pi^2}\int_{x_1}^1d\xi F_A(\xi,\mu^2_1) \int_{\mu_{1}}^{\xi\sqrt{s}}\frac{d^{2}q_{1\perp}}{q_{1\perp}^2} \times\nonumber\\ \int d^{2}q_{2\perp}f^{BFKL}(q_{1\perp},q_{2\perp},y_1(\xi,q_{1\perp})) f^{BFKL}(q_{2\perp}+k_{1\perp},k_{2\perp},y)F_B(x_2,\mu^2_2),\end{aligned}$$ where $\xi, q_{1\perp}$ parameterize the momentum of the most rapid untagging jet moving in the same direction as hadron $A$ (see Fig. 1(a)); $$\label{rapidity} y_1(\xi,q_{1\perp}) = \log \frac{\xi k_{1\perp}}{x_1 q_{1\perp}}$$ is the relative rapidity measuring the rapidity lapse spanned by the adjacent Pomeron. We cut the infrared-divergent transverse momentum integration by the normalization point, $\mu_1$, of $F_A$. Other integration cuts in Eq.(\[bfklcomp\]) are evident from the kinematics[^1]. 
Splitting the Pomerons into the product of Lipatov’s eigenfunctions and making the transverse momentum integrations in Eq.(\[bfklcomp\]) one gets that, first, Eqs.(\[dcomp\]),(\[bfklcomp\]) are compatible and, second [^2], $$\begin{aligned} \label{explicit} &&D_A(x_1,\mu^2_1,n,\nu,k_{1\perp}) = i\frac{\alpha_S C_A}{\pi^2} \frac{\Gamma(|n|/2+1/2 + i\nu)}{\Gamma(|n|/2+1/2-i\nu)} \times \nonumber \\ &&\int^{\infty}_{-\infty}d\lambda \frac{\Gamma(|n|/2+1-i(\nu-\lambda))}{\Gamma(|n|/2+1+i(\nu-\lambda))} \frac{\Gamma(1/2-i\lambda)}{\Gamma(1/2+i\lambda)} \frac{1}{(i|n|/2+\nu-\lambda+i\epsilon)}\times\nonumber\\ &&\int_x^1\frac{d\xi}{x}\left(\frac{\xi}{x}\right)^{\omega(0,\lambda)} F_A(\xi,\mu^2_1) \frac{(k_{1\perp}/\mu_1)^{1+\omega(0,\lambda)-2i\lambda}} {1+\omega(0,\lambda)-2i\lambda} \Biggl(1-\left(\frac{\mu_{1}}{\xi\sqrt{s}}\right)^ {1+\omega(0,\lambda)-2i\lambda}\Biggr).\end{aligned}$$ Some comments on the above formula are in order. The $i\epsilon$ defines the right way to deal with the singularity at $n=\nu - \lambda=0$. The dependence on the energy $\sqrt{s}$ is weak—we check it for the Tevatron and LHC energies performing the numerical calculations (see below) with and without suppression of the last factor in Eq.(\[explicit\]). To make a local summary, we have got the following compact and significant form for the dijet production inclusive cross section: $$\begin{aligned} &&\frac{x_1x_2d\sigma_{dijet}}{dx_1dx_2d^{2}k_{1 \perp}d^{2}k_{2 \perp}}= \frac{\alpha_{S} C_A}{k^2_{1 \perp}}\frac{\alpha_{S} C_A}{k^2_{2 \perp}} x_1x_2 \times \nonumber \\ &&\sum_n\int d\nu \Phi_A(x_1,\mu^2_1,n,\nu,k_{1\perp}) \left[ \chi_{n,\nu}(k_{1 \perp}) e^{y\omega(n,\nu)} \chi_{n,\nu}^*(k_{2 \perp}) \right] \Phi_B^*(x_2,\mu^2_2,n,\nu,k_{2\perp}), \label{crossphi}\end{aligned}$$ where the new structure functions $\Phi_{A,B}$ that depend on Lipatov’s quantum numbers $(n,\nu)$—we call them BFKL structure functions—can be read off Eqs.(\[subs\]),(\[explicit\]). We expect that Eq. (\[crossphi\]) may serve as an example of factorization matching the Regge limit of QCD. It would be interesting to make contact between the factorization of Eq. (\[crossphi\]) and the $k_{\perp}$-factorization of [@Col91]. Eq. (\[crossphi\]) gets simpler for $x$-symmetric dijet production with $x_1 = x_2 $ after integration over transverse momenta squared larger than a cut value, $k^2_{\perp min}=\mu_1^2=\mu_2^2=Q^2$. Thus, following [@Mue87; @Del94; @Sti94] we consider $$\begin{aligned} \label{norm} &&\int_{k^2_{\perp min}}^{e^{y^{\ast}}k^2_{\perp min}} dk^2_{1\perp}dk^2_{2\perp} \left( \frac{x_1x_2d\sigma_{dijet}}{dx_1dx_2d^{2}k_{1 \perp}d^{2}k_{2 \perp}} \right) _{x_1=x_2} \equiv \frac{(\alpha_S C_A)^2}{s}(e^{y^{\ast}}-1)\times\nonumber\\ &&F_A(e^{y^{\ast}/2}k_{\perp min}/\sqrt{s},k^2_{\perp min}) F_B(e^{y^{\ast}/2}k_{\perp min}/\sqrt{s},k^2_{\perp min}) \sum_n\frac{e^{in\phi}}{2\pi} C_n(y^{\ast},k_{\perp min}).\end{aligned}$$ This depends on the azimuthal angle, $\phi$, between the tagging jets and an effective relative rapidity, $y^{\ast}\equiv\ln({x_1x_2s}/{k_{\perp min}^2})$ [^3]. We compute the Fourier coefficients of the azimuthal angle dependence, $C_n(y^{\ast},k_{\perp min})$. The normalization of Eq. (\[norm\]) makes $C_0(y^{\ast},k_{\perp min})$ equal to the $K$-factor—the ratio of the cross section integrated over the azimuthal angle, to the Born one. The exponential growth of this quantity with rapidity was predicted in [@Mue87]. 
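A rough feel for the rapidity dependence of these Fourier coefficients can be obtained from a saddle-point estimate, $C_n(y^{\ast})\propto e^{y^{\ast}\omega(n,0)}/\sqrt{y^{\ast}}$, which keeps only the $\nu=0$ eigenvalue. The snippet below merely illustrates this leading behaviour for an assumed fixed coupling; it ignores the BFKL structure functions $\Phi_{A,B}$ and all kinematical cuts, so it is not the calculation behind Figs. 2 and 3.

```python
import numpy as np
from scipy.special import digamma

ALPHA_S = 0.2                      # assumed fixed coupling, illustration only
C_A = 3.0
ABAR = 2.0 * ALPHA_S * C_A / np.pi

def omega_n0(n):
    """Lipatov eigenvalue at nu = 0, which controls the large-y* behaviour."""
    return ABAR * (digamma(1.0) - digamma((abs(n) + 1) / 2.0))

for y_star in (2.0, 4.0, 6.0):
    # Saddle-point estimate C_n ~ exp(y* omega(n,0)) / sqrt(y*); only the
    # growth with y* and the ratios of coefficients are meaningful here.
    c0 = np.exp(y_star * omega_n0(0)) / np.sqrt(y_star)
    c1 = np.exp(y_star * omega_n0(1)) / np.sqrt(y_star)
    print(f"y* = {y_star:.0f}:  C0 growth ~ {c0:.2f},  C1/C0 ~ {c1 / c0:.3f}")
```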
Another quantity of interest is the average cosine of the azimuthal angle between tagging jets, $<\cos(\phi-\pi)>=C_1(y^{\ast},k_{\perp min})/ C_0(y^{\ast},k_{\perp min})$ [@Del94; @Sti94] ($\phi=\pi$ for back-to-back jets). We plot our predictions for the $K$-factor in Fig. 2 and $<\cos(\phi-\pi)>$ in Fig. 3. The leading order CTEQ3L structure functions [@Lai94] with $\Lambda^{(5)}_{QCD}=132$ MeV/c have been used. We point out the following qualitative features of our numerical results: \(i) the contribution of the adjacent Pomerons to the cross section is significant—up to 60% (20%) at $k_{\perp min}=20$ GeV/c ($50$ GeV/c) at the Tevatron energy; \(ii) the contribution of the adjacent Pomerons slowly dies out as $y^{\ast}$ approaches to its kinematical bounds; \(iii) growth of the energy as well as the decrease of the lower cutoff on the transverse momenta of the tagging jets causes the contribution of the adjacent Pomerons to be even more significant; \(iv) the azimuthal angle decorrelation (deviation of the average cosine from unit) is less sensitive to the contribution of the adjacent Pomerons; As it is apparent from our plots, the resummation effects are significant not only for large rapidity intervals. Thus, the region of moderate rapidity intervals seems also to be promising for BFKL Pomeron manifestation searches. We should note here that the extraction of data on high-$k_{\perp}$ jets from the event samples in order to compare them with the BFKL Pomeron predictions should be different from the algorithms directed to a comparison with perturbative QCD predictions for the hard processes. These algorithms, motivated by the strong $k_{\perp}$-ordering of the hard QCD regime, employ hardest-$k_{\perp}$ jet selection (see, e.g., [@Alg94]). It is doubtful that one can reconcile these algorithms with the weak $k_{\perp}$-diffusion and the strong rapidity ordering of the semi-hard QCD regime, described by the BFKL resummation. We also note that our predictions should not be compared with the preliminary data [@Heu94] extracted by the most forward/backward jet selection criterion. Obviously, one should include for tagging all the registered pairs of jets (not only the most forward–backward pair) to compare with our predictions. In particular, to make a comparison with Figs. 2,3, one should sum up all the registered $x$-symmetric dijets ($x_1=x_2$) with transverse momenta harder than $k_{\perp min}$. Based on our study we draw a conclusion that the adjacent BFKL Pomerons can play a decisive role in high energy hadron collisions, as it may be seen in inclusive dijet production. We thank E.A.Kuraev and L.N.Lipatov for stimulating discussions. We are grateful to A.J.Sommerer, J.P.Vary, and B.-L.Young for their kind hospitality at the International Institute of Theoretical and Applied Physics, Ames, Iowa and support. V.T.K. is indebted to S.Ahn, C.L.Kim, A.Petridis, J.Qiu, C.R.Scmidt, and S.I.Troyan for helpful conversations. G.B.P. wishes to thank F.Paccanoni for fruitful discussions and hospitality at Padova University. [10]{} L.N.Lipatov, Yad. Fiz. [**23**]{}, 642 (1976) \[Sov. J. Nucl. Phys. [**23**]{}, 338 (1976)\];\ E.A.Kuraev, L.N.Lipatov and V.S.Fadin, Zh. Eksp. Teor. Fiz. [**71**]{}, 840 (1976) \[Sov. JETP [**44**]{}, 443 (1976)\]; [*ibid.*]{} [**72**]{}, 377 (1977) \[[**45**]{}, 199 (1977)\];\ Ya.Ya.Balitsky and L.N.Lipatov, Yad. Fiz. [**28**]{}, 1597 (1978) \[Sov. J. Nucl. Phys. [**28**]{}, 822 (1978)\];\ L.N.Lipatov, Zh. Eksp. Teor. Fiz. [**90**]{}, 1536 (1986) \[Sov. 
JETP [**63**]{}, 904 (1986)\] A.H.Mueller and H.Navelet, Nucl. Phys. [**B282**]{}, 727 (1987) L.L.Frankfurt and M.I.Strikman, Phys. Rev. Lett. [**63**]{}, 1914 (1989) J.Bartels, A.De Roeck and M. Loewe, Z. Phys. C [**54**]{}, 653 (1992) J.Kwiecinski, A.D.Martin and P.J.Sutton, Phys. Lett. [**B287**]{}, 254 (1992); Phys. Rev. D [**46**]{}, 921 (1992) W.K.Tang, Phys. Lett. [**B278**]{}, 363 (1992) N.N.Nikolaev and B.G.Zakharov, Phys. Lett. [**B332**]{}, 177 (1994) V.Del Duca and C.R.Schmidt, Phys. Rev. D [**49**]{}, 4510 (1994);\ V.Del Duca and C.R.Schmidt, Phys. Rev. D [**51**]{}, 4510 (1995);\ DESY 94-128, SCIPP 94/20 (1994); DESY 94-163, SCIPP 94/27 (1994) W.J.Stirling, Nucl. Phys. [**B423**]{}, 56 (1994) L.V.Gribov, E.M.Levin and M.G.Ryskin, Phys. Rep. [**C100**]{}, 1 (1983);\ E.M.Levin and M.G.Ryskin, Phys. Rep. [**C189**]{}, 267 (1990) J.D.Bjorken, Int. J. Mod. Phys. A [**7**]{}, 4189 (1992) D$\emptyset$ Collaboration, A.Brandt, [*Jet Production at Large Rapidity Intervals*]{}, talk presented at [*XX ICHEP*]{}, Glasgow, Aug. 1994; T.Heuring, [*Jets with Large Rapidity Separation*]{}, talk presented at APS DPF Meeting, Albuquerqe 1994; C.L.Kim, [*Jet Production at Large Rapidity Intervals*]{}, talk presented at Small-x Workshop, Fermilab, Sept. 1994 A.Brandenburg, O.Nachtmann and E.Mirkes, Z. Phys. C [**60**]{}, 697 (1993) B.L.Combridge and C.J.Maxwell, Nucl. Phys. [**B239**]{}, 429 (1984) V.Del Duca, M.E.Peskin and W.K.Tang, Phys. Lett. [**B306**]{}, 151 (1993) J.C.Collins and R.K.Ellis, Nucl. Phys. [**B360**]{}, 3 (1991);\ S.Catani, M.Ciafaloni and Hautmann, [*ibid.*]{} [**B366**]{}, 135 (1991) H.L.Lai et al., Phys. Rev. D [**51**]{}, 4763 (1995) A.Biatti for CDF Collaboration, [*XX ICHEP*]{} paper GLS 0368, Glasgow, Aug. 1994;\ F.Nang for D$\emptyset$ Collaboration, Proceedings of APS Division of Particle and Fields, Albuquerqe,N.M. 1994 [**Figure Captions**]{} Fig. 1: Subprocesses for the dijet production in a collision of hadrons $A$ and $B$; vertical curly lines correspond to the Reggeized gluons; horizontal ones to the real gluons radiated into the rapidity intervals; arrows mark gluons producing the tagging jets; all subprocesses contain the inner Pomeron. (a) subprocess without the adjacent Pomeron; (b) subprocess with a Pomeron adjacent to the hadron $A$; (c) same for the hadron $B$; (d) subprocess with two adjacent Pomerons. Fig. 2: The $y^{\ast}$-dependence of the dijet K-factor for various values of energy, $\sqrt{s}$, transverse momenta cutoff and normalization point, $Q^2$, of $\alpha_S$ and $F_{A,B}$. Fig. 3: The $y^{\ast}$-dependence of the average azimuthal angle cosine between the tagging jets for various values of energy, $\sqrt{s}$, transverse momenta cutoff and normalization point, $Q^2$, of $\alpha_S$ and $F_{A,B}$. [^1]: We consider here only a kinematically simple case when the rapidities of the tagging jets have different signs in the center-of-mass frame. [^2]: A simplified analog of this formula was found in a triple-jet inclusive production study [@Del93]. [^3]: Note that in the leading logarithm approximation we have $k_{\perp} \sim k_{\perp min}$, and hence $y \sim y^{\ast} $
--- abstract: 'Black holes supported by self-interacting conformal scalar fields can be considered as renormalizably dressed since the conformal potential is nothing but the top power-counting renormalizable self-interaction in the relevant dimension. On the other hand, potentials defined by powers which are lower than the conformal one are also phenomenologically relevant since they are in fact super-renormalizable. In this work we provide a new map that allows one to build black holes dressed with all the (super-)renormalizable contributions starting from known conformal seeds. We explicitly construct several new examples of these solutions in dimensions $D=3$ and $D=4$, including not only stationary configurations but also time-dependent ones.' author: - 'Eloy Ayón-Beato' - Mokhtar Hassaïne - 'Julio A. Méndez-Zavaleta' title: '(Super-)renormalizably dressed black holes' --- \[sec:intro\]Introduction ========================= The nontrivial interaction between matter and geometry mathematically encoded through the equations proposed by Einstein a century ago is one of the crucial paradigms of General Relativity. As is well known, the nonlinearity of the Einstein equations makes the task of finding physically interesting solutions very hard. In order to render this problem easier to handle, the presence of a certain symmetry can be essential, either to simplify the problem or as a tool to generate nontrivial solutions from simpler ones. As a concrete example, we mention the case of a self-gravitating scalar field conformally coupled to gravity. Indeed, in this case, the conformal symmetry of the matter source implies that the scalar curvature is zero, which in turn considerably simplifies the field equations for a simple ansatz, yielding the so-called Bekenstein black hole solution [@Bekenstein:1974sf; @BBMB]. This solution was in fact discovered by Bekenstein by exploiting the conformal machinery enjoyed by the system, mapping the original action to that of a minimally coupled scalar field. The inclusion of an electromagnetic field, also conformally invariant in four dimensions, is straightforward and yields the charged version of the Bekenstein black hole. One may be tempted to conjecture that any conformal contribution added to the original problem will provide a generalization of this solution. However, as shown in [@AyonBeato:2002cm], the inclusion of the conformally invariant self-interaction turns out to be incompatible with the asymptotic flatness of the black hole. As will be discussed later, the price to be paid in order to consider this possibility is to add a cosmological constant, which clearly modifies the asymptotic behavior. The mere existence of the Bekenstein black hole, together with its very particular properties, has attracted a lot of attention, as can be seen from the current literature on the topic. For example, this solution provided one of the first counterexamples to the old statements of the “no-hair” conjecture, and its uniqueness has been established in [@XZ]. Nevertheless, the solution suffers from a pathological behavior due to the fact that the scalar field diverges at the horizon, which makes its physical interpretation rather obscure [@Sudarsky:1997te]. A way of circumventing this problem is to reconsider the inclusion of the self-interaction that does not spoil the conformal invariance, but this time together with a cosmological constant, whose effect is precisely to push this singularity behind the horizon.
The resulting configuration is the so-called Martínez-Troncoso-Zanelli (MTZ) black hole [@Martinez:2002ru], which also allows a charged generalization [@Martinez:2005di]. Due to the presence of the cosmological constant, the existence of these configurations is ensured not only for horizons with spherical topology but also for those with hyperbolic topology. In Ref. [@Caldarelli:2013gqa], it has been shown that the inclusion of axion fields coupled to the scalar field allows the existence of black hole configurations with planar horizon topology, valid even for more general nonminimal couplings than the conformal one. Black holes dressed by a conformal scalar field in the presence of a cosmological constant were in fact first analyzed in $2+1$ dimensions by Martínez and Zanelli (MZ) [@Martinez:1996gn]. The MZ black hole was later generalized to its conformally self-interacting version [@Henneaux:2002wm], and a further nonlinearly charged extension preserving the conformal invariance of the full source has been given recently in [@Cardenas:2014kaa]. Even the exact gravitational collapse to the neutral self-interacting version is known [@Xu:2014xqa]. In contrast, for higher dimensions ($D>4$) the situation is quite different; the Bekenstein conformal mapping can be lifted, but it produces only naked singularities [@Xanthopoulos:1992fm]. This result can be proved in full generality by straightforwardly integrating the static and spherically symmetric equations of motion, which in turn implies the uniqueness of the Bekenstein black hole within all the asymptotically flat black hole solutions allowed by a conformal scalar field in any dimension [@Klimcik:1993ci]. No similar result in such generality is known in the presence of a cosmological constant. Nevertheless, it can be shown that in the case of the simplest metric ansatz involving a single function, the only black hole solutions are those previously mentioned for $D=3$ and $D=4$ [@AyonBeato:2001sb; @Martinez:2009kua]. For completeness, we also mention that asymptotically (A)dS black hole solutions can be obtained by generalizing the conformal action in higher dimensions via nonminimal couplings of the scalar field to higher-curvature Lovelock terms built out of a Weyl connection (conformally invariant by construction), whose vectorial part is a pure-gauge contribution of the derivatives of the scalar field [@Giribet:2014bva]. Additionally, by also generalizing the gravity action to the Lovelock one, it is possible to find topological black hole configurations for a self-interacting scalar field with standard nonminimal coupling to gravity [@LovelockBHSF]; some of those examples include conformally coupled scalar fields. Another important aspect of the conformal symmetry as a guiding principle is the fact that the power defining the conformal self-interaction in any dimension $D$ is just $2D/(D-2)$. In mass units ($c=1=\hbar$), this implies that the conformal coupling constant is dimensionless and consequently these theories are power-counting renormalizable [@Peskin:1995ev]. From the phenomenological point of view, self-interactions with lower powers than the conformal one are also relevant, since their coupling constants have positive mass dimension and define super-renormalizable theories [@Peskin:1995ev]. In fact, the mechanism explaining the spontaneous symmetry breaking of gauge theories is naturally modeled with this kind of contribution [@Englert:1964et; @Higgs:1964pj].
Due to the recent experimental confirmation of the existence of scalar bosons in nature [@Aad:2012tfa; @Chatrchyan:2012ufa], exploring their self-gravitating behavior is an important task, even in its most extreme realization, i.e. their possible gravitational collapse to black holes. Examples of the final state of this collapse are given by the (super-)renormalizably dressed black holes found by Anabalon and Cisterna (AC) [@Anabalon:2012tu], for which the scalar field is no longer conformally invariant but retains its conformal coupling. Indeed, conformal invariance is explicitly broken in their source because the potential involves additional powers of the scalar field, lower than the conformal one. One of the aims of the present paper is precisely to explain the emergence of these additional contributions to the potential by means of a generating tool like the ones emphasized at the beginning. This tool not only explains the existence of the AC black hole, but also gives rise to many new scalar black holes supported by (super-)renormalizable self-interactions. In this work we concretely show that the Einstein equations with cosmological constant for a self-interacting conformally invariant scalar field can be mapped to their counterpart having as source a scalar field that is also conformally coupled, but subject to a more general self-interaction that explicitly breaks conformal invariance. We prove this result by showing that, via a very precise map, the corresponding actions are proportional to each other. This mapping is realized through the composition of a shift on the original scalar field and a conformal transformation acting on the metric and the scalar field. For dimensions $D=3, 4$ and $6$, which are precisely the dimensions where the conformal power $2D/(D-2)$ is an integer, the resulting potential involves all the integer powers of the scalar field up to the conformal one, yielding a (super-)renormalizable self-interaction. Consistently with the fact that these self-interactions involve in general more coupling constants than the conformal one, this is a one-way map, allowing one to obtain self-gravitating solutions of a scalar field conformally coupled to gravity and self-interacting via (super-)renormalizable contributions from any self-gravitating conformal seed, while the converse is not true. The plan of the paper is the following. In the next section, we present the new mapping from a self-gravitating conformal scalar field to another self-gravitating one which is (super-)renormalizably self-interacting and conformally coupled to gravity, both in the presence of their respective cosmological constants. In dimensions $D=4$ and $D=3$, where black hole solutions dressed by a conformal scalar field are known, we use this mapping to explicitly exhibit new solutions of a (super-)renormalizable and conformally coupled scalar source in Secs. \[sec:D=4\] and \[sec:D=2+1\], respectively. Finally, our conclusions are given in Sec. \[sec:conclu\].
\[sec:genconf\]From renormalizable to (super-)renormalizable sources ==================================================================== Our starting point is to consider the $D$-dimensional action of a self-gravitating conformal scalar field in presence of a cosmological constant $$\begin{aligned} \label{eq:Sconformal} S[g,\Phi]={}&\int d^{D}x\sqrt{-g} \biggl(\frac{R-2\Lambda}{2\kappa} -\frac{1}{2}\partial_{\mu}\Phi\partial^{\mu}\Phi\nonumber\\ &\qquad\qquad\qquad-\frac{1}{2}\xi_{D}R\Phi^{2} -\lambda\Phi^{\frac{2D}{D-2}}\biggr).\end{aligned}$$ Here, $R$ stands for the scalar curvature, $\Lambda$ represents the cosmological constant, $\lambda$ is the coupling constant of the conformal potential and the conformal coupling is given by $$\xi_{D}=\frac{D-2}{4(D-1)}.$$ This is the precise value of the nonminimal coupling to gravity that ensures the scalar contribution to the action (\[eq:Sconformal\]) to be invariant, up to a boundary term, under a conformal transformation $$\label{eq:conftransf} g_{\mu\nu}\mapsto\Omega^2\,g_{\mu\nu},\qquad \Phi\mapsto\Omega^{\frac{2-D}{2}}\,\Phi,$$ where $\Omega$ is an arbitrary local function. The field equations for the conformal scalar field arising from the variation of the action (\[eq:Sconformal\]) read \[eq:NMGSconf1\] $$\begin{aligned} G_{\mu\nu}+\Lambda{g}_{\mu\nu} &= {\kappa}T_{\mu\nu}, \label{eqmotion1}\\ \Box\Phi-{\xi_D}R\Phi &= \frac{2\lambda D}{D-2}\Phi^{\frac{D+2}{D-2}},\end{aligned}$$ where the conformally invariant energy-momentum tensor is defined by $$\begin{aligned} T_{\mu\nu}={}&\nabla_{\mu}\Phi\nabla_{\nu}\Phi -g_{\mu\nu}\left(\frac{1}{2}\nabla_{\sigma}\Phi\nabla^{\sigma}\Phi +\lambda\Phi^{\frac{2D}{D-2}}\right)\nonumber\\ &+\xi_D( g_{\mu\nu}\Box - \nabla_{\mu}\nabla_{\nu}+G_{\mu\nu} )\Phi^2.\end{aligned}$$ Here, we show that a slightly generalization of the transformations (\[eq:conftransf\]) can also induce some interesting features not only for the matter source but for the full action (\[eq:Sconformal\]). Indeed, we prove that there exists a special conformal frame, defined by taking a precise power of the conformal factor as an affine function of the scalar field, where a simple shift of the scalar field permits to map the action (\[eq:Sconformal\]) into a similar action but with a more involved self-interaction potential. 
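Before presenting the map, let us record a quick check of the weight counting (our own remark). Under (\[eq:conftransf\]) one has $\sqrt{-g}\mapsto\Omega^{D}\sqrt{-g}$ and $\Phi^{\frac{2D}{D-2}}\mapsto\Omega^{-D}\Phi^{\frac{2D}{D-2}}$, so that $$\sqrt{-g}\,\Phi^{\frac{2D}{D-2}}\;\mapsto\;\Omega^{D}\,\Omega^{-D}\sqrt{-g}\,\Phi^{\frac{2D}{D-2}}=\sqrt{-g}\,\Phi^{\frac{2D}{D-2}},$$ i.e. the self-interaction term is invariant by itself, while the kinetic and nonminimal terms combine into an invariant (up to a boundary term) only for the value of $\xi_{D}$ quoted above. For later reference, $\xi_{3}=1/8$ and $\xi_{4}=1/6$, with conformal powers $2D/(D-2)=6$ and $4$, respectively; these values are behind the explicit numerical coefficients appearing in Secs. \[sec:D=4\] and \[sec:D=2+1\].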
More precisely, under the following one-parameter mapping defined by \[eq:map\] $$\begin{aligned} \bar{g}_{\mu\nu}&=\left(a\sqrt{\kappa\xi_{D}}\Phi+1\right)^{\frac{4}{D-2}}g_{\mu\nu}, \label{eq:mapg}\\ \bar{\Phi}&=\frac{1}{\sqrt{\kappa\xi_{D}}}\frac{\sqrt{\kappa\xi_{D}}\Phi+a} {a\sqrt{\kappa\xi_{D}}\Phi+1}, \label{eq:mapPhi}\end{aligned}$$ the action for a self-gravitating conformal scalar field (\[eq:Sconformal\]) transforms to a new action $$\label{eq:mapaction} \bar{S}[\bar{g},\bar{\Phi}]=(1-a^{2})\,S[g,\Phi],$$ which also describes a self-gravitating conformally coupled scalar field \[eq:Sbar\] $$\begin{aligned} \bar{S}[\bar{g},\bar{\Phi}]={}&\int d^{D}x\sqrt{-\bar{g}}\biggl(\frac{\bar{R}-2\bar{\Lambda}}{2\kappa} -\frac{1}{2}\partial_{\mu}\bar{\Phi}\partial^{\mu}\bar{\Phi} \nonumber\\ &\qquad\qquad\qquad-\frac{1}{2}\xi_{D}\bar{R}\bar{\Phi}^{2}-V(\bar{\Phi})\biggr),\end{aligned}$$ but subject to a different self-interaction potential given by $$\begin{aligned} V(\bar{\Phi})={}&\frac{1}{\left(1-a^{2}\right)^{\frac{D+2}{D-2}}} \Bigg\{\frac{\Lambda}{\kappa} \left[\left(1-a\sqrt{\kappa\xi_{D}}\bar{\Phi}\right)^{\frac{2D}{D-2}}-1\right]\nonumber\\ &+\lambda\left[ \left(\bar{\Phi}-\frac{a}{\sqrt{\kappa\xi_{D}}}\right)^{\frac{2D}{D-2}} -\left(-\frac{a}{\sqrt{\kappa\xi_D}} \right)^{\frac{2D}{D-2}}\right]\Bigg\}, \label{eq:potential}\end{aligned}$$ and in presence of a modified cosmological constant $$\label{eq:barLambda} \bar{\Lambda}=\frac{\kappa}{\left(1-a^{2}\right)^{\frac{D+2}{D-2}}} \left[\frac{\Lambda}{\kappa}+ \lambda\left(-\frac{a}{\sqrt{\kappa\xi_{D}}}\right)^{\frac{2D}{D-2}}\right].$$ Notice that these precise definitions ensure that the new potential does not contain zeroth-order terms, and hence the modified cosmological constant lacks of further contributions. We are entitled to ask the reasons for which this mapping can be relevant. Apart from generating new solutions from conformal ones, the mapping has also a nice feature in dimensions $D=3, 4$ and $6$ where the conformal power $2D/(D-2)$ is an integer. Indeed, in those cases, the resulting self-interaction (\[eq:potential\]) enhances the original conformal one with all the power-counting super-renormalizable contributions, i.e. it becomes a polynomial of degree $2D/(D-2)$, $$\label{eq:SuperRenorm} V(\bar{\Phi})=\lambda_1\bar{\Phi}+\lambda_2\bar{\Phi}^2+\cdots+ \lambda_{2D/(D-2)}\bar{\Phi}^{2D/(D-2)}.$$ Since this mapping is operated at the level of the actions and only change them by a global multiplicative factor, the solutions of the field equations (\[eq:NMGSconf1\]) can be mapped to solutions of the field equations arising from the variation of the action (\[eq:Sbar\]) which are given by \[eq:FE\] $$\begin{aligned} \bar{G}_{\mu\nu}+\bar{\Lambda}\bar{{g}}_{\mu\nu} &= {\kappa}{\bar T}_{\mu\nu}, \label{eq:motion2}\\ \bar{\Box}\bar{\Phi}-{\xi_D}\bar{R}\bar{\Phi} &= \frac{dV(\bar{\Phi})}{d\bar{\Phi}}, \label{eq:motion3}\end{aligned}$$ where now the new energy-momentum tensor is defined by $$\begin{aligned} \bar{T}_{\mu\nu}={}&\partial_{\mu}\bar{\Phi}\partial_{\nu}\bar{\Phi} -\bar{g}_{\mu\nu}\left(\frac{1}{2}\partial_{\sigma}\bar{\Phi}\partial^{\sigma}\bar{\Phi} +{V}(\bar{\Phi})\right)\nonumber\\ &+\xi_D( \bar{g}_{\mu\nu}\bar{\Box} -\bar{\nabla}_{\mu}\bar{\nabla}_{\nu}+\bar{G}_{\mu\nu} )\bar{\Phi}^2,\end{aligned}$$ and involves a much more general self-interaction potential (\[eq:potential\]). It is worth mentioning that this mapping is only effective for $a\not=\pm 1$, and for $a^2<1$ it preserves the unitarity of both actions. 
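For the reader's convenience, we also spell out the composition announced in the Introduction (this is a direct rewriting of the map, not a new result): choosing the conformal factor $\Omega=\left(a\sqrt{\kappa\xi_{D}}\,\Phi+1\right)^{\frac{2}{D-2}}$, the transformation (\[eq:map\]) is precisely a shift of the scalar field followed by the conformal transformation (\[eq:conftransf\]), $$\bar{g}_{\mu\nu}=\Omega^{2}\,g_{\mu\nu},\qquad \bar{\Phi}=\Omega^{\frac{2-D}{2}}\left(\Phi+\frac{a}{\sqrt{\kappa\xi_{D}}}\right),$$ which reproduces Eqs. (\[eq:mapg\]) and (\[eq:mapPhi\]).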
Additionally, for $a=0$ it reduces to the identity. Regarding the interpretation of the parameter $a$, notice that if the starting conformal configuration vanishes at infinity then this parameter in the scalar map (\[eq:mapPhi\]) is just related to the constant value $\bar{\Phi}_0$ of the barred field at infinity, i.e. $a=\sqrt{\kappa\xi_D}\bar{\Phi}_0$. As stressed before, the metric transformation (\[eq:mapg\]) is easily understood as a conformal transformation to a precise conformal frame. In contrast, the meaning of the scalar transformation (\[eq:mapPhi\]) is more subtle, but for $a^2<1$ it may be viewed as a $\mathrm{SL(2,\!I\!R)}$ transformation. As an interesting remark that will be relevant in what follows, one can mention that this mapping can be extended to a starting action given by (\[eq:Sconformal\]) supplemented with an extra piece that is conformally invariant, for instance the Maxwell action in $D=4$ or its nonlinear conformal extension in arbitrary dimension [@Hassaine:2007py]. Indeed, under the mapping (\[eq:map\]) the additional action, being conformally invariant, would remain in principle unchanged. Hence, supplementing the conformal transformation on all the involved fields with a trivial scaling in the additional field, the full action will change by the same global factor to a barred action given by (\[eq:Sbar\]) together with the barred version of the conformal extra piece. The procedure will become clearer in the next section since we shall include the Maxwell action in the applications of the mapping in $D=4$. We would like to point out that for a free conformal scalar field in the absence of cosmological constant, i.e. $\lambda=0$ and $\Lambda=0$, it is easy to check from both (\[eq:potential\]) and (\[eq:barLambda\]) that the proposed transformation constitutes just a scaling of the self-gravitating free conformal action to itself and consequently a symmetry of the involved equations of motion. Starting in this case from the Bekenstein black hole, the wormhole solution of [@Barcelo:1999hq] is obtained; see Ref. [@Astorino:2014mda] for a recent discussion on this point. In the next sections we concentrate on the cases where $\lambda\ne0$ and $\Lambda\ne0$ and generate new classes of solutions connected by the map to well-known conformal seeds. \[sec:D=4\] Generating new solutions in $D=4$ ============================================= We now proceed to exploit the method of generating solutions for self-gravitating (super-)renormalizable scalar sources conformally coupled to gravity, which are governed by equations (\[eq:FE\]), from known conformal solutions of equations (\[eq:NMGSconf1\]). As stressed before, the map (\[eq:map\]) can be extended in order to include an extra source that is also conformally invariant. We consider this option in four dimensions with the Maxwell action $$\label{eq:SMax} S_{\mathrm{M}}[g,A]=-\frac{1}{16\pi}\int{d^4x\,\sqrt{-g}\,F^{\mu\nu}F_{\mu\nu} },$$ which is conformally invariant.
Nevertheless, in order for the Maxwell action to be mapped with the same global factor appearing in (\[eq:mapaction\]), the map (\[eq:map\]) must be extended with a trivial scaling only acting on the vector potential as $$\label{eq:barA} \bar{A}_{\mu}=\sqrt{1-a^2}A_{\mu}.$$ In this case, the Maxwell action (\[eq:SMax\]) together with the self-gravitating conformal action (\[eq:Sconformal\]) in four dimensions are mapped, under the transformation (\[eq:map\]) supplemented by the scaling (\[eq:barA\]), to the general action $$\label{eq:D=4extmap} \bar{S}[\bar{g},\bar{\Phi}]+S_{\mathrm{M}}[\bar{g},\bar{A}] =(1-a^{2})\left(S[g,\Phi]+S_{\mathrm{M}}[g,A]\right).$$ In four dimensions, there exists an asymptotically (A)dS electrically charged black hole solution with nontrivial scalar field obeying the field equations obtained from the variation of the action $S[g,\Phi]+S_{\mathrm{M}}[g,A]$. This solution corresponds to the charged version of the MTZ black hole [@Martinez:2002ru] found in [@Martinez:2005di], and given by \[eq:MST\] $$\begin{aligned} \bm{ds}^{2}={}& -\left[-\frac{\Lambda r^{2}}{3}+k\left(1-\frac{M}{r}\right)^{2}\right]\bm{dt}^{2} \nonumber\\ &+\left[-\frac{\Lambda r^{2}}{3}+k\left(1-\frac{M}{r}\right)^{2}\right]^{-1}\bm{dr}^{2} +r^{2}\bm{d\Omega}^{2}_{k},\label{eq:gMST}\\ \Phi(r)={}&\sqrt{-\frac{\Lambda}{6\lambda}}\,\frac{M}{r-M},\label{eq:phiMST}\\ \bm{A}(r)={}&-\frac{q}{r}\bm{dt},\label{eq:AMST}\end{aligned}$$ where the two-dimensional base manifold is of constant curvature denoted by $k$ and the constant $M$ related to the mass together with the electric charge $q$ are tied via the coupling constants as follows $$\label{eq:finetuneMST} q^{2}=\frac{2\pi}{9}\frac{kM^{2}\left(\kappa\Lambda+36\lambda \right)}{\kappa\lambda}.$$ In the neutral limit $q=0$ it becomes the original MTZ black hole [@Martinez:2002ru] with its known fine tuning between the coupling constants, $\Lambda/\lambda=-36/\kappa$. For $\Lambda=0=\lambda$ and $k=1$ the MTZ black hole reduces to the Bekenstein one [@Bekenstein:1974sf]. 
The solution (\[eq:MST\]) will be our conformal seed configuration in order to generate solutions of the field equations obtained from the variation of the action at the left hand side of (\[eq:D=4extmap\]), defined by $$\begin{aligned} \bar{G}_{\mu\nu}+\bar{\Lambda}\bar{{g}}_{\mu\nu}={}&{\kappa}\left[\bar{T}_{\mu\nu}+ \frac{1}{4\pi}\left(\bar{F}_{\mu\sigma}\bar{F}_{\nu}^{~\sigma} -\frac{1}{4}\bar{g}_{\mu\nu}\bar{F}_{\alpha\beta}\bar{F}^{\alpha\beta}\right)\right], \nonumber\\ \bar{\Box}\bar{\Phi}-\frac{1}{6}\bar{R}\bar{\Phi} ={}& \frac{dV(\bar{\Phi})}{d\bar{\Phi}}, \label{eqmotionM2}\\ \bar{\nabla}_{\mu}\bar{F}^{\mu\nu}={}&0,\nonumber\end{aligned}$$ where the potential (\[eq:potential\]) is given in four dimensions by the (super-)renormalizable one \[eq:potentialD4\] $$V(\bar{\Phi})=\lambda_1\bar{\Phi}+\lambda_2\bar{\Phi}^2+\lambda_3\bar{\Phi}^3 +\lambda_4\bar{\Phi}^4,$$ with couplings constants determined by $$\begin{aligned} \lambda_1&=-\frac{2\sqrt{6}}{3}\frac{a(\kappa\Lambda+36a^{2}\lambda)} {\kappa^{3/2}(1-a^2)^{3}},\\ \lambda_2&= \frac{a^{2}(\kappa\Lambda+36\lambda)}{\kappa(1-a^{2})^{3}}, \label{eq:lambda2}\\ \lambda_3&=-\frac{\sqrt{6}}{9}\frac{a(a^{2}\kappa\Lambda+36\lambda)} {\kappa^{1/2}(1-a^{2})^{3}},\\ \lambda_4&= \frac{1}{36}\frac{a^{4}\kappa\Lambda+36\lambda}{(1-a^{2})^{3}}, \label{eq:lambda4}\end{aligned}$$ and the cosmological constant takes the following expression $$\label{eq:LambdaD4} \bar{\Lambda}=\frac{\kappa\Lambda+36a^{4}\lambda}{\kappa(1-a^{2})^{3}}.$$ The above parameterizations can be understood as follows: the transformations (\[eq:LambdaD4\]) and (\[eq:lambda4\]) are just a one-parameter invertible linear map between the initial cosmological and renormalizable coupling constants $(\Lambda,\lambda)$ and the final ones $(\bar{\Lambda},\lambda_4)$. Rewriting the rest of the parameterizations in terms of $(\bar{\Lambda},\lambda_4)$, via this invertible linear map, one can consider the final cosmological and renormalizable coupling constants as arbitrary and the full transformations just describe a one-parameter subspace of the three-dimensional parameter space $(\lambda_1,\lambda_2,\lambda_3)$ characterizing the strictly super-renormalizable contributions. This is the precise subspace of the general problem with (super-)renormalizable self-interactions which is accessible via the mapping (\[eq:map\]) from the conformal sector. 
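As an elementary cross-check of the above parameterizations, the following minimal sketch (our own illustration; the symbolic setup and variable names are ours and are not part of the original derivation) verifies with computer algebra that in four dimensions the quartic coefficient of the potential (\[eq:potential\]) reproduces (\[eq:lambda4\]), and that in the limit $a\to0$ the potential and cosmological constant collapse back to the conformal seed values, consistently with the map becoming the identity.

```python
# Sketch with SymPy (assumed available); checks the D=4 specialization of Eq. (potential).
import sympy as sp

a, kappa = sp.symbols('a kappa', positive=True)
Lam, lam, Phi = sp.symbols('Lambda lambda Phi')
xi4 = sp.Rational(1, 6)                      # conformal coupling in D=4
s = sp.sqrt(kappa*xi4)

# Potential and cosmological constant of the mapped theory, with conformal power 4
V = ((Lam/kappa)*((1 - a*s*Phi)**4 - 1)
     + lam*((Phi - a/s)**4 - (-a/s)**4)) / (1 - a**2)**3
barLam = kappa/(1 - a**2)**3 * (Lam/kappa + lam*(-a/s)**4)

# the quartic coupling should match Eq. (lambda4)
lam4 = sp.expand(V).coeff(Phi, 4)
print(sp.simplify(lam4 - (a**4*kappa*Lam + 36*lam)/(36*(1 - a**2)**3)))  # -> 0

# a -> 0: the map reduces to the identity, so V -> lambda*Phi^4 and barLambda -> Lambda
print(sp.simplify(V.subs(a, 0)))       # -> lambda*Phi**4
print(sp.simplify(barLam.subs(a, 0)))  # -> Lambda
```

The analogous check in $D=3$, with $\xi_{3}=1/8$ and conformal power $6$, can be performed in the same way for the couplings listed in Sec. \[sec:D=2+1\].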
The map acting on (\[eq:MST\]) gives rise to a new charged solution of the field equations (\[eqmotionM2\]) that reads \[eq:mapMST\] $$\begin{aligned} \bm{d\bar{s}}^{2}={}& \left(\frac{r-M\left(1-\sqrt{-\frac{\kappa\Lambda}{36\lambda} }a\right)}{r-M}\right)^{2} \nonumber\\ &\times\Bigg\{ -\left[\frac{-\Lambda r^{2}}{3}+k\left(1-\frac{M}{r}\right)^{2}\right]\bm{dt}^{2} \nonumber\\ &\qquad+\left[\frac{-\Lambda r^{2}}{3}+k\left(1-\frac{M}{r}\right)^{2}\right]^{-1}\bm{dr}^{2} +r^{2}\bm{d\Omega}^{2}_{k}\Bigg\}, \label{eq:transformedgMST}\\ \bar{\Phi}(r)={}&\sqrt{\frac{6}{\kappa}}\, \frac{ar+M\left(\sqrt{-\frac{\kappa\Lambda}{36\lambda}}-a\right)} {r-M\left(1-\sqrt{-\frac{\kappa\Lambda}{36\lambda}}a\right)}, \label{eq:transformedphiMTZ}\\ \bar{\bm{A}}(r)={}&-\frac{\bar{q}}{r}\bm{dt},\end{aligned}$$ where the charge $\bar{q}$ is again tied to the integration constant $M$ via the original coupling constants as $$\label{eq:barq} \bar{q}^{2}=(1-a^{2})\frac{2{\pi}}{9}\frac{kM^{2} \left(\kappa\Lambda+36\lambda\right)}{\kappa\lambda}.$$ We can easily check that in the neutral limit $\bar{q}=0$, the relation (\[eq:barq\]) leads to the same fine tuning between the original coupling constants characterizing the MTZ black hole, $$\label{eq:finetuneMTZ} \lambda=-\frac{1}{36}\kappa\Lambda,$$ and the solution becomes the Anabalon-Cisterna solution [@Anabalon:2012tu], which is supported by a potential involving all the (super-)renormalizable terms except the massive one. This can easily be explained through our mapping since for this specific fine tuning (\[eq:finetuneMTZ\]), the coupling constant defining the mass (\[eq:lambda2\]) vanishes identically. In fact, the AC solution [@Anabalon:2012tu] is exactly the result of applying the proposed map to the MTZ black hole [@Martinez:2002ru], which is why it must respect the MTZ fine tuning (\[eq:finetuneMTZ\]). This situation changes when the electric charge is introduced since the involved fine tuning is now understood as a relation between the integration constants and not as one between the coupling constants, which in turn allows the mass term to remain turned on. The AC solution represents black holes and wormholes, and this is also the case in the above charged generalization (\[eq:mapMST\]). In the next section, we will pursue this strategy in the three-dimensional case where there exist more conformal seed configurations. \[sec:D=2+1\]Generating new solutions in $D=2+1$ ================================================ In three dimensions, we have a priori more configurations available as interesting solutions of the Einstein equations supported by a conformal scalar field (\[eq:NMGSconf1\]). On the one hand there is the well-known Martínez-Zanelli conformal black hole [@Martinez:1996gn], its self-interacting generalization [@Henneaux:2002wm] and a time-dependent solution describing the exact gravitational collapse to the latter black hole [@Xu:2014xqa]. On the other hand, there also exists a different kind of time-dependent solution, the so-called stealth configurations, characterized by the peculiarity that both sides (the gravity and the matter source parts) of the Einstein equations vanish independently. Indeed, in three dimensions, the existence of a time-dependent nontrivial self-interacting scalar field, nonminimally coupled to gravity and having an energy-momentum tensor that vanishes on the BTZ black hole [@Banados:1992wn], has been shown in [@AyonBeato:2004ig]. Finally, there also exist conformal solutions supporting AdS-waves [@AyonBeato:2005qq].
In this section we will provide more new examples of self-gravitating scalar field solutions with (super-)renormalizable self-interaction starting from almost all the previously mentioned conformal seeds via the mapping (\[eq:map\]). We start by specifying the (super-)renormalizable potential (\[eq:potential\]) resulting from the map at $D=3$, which will be the common support for all the generated solutions \[eq:potentialD3\] $$V(\bar{\Phi})=\lambda_1\bar{\Phi}+\lambda_2\bar{\Phi}^2+\lambda_3\bar{\Phi}^3 +\lambda_4\bar{\Phi}^4+\lambda_5\bar{\Phi}^5+\lambda_6\bar{\Phi}^6,$$ where the coupling constants are fixed as $$\begin{aligned} \lambda_{1}&=-\frac{3}{\sqrt{2}}\frac{a(\kappa^{2}\Lambda+512a^{4}\lambda)} {\kappa^{5/2}(1-a^2)^5},\\ \lambda_{2}&=\frac{15}{8}\frac{a^{2}(\kappa^{2}\Lambda+512a^{2}\lambda )} {\kappa^{2}(1-a^2)^5},\\ \lambda_{3}&=-\frac{5\sqrt{2}}{8}\frac{ a^{3}(\kappa^{2}\Lambda+512\lambda)} {\kappa^{3/2}(1- a^2)^5},\\ \lambda_{4}&=\frac{15}{64}\frac{a^{2}(a^{2}\kappa^{2}\Lambda+512\lambda)}{\kappa(1-a^2)^5},\\ \lambda_{5}&=-\frac{3\sqrt{2}}{128}\frac{a(a^{4}\kappa^{2}\Lambda+512\lambda)} {\kappa^{1/2}(1-a^2)^5},\\ \lambda_{6}&=\frac{1}{512}\frac{a^{6}\kappa^{2}\Lambda+512\lambda}{(1-a^2)^5}, \label{eq:lambda6}\end{aligned}$$ along with the relationship between the new and the original cosmological constants, $$\label{eq:LambdaD3} \bar{\Lambda}=\frac{\kappa^{2}\Lambda+512a^{6}\lambda}{\kappa^{2}(1-a^{2})^5}.$$ As in the four-dimensional case, the above parameterizations have the following interpretation: the transformations (\[eq:LambdaD3\]) and (\[eq:lambda6\]) are again a one-parameter invertible linear map between the initial cosmological and renormalizable coupling constants $(\Lambda,\lambda)$ and the final ones $(\bar{\Lambda},\lambda_6)$. This allows to rewrite the rest of the parameterizations in terms of $(\bar{\Lambda},\lambda_6)$ and finally consider them as arbitrary. Hence, the full transformations again describe a one-parameter subspace of the five-dimensional parameter space $(\lambda_1,\ldots,\lambda_5)$ defined by the strictly super-renormalizable contributions. This is the only subspace of the general problem with (super-)renormalizable self-interactions than can be probed in three dimensions from the conformal sector using the mapping (\[eq:map\]). Let us first consider the conformal seed configuration which corresponds to the self-interacting version of the MZ conformal black hole [@Martinez:1996gn] found in [@Henneaux:2002wm], which is a solution of the field equations (\[eq:NMGSconf1\]), \[eq:gintMZ\] $$\begin{aligned} \bm{ds^{2}}={}&-\bigg(\frac{r^2}{l^2}-\frac{2\alpha B^3}{r}-3\alpha B^2\bigg)\bm{dt^2} \nonumber\\ &+\bigg(\frac{r^2}{l^2}-\frac{2\alpha B^3}{r}-3\alpha B^2\bigg)^{-1}\bm{dr^2} +r^{2}\bm{d\varphi^2},\label{eq:gMZ}\\ \Phi(r)={}&\sqrt{\frac{8B}{\kappa(r+B)}},\label{eq:phiMZ}\\ \alpha\equiv{}&\frac{\kappa^{2}-512\lambda l^{2}}{\kappa^{2}l^{2}}, \label{eq:alpha}\end{aligned}$$ where the cosmological constant is chosen to be negative $\Lambda=-1/l^{2}$ and $B$ is an integration constant. 
Hence, as in the four-dimensional case, a solution of the field equations (\[eq:FE\]) with the (super-)renormalizable potential given by (\[eq:potentialD3\]) can be constructed from the conformal seed (\[eq:gintMZ\]), and the resulting configuration is given by \[eq:transformedMZ\] $$\begin{aligned} \label{eq:transformedgMZ} \bm{d\bar{s}}^{2}={}&\left(\frac{a\sqrt{B}+\sqrt{r+B}}{\sqrt{r+B}}\right)^4\nonumber\\ &\times\Bigg[ -\bigg(\frac{r^{2}}{l^{2}}-\frac{2\alpha B^{3}}{r}-3\alpha B^{2}\bigg)\bm{dt}^{2}\nonumber\\ &\qquad+\bigg(\frac{r^{2}}{l^{2}}-\frac{2\alpha B^{3}}{r}-3\alpha B^{2}\bigg)^{-1}\bm{dr}^{2} +r^{2}\bm{d\varphi}^{2}\Bigg],\\ \bar{\Phi}(r)={}&\sqrt{\frac{8}{\kappa}}\frac{\sqrt{B}+a\sqrt{r+B}}{a\sqrt{B}+\sqrt{r+B}}. \label{eq:transformedphiMZ}\end{aligned}$$ We remark that, as in the four-dimensional case, one could have started from the charged version of the solution (\[eq:gintMZ\]) given in [@Cardenas:2014kaa], which is supported by a conformally invariant nonlinear electrodynamics [@Hassaine:2007py]. Hence, in this case the map will yield to a nonlinearly charged version of the configuration (\[eq:transformedMZ\]) based on the same conformal electrodynamics, but where the scalar field self-interact with the non-conformal (super-)renormalizable potential (\[eq:potentialD3\]). As a second conformal seed, we now consider the exact gravitational collapse of the self-interacting version of the MZ conformal black hole (\[eq:gintMZ\]) derived in [@Xu:2014xqa]. This is a time-dependent solution which in the limit when time goes to infinity reduces to the configuration (\[eq:gintMZ\]). We stress that it is one of the few examples of an exact gravitational collapse in the literature. Due to time dependence it is more convenient to write the metric solution in the Eddington-Finkelstein coordinates as \[eq:Xu\] $$\begin{aligned} \bm{ds}^{2}={}&-f(u)^{-2/3}\bigg( \frac{r^{2}}{l^{2}}-\frac{2\alpha B^{3}}{r}f(u)^{2} \nonumber\\ &-3\alpha B^{2}f(u)^{2/3} \bigg)\bm{du}^{2}+2f(u)^{-1/3}\bm{dudr} \nonumber\\ &+r^{2}\bm{d\varphi}^{2},\\ \Phi(u,r)={}&\sqrt{ \frac{8B}{\kappa\left( rf(u)^{-4/3} +B\right)}},\\ f(u)\equiv{}&\tanh\left( \frac{3\alpha B u}{8}\right),\end{aligned}$$ where $B$ is an integration constant and $\alpha$ is defined in terms of the coupling constants in (\[eq:alpha\]). Notice that in the limit of $u\rightarrow\infty $ we have $f(u)=1$, i.e. the black hole solution (\[eq:gintMZ\]) is just the final state of the evolutive solution (\[eq:Xu\]). As before, starting from this conformal seed solution, one can generate a new time-dependent solution of the field equations (\[eq:FE\]) with a (super-)renormalizable potential fixed by (\[eq:potentialD3\]) \[eq:transformedgCollapse\] $$\begin{aligned} \bm{d\bar{s}}^{2}={}&\left(a\sqrt{\frac{B}{rf(u)^{-4/3}+B }}+1\right)^{4} \nonumber\\ &\times\Bigg[-f(u)^{-2/3}\bigg( \frac{r^{2}}{l^{2}}-\frac{2\alpha B^{3}}{r}f(u)^{2} \nonumber\\ &\qquad-3\alpha B^{2}f(u)^{2/3}\bigg)\bm{du}^{2}+2f(u)^{-1/3}\bm{dudr} \nonumber\\ &\qquad+r^{2}\bm{d\varphi}^{2}\Bigg],\\ \label{eq:transformedphiCollapse} \bar{\Phi}(u,r)={}&\sqrt{\frac{8}{\kappa}}\,\frac{\sqrt{B}+a\sqrt{rf(u)^{-4/3}+B}} {a\sqrt{B}+\sqrt{rf(u)^{-4/3}+B}}.\end{aligned}$$ Consequently, the final state of this evolution $u\to\infty$ is just the previously generated stationary solution (\[eq:transformedMZ\]). As the last conformal seed, we consider again a time-dependent configuration but of a different sort. 
It represents a conformal stealth [@AyonBeato:2004ig] overflying the BTZ black hole [@Banados:1992wn]. The stealths are particular nontrivial solutions of Einstein equations where both of its sides vanish independently, that is $$G_{\mu\nu}-\frac{1}{l^2}g_{\mu\nu}=0=\kappa T_{\mu\nu}. \label{stealtheq}$$ The left hand side produces the BTZ black hole [@Banados:1992wn] assuming only rotational symmetry [@AyonBeato:2004if], while the right hand side has nontrivial solutions only when this black hole is static [@AyonBeato:2004ig]. The resulting conformal stealth is given in the Eddington-Finkelstein coordinates by $$\begin{aligned} \label{eq:BTZ} \bm{ds}^{2}={}&-\left(\frac{r^{2}}{l^{2}}-M\right)\bm{du}^{2} -2\bm{du dr}+r^{2}\bm{d\varphi}^{2},\\ \label{eq:phiStealth} \Phi(u,r)={}&\sqrt{\frac{8}{\kappa}}\frac{1}{\sqrt{\sigma(u,r)}},\\ \sigma(u,r)\equiv{}& \sqrt{\frac{8l^2\lambda-h^2}{Ml^{2}}} \Bigg[ r\cosh\left(\frac{\sqrt{M}u}{l}\right)\nonumber\\ &+\sqrt{M}l\sinh\left( \frac{\sqrt{M}u}{l}\right)\Bigg]+h.\end{aligned}$$ It is interesting to note that the mapped solution given by $$\begin{aligned} \label{transformedBTZ} \bm{d\bar{s}}^{2}={}&\left(\frac{a}{\sqrt{\sigma(u,r)}}+1\right)^4 \bigg[-\left(\frac{r^{2}}{l^{2}}-M\right)\bm{du}^{2}\\ &\qquad\qquad\qquad\qquad\quad-2\bm{dudr}+r^{2}\bm{d\varphi}^{2} \bigg], \nonumber\\ \label{eq:transformedphiStealth} \bar{\Phi}(u,r)={}&\sqrt{\frac{8}{\kappa}}\, \frac{1+a\sqrt{\sigma(u,r)}}{a+\sqrt{\sigma(u,r)}},\end{aligned}$$ is as usual a new solution of the field equations (\[eq:FE\]) exhibiting also time dependence, but now it is not longer a stealth configuration (\[stealtheq\]) for the bar fields. \[sec:conclu\]Conclusions ========================= Here, we provide new examples of self-gravitating solutions in presence of a cosmological constant $\bar{\Lambda}$ for scalar fields conformally coupled to gravity, allowed to self-interact with themselves via a potential where all the super-renormalizable contributions, defined by the positive mass-dimension coupling constants $(\lambda_1,\ldots,\lambda_{(D+2)/(D-2)})$, and the renormalizable one described by the dimensionless coupling constant $\lambda_{2D/(D-2)}$ are turned on. This was possible due to the introduction of a new one-parameter mapping connecting any self-gravitating conformal scalar solution with the previous configurations. The map consists in a conformal transformation for the metric and a $\mathrm{SL(2,\!I\!R)}$ transformation for the scalar field. All the studied cases allow the following interpretation of the mapping: the sector of the general problem with (super-)renormalizable self-interactions that can be probed with a conformal counterpart, is the one where the cosmological and renormalizable coupling constants, $\bar{\Lambda}$ and $\lambda_{2D/(D-2)}$, are arbitrary but from the $(D+2)/(D-2)$-dimensional parameter space $(\lambda_1,\ldots,\lambda_{(D+2)/(D-2)})$ of the super-renormalizable contributions only a one-parameter subspace described by the mapping is accessible. We systematically use well-known conformal solutions in the literature as seed configurations in this map to generate the new (super-)renormalizably dressed solutions. Concretely, we exhibit a charged version of the AC solution at $D=4$ [@Anabalon:2012tu], starting from the charged version [@Martinez:2005di] of the MTZ black hole [@Martinez:2002ru]. We explain why the AC configurations have no mass term in the potential and how this mass term appears now due to the presence of the electric charge. 
Many other similar examples are generated in $D=3$. The first of them is generated from the generalization of the MZ black hole [@Martinez:1996gn] including a conformal self-interaction [@Henneaux:2002wm]. A second one is generated from the exact gravitational collapse of the previous conformal black hole [@Xu:2014xqa], giving this time a (super-)renormalizably dressed time-dependent configuration. As a last example, the conformal stealth overflying the static BTZ black hole is used as seed [@AyonBeato:2004ig]. The resulting configuration is again time dependent but, since the map mixes the gravity and matter contributions, the resulting solution is no longer a stealth configuration. All the generated (super-)renormalizably dressed configurations have the same asymptotics as their conformal seeds since the involved conformal factors take unit value at infinity, i.e. they are all asymptotically (A)dS spacetimes. On the contrary, the (super-)renormalizably self-interacting scalar fields no longer vanish at infinity, unlike their conformal counterparts. In fact, the parameter of the introduced mapping defines the constant value of the final fields at infinity. However, it cannot be interpreted as a new integration constant since, as previously discussed, this parameter appears in the action, probing the strictly super-renormalizable sector. Finally, in dimension $D=6$ our generating technique can also provide (super-)renormalizably dressed solutions, this time for potentials built from cubic polynomials. However, the only conformal seeds known to us in this dimension are those consisting of stealth configurations and AdS-waves [@AyonBeato:2006jf]. The relevant stealth examples live on flat spacetime [@AyonBeato:2005tu] and (A)dS$_6$ spacetime [@Ayon-Beato:SAdS]. We leave the exploration of the resulting six-dimensional configurations for future work. We thank the organizers of the fourth GravUach conference at Valdivia as well as the participants for useful discussions, especially A. Anabalon and A. Cisterna. EAB is partially supported by grants 175993 and 178346 from CONACyT, grants 1121031, 1130423 and 1141073 from FONDECYT and “Programa Atracción de Capital Humano Avanzado del Extranjero, MEC” from CONICYT. MH is partially supported by grant 1130423 from FONDECYT. JAMZ is supported by grant 243377 and “Programa de Becas Mixtas” both from CONACyT. [10]{} J. D. Bekenstein, Annals Phys. [**82**]{}, 535 (1974). N. Bocharova, K. Bronnikov and V. Melnikov, Vestn. Mosk. Univ. Fiz. Astron. [**6**]{}, 706 (1970). E. Ayon-Beato, Class. Quant. Grav.  [**19**]{}, 5465 (2002) \[gr-qc/0212050\]. B. C. Xanthopoulos and T. Zannias, J. Math. Phys.  [**32**]{}, 1875 (1991). D. Sudarsky and T. Zannias, Phys. Rev. D [**58**]{}, 087502 (1998) \[gr-qc/9712083\]. C. Martinez, R. Troncoso and J. Zanelli, Phys. Rev. D [**67**]{}, 024008 (2003) \[hep-th/0205319\]. C. Martinez, J. P. Staforelli and R. Troncoso, Phys. Rev. D [**74**]{}, 044028 (2006) \[hep-th/0512022\]. M. M. Caldarelli, C. Charmousis and M. Hassaine, JHEP [**1310**]{}, 015 (2013) \[arXiv:1307.5063 \[hep-th\]\]. C. Martinez and J. Zanelli, Phys. Rev. D [**54**]{}, 3830 (1996) \[gr-qc/9604021\]. M. Henneaux, C. Martinez, R. Troncoso and J. Zanelli, Phys. Rev. D [**65**]{}, 104007 (2002) \[hep-th/0201170\]. M. Cardenas, O. Fuentealba and C. Martínez, Phys. Rev. D [**90**]{}, no. 12, 124072 (2014) \[arXiv:1408.1401 \[hep-th\]\]. W. Xu, Phys. Lett. B [**738**]{}, 472 (2014) \[arXiv:1409.3368 \[hep-th\]\]. B. C. Xanthopoulos and T. E. Dialynas, J. Math. Phys. 
[**33**]{}, 1463 (1992). C. Klimčík, J. Math. Phys., 1914 (1993). E. Ayon-Beato, A. Garcia, A. Macias and J. M. Perez-Sanchez, Phys. Lett. B [**495**]{}, 164 (2000) \[gr-qc/0101079\]. C. Martínez, in *Quantum Mechanics of Fundamental Systems: The Quest for Beauty and Simplicity*, eds. J. Zanelli & M. Henneaux (Springer New York, 2009). G. Giribet, M. Leoni, J. Oliva and S. Ray, Phys. Rev. D [**89**]{}, no. 8, 085040 (2014) \[arXiv:1401.4987 \[hep-th\]\]. M. Bravo Gaete and M. Hassaine, Phys. Rev. D [**88**]{}, 104011 (2013) \[arXiv:1308.3076 \[hep-th\]\]; JHEP [**1311**]{}, 177 (2013) \[arXiv:1309.3338 \[hep-th\]\]; F. Correa and M. Hassaine, JHEP [**1402**]{}, 014 (2014) \[arXiv:1312.4516 \[hep-th\]\]. M. E. Peskin and D. V. Schroeder, [*An Introduction to quantum field theory*]{} (Addison-Wesley, USA 1995). F. Englert and R. Brout, Phys. Rev. Lett.  [**13**]{}, 321 (1964). P. W. Higgs, Phys. Rev. Lett.  [**13**]{}, 508 (1964). G. Aad [*et al.*]{} \[ATLAS Collaboration\], Phys. Lett. B [**716**]{}, 1 (2012) \[arXiv:1207.7214 \[hep-ex\]\]. S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], Phys. Lett. B [**716**]{}, 30 (2012) \[arXiv:1207.7235 \[hep-ex\]\]. A. Anabalon and A. Cisterna, Phys. Rev. D [**85**]{}, 084035 (2012) \[arXiv:1201.2008 \[hep-th\]\]. M. Hassaine and C. Martinez, Phys. Rev. D [**75**]{}, 027502 (2007) \[hep-th/0701058\]. C. Barcelo and M. Visser, Phys. Lett. B [**466**]{}, 127 (1999) \[gr-qc/9908029\]. M. Astorino, Phys. Rev. D [**91**]{}, 064066 (2015) \[arXiv:1412.3539 \[gr-qc\]\]. E. Ayon-Beato, C. Martinez and J. Zanelli, Gen. Rel. Grav.  [**38**]{}, 145 (2006) \[hep-th/0403228\]. M. Banados, C. Teitelboim and J. Zanelli, Phys. Rev. Lett.  [**69**]{}, 1849 (1992) \[hep-th/9204099\]. E. Ayon-Beato and M. Hassaine, Phys. Rev. D [**73**]{}, 104001 (2006) \[hep-th/0512074\]. E. Ayon-Beato, C. Martinez and J. Zanelli, Phys. Rev. D [**70**]{}, 044027 (2004) \[hep-th/0403227\]. E. Ayon-Beato and M. Hassaine, Phys. Rev. D [**75**]{}, 064025 (2007) \[hep-th/0612068\]. E. Ayon-Beato, C. Martinez, R. Troncoso and J. Zanelli, Phys. Rev. D [**71**]{}, 104037 (2005). E. Ayón–Beato, C. Martínez, R. Troncoso, and J. Zanelli, “Stealths overflying (A)dS,” in preparation.
--- abstract: 'We present two-dimensional MHD numerical simulations for the interaction of high-velocity clouds with both magnetic and non-magnetic Galactic thick gaseous disks. For the magnetic models, the initial magnetic field is oriented parallel to the disk, and we consider two different field topologies (with and without tension effects): parallel and perpendicular to the plane of motion of the clouds. The impinging clouds move in oblique trajectories and fall toward the central disk with different initial velocities. The $B$-field lines are distorted and compressed during the collision, increasing the field pressure and tension. This prevents the cloud material from penetrating into the disk, and can even transform a high-velocity inflow into an outflow, moving away from the disk. The perturbation creates a complex, turbulent pattern of MHD waves that are able to traverse the disk of the Galaxy, and induce oscillations on both sides of the plane. Thus, the magnetic field efficiently transmits the perturbation over a large volume, but also acts like a shield that inhibits the mass exchange between the halo and the disk. For the non-magnetized cases, we also uncover some novel features: the evolution of the shocked layer generates a tail that oscillates, creating vorticity and turbulent flows along its trajectory.' author: - 'Alfredo Santillán, José Franco, Marco Martos' - Jongsoo Kim title: THE COLLISIONS OF HVCs WITH A MAGNETIZED GASEOUS GALACTIC DISK --- INTRODUCTION ============ High velocity clouds (HVC) are atomic HI cloud complexes located at high galactic latitudes and moving with large velocities ($\mid V_{LSR} \mid \geq 90$ km/s) that do not match a simple model of circular rotation for our Galaxy (see Mirabel 1981a, Bajaja [[*et al*]{}. ]{}1985, and Wakker & van Woerden 1997). The present data indicate an excess of negative-velocity (infall) HVCs over positive-velocity HVCs, but the interpretation of their origin and evolution is unclear because their distances and tangential motions are unknown. Limits to the location of some particular clouds indicate $z$-heights of a few kiloparsecs (a recent detection of highly ionized HVCs indicates even larger distances, but the relationship between ionized and neutral HVCs is unclear; Sembach [[*et al*]{}. ]{}1998), setting a possible mass range of 10$^{5}$-10$^{6}$ [M$_{\odot}$]{} for some of these complexes. Thus, a HVC complex moving with a speed of 100 km s$^{-1}$ has a kinetic energy of about 10$^{52-53}$ erg. This range of values (equivalent to that from an OB association in the disk) indicates that the bulk motion of the HVC system, and its interaction with the galactic disk, could represent a rich source of energy and momentum for the interstellar medium (ISM). Observational signatures for the interactions of HVCs with galactic disks have been claimed in our Galaxy and in some external galaxies. The best known examples are in complexes AC and H, both located near the anticenter region (Mirabel 1981b; Mirabel & Morras 1990; Tamanaha 1997; Morras [[*et al*]{}. ]{}1998), in the direction of the Draco Nebula (Kalberla [[*et al*]{}. ]{}1984; Hirth [[*et al*]{}. ]{}1985; Herbstmeier [[*et al*]{}. ]{}1996), in M101 (van der Hulst & Sancisi 1988), and in NGC 4631 (Rand & Stone 1996). These observations support the idea that HVC-galaxy collisions can have a significant influence on the structure and energetics of the gaseous disk.
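For orientation, the kinetic energy budget quoted above follows from a one-line estimate (our own illustration of the numbers already given in the text), $$E_{\rm kin}=\frac{1}{2}\,M_{\rm HVC}\,V^{2}\simeq \frac{1}{2}\,(10^{5}\ {\rm M}_{\odot})\,(100\ {\rm km\ s}^{-1})^{2}\simeq 10^{52}\ {\rm erg},$$ and ten times larger for the more massive, $10^{6}\ {\rm M}_{\odot}$, complexes. This is indeed comparable to the total input of the tens of supernovae expected from an OB association, each releasing $\sim 10^{51}$ erg.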
Previous 2-D and 3-D numerical simulations for collisions with $non-magnetic$ (or weakly magnetic) and $thin$ disks ([[*e. g*]{}. ]{}Tenorio-Tagle [[*et al*]{}. ]{}1986,1987; Franco [[*et al*]{}. ]{}1988; Comerón & Torra 1992; Lepine & Duvert 1994; Rand & Stone 1996), indicate that the resulting ISM structures have sizes of several hundreds of parsecs, similar to those ascribed to multi-supernova remnants, or superbubbles, from OB associations. The colliding HVCs can sweep a great amount of disk mass in these non-magnetic models. The resulting shocked layer collects both the disk mass and the mass from the HVC, inducing the formation of massive gas structures far from the midplane (Franco 1986; Alfaro [[*et al*]{}. ]{}1991; Cabrera-Caño [[*et al*]{}. ]{}1996). Moreover, the fastest moving clouds can even drill holes through the whole gaseous disk, venting gas into the other side of the disk. Thus, for these thin disk cases, an efficient mass exchange can result from the interaction of the HVC system with the disk. The HVC-disk interactions, however, can have a radically different outcome in a disk that is thicker and more magnetized than assumed by these previous works (see Cox 1990 and Franco [[*et al*]{}. ]{}1995). The transition between the main gaseous disk and the halo is very broad and complex, with an intricate magnetic field structure and a number of extended gas components, including dusty high-$z$ layers and the so-called ionized Reynolds layer (Hoyle & Ellis 1963; Reynolds 1989). Using the available data at the Solar neighborhood, Boulares & Cox (1990; hereafter BC) have incorporated the distended gas components, along with the vertical gradient of the magnetic field, in a model for the thick disk of the Galaxy. They found that a time averaged hydrostatic support of the disk requires scaleheights of about 1 kpc for both the ionized Reynolds layer and the magnetic and cosmic ray pressures. These extended gas layers are not unique to our Galaxy, and recent observations indicate that they are probably common in spirals. Examples of diffuse ionized gas in external galaxies, that seem to be analogous to the Reynolds layer in our Galaxy, are the prominent $H_{\alpha}$ diffuse halos in NGC 891 and NGC 5775, and to a lesser extent in NGC 4302 (Rand 1995). In particular, NGC 891 reveals $H_{\alpha}$ emission extending up to 3 kpc above the midplane (Rand, Kulkarni & Hester 1990), along with other extraplanar structures such as worms (Dettmar 1990) and dusty filaments (Howk & Savage 1997). Also, the magnetic fields in edge-on galaxies can be traced for several kpcs above the disk (Hummel & Beck 1995), and dusty filaments extending for more than 2 kpc above the plane of the host galaxy have been observed in NGC 253 and NGC 7331 (Sofue 1987). Some of these features are expected from the action of radiation pressure on dusty HI clouds (Franco [[*et al*]{}. ]{}1991), and from the spiral density wave (Martos & Cox 1998). Regardless of some poorly known ISM parameters (such as the distribution and filling factor of the hot gas), the existence of extended gas layers (neutral or ionized) might have far reaching consequences for the structure and overall stability of the gaseous disk. Any given parcel of gas located above the scaleheight of the thin disk weighs more than the same material placed within this disk. 
The inclusion of an extended gas layer, then, results in a disk model with a total interstellar pressure that is higher than the previously assumed thin disk values (this is in line with recent UV studies that indicate that the thermal pressure near the Sun can be a factor of 8 higher than previous estimates; Berghöfer [[*et al*]{}. ]{}1998). The non-thermal pressure gradients at high latitudes, along with the tension of magnetic lines, must play a crucial role in the overall support of these extended disk structures (BC; Zweibel 1995). The presence of a thick magnetic disk, then, should drastically alter the results obtained with purely hydrodynamic models, or models in which the gradients are concentrated in a thin disk with a scaleheight of about 150 pc. In this paper, we present two dimensional simulations of HVCs colliding with a thick disk, including both the purely hydrodynamic and the ideal magnetohydrodynamic regimes. Magnetized models are built from an isothermal gas distribution, in which magnetic support at high $z$-locations is crucial for the initial equilibrium state. The models have the field lines parallel to the galactic disk, but we consider two different line orientations: field lines lying in the plane of motion of the HVC, and field lines that are normal to this plane. The results with these orientations, along with those of the purely hydrodynamic cases, allow us to isolate the effects of field tension: for lines lying in the plane of motion, magnetic tension reverses the motion of HVC material and creates an outflow at late times. For the case when the field lines are perpendicular to the plane of motion, which involves no tension effects, the magnetic pressure prevents the cloud material from reaching deep into the disk. Thus, in either case, the magnetic field does not allow any mass exchange with the halo. In contrast, the non-magnetic cases (which demand a hot halo for the initial equilibrium) evolve without resistance and allow mass mixing. The results for these non-magnetic models confirm previous results, but some new features are uncovered here. Also, the resulting gas structures are now modified by the thicker and more pressurized nature of our models. The evolution of impinging HVCs, with a range of approaching angles and velocities, is followed with the MHD code ZEUS 3-D. The thick disk models are described in the next section. Section 3 deals with the numerical treatment of the problem. Section 4 presents our results, which are summarized and discussed in Section 5. Magnetized Disk Models ====================== The magnetic disk model is plane-parallel. Two forms of pressure, thermal and magnetic, provide the support of the initial magnetohydrostatic equilibrium state against the gravitational field provided by the disk stars. Our model does not include cosmic-ray pressure. The density and gravitational acceleration functions are given by: $$\begin{aligned} {\rho}(z) & = & \rho_0 [0.6e^{-\frac{z^2}{2(70pc)^2}}+ 0.3e^{-\frac{z^2}{2(135 pc)^2}} + 0.07 e^{-\frac{z^2}{2(135 pc)^2}} \\ \nonumber & & + 0.1e^{-\frac{|z|}{400 pc}}+0.03e^{-\frac{|z|}{900 pc}} ] \ \ \ {\rm cm}^{-3}\end{aligned}$$ and $$g(z) = 8\times 10^{-9}(1-.52e^{-\frac{|z|}{325 pc}}-.48e^{-\frac{|z|}{900 pc}} ) \ \ \ {\rm cm\, s}^{-2},$$ where the midplane gas density is $\rho_0 =2.24\times 10^{-24}$ g cm$^{-3}$. This density distribution adequately describes the observed gas $z$-structure in the solar vicinity, as discussed by BC.
The functional form for the gravitational acceleration $g$ is taken from Martos (1993), and provides a good fit to the data of Bienaymé, Robin & Crézé (1987). The total pressure is given by $p(z) = -\int_z^{z_{ext}} \rho g \, dz$, with the boundary condition $p(z_{ext} = 5\ {\rm kpc})=0$, and is numerically solved and set equal to the sum of the thermal, $p_t=n(z)kT_{eff}$, and magnetic, $p_b = B^2(z)/8\pi $, terms. The midplane values are taken from BC: total pressure $p(0) =2.7\times 10^{-12}$ dyn cm$^{-2}$ (this is 20% higher than the thermal pressure value derived by Berghöfer [[*et al*]{}. ]{}1998), a magnetic field strength of $B(0) = 5\ \mu$G, and an effective disk temperature of $T_{eff}(0)=10900$ K. For simplicity, the magnetic model we adopt, which may be called a “warm” magnetic disk model, is defined by $T_{eff}(z) = T_{eff}(0)$ (independent of $z$). Thus, the implicit sound speed of this warm model is similar to the observed velocity dispersion of the main HI cloud component, $\sim 8$ km s$^{-1}$. The total magnetic field intensity at midplane adopted in this model includes contributions from the orderly component (with a strength of $\sim 2 \ \mu$G) and the dominant random component (see Heiles 1996). This is a moderate field value for a spiral galaxy, $5\ \mu$G; for comparison, an average total field strength of 19 $\mu$G has been derived for the disk of NGC 2276 (Hummel & Beck 1995). We assume magnetic field lines that are parallel to the midplane, as indicated by data near the plane (see Valleé 1997), in our initial magnetohydrostatic states. In the 2-D MHD regime, the adopted warm disk model is Parker unstable (Martos & Cox 1994) and, from a linear stability analysis for the undular mode, we have found that the minimum growth time and the corresponding wavelength are 60 Myr and 3 kpc, respectively (Kim [[*et al*]{}. ]{}1999). The 2-D models indicate that the density enhancements become clearly apparent on timescales of the order of 100 Myr (Franco [[*et al*]{}. ]{}1995; Santillán [[*et al*]{}. ]{}1999). The instability can be triggered by a HVC collision (see Franco [[*et al*]{}. ]{}1995), but the HVC-disk interaction evolves on shorter timescales (the clouds are completely shocked in only a few Myr). Thus, given that the timescales for the two events are very different, here we do not discuss the appearance of the Parker instability, and a more complete description of the instability in a thick disk ([[*i. e*]{}. ]{}the linear analyses and the nonlinear evolution) will be reported elsewhere (Kim [[*et al*]{}. ]{}1999; Santillán [[*et al*]{}. ]{}1999). These magnetic models reflect conditions in which the total pressure decreases more slowly than the density as $z$ increases. At high $z$, the models mimic the expected dominance of non-thermal forms of pressure, and the effective signal speed is high. As a consequence, the compressibility of the plasma is effectively controlled by the magnetic term, and the medium is “stiff” and elastic (Martos & Cox 1998). The thermodynamic regime of the runs, isothermal or adiabatic, can only alter that character to a certain extent, but the assumed magnetic field geometry will certainly affect the response to any interaction (further details of the properties of this thick, magnetic model can be found in Martos 1993 and Martos & Cox 1994).
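For concreteness, the following minimal sketch (our own illustration; the actual simulations use ZEUS-3D, and the variable names here are ours) reconstructs the warm magnetic model numerically. It reads the bracket of the density profile above as the number density $n(z)$ in cm$^{-3}$, adopts a mean mass per particle of $2.24\times10^{-24}$ g so that the quoted midplane values are recovered, obtains the total pressure from the weight of the gas column above $z$ with $p(5\ {\rm kpc})=0$, and splits it into its thermal and magnetic parts at the quoted $T_{eff}$:

```python
# Minimal sketch of the warm magnetic disk model (illustration only; not the ZEUS-3D setup).
# Assumption: the bracket of the density profile is n(z) in cm^-3 and the mean particle
# mass is 2.24e-24 g, which recovers the quoted midplane pressure and field strength.
import numpy as np

pc, kB = 3.086e18, 1.381e-16          # cm per parsec, Boltzmann constant [erg/K]
mbar, T_eff = 2.24e-24, 10900.0       # mean particle mass [g], disk temperature [K]

def n(zpc):                           # number density [cm^-3], bracket of the rho(z) profile
    z = np.abs(zpc)
    return (0.6*np.exp(-z**2/(2*70.0**2)) + 0.3*np.exp(-z**2/(2*135.0**2))
            + 0.07*np.exp(-z**2/(2*135.0**2)) + 0.1*np.exp(-z/400.0)
            + 0.03*np.exp(-z/900.0))

def g(zpc):                           # gravitational acceleration [cm s^-2]
    z = np.abs(zpc)
    return 8e-9*(1.0 - 0.52*np.exp(-z/325.0) - 0.48*np.exp(-z/900.0))

zpc = np.linspace(0.0, 5000.0, 5001)                  # z grid in pc, up to z_ext = 5 kpc
w = mbar*n(zpc)*g(zpc)                                # weight density rho*g
dz = (zpc[1] - zpc[0])*pc
wt = 0.5*(w[1:] + w[:-1])*dz                          # trapezoidal weights
p_tot = np.concatenate([np.cumsum(wt[::-1])[::-1], [0.0]])   # weight of the column above z
p_th = n(zpc)*kB*T_eff                                # thermal part, n k T_eff
# magnetic part from p_b = B^2/8pi (clipped near z_ext, where the p=0 boundary bites)
B = np.sqrt(8.0*np.pi*np.clip(p_tot - p_th, 0.0, None))

rho0 = mbar*n(0.0)
v_alfven = B[0]/np.sqrt(4.0*np.pi*rho0)               # midplane Alfven speed
c_sound = np.sqrt(p_th[0]/rho0)                       # isothermal sound speed
print(p_tot[0], B[0]*1e6, v_alfven/1e5, c_sound/1e5)
# roughly: p(0) ~ 2.7e-12 dyn cm^-2, B(0) ~ 5 muG, v_A ~ 9 km/s, c_s ~ 8 km/s
```

Under this reading of the model, the script reproduces approximately the midplane values quoted above and signal speeds of order 10 km s$^{-1}$ at $z=0$, rising with height as shown in Figure 1b.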
The distinction between models with field lines in the plane of motion and perpendicular to the plane of motion, then, corresponds to whether the magnetic tension influences the dynamical evolution or not, respectively. Both types of magnetized models are initiated in equilibrium and with the same $z$-distribution for the total pressure. The total pressure distribution is shown in Figure 1a. The corresponding $z$-variations for the Alfvén and maximum magnetosonic (the quadratic sum of the Alfvén and sound velocities) wave speeds are shown in Figure 1b. The magnetosonic speed is the effective signal speed for compressional waves, and has a rapid increase inside the disk but varies slowly, from 50 to 60 km s$^{-1}$, in a wide $z$ interval from 500 to 1500 pc. For completeness, we have considered a third model representing a non-magnetic thick disk. This model maintains the same total pressure as in the previous magnetic models. Hence, hydrostatic equilibrium determines the temperature structure along the $z$-axis, $T(z)= \int_z^{z_{ext}} \rho g \, dz' /[kn(z)]$, with the density and gravity distributions described above. The resulting temperature and sound speed distributions are shown in Figure 1c. Numerical Method ================ The simulations are performed with the MHD code ZEUS-3D (version 4.2), which solves the three dimensional system of ideal MHD equations by finite differences on an Eulerian mesh (for a description of the code, see Stone & Norman 1992a, 1992b). The code can perform simulations in 3D but, due to computational restrictions, here we restrict the discussion to two dimensional simulations. The effects of self-gravity and differential rotation of the Galaxy are not included in the present version. The role of self-gravity is not important at the densities considered here, but the effects due to the shear of the galactic disk may be important during the evolution. When differential rotation is included, there are at least two effects that may prove to be significant for the HVC-disk collisions: first, the shear can cause distortions in the resulting gaseous structures ([[*e. g*]{}. ]{}Olano 1982; Palouš, Franco & Tenorio-Tagle 1990) and, second, it can trigger the appearance of magneto-rotational instabilities ([[*e. g*]{}. ]{}Chandrasekhar 1960; Balbus & Hawley 1991; Foglizzo & Tagger 1994). In particular, the combined effects of the Parker and the magneto-rotational instabilities may lead to interesting results (Foglizzo & Tagger 1994). These instabilities have different dynamical effects on the plasma and magnetic field lines. The Parker instability distorts the magnetic lines, generating a vertical component from an originally horizontal field and redistributing the gas in the disk. The magneto-rotational process stretches the field lines radially and generates internal torques, driving radial gas flows. Depending on the initial state, the instabilities may interfere constructively, with the vertical field lines from the Parker mechanism feeding back onto the magneto-rotational mechanism. In other cases, however, both processes operate in a stabilizing manner (Foglizzo & Tagger 1994). These issues are important and require detailed three dimensional studies with differential rotation that, unfortunately, are beyond our present capabilities. The 2-D results of the present paper cannot include the galactic shear, and the 2-D computational domain lies in the plane defined by the azimuthal and vertical directions of the Galaxy. 
Our frame of reference is one in which the Galactic gas is at rest, and the origin of our 2D Cartesian grid is the local neighborhood. The coordinates ($x$, $z$) represent distances along and perpendicular to the midplane, respectively. The $x$-axis is anchored at a constant galactocentric radius ([[*i. e*]{}. ]{}is quasi-azimuthal, defined by the tangent of the local field orientation in our Galaxy; see Heiles 1994 and Vallée 1997), and the motion of the HVC is always considered in the ($x$, $z$) plane. Two different setups for the magnetic field are considered: $B$ parallel to the $x$-axis, in the plane of motion of the HVC, and $B$ parallel to the $y$-axis, perpendicular to the plane of motion (and no deformation of the field is introduced by the dynamics of the gas). The $y$-direction is also defined in the midplane, but it would correspond to the galactocentric radial vector direction. For an efficient use of computer resources, we mostly worked with moderate resolutions of 200$\times$200 zones, but verified that the results were not different from those obtained in runs with resolutions of 400$\times$400 zones. We performed runs with a variety of different sizes but, for simplicity, the physical intervals of the simulations presented here are 3 kpc $\times$ 3 kpc (the $z$-axis runs from -1.5 kpc to 1.5 kpc). Thus, with our uniform (linear) zoning, one zone has an extent of 15 pc per dimension at the standard resolution (7.5 pc in the 400$\times$400 runs). The boundary conditions are cyclic (periodic) in $x$, and free outflow in $z$. The evolution was computed in both the isothermal ($\gamma = 1.01$) and adiabatic ($\gamma = 1.67$) regimes, since explicit cooling or heating functions are not included in our numerical scheme. For simplicity, all infalling clouds were given the same dimensions, $210\times 105$ pc (longer in the $x$ direction), and they are threaded by the magnetic field strength corresponding to their initial locations. We performed some runs with other cloud sizes but, except for the sizes of the initial perturbations, the results are similar to the ones described here. Since the evolution is followed on the $x-z$ plane, and we have set the initial density of the clouds to $n$=1 cm$^{-3}$, the mass and energy densities of the models were $5.0\times 10^{-25}$ g cm$^{-3}$ and $2.5\times 10^{-11}$ erg cm$^{-3}$ (this would correspond to a cloud mass and kinetic energy of $3.5 \times 10^5$ [M$_{\odot}$]{} and $3.5\times 10^{52}\ v^2_{100}$ erg, respectively, where $v_{100}$ is the cloud velocity in units of 100 km s$^{-1}$, if we set the third dimension to the quadratic mean of the other two). We positioned the cloud centers at several selected heights, from 350 pc to 4050 pc, and made a series of runs with different incoming velocities and incident angles. The velocity range spanned was from 0 to 200 km s$^{-1}$ ([[*i. e*]{}. ]{} from free-fall to nearly the largest observed approaching velocity), and the angles were varied from 0$^\circ$ to 60$^\circ$ with the vertical ($z$) axis. Regardless of the initial position of the cloud, the entire cloud is shocked in less than 3 Myr. The evolution of the interaction is fast and takes place in a relatively small region (with dimensions of several dozens of cell sizes). Thus, the details of the early shock structure (which depend on the initial cloud conditions) are not resolved in our simulations (the interested reader can find a detailed discussion of high resolution simulations for cloud collisions in Klein & McKee 1994 and Mac Low [[*et al*]{}. 
]{}1994)), and we focus here only on the larger scale outcome of the impact ([[*i. e*]{}. ]{}on structures of the order of a hundred pc or larger). A summary of the runs presented in this paper is given in Table 1. Results ======= Non–magnetic thick disk ----------------------- As stated before, previously published calculations of HVC impacts have been performed with a thin Galactic disk model, and most of them are perpendicular collisions in the purely hydrodynamic regime. We start by comparing them with our results obtained with the non-magnetic disk model described above. This non-magnetic model requires a hotter halo and the thermal sound speed increases rapidly inside the main disk (see Figure 1c). Loosely speaking, the basic structures formed by the collisions are similar to those described in previous modeling. For instance, as found in earlier works, the sizes and shapes of the shocked layers in the disk resemble some of the HI supershells observed by Heiles (1984) in our Galaxy. There are, however, some clear differences with previous results, and they are mainly due to our more extended gas distributions ([[*i. e*]{}. ]{}the structures formed at high $z$-locations are denser, better defined, and last longer than in the thin disk cases). Also, the resulting rear wakes are now completely formed and their morphologies and vorticities are clearly apparent. In particular, here we see one conspicuous structure, the tail, that has been either missed or disregarded in former studies (this is likely due to the fact that most previous models have located the HVC much closer to the midplane). The importance of this feature is better appreciated in (magnetic and non-magnetic) oblique impacts, and may be one of the possible sources of turbulence in the Reynolds’ layer (see Benjamin 1998 and Tufte [[*et al*]{}. ]{}1998). ### HVC with $V_{HVC}$ = 200 km s$^{-1}$, and $\theta = 0^\circ$ Our first example is a purely hydrodynamic simulation of a collision perpendicular to the disk (impact angle $\theta = 0^\circ$). The evolution is shown in the four snapshots displayed in Figure 2. The simulation is performed in the isothermal mode and the HVC center position is located at 1250 pc from midplane, with an initial velocity of 200 km s$^{-1}$. In all the following figures, the density is shown in logarithmic grayscale plots and the velocity field is indicated by arrows sized proportionally to the local speed. The first two snapshots show the initial conditions and the shock evolution at 3.2 Myr, respectively. The impact creates a strong galactic shock directed downwards, and a reverse shock that penetrates into the cloud. The galactic shock tends to move radially away from the location of impact, but momentum conservation keeps it strongest in the direction of motion of the impinging cloud. The lateral components of the shock, then, are milder and become a sonic perturbation in relatively short timescales (see Tenorio-Tagle [[*et al*]{}. ]{}1986 and Franco [[*et al*]{}. ]{}1988). The cloud has been completely shocked at the time of the second snapshot, and the lateral shocks have already disappeared. The cloud mass is locked in the shocked layer and, due to its supersonic motion, a vacuum is formed behind the layer. This rear vacuum begins to be filled up by material falling from higher locations, as well as by gas re-expanding from the shocked layer. This creates a pair of vortices, one at each side of the layer, and a plume, or tail, is formed at the central part of the rear wake. 
This is clearly seen in the third snapshot, at 9.5 Myr. At this time, the shocked layer is already collecting gas from the denser parts of the disk. The shock front decelerates as it penetrates into the disk, and reaches the midplane at about 13 Myr. After crossing the midplane, the shock accelerates in the decreasing density gradient, and blows out into the other side of the halo. The beginning of the blow out process is apparent in the last snapshot, at 19 Myr. The swirling motions of the rear wake, and the large extent (about 1 kpc) and shape of the shocked layer inside the disk are clearly displayed in this frame. A large fraction of the original cloud mass remains locked up in the shocked layer, and a small amount of it has re-expanded back into the rear wake and tail. In turn, the tail has expanded sideways and it has a density minimum at the central part. This minimum is promoted by the acceleration of the shock front after crossing the midplane. The size of the perturbed region has grown close to 2 kpc in this last snapshot. ### HVC with $V_{HVC}$ = 200 km s$^{-1}$, and $\theta = 30^\circ$ Figure 3 shows the hydrodynamical evolution for an oblique collision at $\theta =30^\circ$. Again, the original cloud is located at 1250 pc from midplane, with 200 km s$^{-1}$, and the run is performed in the isothermal mode. Now the cloud momentum has an important lateral component, which is conserved during the evolution because the gravitational force has only a $z$-component. The first two frames show the evolution of the shocked layer at 3.2 and 6.3 Myr, respectively. The initial cloud is again completely shocked in a relatively short timescale, before the first frame, and the vacuum left by the cloud motion is filled up by infalling material and by re-expansion of the shocked layer. The shapes of the interstellar structures are modified by the lateral velocity component, but the main features of the hydrodynamical evolution are similar to the ones described in the previous case. The motion of the shocked layer creates the rear wake (with vorticity and swirling motions), and a tail that extends downstream to locations close to the point of impact. The tail is now denser and more conspicuous than in the previous case, and it has the appearance of an elongated finger or cometary tail. There is a visible shock in the second frame (at 6.3 Myr), when the central tail is forming. The slower downward speeds, and the inclination of the structure, allow the gas of the tail to catch up with the main body of the shocked layer. The velocity vectors within the structure are now larger, and clearly show the re-expansion into the rarefied regions. The shear between this faster flow and its surrounding medium is subject to Kelvin-Helmholtz instabilities ([[*e. g*]{}. ]{}Shore 1992), but we cannot resolve the instability here. The oscillatory motion of the finger-like tail is due to the combined effects of the vorticity of the rear flow and the unresolved instability. The prominence of this tail structure increases with both increasing HVC velocities and larger collision angles. The main shock crosses the midplane at about 13 Myr, and accelerates afterwards. As in the previous case, a density minimum is generated behind the accelerated layer. Given that the acceleration is promoted by the density gradient, it is then directed along the $z$-axis and creates an elongation of the structure in this direction. This is clearly apparent in the third frame, at 22.2 Myr. 
Again, the shock front begins to blow out of the disk at about these times. The tail has also re-expanded at this time, and its vorticity and oscillations have created a chaotic velocity field along the trajectory of the interaction. The late-time evolution displays a wealth of features, illustrated in the last snapshot at 47.7 Myr (to provide a better perspective, the midplane is now located in the middle of the frame). The central parts of the disk are distorted and compressed, with complex structures extending towards the impact zone. The disk scaleheight is altered on both sides of the plane, and the interface between the upper and lower disk layers is marked but wavy, as in a water-air interface. At the incoming side, there are rounded tongues, and some elongated structures breaking out from the disk, accompanied by a pair of vortices above. At the other side, the blow out expansion is clearly apparent and the front is reaching the lower edge of the grid. Magnetic disk with $B$ parallel to the $x$-axis: the role of tension -------------------------------------------------------------------- The rigidity and elasticity given to the disk by the magnetic field are better accentuated in 2D when the plane of motion of the HVC is parallel to the field lines, and the colliding gas distorts the initial field configuration. We illustrate the response of these deformed field lines with three representative cases. In these cases, the tension of the magnetic field dominates the evolution, and the results are completely different from those of the purely hydrodynamic cases. For the figures of these magnetic cases, where the densities and velocities are indicated as before, the $B$-field lines are now displayed with continuous lines. ### HVC with $V_{HVC}$ = 200 km s$^{-1}$, and $\theta = 0^\circ$ The first MHD simulation is illustrated in Figure 4. It corresponds to a collision perpendicular to the disk with cloud parameters identical to those of Figure 2 (cloud located at 1250 pc, with velocity 200 km s$^{-1}$ and impact angle $\theta = 0^\circ$), except that the simulation is now performed in the adiabatic mode. The initial shock is strong and has a magnetic Mach number close to 4, with a compression factor approaching 3 (a very strong shock has a compression factor of 4, as in the non-magnetic case). With these parameters, there are thermal shocks on both sides of the main shock front at the early evolutionary stages. The lateral and downward shocks, however, disappear in relatively short timescales, and the MHD waves begin to move ahead of the shocked layer. The first snapshot shows the expansion at $t$ = 3.2 Myr after impact. A series of MHD waves are already driven in all directions, creating compressions and a round shell-like structure (the “bubble”) ahead of the shocked layer. The lateral disturbances are Alfvén waves moving along the field lines, and the disturbances in the $z$-direction are magnetosonic waves that compress the field lines. The initial shock fronts (in either direction) move faster than any of these waves, and are responsible for the strong deformation of the initial field configuration but, as stated before, the key parameter determining the outcome of this interaction is the downward distortion of the magnetic field. During the first 7 Myr, a substantial fraction of the energy goes into the compression and tension of the distorted field lines (the second frame shows the evolution at 6.3 Myr). 
Also, the disk material that is inside (and above) the distorted sections slides down along the field lines, as on an inclined plane, and settles down at the location of the magnetic valleys. Thus, the distortions disrupt the local hydrostatic equilibrium, and there is a clear infall of material in the perturbed region. This creates a dense “head” of the perturbation moving towards the disk. The fast magnetosonic wave, moving in the upper disk layers at an average speed of 50 km s$^{-1}$, creates a strong perturbation as it enters into the denser parts of the disk (it becomes a very mild magnetic shock, which is apparent in the third frame, with a magnetic Mach number always close to unity) and crosses the midplane at $t \sim 12$ Myr. At about 8 Myr, the energy stored in the magnetic field begins to be released as the field lines rebound, reversing the motion of the gas and lifting the dense head (that has already penetrated a few hundred pc into the lower layers) back to the upper parts of the disk. Thus, as seen in the third and fourth frames (9.5 and 15.9 Myr, respectively), a high-velocity outflow moving away from the disk is created. This is a novel result in which an incoming flow is forced to become an outflow by magnetic tension. At about 13 Myr, and later on, the compressional and Alfvén waves carry most of the available energy. The magnetosonic waves are able to perturb the other side of the halo, and the Alfvén waves continue to drive the expanding structure and create gas infall outside the location of the bubble. Thus, the resulting structure is characterized by rising motions inside the bubble (from the tensile restoring motion) and by lateral infall outside it. Figure 5 shows a run with the same initial parameters as the previous model, but now the evolution is isothermal. In this case, as expected, the shocked gas piles up in a thin dense shocked layer that carries the momentum of the cloud. Thus, now the compression and distortion of the field lines are more pronounced in the zones where the momentum of the dense shocked layer is concentrated, but the main evolutionary features are similar to the ones described in the previous case. The evolutionary times shown in Figure 5 are identical to those shown in Figure 4 (3.2, 6.3, 9.5, and 15.9 Myr). Comparison with Figure 4 illustrates that the deformation of the field lines is now more acute, leading to a sharp V-shaped form (almost a discontinuity) at 6.3 Myr. A thin, dense, vertical structure is formed from the material that slides down along the distorted field lines. The compressional wave also becomes a weak MHD shock as it enters into the disk, and the lateral expansions, driven by Alfvén waves, are similar to the ones described in the previous case (outflows in the central zones, and inflows in the external regions). Except for differences in details and timescales, cases initiated at other $z$ locations and with different velocities behave in ways similar to the ones described in Figures 4 and 5: the field prevents the penetration of the cloud material into the disk and creates a net outflow at late times. For instance, an isothermal case started at $ z = 350$ pc and with a speed of 100 km s$^{-1}$ encounters a rapid tensional rebound at about $t = 6$ Myr. ### HVC with $V_{HVC}$ = 200 km s$^{-1}$, and $\theta = 30^\circ$ The oblique magnetic case is illustrated in Figure 6 by an isothermal collision with $\theta = 30^\circ$ and 200 km s$^{-1}$, as in Figure 3. 
The galactic shock has an important lateral component, producing a strong compression in the $x$-direction, as seen in the first and second frames (3.2 and 6.3 Myr, respectively). Once again, the magnetic field rebounds but the lines tend to recover their original configuration in shorter timescales than in the perpendicular cases. The motion of the shocked layer is again reversed as the lines rebound (third frame at 15.9 Myr), and a series of prominent disk oscillations and MHD waves are apparent during most of the evolution. The horizontal component of the flow is maintained for a longer time (for instance, it has a velocity of 47 km s$^{-1}$ at 20 Myr), and the patterns of the velocity fields and magnetic field distortions are completely different to those of the perpendicular cases. As before, the Alfvén waves detach from the shocked layer, and create a region with infalling gas that surrounds the shocked structure. The magnetosonic waves also traverse the disk (becoming a weak MHD shock), and perturb the other side of the halo. The tail is again formed behind the shocked layer but the magnetic field now constrains the flows and quenches the vorticity. The dense head moves almost parallel to the plane after rebound. This creates a magnetic shear flow but the tension of the lines prevents the appearance of Kelvin-Helmholtz instabilities ([[*e. g*]{}. ]{}Frank [[*et al*]{}. ]{} 1996; Malagoli [[*et al*]{}. ]{}1996; Jones [[*et al*]{}. ]{}1997). The asymmetries in the distorted lines produce two important effects in the tail. First, the downstream (right) field distortion has a larger extent, with a softer slope, than the one created upstream (left). Thus, there is more mass sliding down towards the tail from the downstream side. This provides additional momentum to the tail, and creates a large rarefied region behind it, that is maintained for a long timescale (up to the end of the run). Second, the gas that slides down from the upstream side provides an effective (ram pressure) force that opposes the motion of the tail. This is a Rayleigh-Taylor unstable situation ([[*e. g*]{}. ]{}Shore 1992) but, as in the case of the Kelvin-Helmholtz instabilities, we cannot resolve the instability. The undular shape of the tail is long lived and is due to the unresolved instability. It is still present at the fourth frame, at 41.2 Myr (also note that the field lines are already distorted due to the Parker instability). In summary, a complex network of asymmetrical features is created, but the distorted tail moving almost parallel to the field lines is the most prominent structure of the run. Runs with other approaching angles and velocities generate the same type of features, but with the expected differences. As the incident angle is increased, the asymmetries grow: the lateral component of the velocity increases and the downward penetration is reduced, but the lateral effects become more pronounced. The amplitude of the maximum distortion in the field lines is reduced accordingly, and line rebound occurs at earlier evolutionary times. For angles larger than $\theta \sim 60^\circ$, the flow becomes almost parallel to the $x$-direction. Nonetheless, the lines oscillate due to the collision and the oscillations transmit MHD waves in all directions, creating perturbations that are weaker but similar to those described before. 
Cases with $B$ perpendicular to the $x-z$ plane: the absence of tension ----------------------------------------------------------------------- In these cases, the magnetic field lines are oriented along the $y$-axis (pointing outside of the figures). The lines preserve their straight alignment in the $y$ direction as their footpoints are dragged along, in the $x-z$ plane, by the flow motions. Thus, the lines are not distorted and there are no Alfvén waves in these cases. Also, there are no tension effects, but the effects of magnetic pressure and field compression are certainly present. Thus, the cloud gas can travel longer paths and penetrate deeper into the disk. Also, the total pressure provides a very effective drag that slows down the flow faster than in the purely hydrodynamical case, and distorts the morphology of the shocked layer. The evolution of these runs is intermediate between the previous non-magnetic and magnetic-with-tension cases. ### HVC with $V_{HVC}$ = 200 km s$^{-1}$, and $\theta = 0^\circ$ Figure 7 shows a model with the same cloud parameters as those of Figures 2 and 5 (isothermal evolution with 200 km s$^{-1}$ directed along the $z$-axis, and the HVC is located at 1250 pc). The early-time evolution resembles that of the purely hydrodynamical case described in section 4.1.1. The shocked layer collects the gas and acquires a bow-shock form. The vacuum left behind the shocked layer creates a swirling gas inflow, and a pair of vortices are apparent behind the shocked layer during most of the simulation. As before, a tail moving behind the shocked layer is created, and the differences with the non-magnetic case begin to be apparent when the decelerating shocked layer reaches the velocity of the effective signal speed (at about 2 Myr). After this moment, the precursor compressional perturbation begins to move ahead of the shocked layer (it is already apparent, and located some 100 pc ahead of the shocked layer, in the first frame at 3.2 Myr). As in the previous magnetic cases, the layer and its precursor wave sink into the disk (second frame at 6.3 Myr), and the precursor magnetosonic wave becomes a new magnetic shock wave that creates a second shocked layer. Due to the lack of tension, the new MHD shock in this case is stronger than in the previous magnetic cases (with a magnetic Mach number increasing from 1 to almost 3 as it penetrates into the central regions). The resulting new shocked layer collects only disk material, and is clearly present in the third frame at 9.5 Myr. This shocked disk layer is the one that actually penetrates into the central parts of the disk, and excites the magnetosonic perturbation that goes into the other side of the halo (last frame at 19 Myr). The final fate of the first shocked layer, on the other hand, which is the one that contains the gas from the impinging cloud, is controlled by the magnetic pressure. The gas is forced to re-expand by the compressed magnetic lines that have accumulated in the space between the two shocked layers, and it goes back to high $z$-locations. Again, then, the cloud material cannot penetrate into the disk. ### HVC with $V_{HVC}$ = 200 km s$^{-1}$, and $\theta = 30^\circ$ Figure 8 shows a model with $\theta = 30^\circ$ and the same parameters as in Figures 3 and 6. 
The evolution follows the same basic features that were described in these previous models: a wake with swirling motions and a tail are created behind the shocked layer, and a second shock front appears when the compressional wave begins to penetrate the denser parts of the disk (first and second frames at 3.2 and 9.5 Myr, respectively). Again, the structure and evolution of the tail plays an important role in the evolution. The gas of the tail catches up with the main body of the shocked layer, and a prominent and elongated, finger-like, flow structure is created. As in the non-magnetic case of Figure 3, the flow is subject to Kelvin-Helmholtz instabilities, and the oscillatory motion of the structure (clearly apparent in the third frame at 19 Myr) is due to vorticity and the unresolved instability. Similarly, the prominence of the finger-like structure increases with both increasing approaching velocities and larger collision angles. In the case of the second shocked layer, there is one important difference with respect to the previous perpendicular case: the two shocked layers are not aligned. The snapshot at 19 Myr shows that the new shock is directed along the $z$-axis. This is due to the fact that the density gradient is directed along this same axis. Thus, the second shocked layer moves perpendicular to the plane of the disk, and the relative orientation between the two layers is sensitive to the approaching angle of the cloud. The strength of this second shock increases with increasing cloud velocities, but decreases with increasing collision angles. Again, as in all previous magnetic cases, the cloud material is unable to penetrate into the disk. The late-time evolution of the tail is almost identical to that of the non-magnetic case in Figure 3. The rounded (almost circular) tongues and vortices extending along the trajectory toward the impact region (last frame at 41.2 Myr) are long-lived (up to the end of the run). Discussion and Conclusions ========================== We have presented simulations of HVC collisions, at different incidence angles and velocities, with magnetic and non-magnetic models of the Galactic thick disk. In general terms, the structures formed in the non-magnetic cases are similar to those discussed in previous studies, but some novel features are uncovered here: the motion of the shocked layer creates a rear wake, with vorticity, and a conspicuous tail. In oblique collisions, the tail becomes more prominent and acquires an oscillatory motion that leads to a highly chaotic, turbulent velocity field along the trajectory of the interaction. Also, in contrast with thin disk results, where the perturbed region has dimensions similar to those of the original HVC, the resulting structures are larger and better delineated. The response of a magnetized thick disk, on the other hand, reveals new aspects of the interaction. Such a disk, with a strong magnetic support at high-$z$, also has important consequences for processes such as SN and superbubble evolution ([[*e. g*]{}. ]{}Slavin & Cox 1993; Tomisaka 1994, 1998). In contrast with non-magnetic HVC-disk interactions, the cloud now encounters substantial resistance through its evolution in the halo and cannot merge with the gaseous disk. The results with a magnetic field indicate that the perturbed volume is certainly much larger than that of the non-magnetic counterparts. 
Moreover, if the disk is Parker unstable, as is the case for our warm magnetic model, the collisions are able to excite different oscillation modes in the disk and the halo, and do trigger the Parker instability (see Franco [[*et al*]{}. ]{}1995 and Santillán [[*et al*]{}. ]{}1998). With a $B$-field in the $x$-direction, the MHD waves propagate in all directions but any gas flow towards the disk is drastically quenched. The tension effectively stops the shocked gas, and reverses the motion of the flow, preventing any penetration of the original HVC mass into the disk and creating gas motions, with velocities in the range of 40 to 60 km s$^{-1}$, away from the disk. Thus, at least for this restricted field geometry, the magnetic field represents an effective shield that prevents any direct mass exchange between the halo and the disk. For a $B$-field perpendicular to the plane of motion, the strength of the shock also decreases rapidly but the compressional waves now have a more direct effect on the central disk. The results for this field topology, which has magnetic pressure but behaves in a tensionless manner, are intermediate between the non-magnetic cases and the ones with magnetic tension. The shocked layer can move deeper into the disk, but the buildup of magnetic pressure in the compressed gas eventually stops the motion of the layer and forces its re-expansion. The compressional waves, however, are transformed into a new secondary shock front that penetrates into the disk. This creates a more complex double shocked layer structure that lasts for several million years. The interaction of the new shock front with the inner disk layers visibly alters the structure of the disk at large scales. Here again, however, the cloud material cannot penetrate very deep into the disk. The uniformity and symmetry of the disk and field modeled here are obvious idealizations. At high $z$, a likely vertical component of the field should modify the gas transport, and field line wandering would probably make the fluid more viscous than modeled here. Thus, one might suspect that our results would be altered in a 3D simulation. For instance, as suggested by the referee, the field lines in the disk would split apart and let the cloud go through with much less resistance. Thus, our conclusions about bouncing clouds cannot be definitive and this issue requires a 3D verification. Due to the stochastic nature of the Galactic magnetic field (Parker 1979), however, we anticipate that some aspects of the behavior found in the two magnetic field topologies considered here are bound to be present in more realistic 3-D cases. In particular, it is hard to imagine a situation in which a cloud would not have to fight magnetic tension from tangled and compressed field lines. Thus, at least in some cases, a thick disk containing a bulky bundle of tangled lines could act as an effective shield against material penetration into the innermost layers of the disk. Also, as stated before, the combined effects of the Parker and magneto-rotational instabilities require three dimensional studies with differential rotation. We are already making test runs with additional field morphologies and, also, 3-D cases with a moderate resolution. The results are encouraging, and a detailed analysis will be presented elsewhere (Martos [[*et al*]{}. ]{}1999). Summarizing, the magnetic field provides an adequate coupling for the energy and momentum exchange between the disk and the halo, but inhibits the mass exchange. 
The interactions can create strong MHD perturbations, with a turbulent network of flows and vertical gas structures. Thus, the interstellar $B$-field topology plays a paramount role in the final outcome of the interaction with colliding clouds, and further studies with a magnetized disk will shed more light on the origin and fate of the HVC system. We are grateful to Bruce Elmegreen, the referee, and Steve Shore, the editor, for useful comments and suggestions. We also thank M. Norman, M. MacLow and R. Fielder for their continued advice on ZEUS. JF acknowledges useful, and heated, discussions with Bob Benjamin and Bill Wall during the Interstellar Turbulence conference, in Puebla, Mexico. AS thanks Victor Godoy and Juan Lopez for their help with the visualization. This work has been partially supported by DGAPA-UNAM grant IN130698, CONACyT grants 400354-5-4843E and 400354-5-0639PE, and by a R&D grant from Cray Research Inc. The numerical calculations were performed using UNAM’s ORIGIN-2000 supercomputer. Alfaro, E. J., Cabrera-Caño, J. & Delgado, A. J. 1991, ApJ, 378, 106 Bajaja, E., Cappa de Nicolau, C.E., Cersosimo, [[*et al*]{}. ]{}1985, ApJS, 58, 143 Balbus, S. A. & Hawley, J. F. 1991, ApJ, 376, 214 Benjamin, R.A. 1998, in Interstellar Turbulence, ed. J. Franco & A. Carramiñana, Cambridge: Cambridge University, in press Berghöfer, T. W., Bowyer, S., Lieu, R. & Knude, J. 1998, ApJ, 500, 838 Bienaymé, O., Robin, A.C. & Crézé, M. 1987, A&A, 180, 94 Boulares, A. & Cox, D. P. 1990, ApJ, 365, 544 (BC) Cabrera-Caño, J., Moreno, E., Franco, J. & Alfaro, E. 1995, ApJ, 448, 149 Chandrasekhar, S. 1960, Proc. Natl. Acad. Sci., 46, 253 Comerón, F. & Torra, J. 1992, A&A, 261, 94 Cox, D. P. 1988, in Supernova Remnants and the Interstellar Medium, ed. Roger & Landecker (Cambridge: Cambridge University Press), 73 Cox, D. P. 1990, in Interstellar Disk–Halo Connection in Galaxies, ed. H. Bloemer (Kluwer), 143. Dettmar, R.–J. 1990, A&A, 232, L15 Foglizzo, T. & Tagger, M. 1994, A&A, 287, 297 Franco, J. 1986, RevMexAA, 12, 287 Franco, J., Ferrini, F., Ferrara, A. & Barsella, B. 1991, ApJ, 366, 443 Franco, J., Santillán, A. & Martos, M. A. 1995, in Formation of the Milky Way, ed. E. Alfaro & A. Delgado, Cambridge Univ. Press, 97 Franco, J., Tenorio-Tagle, G., Bodenheimer, P., Różyczka, M. & Mirabel, I. F. 1988, ApJ, 333, 826 Frank, A., Jones, T. W., Ryu, D. & Gaalaas, J. B. 1996, ApJ, 460, 777 Heiles, C. 1996, in Polarimetry of the Interstellar Medium, ed. W. G. Roberge & D. C. B. Whittet, PASP Conf Ser, 97, 457 Heiles, C. 1984, ApJS, 55, 585 Herbstmeier, U., Kalberla, P. M. W., Mebold U., [[*et al*]{}. ]{}1996, ApJS, 117, 497 Hirth, W., Mebold, U. & Müller, P. 1985, A&A, 153, 249 Hoyle, F., & Ellis, G.R.A. 1963, Australian JPhys, 16, 1 Howk, J.C., & Savage, B. 1997, AJ, 114, 2463 Hummel, E. & Beck, R. 1995, A&A, 303, 691 Jones, T. W., Gaalaas, J. B., Ryu, D. & Frank, A. 1997, ApJ, 482, 230 Kalberla, P. W. M., Herbstmeier, U. & Mebold, U. 1984, in Local Interstellar Medium, eds. Y. Kondo, F. Bruhweiler & B. Savage, NASA-CP2345 Kim, J., Franco, J., Martos, M.A. & Santillán, A. 1999, in preparation. Klein, R.I. & McKee, C.F. 1994, in Numerical Simulations in Astrophysics, ed. Franco, Lizano, Aguilar & Daltabuit (Cambridge: Cambridge University Press), 251. Kuntz, K.D., & Danly, L. 1996, ApJ, 457, 703 Lepine, J. R. D. & Duvert, G. 1994, A&A, 286, 60 Malagoli, A., Bodo, G. & Rosner, R. 1996, ApJ, 456, 708 Mac Low, M.-M., [[*et al*]{}. ]{}1994, ApJ, 433, 757 Lockman, F.J., Hobbs, L. & Shull, J. 1986, ApJ, 301, 380 Martos, M. A. 1993, Ph.D. Thesis, UW-Madison Martos, M.A. & Cox, D.P. 1994, in Numerical Simulations in Astrophysics, ed. Franco, Lizano, Aguilar & Daltabuit (Cambridge: Cambridge University Press), 229. Martos, M. A. & Cox, D. P. 1998, ApJ, 509, 2 Martos, M. A., Kim, J., Franco, J. & Santillán, A. 1999, in preparation Mirabel, I. F. 1981a, RevMexAA, 6, 245 Mirabel, I. F. 1981b, ApJ, 247, 97 Mirabel, I. F. & Morras, R. 1990, ApJ, 356, 130 Morras, R., Bajaja, E. & Arnal, E. M. 1998, A&A, 334, 659 Morras, R., Bajaja, E., Arnal, E. M. & Poppel, W. G. L. 1999, in preparation Olano, C. A. 1982, A&A, 112, 195 Palouš, J., Franco, J. & Tenorio-Tagle, G. 1990, A&A, 227, 175 Parker, E.N. 1966, ApJ, 145, 811 Parker, E.N. 1979, Cosmical Magnetic Fields, Clarendon Press, Oxford Rand, R. 1995, AAS, 187, 4811 Rand, R. & Stone, J. M. 1996, AJ, 111, 190 Rand, R. & Kulkarni, S. 1989, ApJ, 343, 760 Rand, R., Kulkarni, S. & Hester, J. 1990, ApJ, 362, L35. Reynolds, R.J. 1989, ApJ, 339, L29 Santillán, A., Kim, J., Franco, J. & Martos, M.A. 1999, in preparation. Sembach, K., Savage, B., Lu, L. & Murphy, E. M. 1998, ApJ, in press Shore, S.N. 1992, An Introduction to Astrophysical Hydrodynamics, Academic Press, San Diego Sofue, Y. 1987, PASJ, 39, 547 Stone, J.M., & Norman, M.L. 1992a, ApJS, 80, 753 Stone, J.M., & Norman, M.L. 1992b, ApJS, 80, 791 Tamanaha, C. M. 1997, ApJS, 109, 139 Tenorio-Tagle, G., Bodenheimer, P., Różyczka, M. & Franco, J. 1986, A&A, 170, 107 Tenorio-Tagle, G., Franco, J., Bodenheimer, P. & Różyczka, M. 1987, A&A, 179, 219 Tomisaka, K. 1994, in Numerical Simulations in Astrophysics, ed. Franco, Lizano, Aguilar & Daltabuit (Cambridge: Cambridge University Press), 134. Tomisaka, K. 1998, MNRAS, 298, 797 Tufte, S. L., Reynolds, R. & Haffner, L. M. 1998, in Interstellar Turbulence, ed. J. Franco & A. Carramiñana, Cambridge: Cambridge University, in press van der Hulst, T. & Sancisi, R. 1988, AJ, 95, 1354 Vallée, J.P. 1997, Fundamentals of Cosmic Physics, 19, 1. Wakker, B. P., & van Woerden, H. 1997, ARA&A, 35, 217. Zweibel, E.G. 1995, in The Physics of the Interstellar Medium and Intergalactic Medium, ASP Conference Series, Vol. 80, ed. Ferrara, McKee, Heiles & Shapiro, 524
[lcccc]{} Run & Regime & $B$ orientation & $V_{HVC}$ (km s$^{-1}$) & $\theta$ (deg)\\
1 & Isothermal & -- & 200 & 0\\
2 & Isothermal & -- & 200 & 30\\
3 & Isothermal & $x$ & 200 & 0\\
4 & Adiabatic & $x$ & 200 & 0\\
5 & Isothermal & $x$ & 200 & 30\\
6 & Isothermal & $y$ & 200 & 0\\
7 & Isothermal & $y$ & 200 & 30
--- author: - 'Davide Addona[^1]' - 'Giorgio Menegatti[^2]' - 'Michele Miranda jr.[^3]' bibliography: - 'biblio\_minkowski.bib' nocite: '[@*]' title: On integration by parts formula on open convex sets in Wiener spaces --- [[*Keywords*:]{} [ Infinite dimensional analysis; Wiener spaces; integration-by-parts formula; convex analysis; geometric measure theory]{}]{}\ [[*SubjClass*\[2000\]:]{} Primary: 46G02; Secondary: 28B02, 58E02]{} Introduction ============ We consider a separable Banach space $X$ endowed with a Gaussian measure $\gamma$, whose Cameron-Martin space is denoted by $H$. The covariance operator is denoted by $Q:X^*\rightarrow X$, where $X^*$ is the topological dual of $X$, and $\Omega\subseteq X$ is an open and convex domain. The aim of this paper is to prove an integration-by-parts formula for the domain $\Omega$. To be more precise, we are going to show that for any Lipschitz function $\psi:X\rightarrow\R$ it holds that $$\begin{aligned} \label{ansimare} \int_\Omega \partial_k^* \psi d\gamma=\int_{\partial \Omega}\psi \frac{\partial_k\p}{|\nabla_H\p|_H}d\mathscr S^{\infty-1}, \qquad k\in\N.\end{aligned}$$ Here, $\p$ is the Minkowski functional of $\Omega$ and $\mathscr S^{\infty-1}$ is the (spherical) Hausdorff-Gauss surface measure introduced in [@FD92], where the surface measure is denoted by $\rho$. However, we use the notation $\mathscr S^{\infty-1}$, which has been introduced in [@AMP10] and is closer to the language of geometric measure theory. The measure $\rho$ is the generalization of the Airault-Malliavin surface measure [@AM88]. The crucial tools to obtain formula \eqref{ansimare} are convex analysis and geometric measure theory in infinite dimension. The former ensures that the Minkowski functional $\p$ related to the open convex domain $\Omega$ satisfies regularity conditions. Indeed, it is well known that the Minkowski functional related to an open convex set is convex and continuous (see [@Phe93]); hence we infer that $\p$ is Lipschitz, and therefore Gâteaux differentiable almost everywhere. This allows us to write the exterior normal vector of $\Omega$ in terms of $\p$, as in the finite dimensional setting. Geometric measure theory in Wiener spaces has been developed recently, starting from the first definition of functions of bounded variation ($BV$ functions for short) in abstract Wiener spaces (which we denote by $BV(X,\gamma)$) given by [@F00] and [@FH01]. There, the authors propose a stochastic approach, defining the sets of finite perimeter in terms of reflected Brownian motions and by using the theory of Dirichlet forms. In [@AmbMirManPal10] the authors prove the results of [@FH01] and further properties of $BV$ functions in abstract Wiener spaces in a purely analytic setting, closer to the classical one. In particular, they prove the equivalence between different definitions of $BV(X,\gamma)$ in terms of the total variation $V_H(f)$ of a function $f$, by approximation with more regular functions through the functional $L_H(f)$ and by means of the Ornstein-Uhlenbeck semigroup $(T_t)_{t\geq0}$. The latter is the analogue in the Gaussian setting of the heat semigroup in the original definition of $BV$ functions given by De Giorgi in [@DG53]. We recall the definition of the space $BV(X,\gamma)$ of the functions of bounded variation on $X$ (see e.g. [@FH01] and [@AmbMirManPal10 Definition 3.1]). 
We say that $f\in L^1(\log L)^{1/2}(X,\gamma)$ is a function of bounded variation if there exists a finite signed Radon measure $\mu\in\mathscr M(X;H)$ such that for any $h\in QX^*$ it follows that $$\begin{aligned} \int_Xf\partial_h^*\Psi d\gamma=-\int_X\Psi d[h,\mu]_H,\end{aligned}$$ for any $\Psi\in \mathcal{FC}_b^1(X)$. Further, if $U\subset X$ is a Borel set, $f=\1_U$ and $f$ has bounded variation, then we say that $U$ has finite perimeter and we denote by $P(U,\cdot)$ the associated measure. The definition of $BV$ functions on an open set $A\subset X$ is more complicated, due to the lack of local compactness in infinite dimensions. However, $BV$ functions on open domains $A$ have been investigated in [@AdMeMi18], where, as in [@AmbFusPal00], the authors provide different characterizations of the space $BV(A,\gamma)$ by means of the total variation $V_\gamma(f,A)$ and in terms of approximations with more regular functions through the functional $L_\gamma(f,A)$. We stress that the characterization of $BV(A,\gamma)$ in terms of the Ornstein-Uhlenbeck semigroup is not an easy task since, to the best of our knowledge, there is no good definition of $(T_t)_{t\geq0}$ on a general open domain $A$. However, in [@Cappa] the Ornstein-Uhlenbeck semigroup $(T^C_t)_{t\geq0}$ on a convex set $C\subset X$ has been defined by means of finite dimensional approximations, and in [@LMP15] the authors relate the variation of a function $u$ with the behaviour of $T_t^Cu$ near $0$. Sets of finite perimeter play a crucial role in our investigation. As in the finite dimensional case, the measure associated to sets of finite perimeter is closely connected with a surface measure. In [@FD92] a notion of surface measure in infinite dimension is introduced, the spherical Hausdorff-Gauss surface measure $\mathscr S^{\infty-1}$, which is defined by means of the finite dimensional spherical Hausdorff measures $\mathscr S^{n-1}$, $n\in\N$. This is different from the classical Hausdorff measure $\mathscr H^{n-1}$ even if the relation $\mathscr H^{n-1}\leq \mathscr S^{n-1}\leq 2\mathscr H^{n-1}$ holds true and they coincide on rectifiable sets. This choice is due to the fact that the spherical Hausdorff-Gauss surface measures $\mathscr S^{n-1}$ enjoy a monotonicity property (see [@AMP10 Lemma 3.2], [@FD92 Proposition 6(ii)] or [@HI10 Proposition 2.4]) which allows one to define the measure $\mathscr S^{\infty-1}$ as a limit along a directed set. Further details are given in Section \[crumiro\]. Properties of sets of finite perimeter have been widely studied in [@AMP10], [@CLMN12] and [@HI10]. In particular, [@AMP10 Theorem 5.2] and [@HI10 Theorem 2.11] show that if $U$ has finite perimeter in $X$, then $P(U,B)=\mathscr S^{\infty-1}(B\cap\partial^* U)$, where $\partial ^*U$ is the cylindrical essential boundary introduced in [@HI10 Definition 2.9]. It is worth noticing that in the infinite-dimensional setting things do not work as well as for the Euclidean case; [@P81] gives an example of an infinite-dimensional Hilbert space $X$, a Gaussian measure $\gamma$ and a set $E \subset X$ such that $0 < \gamma(E) < 1$ and $$\begin{aligned} \label{copiato} \lim_{r\rightarrow0}\frac{\gamma(E\cap B_r(x))}{\gamma(B_r(x))}=1, \mbox{ for every } x\in X.\end{aligned}$$ In the same work, it is also shown that if the eigenvalues of the covariance $Q$ decay to zero sufficiently fast, then it is possible to talk about density points; in some sense, the requirement on the decay gives properties of $X$ closer to the finite-dimensional case. 
For these reasons, in general the notion of density point as given in \eqref{copiato} is not a good notion. However, [@AF11] gives a definition of points of density $1/2$ by means of the Ornstein-Uhlenbeck semigroup $(T_t)_{t\geq0}$. The properties of $\Omega$ have other important consequences. First, we show that, as in finite dimension, for any open convex set $C\subset X$ we have $\partial C=\partial ^*C$, where $\partial C$ denotes the topological boundary of $C$. Further, from [@CLMN12 Proposition 9], it follows that $\Omega$ has finite perimeter and therefore from the above reasoning it follows that $P(\Omega,B)=\mathscr S^{\infty-1}(B\cap \partial^*\Omega)=\mathscr S^{\infty-1}(B\cap\partial \Omega)$. This explains why the measure $\mathscr S^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\partial\Omega$ appears in the right-hand side of \eqref{ansimare}. Finally, we stress that \eqref{ansimare} is a generalization of the integration-by-parts formula proved in [@CL14]. Here, the authors deal with subsets of $X$ of the type $\mathcal O:=\{x\in X:G(x)<0\}$, where $G:X\rightarrow \R$ is a suitable regular function which satisfies a sort of nondegeneracy assumption, and they prove that $$\begin{aligned} \label{sospiro} \int_{\mathcal O}\partial_k^*\varphi d\gamma=\int_{G^{-1}(0)}\varphi \frac{\partial_kG}{|\nabla_HG|_H} d\mathscr S^{\infty-1}, \qquad k\in\N,\end{aligned}$$ for any Lipschitz function $\varphi:X\rightarrow\R$. $G^{-1}(0)$ coincides $\mathscr S^{\infty-1}$-almost everywhere with $\partial\mathcal O$. Thanks to \eqref{sospiro}, the authors set the basis of a theory of traces for Sobolev functions in abstract Wiener spaces and proved the existence of a trace operator ${\rm Tr}$. However, this theory is still far from complete. Indeed, in general if $f$ belongs to the Sobolev space $W^{1,p}(\mathcal O,\gamma)$ with $p\in(1,+\infty)$ (see [@CL14] for the definition of $W^{1,p}(\mathcal O,\gamma)$), then ${\rm Tr}f\in L^q(\partial \mathcal O,\rho)$ with $1\leq q<p$. The case $q=p$ is still an open problem, and in this direction some results are known if $G$ satisfies additional conditions, which are not fulfilled even by balls in Hilbert spaces. We recall that in the case $\mathcal O=X$ the surface integral in \eqref{sospiro} disappears and therefore \eqref{sospiro} is the usual integration-by-parts formula in abstract Wiener spaces (see e.g. [@Bog98 Chapter 5]). Comparing \eqref{ansimare} and \eqref{sospiro} we notice that the Minkowski functional $\p$ of $\Omega$ plays the role of the function $G$ in [@CL14]. However, $\p$ in general does not satisfy the assumptions of [@CL14] for $G$ and in this sense our result is a generalization of \eqref{sospiro}. Moreover, our work suggests a different way to get the integration-by-parts formula by using procedures and techniques inherited from geometric measure theory. This different approach gives hope of developing, in future papers, a more general trace theory for Sobolev and BV functions in abstract Wiener spaces. The paper is organized as follows. In Section \[preliminaries\] we define the abstract Wiener space $(X,\gamma,H)$ and the main tools of differential calculus in infinite dimension, i.e., the $H$-gradient, the $\gamma$-divergence and the Sobolev spaces $W^{1,p}(\Omega,\gamma)$, with $p\in[1,+\infty)$. Moreover, we recall the definition of functions of bounded variation both on $X$ and on an open set $A\subset X$. 
In Section \[crumiro\] we recall the definition of $\mathscr S^{\infty-1}$ and, thanks to an infinite dimensional version of the area formula, we prove that the epigraph of a Sobolev function has finite perimeter. Finally, in Section \[coppadelmondo\] we prove the integration-by-parts formula \eqref{ansimare}. To this aim, we first show that, thanks to [@AMP10 Lemma 6.3], it is possible to choose a direction $h\in QX^*$ such that $|D_\gamma\1_\Omega|\left(\left\{x\in X:[\nu_\Omega(x),h]_H=0\right\}\right)=0$, where $\nu_\Omega$ is the Radon-Nikodym density of $D_\gamma\1_\Omega$ with respect to $|D_\gamma\1_\Omega|$, i.e., $D_\gamma\1_\Omega=\nu_\Omega|D_\gamma\1_\Omega|$. We set $\Omega_h^\perp:=\{x\in \Omega:\hat h(x)=0\}$, where $\hat h\in X^*$ satisfies $h=Q\hat h$. Then, there exist two functions $f,g:\Omega_h^\perp\rightarrow \R$ such that $\partial \Omega=\Gamma(f,\Omega_h^\perp)\cup\Gamma(g,\Omega_h^\perp)\cup N$, where $N$ is a Borel set with null $\mathscr S^{\infty-1}$-measure and $\Gamma(f,\Omega_h^\perp):=\{y+f(y)h:y\in\Omega_h^\perp\}$. By applying the results of Section \[crumiro\] it follows that $D_\gamma\1_\Omega=-\nu_f\mathscr S^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\Gamma(f,\Omega_h^\perp)+ \nu_g\mathscr S^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\Gamma(g,\Omega_h^\perp)$. To conclude, we show a relation between $\p$ and $f$ and $g$, which gives \eqref{ansimare}. Preliminaries ============= Let us fix some notation. We denote by $(X,\gamma,H)$ an abstract Wiener space, i.e. a separable infinite dimensional Banach space $X$ endowed with a Radon centered non-degenerate Gaussian measure $\gamma$ with Cameron–Martin space $H$. We recall that $H$ is continuously and compactly embedded in $X$ and that there exists $Q:X^*\to X$ such that $QX^*\subset H\subset X$, all these embeddings being dense by the non-degeneracy of $\gamma$. The decomposition $Q=R_\gamma\circ j$ holds, where $j:X^*\to L^2(X,\gamma)$ is just the identification of an element of $X^*$ as a function in $L^2(X,\gamma)$ and $R_\gamma:L^2(X,\gamma)\to X$ is defined in terms of a Bochner integral as $$R_\gamma(f)= \int_X f(x) x\, \gamma(dx).$$ The reproducing kernel is defined as $$\mathscr{H}= \overline{j(X^*)}\subset L^2(X,\gamma),$$ and the restriction of $R_\gamma$ to $\mathscr{H}$ gives a one–to–one correspondence between $H$ and $\mathscr{H}$. For any $h\in H$ we shall denote by $\hat h\in \mathscr{H}$ the unique element such that $R_\gamma(\hat h)=h$. Then, the Cameron–Martin space inherits the Hilbert structure with inner product $$[{h},{k}]_H=\int_X \hat h(x)\hat k(x)\gamma(dx).$$ We denote by $\mathcal{F}C^1_b(X)$ the set of bounded functions $\varphi:X\to \R$ such that there exist $n\in\N$, $x^*_1,\ldots, x^*_n\in X^*$ and $v\in C^1_b(\R^n)$ (the space of bounded continuous functions with bounded continuous derivatives) with $$\varphi(x)=v(x^*_1(x),\ldots, x^*_n(x)).$$ Without loss of generality, we can suppose that $Qx_1^*,\ldots,Qx_n^*$ are orthonormal vectors in $H$. 
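Although all of the arguments below are carried out in the abstract setting, a standard example to keep in mind (recalled here only for orientation, and not used in the sequel) is the classical Wiener space: $X=C_0([0,1])$, the space of continuous paths $\omega:[0,1]\rightarrow\R$ with $\omega(0)=0$ endowed with the sup norm, $\gamma$ the Wiener measure, and $H$ the space of absolutely continuous paths $h$ with $h(0)=0$ and $h'\in L^2(0,1)$, endowed with $$[{h},{k}]_H=\int_0^1 h'(s)k'(s)\,ds.$$ In this case, for the evaluation functional $\delta_t\in X^*$ one has $j(\delta_t)(\omega)=\omega(t)$ and $Q\delta_t=R_\gamma(j(\delta_t))=\min\{\cdot,t\}\in H$, which reproduces the covariance of Brownian motion, $[{Q\delta_t},{Q\delta_s}]_H=\min\{s,t\}$.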
Further, we denote the set of cylindrical $H$-valued vector fields by $\mathcal F C_b^1(X,H)$, where $\Phi\in\mathcal FC_b^1(X,H)$ if there exist $n\in\N$ and $h_1,\ldots, h_n\in H$ and $\varphi_1,\ldots,\varphi_n\in \mathcal F C_b^1(X)$ such that $$\begin{aligned} \Phi(x):=\sum_{i=1}^n\varphi_i(x) h_i.\end{aligned}$$ For any $\varphi\in \mathcal{F}C^1_b(X)$ and $h\in H$ we set $$\partial_h \varphi(x)=\lim_{t\to 0} \frac{\varphi(x+th)-\varphi(x)}{t}.$$ The separability of $X$ implies that $H$ is separable. For any $\varphi\in\mathcal F C_b^1(X)$, $\varphi(x)=v(x_1^*(x),\ldots,x_n^*(x))$ for some $n\in\N$, $x_1^*,\ldots,x_n^*\in X^*$ and $v\in C_b^1(\R^n)$, we define its $H$–gradient by $$\nabla_H \varphi(x)=\sum_{i=1}^n \partial_{Qx^*_i} \varphi(x) Qx_i^*.$$ If $H'\subset H$ is a closed subspace and $Qx_i^*\in H'$ for any $i=1,\ldots,n$, then we write $\nabla_{H'}\varphi(x)=\nabla_H\varphi$ to highlight the dependence of $\varphi$ on the directions of $H'$. For any $h\in H$ we also denote by $$\partial^*_h \varphi(x)=\partial_h \varphi(x) -\varphi(x)\hat h(x)$$ the formal adjoint (up to the sign) of $\partial_h$, in the sense that, for any $\varphi,\psi\in \mathcal{F}C^1_b(X)$, it holds that $$\int_X \varphi\partial_h \psi d\gamma =-\int_X \partial^*_h \varphi \psi d\gamma.$$ We introduce the divergence operator ${\rm div}_\gamma$ on $\mathcal FC_b^1(X,H)$ by setting $$\begin{aligned} {\rm div}_\gamma \Phi(x):=\sum_{i=1}^n \left(\partial_{h_i}\varphi_i(x)-\varphi_i(x)\hat h_i(x)\right),\end{aligned}$$ with $\Phi(x)=\sum_{i=1}^n\varphi_i(x) h_i\in \mathcal FC_b^1(X,H)$. Further, for any $\Phi\in \mathcal FC_b^1(X,H)$ and any $\psi\in\mathcal FC_b^1(X)$ the following integration-by-parts formula holds: $$\begin{aligned} \int_X[\nabla_H\psi,\Phi]_Hd\gamma=-\int_X\psi{\rm div}_\gamma \Phi d\gamma. \end{aligned}$$ We stress that it is possible to fix an orthonormal basis $\{h_i\}_{i\in \N}$ of $H$ such that $h_i=Qx^*_i$ with $x^*_i\in X^*$ for any $i\in \N$. For any $h\in QX^*$ we introduce the continuous projection $\pi_h:X\longrightarrow H$ defined by $\pi_hx=\hat h(x) h$, where $R_\gamma(\hat h)=h$. This fact induces the decomposition $X=X_h^\perp \oplus \langle h\rangle$, where $X_h^\perp ={\rm ker}({\pi_h})={\rm ker}(\hat h)$. Therefore, for any $x\in X$ we shall write $x=y+z$, where $y=x-\pi_hx\in X_h^\perp$ and $z=\pi_hx$. Clearly, this decomposition is unique. Such a decomposition implies that the measure $\gamma$ can be split as a product measure $$\gamma=\gamma_h^\perp \otimes \gamma_h$$ where $\gamma_h^\perp$ and $\gamma_h$ are centred non-degenerate Gaussian measures on $X_h^\perp$ and $\langle h\rangle$, respectively. If $|h|_H=1$, then $\gamma_h$ is a standard Gaussian measure, i.e. letting $z=th$, we have $$\gamma_h(dz)=\gamma_1(dt)= \mathscr{N}(0,1)(dt)=\frac{1}{\sqrt{2\pi}} e^{-\frac{t^2}{2}} dt.$$ This argument can be generalized for any finite dimensional subspace $F\subset QX^*\subset H$: indeed, if $F=\langle h_1,\ldots,h_m\rangle$ with $\{h_i\}_{i=1,\ldots,m}$ orthonormal elements of $H$ and $h_i\in Q X^*$ for any $i=1,\ldots,m$, then we can write $X=X_F^\perp \oplus F$, where $X_F^\perp ={\rm ker}(\pi_F)$, $\pi_F:X\longrightarrow F$ and $$\pi_F(x)=\sum_{i=1}^m\hat h_i(x)h_i,$$ and $\pi_F(h):=\sum_{i=1}^m[h,h_i]_Hh_i$ for any $h\in H$. We identify $F$ with $\R^m$ and for any $z\in F$ we denote by $|z|$ its norm in $\R^m$. 
We can also decompose $\gamma=\gamma_F^\perp\otimes \gamma_F$ where $\gamma_F^\perp$ and $\gamma_F$ are centred non degenerate Gaussian measures on $X_F^\perp$ and $F$, respectively. Further,[ $$\gamma_F(dz) = \frac{1}{(2\pi)^{m/2}} e^{-\frac{|z|^2}{2}}\ dz.$$ ]{} We recall the definition of Sobolev spaces and functions with bounded variation in Wiener spaces. Let $\Omega\subset X$ be an open set. By ${\rm Lip}_b(\Omega)$ we denote the set of bounded Lipschitz continuous functions on $\Omega$, by ${\rm Lip}_c(\Omega)$ we denote the set of functions $\varphi\in {\rm Lip}(X)$ [with bounded support and]{} such that ${\rm dist}({\rm supp}(\varphi), \Omega^c)>0$, and by [$\mathcal{F}C^1_b(\Omega)$ we denote the set of restrictions of functions of $\mathcal{F}C^1_b(X)$ to $\Omega$]{}. Clearly, ${\rm Lip}_c(\Omega)\subset{\rm Lip}_b(\Omega)$ and $\mathcal{F}C^1_b(\Omega)\subset{\rm Lip}_b(\Omega)$. [Analogously, we define ${\rm Lip}(\Omega, H)$ as the set of functions $\varphi:\ \Omega\rightarrow H$ such that there exists a positive constant $L$ which satisfies $|\varphi(x)-\varphi(y)|_H\leq L\|x-y\|_X$ for any $x,y\in X$.]{} ${\rm Lip}_b(\Omega,H)$ and ${\rm Lip}_c(\Omega,H)$ are defined in obvious way. [We shall denote by $\mathscr{M}(\Omega,H)$ the set of [$H$-valued]{} Borel measures defined on $\Omega\subset X$. For such measures the total variation turns out to be given by $$|\mu|(\Omega)= \sup \left\{ \int_\Omega {\langle \Phi,d\mu\rangle}_H: \Phi\in {\rm Lip}_{c}(\Omega,H), |\Phi(x)|_H\leq 1\ \forall x\in \Omega \right\}. \label{norma}$$ Equation has been proved in [@LMP15 Lemma 2.3] with ${\rm Lip}_0(\Omega,H)$ instead of ${\rm Lip}_c(\Omega,H)$, but the same arguments can be adapted to prove .]{} We can state the following preliminary result. Let $1\leq p<\infty$. Then, the operator $\nabla_H: \mathcal{F}C^1_b(\Omega)\subset L^p(\Omega,\gamma) \to L^p(\Omega,\gamma,H)$ is closable. We denote by $W^{1,p}(\Omega,\gamma)$ the domain of its closure. [The same is true if we use ${\rm Lip}_{b}(\Omega)\subset L^p(\Omega,\gamma)$ instead of $\mathcal{F}C^1_b(\Omega)$, and the definition of $W^{1,p}(\Omega,\gamma)$ is equivalent.]{} The above statement is true for $\Omega=X$ from [@Bog98 Chapter 5]. By linearity, it is sufficient to prove that if $f_j\to 0$ in $L^p(\Omega,\gamma)$ and $\nabla_H f_j \to F$ in $L^p(\Omega,\gamma,H)$, then $F=0$. To this aim, we fix $\varphi\in {\rm Lip}_{c}(\Omega)$: notice that $|\nabla_H \varphi|_H\in L^\infty(X,\gamma)$, then the zero extension $\tilde\varphi=\varphi \cdot \1_\Omega$ of $\varphi$ belongs to ${\rm Lip}(X)$ and $\partial^*_h \varphi\in L^\infty(X,\gamma)$ for any $h\in H$. Since $f_j\in \mathcal{F} C^1_b(X)$ for any $j\in \N$ and $f_j\to 0$ in $L^p(\Omega,\gamma)$. Then we get $$\begin{aligned} 0=& \lim_{j\to +\infty} \int_\Omega f_j \partial^*_h \varphi d\gamma = \lim_{j\to +\infty} \int_X f_j \partial^*_h \tilde\varphi d\gamma = \lim_{j\to +\infty} -\int_X \partial_h f_j \tilde \varphi d\gamma = \lim_{j\to +\infty} -\int_\Omega \partial_h f_j \varphi d\gamma \\ =& -\lim_{j\rightarrow+\infty}\int_\Omega[\nabla_hf_j,\varphi]_Hd\gamma = -\int_\Omega [{F},{h}]_H \varphi d\gamma,\end{aligned}$$ for any $h\in H$. Now, ${\langle F,h\rangle}_H \in L^p(\Omega,\gamma)\subseteq L^1 (\Omega,\gamma)$, so we can define $\mu\in \mathscr{M}(\Omega,H)$ by $\mu={\langle F,h\rangle}_H \gamma$. Therefore, gives $\mu\equiv0$. This implies that ${\langle F,h\rangle}_H=0$ $\gamma$–a.e. for every $h\in H$, and then $F=0$. 
To prove the second part of the statement, we recall that the restriction to $\Omega$ of a function in $W^{1,p}(X,\gamma)$ is in $W^{1,p}(\Omega,\gamma)$, and by [@Bog98 Chapter 5] we have ${\rm Lip}_b(X)\subseteq W^{1,p}(X,\gamma)$. Finally, a function in ${\rm Lip}_b(\Omega)$ can be extended to a function in ${\rm Lip}_b(X)$, and therefore ${\rm Lip}_b(\Omega)\subseteq W^{1,p}(\Omega,\gamma)$ and we can conclude. From the definition of $W^{1,p}(\Omega,\gamma)$, it is easy to prove that for any $f\in W^{1,p}(\Omega,\gamma)$ and $\Phi\in {\rm Lip}_{c}(\Omega,H)$ the following integration by parts formula holds: $$\int_\Omega f{\rm div}_\gamma \Phi d\gamma =-\int_\Omega {\langle \nabla_H f,\Phi\rangle}_H d\gamma.$$ We close this section by giving the definition of functions of bounded variation both on $X$ and on open domains. For precise study of such functions see [@AdMeMi18]. We recall the definition on $X$. \[jane\] [Let]{} $p>1$. We say that $u\in L^{p}(X,\gamma)$ is a function with bounded variation, i.e., $u\in BV(X,\gamma)$, if there exists a Borel measure [$D_\gamma u\in \mathscr M(X,H)$ (said weak gradient) such that for any $\varphi\in \mathcal F C_b^1(X)$ and any $i\in\N$ we have $$\int_\Omega u \partial^*_{i} \varphi d\gamma =-\int_\Omega \varphi d(D_\gamma u)_i,$$ where $(D_\gamma u)_i=[{D_\gamma u},{h_i}]_H$.]{} If $E\in\mathcal B(X)$ and $u=\1_E$, then we say that $E$ has finite perimeter in $X$ if $u\in BV(X,\gamma)$ and we write $P_\gamma(E,B):=|D_\gamma\1_E|(B)$, for any $B\in\mathcal B(X)$. For further informations on $BV(X,\gamma)$ we refer to [@AmbMirManPal10]. \[austen\] Let $\Omega\subseteq X$ an open set and [let $p>1$]{}. We say that $u\in L^{{p}}(\Omega,\gamma)$ is a function with bounded variation, $u\in BV(\Omega,\gamma)$, if there exists a measure [$ D_\gamma u\in \mathscr M(\Omega,H)$]{} (said weak gradient) such that for any $\varphi\in {\rm Lip}_{c}(\Omega)$ and any $i\in \N$ we have $$\int_\Omega u \partial^*_{i} \varphi d\gamma =-\int_\Omega \varphi d(D_\gamma u)_i,$$ where $(D_\gamma u)_i=[{D_\gamma u},{h_i}]_H$. In [@AmbMirManPal10] $BV(X,\gamma)$ has been defined starting from the Orlicz space $L({\rm Log}L)^{1/2}(X,\gamma)$ instead of $L^p(X,\gamma)$ with $p>1$. Since $L^p(X,\gamma)\subset L({\rm Log}L)^{1/2}(X,\gamma)$ for any $p>1$ Definition \[jane\] is less general then [@AmbMirManPal10 Definition 3.1], but in our situation it is enough.\ Moreover, the same holds for Definition \[austen\] where $X$ is replaced by the open set $\Omega\subset X$ (see [@AdMeMi18]). [It is clear that for $\Omega=X$ the above definitions are equivalent.]{} [Moreover, if $f\in L^{p}(X,\gamma)$ is a function with bounded variation with weak gradient $D_\gamma u$, clearly for every $\Omega$ open subset of $X$, $f$ is of bounded variation with weak gradient $D_\gamma u {\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\Omega$, the restriction of the measure $D_\gamma u$ to the set $\Omega$.]{} [In each case, if $D_\gamma u$ exists it is unique.]{} Epigraph of Sobolev functions {#crumiro} ============================= Fixed $h\in QX^*$ and an open set $A\subset X^\perp_h$ and a function $f:A\to \R$, we define the graph of $f$ by $$\Gamma(f,A):= \left\{ x=y+f(y)h: y\in A \right\}$$ and the epigraph of $f$ by $${\rm Epi}(f,A):=\left\{ x=y+t h: y\in A, t>f(y) \right\}.$$ Let us recall the definition of spherical Hausdorff measure in a Wiener space setting (see [@AMP10], [@FD92] and [@HI10] for more details). 
For a given $F\subset H$ finite dimensional space with $F\subset QX^*$, we define $$\mathscr{S}_{F}^{\infty-1}(B) =\int_{X_F^\perp} \gamma_F^\perp(dy) \int_{B_y} G_m(z) \mathscr{S}^{m-1}(dz), \qquad \forall B\in \B(X),$$ where $m={\rm dim}F$, $\mathscr{S}^{m-1}$ is the spherical Hausdorff measure on $F$, $$G_m(z)=\frac{1}{(2\pi)^{m/2}} e^{-\frac{|z|^2}{2}}.$$ and, for any $y\in X^\perp_F$, $$B_y=\{z\in F: y+z\in B\} =(B-y)\cap F.$$ Since $\mathscr S^{\infty-1}_F\leq \mathscr S^{\infty -1}_G$ if $F\subseteq G$ (see e.g. [@AMP10 Lemma 3.2], [@FD92 Proposition 6(ii)] or [@HI10 Proposition 2.4]), we can define the measure [ $$\begin{aligned} \mathscr S^{\infty-1}=\sup_{F}\mathscr S^{\infty-1}_F.\end{aligned}$$ The definition immediately implies that, If $A\subset X$ is a Borel set which satisfies $\mathscr S^{\infty-1}(A)<+\infty$, then $\gamma(A)=0$. If we now consider an increasing family $\mathcal{F}=(F_{n})_{n\in\N}\subset QX^*$ whose closure is dense in $H$, by monotone convergence we have that is well defined as a measure $$\mathscr{S}_{\mathcal{F}}^{\infty-1} =\sup_{n\in \N} \mathscr{S}^{\infty-1}_{F_n}.$$ ]{} From the definition, it follows that $\mathscr S^{\infty-1}_\mathcal F\leq \mathscr S^{\infty-1}$ for any $\mathcal F$ which satisfies the above condition. However, the first part of the proof of the following result shows that they coincide if we restrict them to the graph of Sobolev functions. \[nipote\] Let $h\in QX^*$ with $|h|_H=1$, let $A\subseteq X_h^\perp$ be an open set and let $f\in W^{1,1}(A,\gamma_h^\perp)$. Then: - for any representative $\tilde f$ of $f$ we have $\mathscr S^{\infty-1}(\Gamma(\tilde f,A))<+\infty$. In particular, $\gamma(\Gamma(\tilde f,A))=0$. - If $\tilde f_1,\tilde f_2$ are two representatives of $f$, then $\mathscr S^{\infty-1}(\Gamma(\tilde f_1,A)\Delta \Gamma(\tilde f_2,A))=0$ and $\mathscr S^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\Gamma(\tilde f_1,A)=\mathscr S^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\Gamma(\tilde f_2,A)$. - For any bounded Borel function $g:X\longrightarrow \R$ we have $$\begin{aligned} \label{areaformula} \int_{\Gamma(f,A)} g(x) \mathscr{S}^{\infty-1}(dx) =\int_A g(y+f(y)h) G_1(f(y)) \sqrt{1+|\nabla_H f(y)|_H^2} \gamma_h^\perp(dy).\end{aligned}$$ Let us then show that for any $B\in\B(X)$ it follows that $$\mathscr{S}^{\infty-1} (\Gamma(f,A)\cap B)= \int_{A} \1_B(y+f(y)h)G_{1}(f(y)) \sqrt{1+|\nabla_{H}f(y)|_{H}^{2}}\ \gamma_{h}^\perp (dy), \label{bisnonno}$$ where we still denote by $f$ a representative of $f$, since $(i)$, $(ii)$ and $(iii)$ follow from . We consider $F\subset QX^*$ with $\text{dim}(F)=m<\infty$, $h\in F$, $\widetilde{F}=X_h^\perp \cap F$, $\pi_{F}$ and $ \pi_{\widetilde{F}}$ canonical projections of $X$ on $F$ and $\widetilde{F}$, respectively, and we set $X_{F}^{\perp}=\text{ker}(\pi_{F})$ and $X_{\widetilde{F}}^{\perp}=\text{ker}(\pi_{\widetilde{F}})$. This gives $X_h^\perp=\widetilde{F}\oplus X_{F}^{\perp}$ and $F=\widetilde{F}\oplus\langle h\rangle$. 
Moreover, if we denote by $\gamma_{F}$, $\gamma_{\widetilde{F}}$, $\gamma_{F}^{\perp}$, $\gamma_{\widetilde{F}}^{\perp}$ the nondegenerate Gaussian measures on $F,\widetilde{F}$, $X_{F}^{\perp}$, $X_{\widetilde{F}}^{\perp}$, respectively, we get $\gamma=\gamma_{F}\otimes\gamma_{F}^{\perp}$, $\gamma=\gamma_{\widetilde{F}}\otimes\gamma_{\widetilde{F}}^{\perp}$ and $\gamma_h^\perp=\gamma_{\widetilde F}\otimes\gamma_F^\perp$. Then, $$\mathscr{S}_{F}^{\infty-1}({\Gamma(f,A)\cap B})=\int_{X_{F}^{\perp}}\gamma_{F}^{\perp}(dy)\int_{{(\Gamma(f,A)\cap B)}_{y}}G_{m}(z)\ \mathscr{S}^{m-1}(dz),$$ where $G_{m}(z):=\frac{1}{\sqrt{{(2\pi)^{m}}}}\exp-\frac{\left\Vert z\right\Vert {}_{F}^{2}}{2}$ for $z\in F$ and $$(\Gamma(f,A)\cap B)_{y}= \{z\in F|z+y\in\Gamma(f,A)\cap B\},$$ for all $y\in X_{F}^{\perp}$. For any $y\in X_{F}^\perp$, the set $A_y$, which a priori is contained in $F$, is indeed contained in $\widetilde F$ since $A\subset X_h^\perp$. We consider the function $f_y:A_y\longrightarrow\R$ defined by $f_y(z):=f(y+z)$. Since $f\in W^{1,1}(A,\gamma_h^\perp)$, it follows that $f_y\in W^{1,1}({A_y},\gamma_{{ F}})$ for $\gamma_{{\widetilde F}}^\perp$-a.e. $y\in X_{{F}}^\perp$. Let us denote by $\Gamma(f_{y},A_y)\subseteq F$ the graph of $f_{y}$ on $A_y$. Since $(\Gamma(f,A))_y=\Gamma(f_y,A_y)$ and $$(\Gamma(f,A)\cap B)_{y}=\Gamma(f_{y},A_y) \cap B_y.$$ Therefore, writing $z\in\Gamma(f_{y},A_y)$ as $z=\tilde z+[z,h]_H h$ with $\tilde z\in \widetilde F$, we get $$G_{m}(z)=G_{m}(\tilde{z}+f_{y}(\tilde{z})h)=G_{m-1}(\tilde{z})G_{1}(f_{y}(\tilde{z}))$$ Since $f_y$ is a finite–dimensional Sobolev function, it follows that $$\begin{aligned} \int_{\Gamma({f}_y, A_{y})} & \1_{B}(y+z)G_{m}(z)\ \mathscr{S}^{m-1}(dz)\\ =& \int_{A_y} \1_B(y+\tilde z+f_y(\tilde z)h) G_{m-1}(\tilde{z})G_{1}(f_{y}(\tilde{z})) \sqrt{1+|\nabla_{F} f_{y}(\tilde{z})|^{2}}\ d\tilde{z}\\ =& \int_{A_y} \1_B(y+\tilde z+f_y(\tilde z)h) G_1(f_y(\tilde z)) \sqrt{1+|\nabla_{F} f_{y}(\tilde{z})|^{2}} \gamma_{\widetilde F}(d\tilde z).\end{aligned}$$ Hence, $$\begin{aligned} \mathscr{S}_{F}^{\infty-1}(\Gamma(f,A)\cap B) =&\int_{X_{F}^{\perp}}\gamma_{F}^{\perp}(dy)\int_{A_y} \1_B(y+\tilde z+f_y(\tilde z)h) G_{1}(f_{y}(\tilde{z}))\sqrt{1+|\nabla_{F} f_{y}(\tilde{z})|_H^{2}}\ \gamma_{\widetilde{F}}(d\tilde{z})\\ =& \int_{A} \1_B(y+f(y)h) G_{1}(f(y))\sqrt{1+|\pi_{F}(\nabla_{H}f(y))|_{H}^{2}}\ \gamma_{h}^\perp (dy) \\ \leq & \int_{A} \1_B(y+f(y)h) G_{1}(f(y))\sqrt{1+|\nabla_{H}f(y)|_{H}^{2}}\ \gamma_{h}^\perp (dy).\end{aligned}$$ Therefore, $$\begin{aligned} \mathscr S^{\infty-1}(\Gamma(f,A)\cap B) \leq & \int_{A} \1_B(y+f(y)h) G_{1}(f(y))\sqrt{1+|\nabla_{H}f(y)|_{H}^{2}}\ \gamma_{h}^\perp (dy).\end{aligned}$$ If we now consider an increasing family $\mathcal{F}=(F_{n})_{n\in\N}\subset QX^*$ whose closure is dense in $H$ [and $h\in F_1$, by monotone convergence we obtain that $$\mathscr{S}_{\mathcal{F}}^{\infty-1} (\Gamma(f,A)\cap B) =\sup_{n\in \N} \mathscr{S}^{\infty-1}_{F_n}(\Gamma(f,A)\cap B) =\int_{A} \1_B\sqrt{1+|\nabla_H f(y)|_H^2} {\gamma_h^\perp}(dy).$$ Hence we have $$\begin{aligned} \mathscr{S}^{\infty-1} (\Gamma(f,A)\cap B) =\mathscr S_{\mathcal F}^{\infty-1}(\Gamma(f,A)\cap B)= & \int_{A} \1_B(y+f(y)h)\sqrt{1+|\nabla_H f(y)|_H^2} {\gamma_h^\perp}(dy) \label{prozio}\end{aligned}$$ ]{} . 
If we take $B=X$ in , then we have $$\begin{aligned} \mathscr S^{\infty-1}(\Gamma(f,A)) = & \int_{A} \sqrt{1+|\nabla_H f(y)|_H^2} {\gamma_h^\perp}(dy) \leq \int_A\left(1+|\nabla_Hf(y)|_H\right)\gamma_h^\perp(dy) \\ \leq & \gamma_h^\perp(A)+\|f\|_{W^{1,1}(A,\gamma_h^\perp)}<+\infty.\end{aligned}$$ We now prove $(ii)$. Let $\tilde f_1$ and $\tilde f_2$ be two representatives of $f$. Let us set $N:=\{y\in A:\tilde f_1(y)\neq\tilde f_2(y)\}$. Then, $\gamma_h^\perp(N)=0$ and it is easy to see that if $x=y+\tilde f_1(y)h\in \Gamma(\tilde f_1,A)\setminus \Gamma(\tilde f_2,A)$, then $y\in N$. Therefore, from with $f$ replaced by $\tilde f_1$ and $B$ replaced by $\Gamma(\tilde f_1,A)\setminus \Gamma(\tilde f_2,A)$ we deduce that $$\begin{aligned} \mathscr S^{\infty-1}(\Gamma(\tilde f_1,A)\setminus \Gamma(\tilde f_2,A)) = & \int_A\1_{(\Gamma(\tilde f_1,A)\setminus \Gamma(\tilde f_2,A))}(y+\tilde f_1(y)h)\sqrt{1+|\nabla_H \tilde f_1(y)|_H^2} {\gamma_h^\perp}(dy) \\ \leq & \int_A\1_{N}(y)\sqrt{1+|\nabla_H \tilde f_1(y)|_H^2} {\gamma_h^\perp}(dy)=0.\end{aligned}$$ The same arguments give $\mathscr S^{\infty-1}(\Gamma(\tilde f_2,A)\setminus \Gamma(\tilde f_1,A))=0$, and we get the first part of the statement. As far as the second part is concerned, it is enough to notice that for any Borel set $B\in \B(X)$ we have $$\begin{aligned} \mathscr S^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\Gamma(\tilde f_1,A)(B) = & \mathscr{S}^{\infty-1} (\Gamma(\tilde f_1,A)\cap B) = \int_{A} \1_B(y+\tilde f_1(y)h)\sqrt{1+|\nabla_H \tilde f_1(y)|_H^2} {\gamma_h^\perp}(dy) \\ = & \int_{A} \1_B(y+\tilde f_2(y)h)\sqrt{1+|\nabla_H \tilde f_2(y)|_H^2} {\gamma_h^\perp}(dy) = \mathscr{S}^{\infty-1} (\Gamma(\tilde f_2,A)\cap B)\\ = & \mathscr S^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\Gamma(\tilde f_2,A)(B),\end{aligned}$$ where the third equality holds since $\tilde f_1=\tilde f_2$ $\gamma_h^\perp$-a.e. in $A$. Finally, $(iii)$ follows by approximating $g$ by means of simple functions. \[giampaolo\] Assume that the function $g$ does not depend on the component of $x$ along $h$, i.e., there exists a Borel function $\ell:A\longrightarrow \R$ such that $g(x)=\ell(y)$, where $y=x-\pi_hx$. If $\tilde g$ is a bounded Borel function such that $\tilde g(x)=g(x)$ for $\gamma$-a.e. $x\in X$, then $$\begin{aligned} \int_{\Gamma(f,A)} g d\mathscr{S}^{\infty-1}=\int_{\Gamma(f,A)} \tilde g d\mathscr{S}^{\infty-1}.\end{aligned}$$ Indeed, for $\gamma$-a.e. $x\in X$ we have $\tilde g(x)=\ell(y)$, with $y=x-\pi_hx$, and therefore $$\begin{aligned} \int_{\Gamma(f,A)} g d\mathscr{S}^{\infty-1} = & \int_Ag(y+f(y)h)\sqrt {1+|\nabla_Hf(y)|^2}\gamma_h^\perp(dy) =\int_A\ell(y)\sqrt {1+|\nabla_Hf(y)|^2}\gamma_h^\perp(dy) \\ = & \int_A\tilde g(y+f(y)h)\sqrt {1+|\nabla_Hf(y)|^2}\gamma_h^\perp(dy) = \int_{\Gamma(f,A)} \tilde g d\mathscr{S}^{\infty-1}.\end{aligned}$$ \[bicicletta\] Let $h\in QX^*$ with $|h|_H=1$, let $A\subseteq X_h^\perp$ be an open set and let $f$ be a Borel representative of an element of $W^{1,1}(A,\gamma_h^\perp)$. Then, the [Borel]{} set $${\rm Epi}(f,A)=\left\{ x=y+th: y\in A, t>f(y) \right\}$$ has finite perimeter in the cylinder $C_A=A\oplus \langle h\rangle$ with $$\begin{aligned} D_\gamma \1_{{\rm Epi}(f,A)}(dx) =\frac{-\nabla_H f(y)+h}{\sqrt{1+|\nabla_H f(y)|_H^2}} \mathscr{S}^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\Gamma(f,A)(dx), \label{fedor}\end{aligned}$$ where $x=y+f(y)h$.
As a byproduct, we get $$P({\rm Epi}(f,A),C_A)= \int_A G_1( f(y)) \sqrt{1+|\nabla_H f(y)|^2_H} \gamma_h^\perp (dy). $$ At first, we stress that from Proposition \[nipote\]$(ii)$ and Remark \[giampaolo\] formula does not depend on the choice of the representative $f$. Let us denote by $\nu_f$ the vector defined on $C_A$ by $$\begin{aligned} \nu_f(x)=\frac{-\nabla_H f(y)+h}{\sqrt{1+|\nabla_H f(y)|_H^2}}, \label{ricola}\end{aligned}$$ where $x=y+th$ with $y\in A$ and $t\in\R$. First of all, we notice that for $\varphi\in {\rm Lip}_{c}(C_A)$ we have that $$\begin{aligned} \int_{{\rm Epi}(f,A)} \partial^*_h \varphi(x) \gamma(dx) =& \int_A \gamma_h^\perp (dy) \int_{f(y)}^\infty \partial^* \varphi_y(t) \gamma_1(dt) \\ =& -\int_A G_1(f(y))\varphi(y+f(y)h) \gamma_h^\perp(dy)\\ =& -\int_A G_1(f(y)) \frac{\varphi(y+f(y)h)}{\sqrt{1+|\nabla_H f(y)|_H^2}} \sqrt{1+|\nabla_H f(y)|_H^2}\gamma_h^\perp(dy)\\ =& -\int_{\Gamma(f,A)} \varphi [{\nu_f},{h}]_H d\mathscr{S}^{\infty-1}.\end{aligned}$$ In the last equality we have applied with $g=\nu_f$ on $C_A$ and $g= 0$ on $(C_A)^c$, noticing that $g(x)=\ell(x-\pi_{h}x)$ with $x\in C_A$. Let us now fix $k\in h^\perp$, $k\in Q X^*$ and we consider $W=\text{ker}(\pi_h)\cap\text{ker}(\pi_k)$; we have $X=W\oplus \langle h,k\rangle$ and $\gamma=\gamma_W\otimes \gamma_{\langle h,k\rangle}$. We notice that for $\gamma_W$-a.e. $w\in W$ $$({\rm Epi}(f,A))_w =\{ z_1h+z_2k\in \langle h,k\rangle : z_1> f_w(z_2k), z_2 k\in A_w\}, \quad A_w=\{z_2k: w+z_2k \in A\},$$ and the map $f_w:A_w\longrightarrow \R$ belongs to $W^{1,1}(A_w,\gamma_W)$. Then, the set ${\rm Epi}(f_w,A_w)$ has finite perimeter for $\gamma_W$-a.e. $w\in W$ with bounded inner normal given by $\nu_w=(-f'_w,1)/\sqrt{1+(f'_w)^2}$. For any $\varphi\in {\rm Lip}_{c}(C_A)$ we get $$\begin{aligned} \int_{{\rm Epi}(f,A)} \partial^*_k \varphi(x)\gamma(dx) =& \int_W \gamma_W(dw) \int_{{\rm Epi}(f_w,A_w)} \partial^*_2 \varphi_w(z) \gamma_{\langle h,k\rangle}(dz)\\ =& \int_W \gamma_W(dw) \int_{\Gamma(f_w,A_w)} \frac{f_w'}{\sqrt{1+(f'_w)^2}} \varphi_w G_1(f_w)d\mathscr{S}^1\\ =& \int_W \gamma_W(dw) \int_{A_w} f_w'(z_2) \varphi(w+f_{{w}}(z_2)h+z_2k) G_1(f_{{w}}(z_2))\gamma_1(dz_2)\\ =& \int_A \partial_k f(y) \varphi(y+f(y)h) G_1(f(y)) \gamma_h^\perp(dy) \\ =& -\int_A \varphi({y+f(y)h}) [{\nu_f(y+f(y) h)},{k}]_H G_1(f(y)) \sqrt{1+|\nabla_H f(y)|_H^2} \gamma_h^\perp(dy)\\ =& -\int_{\Gamma(f,A)} \varphi [{\nu_f},{k}]_H d\mathscr{S}^{\infty-1},\end{aligned}$$ where we have used the fact that ${\rm ker}(\pi_h)=W + {\langle k\rangle}$ and that $\gamma_h^\perp =\gamma_W\otimes \gamma_k$. Let us consider an orthonormal basis $\{h,h_n:n\in\N\}\subset QX^*$ of $H$. Then, we have proved that for any $\varphi\in {\rm Lip}_{c,b}(C_A,H)$ and any $k\in\{h,h_n:h\in\N\}$, $$\int_{{\rm Epi}(f,A)} \partial_k^* \varphi(x)\gamma(dx) = -\int_{\Gamma(f,A)} {\varphi}[{\nu_f},k]_H \mathscr{S}^{\infty-1},$$ i.e. the measure $$\mu= \nu_f \mathscr{S}^{\infty-1}_{\mathcal{F}} {\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\Gamma(f,A)\in \mathscr{M}(C_A,H)$$ is the distributional derivative of $\1_{{\rm Epi}(f,A)}$. Finally, Proposition \[nipote\]$(i)$ implies that ${\rm Epi}(f,A)$ has finite perimeter in $C_A$. We conclude this section providing a useful result on epigraphs of convex and concave functions. 
\[business\] Let $h\in QX^*$ with $|h|_H=1$, let $D\subset X_h^\perp$ be an open convex domain, let $g$ be a continuous convex function and let $f$ be a continuous concave function both defined on $D$. Then, ${\rm Epi}(g,D)$ and $C_D\setminus \overline{{\rm Epi}(f,D)}$ are open convex subsets of $X$, and therefore ${\rm Epi}(g,D)$ and ${\rm Epi}(f,D)$ have finite perimeter in $X$. Indeed, since a function is convex if and only if its epigraph is convex, from [@CLMN12 Proposition 9] it follows that ${\rm Epi}(g,D)$ is convex. Analogously, $C_D$ and $C_D\setminus \overline{{\rm Epi}(f,D)}$ have finite perimeter in $X$ since they are open convex sets. This implies that also $X\setminus {\rm Epi}(f,D)$ has finite perimeter in $C_D$, and therefore ${\rm Epi}(f,D)$ has finite perimeter. Integration by parts formula on convex sets {#coppadelmondo} =========================================== In this section we consider a nonempty open convex set $\Omega \subset X$. By [@CLMN12 Proposition 9], $\Omega$ has finite perimeter in $X$ and $\gamma(\partial\Omega)=0$, i.e. $\1_\Omega\in BV(X,\gamma)$. Without loss of generality we can assume that $0\in \Omega$, and we define $$\Omega{=}\{ \p<1\},$$ with $\p$ being the gauge of the convex set or the Minkowski functional associated with $\Omega$ centered in $0$ defined by $$\p(x)=\inf \{ \lambda>0: x \in \lambda\Omega\}.$$ The main result proved in this section is the following theorem. $\nabla_H\p$ is defined $\mathscr S^{\infty-1}$-almost everywhere and non-zero on $\partial\Omega$, for any $k\in H$ and any $\psi\in {\rm Lip}_{b}(X)$ we have that $$\begin{aligned} \int_\Omega\partial^*_k\psi d\gamma =\int_{\partial\Omega} \psi\frac{\partial_k\p}{|\nabla_H\p|_H}d\mathscr S^{\infty-1}.\end{aligned}$$ \[triciclo\] The proof of Theorem \[triciclo\] is postponed to the end of the section. Let us introduce some useful tools about convex functions (we refer to [@Phe93 Chapter 5] for further details). We consider the dual ball of $\p$ defined by $$\begin{aligned} C(\p):=\{x^*\in X^*:\langle x^*,x\rangle \leq \p(x) \ \forall x\in X\},\end{aligned}$$ Moreover, we recall that, for any $x_0\in X$, the subdifferential $\partial \p(x_0)$ is the set of the elements $x^*\in X^*$ which satisfy $$\begin{aligned} x^*(x-x_0) \leq \p(x)-\p(x_0), \quad \forall x\in B(x_0,r),\end{aligned}$$ for some $r>0$. We will use the following property of the subdifferential of a convex function. [[@Phe93 Proposition 1.11]]{} \[stereo\] Let $f$ be a convex function which is continuous at $x_0\in D$, where $D$ is a convex domain. Then, $\partial f(x_0)$ is nonempty. We state the following characterization of the subdifferential $\partial \p(x)$ of a Minkowski functional in terms of $C(\p)$ (see [@Phe93 Lemma 5.10]). \[pasta\] $x^*\in \partial \p(x)$ if and only if $x^*\in C(\p)$ and $x^*(x) =\p(x)$. [In our case,]{} thanks to [@AMP10 Lemma 6.2] we may consider $h\in QX^*$ such that $$\begin{aligned} |D_\gamma\1_\Omega|(\{ [{\nu_\Omega},{h}]_H=0\})=0, \label{sinonimo}\end{aligned}$$ with $D_\gamma \1_\Omega =\nu_\Omega |D_\gamma \1_\Omega|$. This Lemma simply says that we may choose a direction $h$ such that the vertical part of $\partial \Omega$ with respect to $h$ is $|D_\gamma\1_\Omega|$-negligible. We denote by $h^*$ the element of $X^*$ such that $h=Qh^*$. Once such a direction has been fixed, we may define the open convex set $\Omega_h^\perp \subseteq X_h^\perp$ by $$\Omega_h^\perp =\{ y\in X^\perp_h : \exists t\in \R \mbox{ s.t. 
} y+th\in \Omega \}.$$ For any $y\in \Omega_h^\perp$, the set $$\Omega_y =\{ t\in \R: y+th \in \Omega\}$$ is an open interval, and therefore there exist $f:\Omega^\perp_h\to (-\infty,+\infty]$, $g:\Omega^\perp_h\to [-\infty,+\infty)$ such that $\Omega_y$ is the interval $(g(y),f(y))$, [i.e., $\Omega$ is between the graph of $g$ and that of $f$.]{} \[nevicata\] The functions $f$ and $g$ satisfy the following properties: - If there exists $y\in\Omega_h^\perp$ such that $f(y)=+\infty$, then $f\equiv+\infty$ on $\Omega_h^\perp$. Analogously, if $g(y)=-\infty$ for some $y\in\Omega_h^\perp$, then $g\equiv-\infty$ on $\Omega_h^\perp$. - if $f$ is not infinite then it is a concave function. Analogously, if $g$ in not infinite then it is a convex function. [To show $(i)$, let us assume that there exists $y_0\in \Omega_h^\perp$ such that $f(y_0)=+\infty$, and let $y\in \Omega_h^\perp$. Therefore, there exists $y_1\in \Omega_h^\perp$ and $\lambda>0$ s.t. $y=\lambda y_0 + (1-\lambda)y_1$. From the definition of $\Omega_h^\perp$ there exists $t_1\in\R$ s.t. $x_1=y_1+t_1h\in\Omega$, and since $f(y_0)=+\infty$ we have $x_0=y_0+t h\in \Omega$ for every $t\in(g(y_0),+\infty)$. Since $\Omega$ is convex, we have $\lambda x_0+(1-\lambda)x_1=y+(\lambda t+(1-\lambda)t_1)h\in\Omega$ and therefore $$\begin{aligned} f(y)\geq \lambda t+(1-\lambda)t_1, \quad t\in(g(y),+\infty),\end{aligned}$$ which gives $f(y)=+\infty$.]{} [The same argument holds for $g$, i.e., if there exists $y_0\in \Omega^\perp_h$ such that $g(y_0)=-\infty$, then $g\equiv -\infty$.]{} Let us prove $(ii)$. Assume that $g>-\infty$ on $\Omega_h^\perp$. We fix $y_1,y_2 \in \Omega_h^\perp$, $t_1\in \Omega_{y_1}$, $t_2\in \Omega_{y_2}$, then for any $\lambda\in[0,1]$ $$\lambda (y_1+t_1 h)+(1-\lambda)(y_2+t_2h) =\lambda y_1+(1-\lambda) y_2+ (\lambda t_1 +(1-\lambda)t_2)h \in \Omega.$$ This means that $\tilde y:=\lambda y_1+(1-\lambda)y_2\in\Omega_h^\perp$ and $\lambda t_1+(1-\lambda)t_2\in\Omega_{\tilde y}$. Therefore, $$g(\lambda y_1+(1-\lambda) y_2) \leq \lambda t_1 +(1-\lambda)t_2 {\leq} f(\lambda y_1+(1-\lambda) y_2).$$ Since this is true for any $t_1$ and $t_2$, this implies that $$g(\lambda y_1+(1-\lambda) y_2)\leq \lambda g(y_1)+(1-\lambda)g(y_2)$$ hence $g$ is convex. same arguments reveal that for any $\lambda\in[0,1]$ we have $$\lambda f(y_1)+(1-\lambda) f(y_2) \leq f(\lambda y_1+(1-\lambda) y_2),$$ which implies that $f$ is concave. Thanks to Lemma \[nevicata\] (and by the fact that $\Omega$ is a nonempty set, hence it is impossible that $f=-\infty$ everywhere or $g=+\infty$ everywhere), only the following four cases occur: 1. $f\equiv +\infty$, $g\equiv -\infty$ and $\Omega_h^\perp =X^\perp_h$, i.e. $\Omega=\Omega_h^\perp\oplus \langle h\rangle$; 2. $f\equiv +\infty$ and $g(y)>-\infty$ for any $y\in \Omega_h^\perp$, and then $\Omega={\rm Epi}(g,\Omega_h^\perp)$ . 3. $g\equiv -\infty$ and $f(y)<+\infty$ for any $y\in \Omega_h^\perp$, and then $\Omega=\{ x=y+th: y\in \Omega_h^\perp, t<f(y)\}$. 4. $-\infty< g(y)<f(y)<+\infty$ for any $y\in \Omega_h^\perp$ and $$\Omega=\{ x=y+th: y\in \Omega_h^\perp, t\in (g(y),f(y))\}.$$ From now on we shall assume to be in the last case, since in the other three cases the following lemmas remain true, with the convention that $\Gamma(f,\Omega_h^\perp)=\varnothing$ if $f=+\infty$, and $\Gamma(g,\Omega_h^\perp)=\varnothing$ if $g=-\infty$. Before passing to the infinite dimension, we state a property of open convex sets in finite dimension. 
\[camilleri\] For any open convex set $C\subset \R^n$, $\partial^* C=\partial C$, i.e., each point of $\partial C$ has density different from $0$ and $1$. Let $n=2$, let us fix $x\in\partial C$ and [let $\p$ be its Minkowski function. By Proposition \[stereo\], there exists $\nu\in\partial \p(x) $, hence $C$ remains below the hyperplane with equation $\langle \nu,\cdot -x\rangle=0$ (it suffices to remember the definition of $\partial \p$ and the fact that if $y\in C$ then $\p(y)<1$, while $\p(x)=1$]{}), which implies that its density is not greater than $1/2$. Further, let $B(x_0,r)\subset C$. The convexity of $C$ implies that the convex hull of $\{x,B(x_0,r)\}$ is contained in $C$, and in particular the triangle with vertices $x,x_0$ and $x_1$, where $x_1$ satisfies $|x_1-x_0|=r$ and $x_1-x_0\perp x-x_0$. Therefore, for any $\rho>0$ a sector of angle $2\arctan(r|x-x_0|^{-1})$ of the ball $B(x,\rho)$ is contained in $C$. This gives $$\begin{aligned} \frac{|C\cap B(x,\rho)|}{|B(x,\rho)|}\geq 2{\rm arctg}\left(\frac{r}{|x-x_0|}\right)>0,\end{aligned}$$ for any $\rho>0$, and so the density of $x$ is greater than $0$. The general case $n\in\N$ follows from similar arguments. Let $\mathcal F$ be a countable family of finite dimensional subspaces of $QX^*$ stable under finite union and such that $\cup_{F\in\mathcal F}F$ is dense in $H$. In [@HI10] the $\mathcal F$-essential boundary of $\Omega$ is defined by $$\begin{aligned} \partial^*_{\mathcal F}\Omega=\bigcup_{F\in\mathcal F}\bigcap_{G\supset F, G\in\mathcal F}\partial^*_G\Omega,\end{aligned}$$ where $$\begin{aligned} \partial_F^*\Omega:=\{y+z:y\in {\rm Ker}(\pi_F), \ z\in\partial^*(\Omega_y)\},\end{aligned}$$ for any $F\in\mathcal F$. In general, $\partial_F^*\Omega$ does not satisfy any monotonicity property with respect to $F\in\mathcal F$. However, in the case of open convex sets we recover the finite dimensional situation with the next Lemma. \[salama\] Let $\Omega\subset X$ be an open convex set and let $\mathcal F$ be as above. Then, $\partial ^*_{\mathcal F}\Omega=\partial\Omega$. [At first, we claim that $\partial^*_{F}\Omega\subseteq \partial^*_G\Omega$ if $F\subseteq G$, for any $F,G\in\mathcal F$. Let $F\in\mathcal F$ and let $y+z\in \partial^*_{F}\Omega$. This means that $y\in{\rm Ker}(\pi_{F})$ and $z\in\partial(\Omega_y)$ ($\partial(\Omega_y)=\partial^*(\Omega_y)$ since it is convex, see Remark \[camilleri\]). Let $G\in\mathcal F$ be such that $F\subseteq G$. In particular, there exists a finite dimensional subspace $L$ of $QX^*$ such that $G=F \oplus L$. If $L=\{0\}$, we are done. Assume that $L\neq \{0\}$. Therefore, $y+z=y-\pi_L y+\pi_{L}y+z=:\tilde y+\tilde z$, where $\tilde y=y-\pi_L y$ and $\tilde z:=\pi_Ly+z$. Clearly, $\tilde y\in {\rm Ker}(\pi_{G})$ and $\tilde z\in G$. It remains to prove that $\tilde z\in\partial^*(\Omega_{\tilde y})$. Since $\Omega_{\tilde y}$ is a finite dimensional open convex set, from Remark \[camilleri\] it is equivalent to show that $\tilde z\in\partial(\Omega_{\tilde y})$. By contradiction, we suppose that $\tilde z\in \Omega_{\tilde y}$. Then, $y+z=\tilde y+\tilde z\in \Omega$, and so $z\in \Omega_y$. This contradicts the assumptions, since $\Omega_y$ is open and $z\in\partial^*(\Omega_y)=\partial(\Omega_y)$. Moreover, $\tilde z\in \overline{\Omega_{\tilde y}}$. Indeed, since $z\in\partial (\Omega_y)$, there exists a sequence $(z_n)\in \Omega_y$ which converges to $z$ in $X$. 
Obviously, the sequence $(\tilde z_n:=\pi_L(y)+z_n)$ converges to $\tilde z$ in $X$ and $\tilde y+\tilde z_n=y+z_n\in\Omega$, which means that $\tilde z\in(\overline{\Omega_{\tilde y}})$. Hence, $\tilde z\in\partial(\Omega_{\tilde y})=\partial^*(\Omega_{\tilde y})$, since $\Omega_{\tilde y}$ is convex. This means that $\tilde y+\tilde z\in\partial^*_{G}(\Omega)$, and the claim is therefore proved. ]{} In particular, the claim implies that $\partial_{\mathcal F}^*\Omega=\cup_{F\in\mathcal F}\partial_{F}^*\Omega$. We remark that $\cup_{F\in\mathcal F}F$ is dense in $X$. This fact easily follows from the density of $\cup_{F\in\mathcal F}F$ in $H$, the density of $H$ in $X$ and the continuous embedding $H\hookrightarrow X$. We stress that, for any $F\in \mathcal F$ and any $y\in{\rm Ker}(\pi_{F})$, arguing as above we deduce that $\partial(\Omega_y)\subset(\partial \Omega)_y$. Hence, $\partial^*_{\mathcal F}\Omega\subset \partial \Omega$. To show the converse inclusion, we consider $x\in \partial \Omega$. Since $\Omega$ is open, there exists an open ball $B\subset \Omega$. Clearly, $\tilde B:=x-B$ is an open ball in $X$, and the density of $\cup_{F\in\mathcal F}F$ in $X$ implies that there exists $F\in\mathcal F$ and $\xi\in F$ such that $\xi=x-y$, with $y\in B$, i.e., $x=y+\xi$. If we define $y_F=y-\pi_{F}y\in{\rm Ker}(\pi_{F})$ and $z_F:=\pi_{F}y+\xi$, it remains to prove that $z_F\in\partial (\Omega_{y_F})=\partial^*(\Omega_{y_F})$. Clearly, $z_F\notin\Omega_{y_F}$, otherwise $x=y_F+z_F\in\Omega$. Further, since $y\in \Omega$ and $x\in\partial \Omega$, for any $\lambda\in[0,1)$ we have $y+\lambda\xi=y+\lambda(x-y)\in\Omega$. Taking a sequence $(\lambda_m)_m\in\N\subset (0,1)$ converging to $1$, we obtain a sequence $(\eta_m=\lambda_mz_F)_{m\in\N}\subset\Omega_{y_F}$ which converges to $z_F$ in $F$, and so $z_F\in\overline {\Omega_{y_F}}$, which gives $x\in\partial^*_F\Omega$. \[amarena\] From [@AMP10] and [@HI10], we know that for any $B\in\mathcal B(X)$ we have $|D_\gamma\1_\Omega|(B)=\mathscr S^{\infty-1}_{\mathcal F}(B\cap \partial_{\mathcal F}^*\Omega)$, for any countable family $\mathscr F$ of finite dimensional subspaces of $QX^*$ stable under finite union such that $\cup_{F\in\mathcal F} F$ is dense in $H$. In particular, if $\mathcal F'$ satisfies the same assumptions as $\mathcal F$, then from Lemma \[salama\] we deduce that $\mathscr S^{\infty-1}_{\mathcal F}(B\cap\partial\Omega)=\mathscr S^{\infty-1}_{\mathcal F'}(B\cap \partial\Omega)$. Therefore, $\mathscr S^{\infty-1}_{\mathcal F}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\partial\Omega=\mathscr S^{\infty-1}_{\mathcal F'}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\partial\Omega$ for any $\mathcal F,\mathcal F'$ as above and from the definition of $\mathscr S^{\infty-1}$ we infer that $\mathscr S^{\infty-1}_\mathcal F{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}{\partial \Omega}=\mathscr S^{\infty-1}_{\mathcal F'}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}{\partial \Omega}= \mathscr S^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\partial\Omega$. In particular, we get $|D_\gamma\1_\Omega|(B)=\mathscr S^{\infty-1}(B\cap\partial\Omega)$ for any $B\in\mathcal B(X)$. 
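For the reader's convenience, we point out an elementary finite-dimensional analogue of the identity in Remark \[amarena\] (a sanity check only, not needed in what follows): in $(\R^n,\gamma_n)$, for the open convex half-space $\Omega_a=\{x\in\R^n:x_1<a\}$ one has $D_{\gamma_n}\1_{\Omega_a}=-e_1\,G_1(a)\,\gamma_{n-1}$, a measure concentrated on the hyperplane $\{x_1=a\}$, and hence $$|D_{\gamma_n}\1_{\Omega_a}|(\R^n)=\int_{\{x_1=a\}}G_n(x)\,\mathscr H^{n-1}(dx)=\frac{e^{-a^2/2}}{\sqrt{2\pi}}=G_1(a),$$ which is the Gaussian-weighted Hausdorff measure of $\partial\Omega_a$, i.e. the finite-dimensional counterpart of $\mathscr S^{\infty-1}(\partial\Omega_a)$.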
\[cementine\] Let $\Omega$ be an open convex set, $h\in QX^*$, $C:=\Omega_h^\perp\oplus \langle h\rangle$ and let $f,g$ be the functions introduced in Lemma \[nevicata\]. Then $$\begin{aligned} \label{cubo} \partial \Omega=\Gamma(f,\Omega_h^\perp)\cup\Gamma(g,\Omega_h^\perp)\cup N,\end{aligned}$$ where the sets in the right-hand side of are pairwise disjoint, and $\mathscr S^{\infty-1}(N)=0$. In particular, $$\mathscr{S}^{\infty-1} (\partial \Omega\setminus (\Gamma(f,\Omega_h^\perp)\cup \Gamma(g,\Omega_h^\perp)))=0.$$ Since $C$ is convex, from Remark \[amarena\] it follows that $D_\gamma \1_C=\nu_C \mathscr S^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\partial C $. Further, $\partial \Omega=(\partial\Omega\cap C)\cup N$. Since $\Omega\subset C$, we have $N=\partial \Omega\cap \partial C$, and by [@AMP15 Corollary 2.3] $\nu_\Omega(x)=\nu_C(x)$ for $\mathscr S^{\infty-1}$-a.e. $x\in \partial\Omega\cap\partial C$. By construction, $[ \nu_C(x),h]_H=0$ for $\mathscr S^{\infty-1}$-a.e. $x\in\partial C$, and so $[\nu_\Omega(x),h]_H=0$ for $\mathscr S^{\infty-1}$-a.e. $x\in N$. Therefore, gives $|D_\gamma\1_{\Omega}|(N)=0$, and since $N\subset \partial\Omega$, from Remark \[amarena\] we deduce that $\mathscr S^{\infty-1}(N)=\mathscr S^{\infty-1}(N\cap \partial\Omega)=|D_\gamma\1_{\Omega}|(N)=0$. It remains to show that $\partial \Omega\cap C=\Gamma(g,\Omega_h^\perp) \cup \Gamma(f,\Omega_h^\perp)$. At first, we suppose that $x\in \Gamma(g,\Omega_h^\perp)$. Hence, there exists $y\in \Omega_h^\perp$ such that $x=y+g(y)h$. Arguing as above, we deduce that $x\in \overline {\Omega}\setminus \Omega=\partial \Omega$, and clearly $x\in C$. Further, the same arguments hold true for $x\in\Gamma(f,\Omega_h^\perp)$. Inclusion $\supseteq$ is therefore proved. To show the converse inclusion, we assume that $x\in \partial\Omega\cap C$. Therefore, there exists $\delta\in\R$ such that $x+\delta h\in\Omega$. Let us assume that $\delta>0$. If we set $y:=(I-\pi_h)x\in\Omega_h^\perp$ and $z:=\pi_hx$, then $y+z+th\in\Omega$ for any $t\in(0,\delta)$ (because $\Omega$ is convex), i.e., $z+th\in\Omega_y$ for any $t\in(0,\delta)$. Letting $t\rightarrow0$, we get that $\pi_hx\in \overline{\Omega_y}$. Necessarily, $\pi_hx\notin\Omega_y$, otherwise $x=y+z\in\Omega$, which contradicts the fact that $x\in\partial \Omega$. Hence, $z\in\partial (\Omega_y)=\{g(y),f(y)\}$ and, since $\delta>0$, we deduce that $z=g(y)$ which means $x=y+g(y)h\in\Gamma(g,\Omega_h^\perp)$. If $\delta<0$, arguing as above we infer that $z=f(y)$, from which it follows that $x=y+f(y)h\in\Gamma(f,\Omega_h^\perp)$. \[james\] For any $y_0\in \Omega_h^\perp$, there exists $r_0=r_0(y_0)>0$ such that $f,g$ are bounded Lipschitz functions on $B(y_0,r_0)\cap X_h^\perp$. As a byproduct, $f$ and $g$ are Gâteaux differentiable $\gamma_h^\perp$-a.e. $\in B(y_0,r_0)\cap X_h^\perp$ and belong to $W^{1,1}(B(y_0,r_0)\cap X_h^\perp,\gamma_h^\perp)$, for any $y_0\in \Omega_h^\perp$. Let us consider the function $g$; the argument for $f$ is similar. We show that for any $y_0\in\Omega_h^\perp$ there exists $r_0>0$ such that $g\in{\rm Lip}(B(y_0,r_0))$. To this aim, let us fix $y_0\in\Omega_h^\perp$. Hence, there exists $t_0\in\R$ such that $x_0:=y_0+t_0h\in\Omega$, and we can find $r_0>0$ such that $B(x_0,2r_0)\subset \Omega$. 
We claim that $B(y_0,2r_0)\cap X_h^\perp\subset\Omega_h^\perp$ and $g(y)\leq t_0$ for any $y\in B(y_0,2r_0)\cap X_h^\perp$: indeed, $\|y+t_0h-x_0\|_X=\|y-y_0\|_X<2r_0$, and so $y+t_0h\in B(x_0,2r_0)\subset \Omega$. This implies that $y\in\Omega_h^\perp$ and $t_0\in\Omega_{y}$, which means $g(y)\leq t_0$ for any $y\in B(y_0,2r_0)\cap X_h^\perp$. Hence, $g$ is convex and bounded from above on a symmetric domain. We claim that $g$ is bounded on $B(y_0,2r_0)\cap X_h^\perp$. Indeed, for any $y\in B(y_0,2r_0)\cap X_h^\perp$ let us consider $y'=y_0-(y-y_0)$. Then, we have $$\begin{aligned} g(y_0)=g\left(\frac12y+\frac12y'\right)\leq \frac12g(y)+\frac12g(y')\leq \frac12 g(y)+\frac12 t_0.\end{aligned}$$ Hence, $g(y)\geq 2g(y_0)-t_0$. Since $B(y_0,r)+rB(0,1)= B(y_0,2r)$, we infer that $g\in {\rm Lip}(B(y_0,r)\cap X_h^\perp)$ (see [@Phe93 Proposition 1.6 and the successive Remark therein]). The remain part follows from [@Bog98 Theorems 5.11.1 and 5.11.2] and from the definition of Sobolev space $W^{1,1}(A,\gamma_h^\perp)$ with $A\subset X_h^\perp$ open set. \[huxley\] We denote by $D_Gf$ and $D_Gg$ the Gâteaux derivatives of $f$ and $g$, respectively, where they are defined, and analogously by $\nabla_Hf$ and $\nabla_Hg$ their $H$-derivatives where they are defined. - The family $\mathscr A:=\{B(y_0,r_0)\cap X_h^\perp\subset \Omega_h^\perp: y_0\in\Omega_h^\perp, \ f,g\in{\rm Lip}_b(B(y_0,r_0)\cap X_h^\perp)\}$ is an open covering of $\Omega_h^\perp$. Since $X$ is separable, $\mathscr A$ admits a countable subcovering $\{B(y_n,r_n)\cap X_h^\perp\subset \Omega_h^\perp: y_n\in\Omega_h^\perp, \ f,g\in{\rm Lip}_b(B(y_n,r_n)\cap X_h^\perp), \ n\in\N\}$. Hence, $\nabla_Hf(y)$ and $\nabla_H g(y)$ (and also $D_Gf(y)$ and $D_Gg(y)$) are defined $\gamma_h^\perp$-a.e. $y\in \Omega_h^\perp$ and for such a values of $y$ we have $\nabla_Hf(y)=R_{\gamma}D_Gf(y)$ and $\nabla_Hg(y)=R_{\gamma}D_Gg(y)$. - From [@AlbMaRoc97 Corollary 1.4] there exists a partition of unity of Lipschitz functions subordinated to $\{B(y_n,r_n)\cap X_h^\perp:n\in\N\}$, i.e., there exists an open locally finite covering $\{A_n:n\in\N\}$ of $\Omega_h^\perp$ such that for any $n\in\N$ there exists $m=m(n)$ with $\overline{A_n}\subset B(y_m,r_m)\cap X_h^\perp$, and there exists a family $\{\psi_n:n\in\N\}\subset {\rm Lip}_b(X_h^\perp)$ such that ${\rm supp}(\psi_n)\subset A_n$ for any $n\in\N$, $\psi_n\geq 0$ for any $n\in\N$ and $\sum_{n\in\N}\psi_n=1$. Now we are ready to show the link between $D_\gamma \1_\Omega$ and $f$ and $g$. \[quattro\] Let $\Omega$, $\Omega_h^\perp$, $f$ and $g$ as above. Then, $$\begin{aligned} \label{aldous} D_\gamma\1_\Omega=-\nu_{f}\mathscr S^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\Gamma(f,\Omega_h^\perp)+\nu_g\mathscr S^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\Gamma(g,\Omega_h^\perp),\end{aligned}$$ where $\nu_f$ and $\nu_g$ have been defined in . Let $\varphi\in \mathcal F C_b^1(X)$. Since $\Omega$ has finite perimeter, for any $k\in H$ we have $$\begin{aligned} \int_\Omega\partial_k^*\varphi d\gamma=-\int_X\varphi d[D_\gamma\1_\Omega,k]_H.\end{aligned}$$ From Proposition \[business\] with $D$ in place of $\Omega_h^\perp$, we know that both ${\rm Epi}(g,\Omega_h^\perp)$ and ${\rm Epi}(f,\Omega_h^\perp)$ have finite perimeter, and $\Omega={\rm Epi}(g,\Omega_h^\perp)\setminus(\Gamma(f,\Omega_h^\perp)\cup{\rm Epi}(f,\Omega_h^\perp))$. 
Therefore, $$\begin{aligned} \int_{\Omega}\partial_k^*\varphi d\gamma =& \int_{{\rm Epi}(g,\Omega_h^\perp)}\partial_k^*\varphi d\gamma-\int_{{\rm Epi}(f,\Omega_h^\perp)}\partial_k^*\varphi d\gamma \\ = & -\int_X\varphi d[D_\gamma\1_{{\rm Epi}(g,\Omega_h^\perp)},k]_H + \int_X\varphi d[D_\gamma\1_{{\rm Epi}(f,\Omega_h^\perp)},k]_H \\ = & -\int_X\varphi d[\nu,k]_H,\end{aligned}$$ since Lemma \[cementine\] gives $\gamma(\Gamma(f,\Omega_h^\perp))=0$. Here, $\nu=D_\gamma\1_{{\rm Epi}(g,\Omega_h^\perp)}-D_\gamma\1_{{\rm Epi}(f,\Omega_h^\perp)}$. Therefore, $$\begin{aligned} \label{hugo} D_\gamma\1_\Omega=D_\gamma\1_{{\rm Epi}(g,\Omega_h^\perp)}-D_\gamma\1_{{\rm Epi}(f,\Omega_h^\perp)}.\end{aligned}$$ [By the finiteness of the perimeter of ${\rm Epi}(g,\Omega_h^\perp)$ we have that $|D_\gamma\1_{{\rm Epi}(g,\Omega_h^\perp)}|= \mathscr S^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\Gamma(g,\Omega_h^\perp) $ is a finite measure. Further, for any $\varphi\in {\rm Lip}_b(X_h^\perp)$ such that $\overline{{\rm supp}(\varphi)}\subset B(y_{m(n)},r_{m(n)})\cap X_h^\perp$ for some $n\in\N$, any $\theta\in{\rm Lip}_b(X)$ and any $k\in H$ we have $$\begin{aligned} \int_X\theta(x)\varphi(x-\pi_hx) [D_\gamma\1_{{\rm Epi}(g,\Omega_h^\perp)},k]_H(dx) = & - \int_X\1_{{\rm Epi}(g,\Omega_h^\perp)}\partial_k^*(\theta(x)\varphi(x-\pi_hx))\gamma(dx) \notag \\ = & - \int_X\1_{{\rm Epi}(g,B(y_{m(n)},r_{m(n)})\cap X_h^\perp)}\partial_k^*(\theta(x)\varphi(x-\pi_hx))\gamma(dx) \notag \\ = & \int_X\theta(x)\varphi(x-\pi_hx)[D_\gamma\1_{{\rm Epi}(g,B(y_{m(n)},r_{m(n)})\cap X_h^\perp)},k]_H(dx).\label{brunone}\end{aligned}$$ By density equality holds for any $\theta \in\B_b(X)$. Let $\{\psi_n:n\in\N\}$ be the partition of unity introduced in Remark \[huxley\] $(ii)$ and let $B\in \mathcal B(X)$. 
We have that $\psi_n\geq0$ everywhere for any $n\in\N$, so $$\begin{aligned} \sum_{n\in\N}\int_{X}\psi_n d\mathscr S^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\Gamma(g,\Omega_h^\perp)<\infty.\end{aligned}$$]{} Since $g,f\in W^{1,1}({B(y_{m(n)},r_{m(n)})\cap X_h^\perp})$, taking into account Theorem \[bicicletta\] and we have $$\begin{aligned} \int_B [\nu_g,k]_H&d\mathscr S^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\Gamma(g,\Omega_h^\perp) \\ = & \int_{X}\sum_{n\in\N}\psi_n(x-\pi_hx)\1_B(x) [\nu_g(x),k]_H\mathscr S^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\Gamma(g,\Omega_h^\perp)(dx)\\ = & \sum_{n\in\N}\int_{X}\psi_n(x-\pi_hx)\1_B(x) [\nu_g(x),k]_H\mathscr S^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\Gamma(g,\Omega_h^\perp)(dx)\\ = & \sum_{n\in\N}\int_{X}\psi_n(x-\pi_hx)\1_B(x) d[\nu_g(x),k]_H \mathscr S^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\Gamma(g,B(y_{m(n)},r_{m(n)})\cap X_h^\perp) (dx)\\ = & \sum_{n\in\N}\int_{X}\psi_n(x-\pi_hx)\1_B(x) [D_\gamma\1_{{\rm Epi}(g,\B(y_{m(n)},r_{m(n)})\cap X_h^\perp)},k]_H (dx)\\ = & \sum_{n\in\N}\int_{X}\psi_n(x-\pi_hx)\1_B(x) [D_\gamma\1_{{\rm Epi}(g,\Omega_h^\perp)},k]_H (dx)\\ = & \int_X\sum_{n\in\N}\psi_n(x-\pi_hx)\1_B (x) [D_\gamma\1_{{\rm Epi}(g,\Omega_h^\perp)} ,k]_H(dx)\\ = &\int_B d[D_\gamma\1_{{\rm Epi}(g,\Omega_h^\perp)},k]_H,\end{aligned}$$ where $\nu_g$ has been defined in and we can change series and integral thanks to the dominated convergence theorem. This shows that $$\begin{aligned} D_\gamma\1_{{\rm Epi}(g,\Omega_h^\perp)}=\nu_g\mathscr S^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\Gamma(g,\Omega_h^\perp)=\frac{-\nabla_H g(y)+h}{\sqrt{1+|\nabla_H g(y)|_H^2}}\mathscr S^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\Gamma(g,\Omega_h^\perp).\end{aligned}$$ The same argument applied to $f$ gives $$\begin{aligned} D_\gamma\1_{{\rm Epi}(f,\Omega_h^\perp)}=\nu_f\mathscr S^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\Gamma(f,\Omega_h^\perp)=\frac{-\nabla_H f(y)+h}{\sqrt{1+|\nabla_H f(y)|_H^2}}\mathscr S^{\infty-1}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}\Gamma(f,\Omega_h^\perp),\end{aligned}$$ and the thesis follows from . We cannot directly apply to since $f$ and $g$ do not belong to $W^{1,1}(\Omega_h^\perp,\gamma_h^\perp)$, but they belong to $W^{1,1}(B(y_n,r_n)\cap X_h^\perp,\gamma_h^\perp)$ with $n\in\N$. Hence, we don’t have global summability and we have to use the partition of unity. Since $\Omega$ is an open convex set, $\p$ is defined everywhere and $\partial \Omega=\{x\in X:\p(x)=1\}$. Moreover, it follows that $\p$ is a continuous convex function. Our aim is to prove that $\p(x)$ is Gâteaux differentiable $\mathscr S^{\infty-1}$-a.e. $x\in\partial \Omega$. We recall a characterization of Gâteaux differentiability of a continuous convex function (see [@Phe93 Proposition 1.8]). \[aperitivo\] Let $x_0\in X$. 
A continuous convex function $\psi$ defined on an open set $D\ni x_0$ is Gâteaux differentiable at $x_0$ if and only if there exists a unique linear functional $x^*\in X^*$ such that $$\begin{aligned} x^*(x-x_0)\leq \psi(x)-\psi(x_0), \quad \forall x\in D.\end{aligned}$$ In this case, $x^*=d\psi(x_0)$. In particular, by Lemma \[james\], for $\mathscr S^{\infty-1}$-a.e. $\tilde y\in \Omega_h^\perp$ there exists $r_{\tilde y}>0$ such that, for any $y\in B(\tilde y,r_{\tilde y})$, we have $$\begin{aligned} - D_Gf(\tilde y)(y-\tilde y) \leq -f(y)+f(\tilde y), \quad D_Gg(\tilde y)(y-\tilde y) \leq g(y)-g(\tilde y), \end{aligned}$$ where $D_Gf$ and $D_Gg$ are the Gâteaux differentials of $f$ and $g$, respectively. We introduce the following notation. Let $y^*\in (X_h^\perp)^*$, let $h\in QX^*$ and let $h^*\in X^*$ be such that $Qh^*=h$. Then, $x^*:=y^*\otimes h^*\in X^*$ denotes the element of $X^*$ such that $x^*(x)=y^*(y)+t$ for any $x=y+th$, with $y\in X_h^\perp$ and $t\in\R$. Now we have all the ingredients to prove the Gâteaux differentiability of $\p$. \[grasso\] In our setting, let $x\in\Gamma(f,\Omega_h^\perp)$ be such that $f$ is Gâteaux differentiable at $y$, where $x=y+f(y)h$. Then, it holds that $$\begin{aligned} \label{cena} D_G\p(x)=\frac{-D_Gf(y)\otimes h^*}{(-D_Gf(y)\otimes h^*)(x)}.\end{aligned}$$ Analogously, if $x\in\Gamma(g,\Omega_h^\perp)$ and $g$ is Gâteaux differentiable at $y$, where $x=y+g(y)h$, then we get $$\begin{aligned} \label{cenetta} D_G\p(x)=\frac{D_Gg(y)\otimes -h^*}{(D_Gg(y)\otimes-h^*)(x)}.\end{aligned}$$ In particular, $\p$ is Gâteaux differentiable and $H$-differentiable for $\mathscr S^{\infty-1}$-a.e. $x\in \partial \Omega$, and $$\begin{aligned} \label{tazza} \nabla_H\p(x)= \begin{cases} \displaystyle \frac{-\nabla_Hf(y)\otimes h}{( -D_Gf(y)\otimes h^*)(x)}, & x=y+f(y)h, \ {\textrm{ $f$ Gâteaux diff. at $y$}}, \vspace{2mm}\\ \displaystyle \frac{\nabla_Hg(y)\otimes - h}{( D_Gg(y)\otimes- h^*)(x)}, & x=y+g(y)h, \ {\textrm{ $g$ Gâteaux diff. at $y$}}. \end{cases}\end{aligned}$$ We fix $x_0\in\Gamma(f,\Omega_h^\perp)$ such that $f$ is Gâteaux differentiable at $y_0$, with $x_0:=y_0+f(y_0)h$ and $y_0\in\Omega_h^\perp$. Since $\p$ is continuous, from Proposition \[stereo\] we know that $\partial \p(x_0)$ is nonempty. We claim that any element of $\partial \p(x_0)$ equals . If the claim is true, by Proposition \[aperitivo\] it follows that $\p$ is Gâteaux differentiable at $x_0$ and $D_G\p(x_0)$ satisfies . Hence, it remains to prove the claim. Let $x^*\in \partial \p(x_0)$. Lemma \[pasta\] implies that $x^*\in C(\p)$, i.e., $x^*(x)\leq \p(x)$ for any $x\in X$, and $x^*(x_0) =\p(x_0)=1$. Since $y_0\in\Omega_h^\perp$ and $\Omega^\perp_h$ is an open set, there exists $r>0$ such that, for any $y\in B(y_0,r)\subset {\Omega_h^\perp}$, the element $x:=y+f(y)h\in\Gamma(f,\Omega_h^\perp)\subset\partial \Omega$. Therefore, $x^*(x)\leq \p(x)=1$ and $$\begin{aligned} 0 \geq & x^*(x) -x^*(x_0) =x^*( x-x_0) = x^*(y+f(y)h-y_0-f(y_0)h) = x^*(y-y_0)+x^*(h)(f(y)-f(y_0)),\end{aligned}$$ which implies that $$\begin{aligned} \label{caramelle} x^*(y-y_0) \leq x^*(h)(f(y_0)-f(y)).\end{aligned}$$ Let us show that $x^*(h)>0$. Indeed, if by contradiction we assume that $x^*(h)\leq0$, then for any $t<0$ we have $$\begin{aligned} \p(x_0+th)\geq x^*(x_0+th)=x^*(x_0)+t x^*(h)\geq 1.\end{aligned}$$ This means that $x_0+th=y_0+(t+f(y_0))h\notin \Omega $ for any $t<0$. This contradicts the fact that $y_0+ch\in \Omega$ for any $c\in (g(y_0),f(y_0))$, since $y_0\in\Omega_h^\perp$.
We have therefore proved that $x^*(h) >0$. Dividing both sides of by $x^*(h)$ we get $$\begin{aligned} z^*(y-y_0)\leq (-f)(y)-(-f)(y_0), \quad \forall y\in B(y_0,r),\end{aligned}$$ where $z^*:=( x^*(h))^{-1}x^*$. Since $(-f)$ is Gâteaux differentiable at $y_0$, Proposition \[aperitivo\] gives $z^*=D_G(-f)(y_0)=-D_Gf(y_0)$ on $X_h^\perp$. Now we compute $ x^*(h)$. From $x^*(x_0)=1$, we get $$\begin{aligned} 1=x^*(x_0) =x^*(y_0)+f(y_0)x^*(h)=-D_Gf(y_0)(y_0)x^*(h)+f(y_0)x^*(h). \end{aligned}$$ Hence, $$\begin{aligned} \label{alfred} x^*(h)=\frac1{-D_Gf(y_0)(y_0)+f(y_0)} =\frac1{(-D_Gf(y_0)\otimes h^*)(x_0)}.\end{aligned}$$ We are almost done. Indeed, for any $x\in X$, we consider the decomposition $x=y+th$ with $y\in X_h^\perp$ and $t\in \R$. The above computations reveal that $$\begin{aligned} x^*(x)= & x^*(y)+t x^*(h) = \frac{ -D_Gf(y_0)(y)}{( -D_Gf(y_0)\otimes h^*)(x_0)}+\frac{t}{( -D_Gf(y_0)\otimes h^*)(x_0)} \\ = & \frac{ -D_Gf(y_0)(y) +t}{( -D_Gf(y_0)\otimes h^*)(x_0)} = \frac{ (-D_Gf(y_0)\otimes h^*)(x)}{( -D_Gf(y_0)\otimes h^*)(x_0)}.\end{aligned}$$ The claim is therefore proved. The same arguments applied to $g$ give . We have proved that for $\gamma_h^\perp$-a.e. $y\in\Omega_h^\perp$ the function $\p$ is Gâteaux differentiable at $x_f=y+f(y)h$ and $x_g=y+g(y)h$. Equivalently, there exists a $\gamma_h^\perp$-negligible set $V\subset\Omega_h^\perp$ such that $\p$ is Gâteaux differentiable on $\Gamma(f,\Omega_h^\perp\setminus V)\cup \Gamma(g,\Omega_h^\perp\setminus V)$. Moreover, $$\begin{aligned} \int_{\Gamma(f,\Omega_h^\perp)}\1_{\Gamma(f,V)}d\mathscr S^{\infty-1} = \int_{\Omega_h^\perp}\1_{\Gamma(f,V)}(y+f(y)h)G_1(f(y))\sqrt{1+|\nabla_Hf(y)|_H^2}\gamma_h^\perp(dy)=0,\end{aligned}$$ since $\1_{\Gamma(f,V)}(y+f(y)h)=\1_V(y)$ and $\gamma_h^\perp(V)=0$. This gives $\mathscr S^{\infty-1}(\Gamma(f,V))=0$ and, analogously, we get $\mathscr S^{\infty-1}(\Gamma(g,V))=0$. From Lemma \[cementine\] we infer that $\p(x)$ is Gâteaux differentiable at $\mathscr S^{\infty-1}$-a.e. $x\in\partial \Omega$. The last part of the statement follows because, as recalled in Remark \[huxley\]$(i)$, $\nabla_H=R_\gamma D_G$. Now we are ready to prove Theorem \[triciclo\]. By the last part of Theorem \[grasso\], $\nabla_H\p$ is defined and non-zero $\mathscr S^{\infty-1}$-almost everywhere on $\partial\Omega$. As a consequence of we deduce that $(-D_Gf(y_0)\otimes h^*)(x_0)>0$ for any $y_0\in\Omega_{h}^\perp$ such that $x_0=y_0+f(y_0)h\in\Gamma(f,\Omega_h^\perp)$ and $f$ is differentiable at $y_0$, and $(D_Gg(y_0)\otimes (- h^*))(x_0)>0$ for any $y_0\in\Omega_{h}^\perp$ such that $x_0=y_0+g(y_0)h\in\Gamma(g,\Omega_h^\perp)$ and $g$ is differentiable at $y_0$. Hence, gives $$\begin{aligned} \label{ritardo} \frac{\nabla_H\p(x)}{|\nabla_H\p(x)|_H}=\nu_f(x),\end{aligned}$$ if $x\in \Gamma(f,\Omega_h^\perp)$ and $f$ is differentiable at $y$, with $x=y+f(y)h$, and $$\begin{aligned} \label{investimento} \frac{\nabla_H\p(x)}{|\nabla_H\p(x)|_H}=-\nu_g(x),\end{aligned}$$ if $x\in \Gamma(g,\Omega_h^\perp)$ and $g$ is differentiable at $y$, with $x=y+g(y)h$. Let $k\in H$ and let $\psi\in{\rm Lip}_b(X)$.
From , and we get $$\begin{aligned} \int_\Omega\partial_k^*\psi d\gamma = & -\int_X\psi d[ D_\gamma\1_\Omega,k]_H = \int_{\Gamma(f,\Omega_h^\perp)}\psi [ \nu_f,k]_H d \mathscr S^{\infty-1}-\int_{\Gamma(g,\Omega_h^\perp)}\psi [ \nu_g,k]_H d \mathscr S^{\infty-1} \\ = & \int_{\Gamma(f,\Omega_h^\perp)}\psi\frac{\partial_k\p}{|\nabla_H\p|}d \mathscr S^{\infty-1}+\int_{\Gamma(g,\Omega_h^\perp)}\psi \frac{\partial_k\p}{|\nabla_H\p|}d \mathscr S^{\infty-1} = \int_{\partial\Omega}\psi\frac{\partial_k\p}{|\nabla_H\p|}d \mathscr S^{\infty-1}.\end{aligned}$$ [^1]: email: davide.addona@unimib.it [^2]: email: giorgio.menegatti@unife.it [^3]: email: michele.miranda@unife.it
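For completeness, we note the elementary finite-dimensional counterpart of Theorem \[triciclo\] (a consistency check only): for an open convex set $\Omega\subset\R^n$ containing the origin, with gauge $\p$, the classical Gaussian integration-by-parts formula reads, for every $k\in\R^n$ and every $\psi\in{\rm Lip}_b(\R^n)$, $$\int_\Omega\big(\partial_k\psi(x)-\langle x,k\rangle\psi(x)\big)\,\gamma_n(dx) =\int_{\partial\Omega}\psi(x)\,\frac{\partial_k\p(x)}{|\nabla\p(x)|}\,G_n(x)\,\mathscr H^{n-1}(dx),$$ since $\nabla\p/|\nabla\p|$ is the outer unit normal at $\mathscr H^{n-1}$-a.e. point of $\partial\Omega$ and $G_n\,\mathscr H^{n-1}$ restricted to $\partial\Omega$ plays the role of $\mathscr S^{\infty-1}$.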
--- abstract: 'We demonstrate experimental generation of spatially-entangled photon-pairs by spontaneous parametric down conversion (SPDC) using a partial spatially coherent pump beam. By varying the spatial coherence of the pump, we show its influence on the down-converted photons’ spatial correlations and on their degree of entanglement, in excellent agreement with theory. We then exploit this property to produce pairs of photons with a specific degree of entanglement by tailoring the pump coherence length. This work thus unravels the fundamental transfer of coherence occurring in SPDC processes, and provides a simple experimental scheme to generate photon-pairs with a well-defined degree of spatial entanglement, which may be useful for quantum communication and information processing.' author: - Hugo Defienne - Sylvain Gigan bibliography: - 'Biblio.bib' title: 'Spatially-entangled Photon-pairs Generation Using Partial Spatially Coherent Pump Beam' --- Quantum entanglement is considered one of the most powerful resources for quantum information. In this respect, pairs of photons are the simplest system showing genuine quantum entanglement in all their degrees of freedom: spatial, spectral and polarization [@brendel_pulsed_1999; @kwiat_new_1995; @howell_realization_2004]. Most of the fundamental experiments and related applications are implemented using polarization-entangled photons. Examples range from the first test of Bell’s inequality [@aspect_experimental_1982] to the recent development of long-distance quantum communication systems [@liao_satellite--ground_2017]. In recent years, there has been renewed interest in continuous-variable entanglement between the transverse position and momentum of photon-pairs [@walborn_spatial_2010]. Indeed, their infinite-dimensional Hilbert space holds high potential for developing powerful information processing algorithms [@tasca_continuous-variable_2011] and secure cryptography protocols [@walborn_quantum_2006]. Furthermore, spatially-entangled photon-pair sources are at the basis of many quantum imaging approaches, including ghost imaging [@pittman_optical_1995], sub-shot-noise imaging [@brida_experimental_2010] and sub-Rayleigh imaging [@xu_experimental_2015]. All these quantum applications crucially rely on the properties of the down-converted photons. In this respect, their degree of entanglement is a fundamental parameter that generally defines the power of the quantum-based technique. As concrete examples, it sets the information bound in high-dimensional quantum communication systems [@dixon_quantum_2012] and the spatial resolution in certain quantum imaging schemes [@reichert_biphoton_2017]. However, most apparatuses used to produce entangled pairs are not flexible, and adapting the pairs’ characteristics to a specific use is generally a challenging task. In this work, we propose a novel experimental approach based on spontaneous parametric down conversion (SPDC) with a partial spatially coherent pump beam to produce entangled photon-pairs with a tunable degree of spatial entanglement. SPDC is the most popular technique to produce spatially-entangled photon-pairs. In its conventional form, a coherent Gaussian beam of light (i.e. the pump beam) illuminates a non-linear crystal ($\chi^2$ non-linearity) that produces pairs of photons in accordance with energy and momentum conservation [@hong_theory_1985].
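For completeness, these conservation rules read (a standard statement, with the signal and idler labels $s$ and $i$ introduced here only for illustration) $$\omega_p=\omega_s+\omega_i, \qquad \vec{k}_p=\vec{k}_s+\vec{k}_i,$$ where $\omega_p$ and $\vec{k}_p$ denote the pump frequency and wave vector; the transverse part of the momentum relation, satisfied up to the phase-matching tolerance of the finite crystal, is the one probed in the measurements below.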
Properties of down-converted photons, including their degree of entanglement, are set by the crystal parameters and the pump beam properties [@rubin_transverse_1996; @souto_ribeiro_partial_1997; @joobeur_coherence_1996; @fonseca_transverse_1999; @saleh_wolf_2005]. During this process, coherence properties of the pump beam get entirely transferred to those of the two-photon field [@monken_transfer_1998; @kulkarni_transfer_2017; @ismail_polarization-entangled_2017]. Interestingly, none of these experimental studies consider the use of a non-perfectly spatially coherent pump beam to produce photon-pairs, with the notable exception of the recent work of Y. Ismail et al. [@ismail_polarization-entangled_2017] that investigates polarization-entanglement between photons. Theoretically, the link between the spatial coherence properties of the pump and the degree of entanglement of the down-converted field has been precisely established in [@jha_spatial_2010; @olvera_two_2015; @giese_influence_2018]. In this work, we first investigate experimentally the influence of the pump spatial coherence on the correlation properties of the spatially-entangled photon pairs. We then demonstrate the dependency of the degree of entanglement, characterized by the Schmidt number [@fedorov_gaussian_2009], on the coherence of the pump. Finally, we exploit this effect to generate photon-pairs with a well-defined degree of entanglement by manipulating the transverse coherence length of the pump. Figure \[Figure1\].a shows the apparatus used to produce spatially entangled photon-pairs. A partially coherent beam of light is generated by intercepting the propagation path of a continuous-wave ($405$ nm) Gaussian laser beam with a random diffuser (plastic sleeve), which can be either static or rotating. Blue photons interact with a tilted non-linear crystal of $\beta$-barium borate (BBO) to produce pairs of infrared photons by type-I SPDC. At the output of the crystal, the transverse momentum $\vec{k}$ of the photons is mapped onto pixels of an electron-multiplying charge-coupled device (EMCCD) camera by a Fourier-lens imaging system ($f_3$). When the diffuser is maintained static, the crystal is illuminated by a speckle pattern (Figure \[Figure1\].b). A direct intensity image (Figure \[Figure1\].c) is acquired by photon accumulation on the camera sensor and shows a homogeneous structure, very similar to the one observed without diffuser (Figure \[Figure2\].a1). However, when measuring the joint probability distribution $\Gamma$ with the EMCCD camera [@reichert_massively_2018; @defienne_general_2018], its projection along the sum-coordinate diagonal shows a central peak surrounded by a speckle pattern (Figure \[Figure1\].c). The sum-coordinate projection represents the probability of detecting the two photons with symmetric momenta relative to their mean $\vec{k_1}+\vec{k_2}$ [@moreau_realization_2012; @tasca_imaging_2012] (see [@supmat] section 4). The presence of this speckle, together with the absence of any spatial structure in the direct intensity image, demonstrates that first-order spatial coherence of the pump field (i.e. the intensity speckle pattern) gets entirely transferred to second-order coherence of the down-converted field (i.e. the coincidence speckle pattern). As a consequence, the spatial incoherence properties of the pump must be retrieved in the momentum correlations of the pairs. When the diffuser rotates on a timescale shorter than the camera integration time, the pump acts as a partial spatially coherent beam. 
Using a Gaussian-Schell model for the pump beam [@mandel_coherence_1965] and a Gaussian approximation for the down converted field [@fedorov_gaussian_2009] (see [@supmat] section 1), $\Gamma$ is written as $$\label{eq1} \Gamma(\vec{k_1},\vec{k_2}) \sim \exp \left(- \frac{ \sigma_r^2 |\vec{k_1}-\vec{k_2}|^2}{2} \right) \exp \left(- \frac{|\vec{k_1}+\vec{k_2}|^2}{2 \sigma_k^2} \right)$$ The position-correlation width $\sigma_r$ only depends on the crystal length $L$ and the pump wavelength $\lambda_p$ as $\sigma_r = \sqrt{\alpha L \lambda_p/(2 \pi)}$ ($\alpha = 0.455$ [@chan_transverse_2007]). The momentum-correlation width $\sigma_k$ depends on the pump beam waist $\omega$ and its correlation length $\ell_c$ as $$\label{eq2} \sigma_k = \sqrt{\frac{1}{\ell_c^{2}} + \frac{1}{4 \omega^{2}}}$$ For a given crystal, varying the coherence properties of the pump beam (i.e. waist and correlation length) modifies the spatial structure of the two-photon wave function and its associated joint probability distribution. In particular, decreasing the correlation length at fixed waist induces an increase of the momentum-correlation width: when one photon of a pair is detected at $\vec{k}$, the area of maximum probability of detection for its twin is centered at $-\vec{k}$ and spreads as $\sigma_k^2 \sim \ell_c^{-2}$. This effect is shown in Figure \[Figure2\]. For a perfectly coherent pump beam (no diffuser), the direct intensity image (Figure \[Figure2\].a1) shows a well-defined homogeneous disk and the $X_+$-projection of $\Gamma$ (Figure \[Figure2\].a2) shows a strong anti-diagonal. The $X_+$-projection image represents the joint probability of detecting one photon with momentum $k_{y_1}$ ($k_{x_1}$ can take any possible value) and its twin with momentum $k_{y_2}$ and $k_{x_2}=-k_{x_1}$ (see [@supmat] section 4). Such a strong anti-diagonal is a clear signature of transverse momentum conservation in SPDC using a collimated pump beam. When a rotating diffuser is used (single layer of plastic sleeve), the pump beam becomes partially coherent, which results in a blurring of the edges of the direct intensity disk (Figure \[Figure2\].b1) and an increase of the diagonal width in the $X_+$-coordinate projection (Figure \[Figure2\].b2). The broadening of the momentum correlations with decreasing pump spatial coherence is even more apparent when using rougher diffusers, made by superimposing two layers of plastic sleeve (Figure \[Figure2\].c1 and c2) and three layers (Figure \[Figure2\].d1 and d2), respectively. A quantitative analysis of this effect is provided in Figure \[Figure3\]. On the one hand, values of $\sigma_k$ are determined by fitting the sum-coordinate projection of $\Gamma$ (Figure \[Figure3\].b) with a Gaussian model [@fedorov_gaussian_2009]. On the other hand, values of $\ell_c$ are measured by removing the crystal and Fourier-imaging the pump beam directly onto the camera (see [@supmat] section 3). The linear regression of $\sigma_k^2 = f(1/\ell_c^2)$ (Figure \[Figure3\].a) returns a slope value of $0.82 \pm 0.3 $ with a determination coefficient of $0.98$. This result is in very good agreement with Equation \[eq2\] and shows the relevance of the theoretical model [@jha_spatial_2010; @giese_influence_2018].
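As a quick numerical illustration of Equation \[eq2\] (a minimal sketch, not the analysis code used for the paper), the momentum-correlation widths expected for the three measured pump coherence lengths can be computed directly from the pump waist $\omega \approx 89~\mu$m quoted in the text; the results can be compared with the measured values gathered in Table \[table1\] below.

```python
import numpy as np

# Minimal sketch of Eq. (2): momentum-correlation width sigma_k as a
# function of the pump transverse coherence length l_c, using the pump
# waist omega ~ 89 um quoted in the text.  The l_c values are the ones
# measured for the three rotating diffusers (see Table 1).

omega = 89e-6                         # pump waist (m)
l_c_values = [122e-6, 59e-6, 41e-6]   # measured coherence lengths (m)

for l_c in l_c_values:
    sigma_k = np.sqrt(1.0 / l_c**2 + 1.0 / (4.0 * omega**2))   # rad/m
    print(f"l_c = {l_c*1e6:5.0f} um  ->  sigma_k = {sigma_k/1e3:5.1f} rad/mm")

# Output (to be compared with the measured sigma_k values in Table 1):
# l_c =   122 um  ->  sigma_k =   9.9 rad/mm
# l_c =    59 um  ->  sigma_k =  17.9 rad/mm
# l_c =    41 um  ->  sigma_k =  25.0 rad/mm
```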
  Correlation length ($\mu$m)   $\sigma_k$ (rad mm$^{-1}$)   $\sigma_r$ ($\mu$m)   $K$ (exp.)     $K$ (theory)
  ----------------------------- ---------------------------- --------------------- -------------- --------------
  $+\infty$                     $2.4 \pm 0.1$                $7.9 \pm 0.3$         $727 \pm 74$   $591$
  122                           $9.7 \pm 0.3$                $8.4 \pm 0.2$         $38 \pm 4$     $115$
  59                            $17.2 \pm 0.3$               $7.1 \pm 0.1$         $17 \pm 1$     $32$
  41                            $22.5 \pm 0.5$               $7.1 \pm 0.2$         $10 \pm 1$     $16$

  : Measured correlation widths and Schmidt numbers for the different pump correlation lengths.[]{data-label="table1"}

Not only does partial coherence influence momentum correlations between pairs, but it also modifies their degree of entanglement. A universal metric to quantify it is the Schmidt number $K$, which is directly related to the non-separability of the two-photon state [@law_analysis_2004]. Experimentally, $K$ is estimated from measurements of $\sigma_k$ and $\sigma_r$ using the formula $K = 1/4 \left[ 1/(\sigma_r \sigma_k)+\sigma_r \sigma_k\right]^2$ [@fedorov_schmidt_2015]. While $\sigma_k$ is determined using the apparatus described previously (Figure \[Figure1\]), values of $\sigma_r$ are measured using a different experimental configuration in which the output surface of the crystal is imaged onto the EMCCD camera (see [@supmat] section 2). As reported in Table \[table1\], $\sigma_r$ is constant for all diffusers and does not depend on the pump coherence properties. Consequently, the measured degree of entanglement $K$ \[$K$ (exp.) in Table \[table1\]\] decreases with the reduction of the correlation length $\ell_c$. As a comparison, values of $K$ \[$K$ (theory) in Table \[table1\]\] are calculated directly from crystal and pump properties using the theoretical model (Equation \[eq1\]) $$\label{eq3} K = \frac{1}{4} \left[ \frac{2 \omega l_c \sqrt{2 \pi}}{ \sqrt{ \alpha L \lambda_p (l_c^2+4 \omega^2) }} + \frac{ \sqrt{\alpha L \lambda_p (l_c^2+4 \omega^2)}}{2 \omega l_c \sqrt{2 \pi}} \right]^2$$ with $L \approx 0.9$ mm (crystal length), $\lambda_p \approx 405$ nm (pump wavelength), $\alpha = 0.455$ [@chan_transverse_2007] and $\omega \approx 89~\mu$m (pump waist). Despite the many approximations that have been made, and taking into account experimental uncertainties, we observe excellent agreement between the theoretically expected values of $K$ and those measured experimentally. Knowing the characteristics of the crystal and the pump therefore allows the degree of entanglement of the source to be predicted reasonably well. For a given crystal, we show that manipulating the pump coherence using rotating random diffusers enables deterministic control of the degree of entanglement of the generated two-photon field. The future of quantum optical technologies depends on our capacity to detect [@reichert_massively_2018; @defienne_general_2018] and manipulate photons [@defienne_adaptive_2018; @peng_manipulation_2018], but it also crucially relies on our ability to generate photons with properties adapted to specific applications. In our work, we show how to produce spatially-entangled photons with a specific degree of entanglement by controlling the spatial coherence of the pump beam with rotating random diffusers. For this, we investigated the fundamental transfer of coherence between the pump and the down-converted field and showed good agreement with theory [@jha_spatial_2010; @giese_influence_2018]. This novel source may play an important role in free-space quantum communications, since it has recently been shown in theory that a two-photon field is less susceptible to atmospheric turbulence when it is generated by a partial spatially coherent beam [@qiu_influence_2012]. 
In this respect, the use of a spatial light modulator in place of the random diffusers will be the next natural step, enabling entanglement to be tailored in real time and used as a tunable parameter to produce quantum states that are optimal for a given protocol and strength of turbulence. Incoherent two-photon illumination could also play an important role in optical imaging to improve resolution [@hong_two-photon_2018]. Finally, this work may have technological impact as it paves the way towards the development of cheap and compact photon-pair sources using light-emitting diodes as pump beams [@salter_entangled-light-emitting_2010]. Theoretical model ================= Joint probability distribution in momentum-space $\Gamma(\vec{k_1},\vec{k_2})$ ------------------------------------------------------------------------------ As demonstrated in [@giese_influence_2018] (Equation B.8), the joint probability distribution $\Gamma(\vec{k_1},\vec{k_2})$ for a partial spatially coherent pump beam is written as $$\label{eqSM1} \Gamma(\vec{k_1},\vec{k_2}) \sim |\tilde{\chi}(|\vec{k_1}-\vec{k_2}|^2)|^2 \tilde{V}(\vec{k_1}+\vec{k_2},\vec{k_1}+\vec{k_2})$$ where $\tilde{\chi}$ is the phase-matching function and $\tilde{V}$ is the transverse momentum-correlation function of the pump field. In our work, we use two distinct approximations: - A Gaussian approximation [@fedorov_gaussian_2009] for $\tilde{\chi}$: $$\label{eqSM2} |\tilde{\chi}(|\vec{k_1}-\vec{k_2}|^2)|^2 \sim \exp \left[- \frac{\sigma_r^2 |\vec{k_1}-\vec{k_2}|^2}{2} \right]$$ where $\sigma_r = \sqrt{\alpha L \lambda_p/(2 \pi)}$, with $\lambda_p$ the pump wavelength, $L$ the crystal length and $\alpha = 0.455$ [@chan_transverse_2007]. - A Gaussian-Schell approximation [@mandel_coherence_1965] for the partial spatially coherent pump beam, which results in $\tilde{V}$ being written as $$\label{eqSM3} \tilde{V}(\vec{k},\vec{k'}) \sim \exp \left[- \frac{\omega^2 |\vec{k}-\vec{k'}|^2}{2} - \frac{ |\vec{k}+\vec{k'}|^2}{8 \sigma_k^2} \right]$$ where $\sigma_k = \sqrt{1/l_c^2+1/(4 \omega^2)}$, with $l_c$ the coherence length of the pump and $\omega$ its waist. Combining Equations \[eqSM2\], \[eqSM3\] and \[eqSM1\] leads to Equation \[eq1\]. Sum-coordinate projection of $\Gamma(\vec{k_1},\vec{k_2})$ ---------------------------------------------------------- The sum-coordinate projection of $\Gamma$, denoted $P_+^{\Gamma}$, is calculated by integrating Equation \[eq1\] along $\vec{k_1}+\vec{k_2}$ and takes the simple form $$\label{eqSM31} P_+^{\Gamma}(\vec{k_1}+\vec{k_2}) \sim \exp \left(- \frac{ |\vec{k_1}+\vec{k_2}|^2}{2 \sigma_k^2} \right)$$ This model is used to fit the experimental data shown in Figure \[Figure3\].b and to determine the values of $\sigma_k$ reported in Table \[table1\]. Correlation-positions and $\sigma_r$ measurements ================================================= Position-correlations --------------------- Position-correlations between pairs of photons are observed by imaging the output surface of the crystal and measuring the joint probability distribution $\Gamma$, as shown in Figure \[FigureSM1\].a. The diffuser is maintained static and is the same as the one used in Figure \[Figure1\]. The direct intensity image (Figure \[FigureSM1\].b) is acquired by photon accumulation on the camera sensor and shows a speckle structure. 
When measuring the joint probability distribution $\Gamma$ with the EMCCD camera [@reichert_massively_2018; @defienne_general_2018], its projection along the minus-coordinate diagonal shows a central peak (Figure \[FigureSM1\].c). The minus-coordinate projection image represents the probability of detecting two photons from a pair separated by an (oriented) distance $\vec{r_1}-\vec{r_2}$ [@moreau_realization_2012; @tasca_imaging_2012]. The strong peak at the center is a clear signature of the strong correlations in position between pairs of photons. $\sigma_r$ measurements using partially coherent pump beams ----------------------------------------------------------- Values of $\sigma_r$ are determined using the experimental setup described in Figure \[FigureSM1\].a. The same rotating diffusers (composed of one, two and three layers of plastic sleeve, respectively) as those of Figure \[Figure2\] and Figure \[Figure3\] are used to generate partially coherent pump beams with different correlation lengths. Interestingly, Figure \[FigureSM2\] shows that neither the direct intensity images (Figure \[FigureSM2\].a1-d1) nor the $X_-$-coordinate projections (Figure \[FigureSM2\].a2-d2) depend on the coherence properties of the pump beam. The $X_-$-coordinate image represents the joint probability of detecting one photon at position $y_1$ ($x_1$ can take any possible value) and its twin at position $y_2$ with $x_2 \approx x_1$ (see section 4). The strong diagonal is a clear signature of position-correlations: both photons are always produced at the same position in the crystal during the SPDC process, and this property does not depend on the coherence properties of the pump beam. Similarly to the calculations of section 1 and those of [@giese_influence_2018], the use of a Gaussian approximation [@fedorov_gaussian_2009] and a Gaussian-Schell model [@mandel_coherence_1965] allows writing the joint probability distribution $\Gamma(\vec{r_1},\vec{r_2})$ as $$\label{eqSM4} \Gamma(\vec{r_1},\vec{r_2}) \sim \exp \left(- \frac{ |\vec{r_1}-\vec{r_2}|^2}{2 \beta \sigma_r^2 } \right) \exp \left(- 2 \omega^2 |\vec{r_1}+\vec{r_2}|^2 \right)$$ where $\omega$ is the pump beam waist and $\beta = (\alpha+\alpha^{-1})/\alpha$ ($\alpha = 0.455$ [@chan_transverse_2007]). The minus-coordinate projection of $\Gamma$, denoted $P_-^{\Gamma}$, is calculated by integrating Equation \[eqSM4\] along $\vec{r_1}-\vec{r_2}$ and takes the simple form $$\label{eqSM41} P_-^{\Gamma}(\vec{r_1}-\vec{r_2}) \sim \exp \left(- \frac{ |\vec{r_1}-\vec{r_2}|^2}{2 \beta \sigma_r^2 } \right)$$ The minus-coordinate projection images acquired for different correlation lengths are shown in Figure \[FigureSM2\].a3-d3. Values of $\sigma_r$ are determined by fitting the minus-coordinate images with Equation \[eqSM41\] and are reported in Table \[table1\]. Pump beam analysis and coherence length $\ell_c$ measurement ============================================================ Properties of the pump beam, namely its waist $\omega$ and correlation length $\ell_c$, are measured using the two experimental configurations described in Figure \[FigureSM3\].a and b. Intensity distribution of the pump beam in the crystal plane ------------------------------------------------------------ The intensity distribution of the pump beam in the crystal plane is measured using the experimental configuration described in Figure \[FigureSM3\].b. 
Figures \[FigureSM3\].c-f show the results of four acquisitions performed without diffuser (Figure \[FigureSM3\].c), with a rotating diffuser composed of one layer of plastic sleeve (Figure \[FigureSM3\].d), two layers (Figure \[FigureSM3\].e) and three layers (Figure \[FigureSM3\].f). Since the diffusers rotate with a period much shorter than the acquisition time of the camera, the distribution of pump intensity at the crystal plane is homogeneous and does not depend on the coherence properties of the pump. Beam waist and correlation length measurements ---------------------------------------------- Measurements of $\omega$ and $\ell_c$ are performed using the experimental configuration of Figure \[Figure1\].a. In this case, the pump field at the crystal plane is Fourier-imaged onto the EMCCD camera via lens $f_3$. Figures \[FigureSM3\].g-j show four direct intensity images acquired respectively without diffuser (Figure \[FigureSM3\].g), with a rotating diffuser composed of one layer of plastic sleeve (Figure \[FigureSM3\].h), two layers (Figure \[FigureSM3\].i) and three layers (Figure \[FigureSM3\].j). For a perfectly coherent pump, the width of the focus (denoted $\sigma_{p_0}$) in Figure \[FigureSM3\].g is inversely proportional to the beam waist $\omega$ $$\label{eqSM5} \omega = \frac{1}{\sigma_{p_0}}$$ Fitting this intensity distribution with a Gaussian model provides an estimate of $\omega \approx 89 \mu$m. For partially coherent pump beams, the intensity distributions in the Fourier domain shown in Figure \[FigureSM3\].h-j are written as $$\label{eqSM6} I_p(\vec{k_p}) \sim \exp \left[ - \frac{|\vec{k_p}|^2}{2 \sigma_p^2} \right]$$ where $\sigma_p = 2 \sqrt{1/\ell_c^2+1/(4 \omega^2)}$ (Gaussian-Schell model [@mandel_coherence_1965]). Fitting these distributions with Equation \[eqSM6\] allows determining $\sigma_p$ in each case and calculating $\ell_c$ with the formula $$\label{eqSM7} \ell_c = \frac{2}{\sqrt{\sigma_p^2-\sigma_{p_0}^2}}$$ Values of $\ell_c$ are reported in Table \[table1\]. Image processing ================ Measurement process ------------------- We use an EMCCD Andor Ixon Ultra 897 to measure the joint probability distribution $\Gamma$ of spatially entangled photon pairs using a technique described in [@defienne_general_2018]. The camera was operated at $-60^{\circ}$C, with a horizontal pixel shift readout rate of $17$ MHz, a vertical pixel shift every $0.3\,\mu$s and a vertical clock amplitude voltage of $+4$ V above the factory setting. When the camera is illuminated by photon pairs, a large set of images is first collected using an exposure time chosen to have an intensity per pixel approximately $5$ times larger than the mean value of the noise ($\sim 171$ grey values). No threshold is applied. Processing the set of images using the formula provided in [@defienne_general_2018] finally enables $\Gamma$ to be reconstructed. Projections of the joint probability distribution ------------------------------------------------- In our experiment, $\Gamma$ takes the form of a 4-dimensional matrix containing $(75 \times 75)^2 \sim 10^{8}$ elements, where $75 \times 75$ corresponds to the size of the illuminated region of the camera sensor. The information content of $\Gamma$ is analyzed using four types of projections: 1. The sum-coordinate projection, defined as $$P_+^{\Gamma}(\boldsymbol{\vec{k}_{+}}) = \sum_{\vec{k}} \Gamma(\vec{k}_{+}-\vec{k},\vec{k})$$ It represents the probability of detecting pairs of photons generated in all symmetric directions relative to the mean momentum $\vec{k}_{+}$. 
2. The minus-coordinate projection, defined as $$P_-^{\Gamma}(\boldsymbol{\vec{r}_{-}}) = \sum_{\vec{r}} \Gamma(\vec{r}_{-}+\vec{r},\vec{r})$$ It represents the probability for two photons of a pair to be detected in coincidence between pairs of pixels separated by an oriented distance ${\vec{r}_-}$. 3. The $X_+$-coordinate projection, defined as $$\begin{aligned} P_{X+}^{\Gamma} (k_{y_1},k_{y_2}) &= \sum_{k_{x}} \Gamma(k_{y_1},k_{y_2}|k_{x},-k_{x}) \\ &= \sum_{k_{x}} \frac{\Gamma(k_{y_1},k_{y_2},k_{x},-k_{x})}{\sum_{k_{x_1},k_{x_2}} \Gamma(k_{y_1},k_{y_2},k_{x_1},k_{x_2})}\end{aligned}$$ It represents the probability of detecting one photon with momentum $k_{y_1}$ (with no constraints on $k_{x_1}$) given that the other is detected with a momentum $k_{y_2}$ and $k_{x_2}=-k_{x_1}$ \[symmetric columns\]. 4. The $X_-$-coordinate projection, defined as $$\begin{aligned} P_{X-}^{\Gamma} ({y_1},{y_2}) &= \sum_{{x}} \Gamma(y_1,y_2|x,x+1) \\ &= \sum_{{x}} \frac{\Gamma({y_1},{y_2},{x},{x}+1)}{\sum_{{x_1},{x_2}} \Gamma({y_1},{y_2},{x_1},{x_2})}\end{aligned}$$ It represents the probability of detecting one photon at position ${y_1}$ (with no constraints on ${x_1}$) given that the other is detected at position ${y_2}$ with ${x_2}={x_1}+1$ \[adjacent columns\].
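As a concrete illustration of this bookkeeping, the following is a minimal sketch (with synthetic random data standing in for the measured $\Gamma$, not the authors' actual analysis code) of how the sum- and minus-coordinate projections can be accumulated from a 4-dimensional array.

```python
import numpy as np

# Minimal sketch of the sum- and minus-coordinate projections defined above,
# applied to a small synthetic 4-D array Gamma[i1, j1, i2, j2] standing in
# for the measured (75 x 75)^2 joint probability distribution.

N = 16                                       # toy sensor of N x N pixels
rng = np.random.default_rng(1)
gamma = rng.random((N, N, N, N))             # Gamma(k1, k2), k = (row, col)

p_plus = np.zeros((2 * N - 1, 2 * N - 1))    # binned over k1 + k2
p_minus = np.zeros((2 * N - 1, 2 * N - 1))   # binned over k1 - k2 (offset by N-1)
for i1 in range(N):
    for j1 in range(N):
        plane = gamma[i1, j1]                               # all (i2, j2) for this (i1, j1)
        p_plus[i1:i1 + N, j1:j1 + N] += plane               # sum coordinates (i1+i2, j1+j2)
        p_minus[i1:i1 + N, j1:j1 + N] += plane[::-1, ::-1]  # differences (i1-i2+N-1, j1-j2+N-1)

print(p_plus.shape, p_minus.shape)           # both (2N-1, 2N-1)
```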
--- abstract: 'The Large Intelligent Surface (LIS) concept has emerged recently as a new paradigm for wireless communication, remote sensing and positioning. Despite its potential, there are a lot of challenges from an implementation point of view, with the interconnection data-rate and computational complexity being the most relevant. Distributed processing techniques and hierarchical architectures are expected to play a vital role in addressing this. In this paper we perform algorithm-architecture co-design and analyze the hardware requirements and architecture trade-offs for a discrete LIS to perform uplink detection. By doing this, we expect to give concrete case studies and guidelines for efficient implementation of LIS systems.' author: - '\' bibliography: - 'IEEEabrv.bib' - 'LIS\_architecture.bib' title: Processing Distribution and Architecture Tradeoff for Large Intelligent Surface Implementation --- Introduction {#section:intro} ============ The LIS concept has the potential to revolutionize wireless communication, wireless charging and remote sensing [@husha_data; @husha_data2; @husha_asign; @husha_pos] through the use of electromagnetically active man-made surfaces. In Fig. \[fig:LIS\_concept\] we show the concept of a LIS serving three users simultaneously. A LIS consists of a continuous radiating surface placed relatively close to the users. Each part of the surface is able to receive and transmit electromagnetic (EM) waves with a certain degree of control, so that the EM waves can be focused in 3D space with high resolution, creating a new world of possibilities for power-efficient communication. As pointed out in [@husha_data], there is no practical difference between a continuous LIS and a grid of antennas (discrete LIS) as the surface area grows, provided that the antenna spacing is sufficiently dense. Based on this, we study a discrete version of a LIS for practical reasons throughout the rest of this paper. There are important challenges from an implementation point of view. The large number of antennas present in the LIS produces a huge baseband data-rate, which needs to be routed to the Central Digital Signal Processor (CDSP) through the backplane network. As an example, a $2m\times20m$ LIS contains $\sim 28,500$ antennas in the 4 GHz band (assuming half-wavelength spacing), with the corresponding radio frequency (RF) and analog-to-digital converter (ADC) blocks. Then, if each ADC uses 8 bits per I and Q, that makes a total baseband data-rate of 45.5 Tbps. This is orders of magnitude higher than in the massive MIMO counterpart, where this issue has been analyzed [@cavallaro; @jeon_li; @jesus_journal_MaMi; @li_jeon; @muris]. LIS is fundamentally different from massive MIMO due to the potentially very large physical size of the surface and the amount of data to be handled, which requires specific processing, resources and performance analysis. [@juan_VTC19; @IIC_ArXiv] are preliminary works addressing the distributed processing issue with high-level architecture and performance analysis, but they do not evaluate the associated cost. To the best of our knowledge, there is no publication that jointly analyzes the processing distribution, performance and corresponding cost for LIS. ![A LIS serving multiple users simultaneously.[]{data-label="fig:LIS_concept"}](LIS_concept.eps){width="0.85\linewidth"} In this paper, we propose to tackle those challenges by leveraging algorithm and architecture co-design. 
At the algorithm level, we explore the unique features of LIS (e.g., very large aperture) to develop uplink detection algorithms that enable the processing to be performed locally and distributed over the surface. This will significantly relax the requirement for interconnection bandwidth. At the hardware architecture design level, we propose to panelize the LIS to simplify manufacturing and installation. A hierarchical interconnection topology is developed accordingly to provide efficient and flexible data exchange between panels. Based on the proposed algorithm and architecture, extensive analysis has been performed to enable trade-offs between system capacity, interconnection bandwidth, computational complexity, and processing latency. This will provide high-level design guidelines for the real implementation of LIS systems. Large Intelligent Surfaces {#section:LIS} ========================== In this article we consider a LIS for communication purposes only. Due to the large aperture of the LIS, the users are generally located in the near field. A consequence of this is that the LIS can harvest up to 50% of the power transmitted by a user. This is one of the fundamental differences from current 5G massive MIMO. One consequence of this difference is that the transmitted power in uplink/downlink is much lower than in traditional systems, opening the door for extensive use of low-cost and low-power analog components. Another important characteristic of LIS is that users are not seen by the entire surface, as shown in Fig. \[fig:LIS\_concept\], which can be exploited by the use of localized digital signal processing, allowing a uniform distribution of computational resources and a reduced interconnection bandwidth without significantly sacrificing the system capacity. System Model ------------ We consider the transmission from $K$ single antenna users to a LIS with a total area $A$, containing $M$ antenna elements. We assume the antennas are distributed evenly with half-wavelength spacing. The $M\times 1$ received vector at the LIS is given by $$\mathbf{y} = \sqrt{\rho}\Hbf\mathbf{x}+\mathbf{n},$$ where $\mathbf{x}$ is the $K\times 1$ user data vector, $\mathbf{H}$ is the $M \times K$ normalized channel matrix such that $\|\Hbf\|^{2}=MK$, $\rho$ is the $\mathrm{SNR}$ and $\mathbf{n} \sim \mathcal{CN}(0,\I)$ is a $M \times 1$ noise vector. Assuming that user $k$ is located at $(x_{k},y_{k},z_{k})$, with the LIS lying in the plane $z=0$, the channel between this user and a LIS antenna at location $(x,y,0)$ is given by the complex value [@husha_data] $$h_{k}(x,y)=\frac{\sqrt{z_{k}}}{2\sqrt{\pi} d_{k}^{3/2}}\exp{\left( -\frac{2\pi j d_{k}}{\lambda} \right)}, \label{eq:channel}$$ where $d_{k}=\sqrt{z_{k}^{2}+(x_{k}-x)^2+(y_{k}-y)^2}$ is the distance between the user and the antenna, line-of-sight (LOS) propagation between them is assumed, and $\lambda$ is the wavelength. Panelized Implementation of LIS ------------------------------- ![Overview of the LIS processing distribution and backplane interconnection. Backplane interconnection in red.[]{data-label="fig:LIS_backplane"}](LIS_backplane.eps){width="0.5\linewidth"} An overview of the processing distribution and interconnection in a LIS is shown in Fig. \[fig:LIS\_backplane\]. As can be seen, we propose that a LIS can be divided into units which are connected with backplane interconnections. We will use the term \emph{panel} to refer to each of these units. Each panel contains a certain number of antennas (and transceiver chains). 
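As a side note, a minimal numerical sketch of the LOS channel model in Eq. \[eq:channel\] is given below; the surface segment size, carrier frequency and user positions are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

# Minimal sketch of the LOS channel model of Eq. (2) for a discrete LIS.
# All numbers below (segment size, carrier frequency, user positions) are
# illustrative assumptions.

wavelength = 3e8 / 4e9             # 4 GHz carrier -> 7.5 cm wavelength
spacing = wavelength / 2           # half-wavelength antenna spacing

# Antenna grid on a small LIS segment lying in the z = 0 plane
xs = np.arange(0, 0.5, spacing)    # 0.5 m x 0.5 m segment (assumption)
ys = np.arange(0, 0.5, spacing)
X, Y = np.meshgrid(xs, ys, indexing="ij")
ant_x, ant_y = X.ravel(), Y.ravel()           # M antenna coordinates
M = ant_x.size

# A few hypothetical user positions (x_k, y_k, z_k), with z_k > 0
users = np.array([[0.10, 0.20, 2.0],
                  [0.40, 0.10, 3.0],
                  [0.25, 0.40, 1.5]])
K = users.shape[0]

H = np.zeros((M, K), dtype=complex)
for k, (xk, yk, zk) in enumerate(users):
    d = np.sqrt(zk**2 + (xk - ant_x)**2 + (yk - ant_y)**2)
    # free-space LOS response of Eq. (2)
    H[:, k] = np.sqrt(zk) / (2 * np.sqrt(np.pi) * d**1.5) \
              * np.exp(-2j * np.pi * d / wavelength)

print(H.shape)   # (M, K) channel matrix used in y = sqrt(rho) H x + n
```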
A processing unit, named the Local Digital Signal Processor (LDSP), is in charge of the baseband signal processing of a panel. LDSPs are connected via a backplane interconnection network to a Central DSP (CDSP), which is linked to the backbone network. In the backplane network, there are Processing Switching Units (PSUs) performing data aggregation, distribution, and processing at different levels. Based on this general LIS implementation framework, the number of panels $P$, the panel area $\Ap$, the number of antennas per panel $\Mp$, the algorithms to be executed in the LDSP and CDSP, and the backplane topology are important design parameters we would like to investigate in this paper. Uplink Detection Algorithms {#section:algo} =========================== The LIS performs a linear filtering $$\hatx= \Wbf \y = \sqrt{\rho}\Wbf\Hbf \mathbf{x} + \Wbf\mathbf{n}$$ of the signal arriving at the panels, where $\Wbf$ is the $K \times M$ equalization-filter matrix, and $\hatx$ is the estimated value of $\x$. Reduced Matched Filter (RMF) ---------------------------- The Reduced Matched Filter [@IIC_ArXiv] is a reduced-complexity version of the full MF, in which the channel vectors of the $N_{p}$ users received most strongly by the $i$-th panel, according to their respective CSI, are used to form the filtering matrix, that is, $$\Wbf_{\text{RMF},i} = \left[ \h_{k_1}, \h_{k_2}, ..., \h_{k_{Np}} \right]^H, \label{eq:W_RMF}$$ where $\Wbf_{\text{RMF},i}$ is the $\Np \times \Mp$ filtering matrix of the $i$-th panel, $\h_{n}$ is the $\Mp \times 1$ channel vector for the $n$-th user, and $\{k_{i}\}$ represents the set of indices of the $N_{\text{p}}$ strongest users. The corresponding strength of user $n$ is defined as $\|\h_{n}\|^{2}$. Iterative Interference Cancellation (IIC) ----------------------------------------- IIC is an algorithm that allows panels to exchange information in order to cancel inter-user interference. The detailed description of the algorithm can be found in [@IIC_ArXiv], and the pseudocode for the processing at the $i$-th panel is shown below, $[\U_z,\Sbf_z] = \text{svd} (\Z_{i-1})$\ $\Hbf_{eq}=\Hbf_{i} \U_{z} \Sbf_{z}^{-1/2}$\ $\U_{eq} = \text{svd} (\Hbf_{eq})$\ $\Wbfh_i=\U_{eq}(1:\Np)$\ $\Z_i = \Z_{i-1} + \Hbfh_i \Wbfh_i \Wbf_i \Hbf_{i}$ where $\Hbf_i$ is the $\Mp \times K$ local channel state information (CSI) matrix seen by the panel, $\Z_{i-1}$ is the $K \times K$ matrix received from the $(i-1)$-th panel (its neighbor), and $\Wbf_i$ is the local filtering matrix. $\U_z$ and $\Sbf_z$ are the left unitary matrix and singular values of $\Z_{i-1}$, respectively. $\U_{eq}$ is the left unitary matrix of $\Hbf_{eq}$, and $\Wbf_i$ is formed from the columns of $\U_{eq}$ associated with the $\Np$ strongest singular values. Each iteration of the algorithm is performed in a different panel. Matrix $\mathbf{Z}$ is passed from one panel to another by dedicated links. Local DSP and Hierarchical Interconnection {#section:arch} ========================================== In this section, we describe the corresponding LDSP and backplane architecture that supports both the RMF and IIC algorithms. We assume the OFDM-based 5G New Radio (NR) frame structure and consider uplink detection only. Local DSP in each Panel ----------------------- The architecture of the LDSP is depicted in Fig. \[fig:panel\]. After the RF and ADC, FFT blocks perform the time-to-frequency domain transformation. The processing of the uplink signal is divided into two phases: formulation and filtering. 
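Before detailing these blocks, the following minimal sketch (with illustrative dimensions and a random channel realization, not values from the paper) shows what the RMF formulation and filtering steps of Eq. \[eq:W\_RMF\] amount to at a single panel.

```python
import numpy as np

# Minimal sketch of the RMF formulation and filtering steps at one panel
# (Eq. (4)).  Dimensions and the random channel realization are illustrative
# assumptions; in practice H_i would come from channel estimation.

rng = np.random.default_rng(0)

M_p, K, N_p = 64, 10, 4                      # antennas/panel, users, kept streams
H_i = rng.standard_normal((M_p, K)) + 1j * rng.standard_normal((M_p, K))

# Formulation phase: pick the N_p users received most strongly by this panel
strength = np.linalg.norm(H_i, axis=0) ** 2          # ||h_n||^2 per user
strongest = np.argsort(strength)[::-1][:N_p]          # indices {k_1, ..., k_Np}
W_rmf = H_i[:, strongest].conj().T                    # N_p x M_p filtering matrix

# Filtering phase: compress the M_p x 1 received vector to N_p streams
y = rng.standard_normal(M_p) + 1j * rng.standard_normal(M_p)
panel_out = W_rmf @ y                                 # sent to the backplane/CDSP
print(strongest, panel_out.shape)
```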
During the formulation phase, the Channel Estimation block (CE) estimates a new $\Hbf_i$ for each channel coherence interval. In this paper we assume perfect channel estimation. The Filter Coefficient calculation (FC) block receives $\Hbf_i$ and computes the filtering matrix $\Wbf_i$. FC performs a complex conjugate transpose in the case of RMF and executes Algorithm \[algo:IIC\] in the case of IIC. $\Wbf_i$ is then written to the memory. During the filtering phase, the Filters block reads $\Wbf_i$ and applies it to the incoming data. The Filters block reduces the $\Mp \times 1$ input to a $\Np \times 1$ output ($\Np \ll \Mp$), which is sent to the backplane for further processing. Hierarchical Backplane Interconnection -------------------------------------- To reduce the required interconnection bandwidth, a hierarchical backplane topology is developed to fully exploit the data locality in the proposed algorithms. As shown in Fig. \[fig:panel\], the backplane is divided into local direct panel-to-panel links (marked in blue) and a global interconnection (marked in red, described in detail in the next sub-section). The local link is dedicated to low-latency data exchange between two neighboring panels, e.g., the $\mathbf{Z}_{i-1}$ in the IIC algorithm. The global interconnection aggregates the $\Np \times 1$ filtering result from each panel to the CDSP for the final decision. Tree-based Global Interconnection and Processing ------------------------------------------------ For the global interconnection, we propose to use a tree topology with distributed processing to minimize latency (the latency grows logarithmically with the number of panels), as shown in Fig. \[fig:LIS\_topology\]. There are several levels of processing switching units (PSUs) in the tree to aggregate and/or combine the panel outputs. These hierarchical PSUs can reduce the overall bandwidth requirement of the backplane and also the processing load of the CDSP. Fig. \[fig:LIS\_topology\] also shows the detailed block diagram of a PSU. It is flexible enough to support both RMF and IIC, and can be extended to other algorithms. Combination and bypass functionalities are used in RMF, while for IIC the streams are bypassed to the CDSP for the final decision. Implementation Cost and Simulation Results {#section:analysis} ========================================== In this section, we analyze the implementation cost of the proposed uplink detection algorithms with the corresponding implementation architecture, in terms of computational complexity, interconnection bandwidth, and processing latency. The trade-offs between system capacity and implementation cost are then presented to give high-level design guidelines. For convenience, we summarize the system parameters in Table \[table:table\_params\].

  $\textbf{Parameter}$   $\textbf{Definition}$
  ---------------------- --------------------------------------
  $M_{\text{p}}$         number of antennas per panel
  $A_{\text{p}}$         panel area
  $N_{\text{p}}$         number of filtered outputs per panel
  $w_{\text{filt}}$      bit-width of the panel output
  $K$                    number of users
  $f_{\text{B}}$         signal bandwidth (Hz)
  $N_{cs}$               number of coherent subcarriers

  : System parameters[]{data-label="table:table_params"}

Computational Complexity ------------------------ In Table \[table:C\], we summarize the required computational complexity for both the RMF and IIC algorithms. The complexity includes both the formulation and filtering phases and is normalized to the panel area $\Ap$. 
In the filtering phase, the operations are the same for RMF and IIC, namely applying a linear filter of size $\Np\times \Mp$ to the $\Mp\times 1$ input vector.

  $\textbf{Method}$   RMF   IIC
  ------------------- ----- -----

  : Computational complexity in $\text{MAC}/s/m^2$.[]{data-label="table:C"}

The formulation phase of RMF includes the computation of $\|\h\|^{2}$ for each user. For the IIC algorithm, the steps required for the formulation phase are shown in Algorithm \[algo:IIC\]. For step 1, which consists of a singular value decomposition (SVD) of the $K \times K$ Gramian matrix $\Z_{i-1}$, the complexity is $17 K^{3}$ [@golub]. Step 2 has a complexity of $(\Mp+1)K^2$, step 3 requires a complexity of $4 \Mp^2 K + 13 K^3$, and steps 4 and 5 need $\Mp K \Np + \Np K^2$. In Table \[table:C\], $b=\Mp+\Np+1$ and $c=4\Mp^2 + \Mp\Np$. Interconnection bandwidth ------------------------- The normalized (to panel area) bandwidth requirement for the global interconnection can be formulated as $R_{\text{global}}={\frac{2 w_{\text{filt}} \Np \fB}{\Ap}}$ \[bps/$m^2$\]. The corresponding bandwidth requirement for the local panel-to-panel link (only needed for the IIC algorithm) is $R_{\text{local}}={\frac{2 w_{\text{W}} K^{2} f_{\text{B}}}{N_{\text{cs}}\Ap}}$ \[bps/$m^2$\]. Processing Latency ------------------ The processing latency of the filtering phase can be formulated as $L_{\text{filtering}} = T_{\text{Filter}} + \log_{4}(P) T_{\text{PSU}}$, where $T_{\text{Filter}}$ is the time needed for performing the linear filtering and $T_{\text{PSU}}$ represents the PSU processing time as well as the PSU-to-PSU communication time. The latency of the formulation phase differs for RMF and IIC. For RMF, the formulation phase is done in parallel in all the panels. The corresponding latency $L_{\text{form,RMF}}$ depends on the computational complexity $C_{\text{form, RMF}}$, the clock frequency, and the available parallelism in the computation. On the other hand, the latency for IIC includes both computation and panel-to-panel communication. The worst case is $L_{\text{form,IIC}} = P T_{\text{compute,IIC}} + (P-1) T_{\text{panel-panel}}$, where $T_{\text{compute, IIC}}$ is the time for computing the filter coefficients and $T_{\text{panel-panel}}$ is the transmission latency between two consecutive panels. Results and Trade-offs {#section:results} ---------------------- The scenario for simulation is shown in Fig. \[fig:LIS\_scenario\]. Fifty users are uniformly distributed in a $20m \times 40m$ area in front of a $2.25m \times 22.5m$ (height $\times$ width) LIS. ![Top view of the simulation scenario.[]{data-label="fig:LIS_scenario"}](LIS_scenario.eps) The average sum-rate capacity at the interface between the panels and the processing tree for both algorithms is shown in Fig. \[fig:results\]. The figures show the trade-offs between computational complexity ($C_{\text{filt}}$ on the vertical axis) and interconnection bandwidth ($R_{\text{global}}$ on the horizontal axis). Dashed lines represent points with constant panel size $\Ap$, which is another design parameter for LIS implementation. To illustrate the trade-off, we mark points A, B, and C in the figures, representing three different design choices for a target performance of 610 bps/Hz. Comparing the same points in both figures, the reduction in complexity and interconnection bandwidth of IIC compared to RMF can be observed. We can also observe that small panels (e.g., point C compared to point A) demand lower computational complexity at the expense of higher backplane bandwidth. 
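For a rough sense of the magnitudes involved in $R_{\text{global}}$, the following sketch evaluates the expression above for a set of illustrative parameter values; these values are assumptions for the example only and are not the design points of the paper.

```python
# Rough sense of scale for the global interconnection bandwidth
# R_global = 2 * w_filt * N_p * f_B / A_p (see the expression above).
# All parameter values below are illustrative assumptions.

w_filt = 12          # bits per I/Q sample at the panel output (assumption)
N_p = 8              # filtered streams per panel (assumption)
f_B = 100e6          # signal bandwidth in Hz (assumption)
A_p = 0.5            # panel area in m^2 (assumption)

R_global = 2 * w_filt * N_p * f_B / A_p      # bps per m^2 of surface
print(f"R_global ~ {R_global / 1e9:.1f} Gbps/m^2")   # ~38.4 Gbps/m^2 here
```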
Once $\Ap$ is fixed, the trade-off between system capacity and implementation cost (computational complexity and interconnection data-rate) can be made depending on the application requirements. Conclusions {#section:conclusions} =========== In this article we have presented distributed processing algorithms and the corresponding hardware architecture for efficient implementation of large intelligent surfaces (LIS). The proposed processing structure consists of local panel processing units, which compress the incoming data without losing much information, and a hierarchical backplane network with distributed processing-switching units to support flexible and efficient data aggregation. We have systematically analyzed the system capacity and implementation cost for different design parameters and provided design guidelines for the implementation of LIS.
--- author: - 'K. Ferrière' bibliography: - 'BibTex.bib' date: 'Received ; accepted ' title: Interstellar magnetic fields in the Galactic center region --- \[intro\]Introduction ===================== The Galactic center (GC) region constitutes a very special environment, which differs from the rest of the Galaxy both by its stellar population and by its interstellar medium (ISM). Here, we are not directly concerned with the stellar population, which we tackle only to the extent that it affects the ISM, but we are primarily interested in the ISM under its various facets. More specifically, our purpose is to develop a comprehensive model of the ISM in the GC region, which can be conveniently used for a broad range of applications. The particular application we personally have in mind is to study the propagation and annihilation of interstellar positrons, which, according to the measured annihilation emission, tend to concentrate toward the central parts of the Galaxy . In a previous paper , we focused on the interstellar gas in the Galactic bulge (GB), which we defined as the region of our Galaxy interior to a Galactocentric radius $ \simeq 3$ kpc. We first reviewed the current observational knowledge of its complex spatial distribution and physical state. We then used the relevant observational information in conjunction with theoretical predictions on gas dynamics near the GC to construct a parameterized model for the space-averaged densities of the different (molecular, atomic and ionized) gas components. In the present paper, we direct our attention to interstellar magnetic fields. We start by providing a critical overview of some 25 years of observational and interpretative work on their properties. Since different investigation methods lead to different and sometimes contradictory conclusions, we explore the possible sources of divergence and try to filter out the dubious observational findings and the questionable theoretical interpretations. In the light of the most recent studies, we then strive to piece everything together into a coherent picture of interstellar magnetic fields in the GC region. In Sect. \[gas\_distri\], we give a brief summary of the main results of Paper I. In Sect. \[observ\], we review the observational evidence on interstellar magnetic fields in the GC region, based on four different diagnostic tools. In Sect. \[discussion\], we provide a critical discussion of the various observational findings and of their current theoretical interpretations. In Sect. \[additional\], we examine if and how the interstellar magnetic field in the GC region connects with the magnetic field in the Galaxy at large and look for additional clues from external galaxies. In Sect. \[conclu\], we present our conclusions. 
\[gas\_distri\]Spatial distribution of interstellar gas ======================================================= Following Paper I, we assume that the Sun lies at a distance $r_\odot = 8.5$ kpc from the GC, we denote Galactic longitude and latitude by $l$ and $b$, respectively, and employ two distinct Galactocentric coordinate systems: (1) the cartesian coordinates $(x,y,z)$, with $x$ measured in the Galactic plane ($b = 0^\circ$) along the line of sight to the Sun (positively toward the Sun), $y$ along the line of intersection between the Galactic plane and the plane of the sky (with the same sign as $l$) and $z$ along the vertical axis (with the same sign as $b$); (2) the cylindrical coordinates $(r,\theta,z)$, with $\theta$ increasing in the direction of Galactic rotation, i.e., clockwise about the $z$-axis, from $\theta = 0$ along the $x$-axis (see Fig. \[fig:coordinates\]). We further denote by $r_\perp$ the horizontal distance from the GC projected onto the plane of the sky, which, near the GC, is numerically given by $r_\perp \simeq (150~{\rm pc}) \ (|l| / 1^\circ)$. ![\[fig:coordinates\] Our $(x,y,z)$ and $(r,\theta,z)$ Galactocentric coordinate systems: (a) face-on view from the northern Galactic hemisphere; (b) full three-dimensional view from a point in the fourth Galactic quadrant. ](figure1a.eps "fig:")\ ![\[fig:coordinates\] Our $(x,y,z)$ and $(r,\theta,z)$ Galactocentric coordinate systems: (a) face-on view from the northern Galactic hemisphere; (b) full three-dimensional view from a point in the fourth Galactic quadrant. ](figure1b.eps "fig:") From the point of view of the interstellar gas, the GB possesses two prominent structural elements: the central molecular zone (CMZ) and the surrounding GB disk (also misleadingly called the H[i]{} nuclear disk) . The CMZ is a thin sheet of gas, dominated at $\sim 90\%$ by its molecular component. On the plane of the sky, it appears approximately aligned with the Galactic plane and displaced eastward by $\sim 50$ pc from the GC, it extends horizontally from $y \sim -150$ pc to $y \sim +250$ pc, and it has an [*FWHM*]{} thickness $\sim 30$ pc in H$_2$ and $\sim 90$ pc in H[i]{}. Its projection onto the Galactic plane was modeled in Paper I as a $500~{\rm pc} \times 200~{\rm pc}$ ellipse, centered on $(x_{\rm c},y_{\rm c}) = (-50~{\rm pc},50~{\rm pc})$ and inclined clockwise by $70^\circ$ to the line of sight. Its H$_2$ mass was estimated at $\sim 1.9 \times 10^7~M_\odot$ and its H[i]{} mass at $\sim 1.7 \times 10^6~M_\odot$. The GB disk is a much bigger structure, whose gas content is $\sim 80\% - 90\%$ molecular. In contrast to the CMZ, it does not appear offset from the GC, but it is noticeably tilted out of the Galactic plane (counterclockwise by $\sim 7^\circ-13^\circ$) and inclined to the line of sight (near-side–down by as much as $\sim 20^\circ$). On the plane of the sky, it extends out to a distance $\sim 1.3$ kpc on each side of the GC, and it has an [*FWHM*]{} thickness $\sim 70$ pc in H$_2$ and $\sim 200$ pc in H[i]{}. According to the model described in Paper I , the GB disk has the shape of a $3.2~{\rm kpc} \times 1.0~{\rm kpc}$ ellipse, making an angle of $51\fdg5$ clockwise to the line of sight and featuring a $1.6~{\rm kpc} \times 0.5~{\rm kpc}$ elliptical hole in the middle (just large enough to enclose the CMZ). This “holed" GB disk has an H$_2$ mass $\sim 3.4 \times 10^7~M_\odot$ and an H[i]{} mass $\sim 3.5 \times 10^6~M_\odot$. 
In addition to the molecular and atomic gases, which are confined either to the CMZ or to the holed GB disk, the GB also contains ionized gas, which spreads much farther out both horizontally (beyond the boundary of the GB at $r \simeq 3$ kpc) and vertically (up to at least $|z| \simeq 1$ kpc). This ionized gas can be found in three distinct media with completely different temperatures. The warm ionized medium (WIM), with $T \sim 10^4$ K, is distributed into two components: an extended, $\sim 2$ kpc thick disk, which continues into the Galactic disk, and a localized, $\sim (240~{\rm pc})^2 \times 40~{\rm pc}$ ellipsoid, nearly centered on the GC. The hot ionized medium (HIM), with $T \sim$ a few $10^6$ K, exists throughout the entire GB. Finally, the very hot ionized medium (VHIM), with $T \gtrsim 10^8$ K, is confined to a $\sim (270~{\rm pc})^2 \times 150~{\rm pc}$ ellipsoid, centered on the GC and tilted clockwise by $\sim 20^\circ$ to the Galactic plane. In Paper I, the H$^+$ masses inside the GB were estimated at $\sim 5.9 \times 10^7~M_\odot$ in the WIM (with $\sim 5.8 \times 10^7~M_\odot$ in the extended disk and $\sim 6.0 \times 10^5~M_\odot$ in the localized ellipsoid), $\sim 1.2 \times 10^7~M_\odot$ in the HIM and $\sim 1.0 \times 10^5~M_\odot$ in the VHIM. Altogether, the interstellar hydrogen content of the GB amounts to $\sim 1.3 \times 10^8~M_\odot$, with $\sim 5.3 \times 10^7~M_\odot$ (41%) in molecular form, $\sim 5.2 \times 10^6~M_\odot$ (4%) in atomic form and $\sim 7.1 \times 10^7~M_\odot$ (55%) in ionized form. Furthermore, if helium and metals represent, respectively, 40% and 5.3% by mass of hydrogen, the above hydrogen masses have to be multiplied by a factor 1.453 to obtain the total interstellar masses. The broad outlines of the observed structure of the interstellar GB can be understood in terms of the theoretical properties of closed particle orbits in the gravitational potential of the Galactic bar. Basically, outside the bar’s inner Lindblad resonance (ILR), particles travel along $x_1$ orbits, which are elongated along the bar, whereas inside the ILR, they travel along $x_2$ orbits, which are elongated perpendicular to the bar. In the presence of hydrodynamical (e.g., pressure and viscous) forces, the interstellar gas does not strictly follow closed particle orbits. Instead, it gradually drifts inward through a sequence of decreasing-energy orbits, and near the ILR it abruptly switches from the high-energy $x_1$ orbits to the lower-energy $x_2$ orbits. The area covered by the $x_1$ orbits forms a truncated disk, which can naturally be identified with our holed GB disk, and, inside the hole, the area covered by the $x_2$ orbits forms a smaller disk, which can be identified with the CMZ. Although the two families of particle orbits are in theory orthogonal to each other, hydrodynamical forces in the interstellar gas smooth out the $90^\circ$ jump in orbit orientation at the ILR, so that the CMZ actually leads the bar by less than $90^\circ$. \[observ\]Observational overview of interstellar magnetic fields ================================================================ The story begins in the 1980s, with the discovery of systems of radio continuum filaments running nearly perpendicular to the Galactic plane. The most striking of these systems is the bright radio arc (known as the Radio Arc or simply the Arc) crossing the plane near Sgr A, at $l \simeq + 0\fdg18$ . 
The Radio Arc is composed of a unique set of a dozen bundled filaments, which appear long ($\sim 40$ pc), narrow ($\sim 1$ pc) and remarkably regular and straight. On the other side of the GC, a fainter radio filament seems to emanate from Sgr C, at $l \simeq - 0\fdg56$ [@liszt_85]. The bright Radio Arc and the fainter Sgr C filament lie vertically at the feet of the eastern and western ridges, respectively, of the $\sim 1^\circ$ $\Omega$-shaped radio lobe structure observed above the GC . The observed morphology of the radio filaments suggests that they trace the orientation of the local interstellar magnetic field . This view has been borne out by radio polarization measurements, starting with those of and in the pair of polarized radio lobes (or plumes) that make up the northern and southern extensions of the Radio Arc. The authors assumed optically thin synchrotron emission and corrected the measured polarization position angles for Faraday rotation, whereupon they found that the transverse (to the line of sight) magnetic field ${\bf B}_\perp$ is indeed oriented along the axis of the plumes. Thus, the orientation of the radio filaments provides evidence that the surrounding interstellar magnetic field is approximately normal to the Galactic plane. In addition, their slight outward curvature indicates that the magnetic field does not remain vertical at large distances from the plane, but that it turns instead to a more general poloidal geometry [@morris_90]. derived a crude estimate for the magnetic field strength $B$ in the polarized radio lobes, based on their measured (supposedly synchrotron) radio intensity and on the assumption of energy equipartition between the magnetic field and the energetic particles. They thereby obtained a magnetic field strength of several $10~\mu$G, compatible with the line-of-sight field $\sim 10~\mu$G deduced from Faraday rotation measures . , on the other hand, estimated the equipartition field strength in the radio filaments at $\sim 0.2$ mG. also provided an independent magnetic field strength estimate for the filaments of the Radio Arc with the help of a simple dynamical argument . The fact that the filaments remain nearly straight all along their length and that they pass through the Galactic plane with little or no bending suggests that their magnetic pressure, $P_{\rm mag}$, is strong enough to withstand the ram pressure of the ambient interstellar clouds, $P_{\rm ram}$. The condition $P_{\rm mag} \gtrsim P_{\rm ram}$, together with a conservative estimation of $P_{\rm ram}$, then leads to the stringent requirement that $B \gtrsim 1$ mG inside the radio filaments. A similar dynamical argument can be applied to the less prominent nonthermal radio filaments (NRFs) discovered later on. Going one step further, it is then possible to estimate the magnetic field strength outside the NRFs by invoking pressure balance. As argued by [@morris_90], the external ISM must supply a confining pressure for the NRFs. This pressure cannot be of thermal origin, because even the very hot gas, which has the highest thermal pressure, falls short by a factor $\sim 30$. The confining pressure must, therefore, be of magnetic origin, which means that the mG field inferred to exist inside the NRFs must also prevail outside. According to this argument, the reason why the NRFs stand out in the radio maps is not because they have a stronger magnetic field than their surroundings, but because they contain more relativistic electrons. 
To quote [@morris_90], the NRFs “are illuminated flux tubes within a relatively uniform field”. To sum up, the picture emerging at the end of the 1980s for the interstellar magnetic field within $\sim 70$ pc of the GC is that of a pervasive, poloidal, mG field. The magnetic energy contained in this mG field is as high as $\sim (1-2) \times 10^{54}$ ergs inside 70 pc [@morris_90],[^1] which is equivalent to the energy released by $\sim (1000 - 2000)$ supernova explosions, and which is roughly comparable to the kinetic energy associated with Galactic rotation in the considered region, while being significantly larger than both the kinetic energy in turbulent motions and the thermal energy of the very hot gas. The simple view of a pervasive, poloidal, mG field gained wide acceptance, until a variety of subsequent observations gradually called it into question. In the remainder of this section, we review the relevant observations that have either given a new twist to the simple picture described above or otherwise contributed to enhance our understanding of the interstellar magnetic field near the GC. \[radio\]Radio continuum observations ------------------------------------- After the initial discovery of , numerous NRFs were identified in the GC region. Although the Radio Arc is clearly a unique structure, the other NRFs share a number of observational characteristics with the Radio Arc’s filaments. [@morris_96] drew up the inventory of all the GC NRFs known at the time and summarized their distinctive properties, i.e., the properties that make them unique to the GC region. In brief, the GC NRFs are a few to a few tens of parsecs long and a fraction of a parsec wide; they look straight or mildly curved along their entire length; they run roughly perpendicular to the Galactic plane; and their radio continuum emission is both linearly polarized and characterized by a spectral index consistent with synchrotron radiation. To this list, we should add that the GC NRFs have equipartition or minimum-energy field strengths[^2] of several $10~\mu$G, typically $\sim (50 - 200)~\mu$G . Since they appear to be magnetically dominated, it is likely that these values underestimate the true field strengths. This could perhaps partly explain why the equipartition/minimum-energy field strengths are systematically lower than the mG field strength deduced from the dynamical condition $P_{\rm mag} \gtrsim P_{\rm ram}$. Even so, while there is general agreement that the GC NRFs represent magnetic flux tubes lit by synchrotron-emitting electrons, the question of how strong their intrinsic magnetic field really is remains open to debate. Another unsettled question is whether their strong field is limited to their interiors or representative of the GC region as a whole. Amongst all the NRFs cataloged by [@morris_96], the one dubbed the Snake, located at $l \simeq - 0\fdg90$ ($r_\perp \simeq 135$ pc), is atypical by its morphology which exhibits two marked kinks along its length . This kinked shape probably reveals that the magnetic field inside the Snake is weaker than the mG field required to withstand the ambient ram pressure. Incidentally, a weaker field is easier to reconcile with the minimum-energy field strength $\simeq 90~\mu$G derived by . Since the Snake lies at a greater projected radius than the other NRFs, a simple way to explain its weaker field is to invoke an overall decrease in the interstellar magnetic field strength outside a radius $\gtrsim 100$ pc. 
Another interesting NRF, discovered somewhat later by , is the so-called Pelican, located at $l \simeq - 1\fdg15$ ($r_\perp \simeq 172$ pc). It, too, is noticeably kinked, but what distinguishes it most from the other NRFs is its orientation parallel to the Galactic plane. Its equipartition field strength is $\simeq 70~\mu$G, and the measured polarization position angles, corrected for Faraday rotation, confirm that its intrinsic ${\bf B}_\perp$ is everywhere aligned with its long axis. The Pelican’s anomalous features provide a hint that the interstellar magnetic field outside a certain radius, say, $\sim 200$ pc, is not only less intense than inside $\sim 100$ pc, but also oriented approximately parallel to the Galactic plane. In fact, the actual situation is probably more complex, as an NRF oriented at $45^\circ$ to the Galactic plane was found much closer to the GC, at $l \simeq - 0\fdg68$ ($r_\perp \simeq 102$ pc) . presented a sensitive 20-cm continuum survey of the GC region $(-2^\circ < l < 5^\circ$, $-40' < b < 40')$, which resulted in a new catalog of more than 80 linear radio filaments, including all the previously well-established NRFs as well as many new good candidates. The authors also drew a schematic diagram of the distribution of all the radio filaments, which puts them into perspective, with their respective positions, orientations, sizes and shapes (see their Fig. 29). A cursory look at the diagram confirms that the vast majority of filaments are nearly straight and that they have a general tendency to run along roughly vertical axes. However, upon closer inspection, it appears that only the longer filaments strictly follow this tendency. The shorter filaments exhibit a broad range of orientations, with only a loose trend toward the vertical. On the other hand, ’s diagram does not point to any obvious correlation between either the orientation or the shape (overall curvature, presence of kinks) of the filaments and their projected distance from the GC. The high-resolution, high-sensitivity 330 MHz imaging survey of leads to similar conclusions. While the brightest NRFs (with the exception of the Pelican) tend to align along the vertical, the newly discovered fainter NRFs (or candidate NRFs) have more random orientations, with a mean angle to the vertical $\simeq 35^\circ \pm 40^\circ$. Here, too, some of the strongly inclined or nearly horizontal NRFs lie much closer to the GC than the Pelican. The GC region was also imaged at 74 MHz and 330 MHz by . Their high-resolution 74 MHz image reveals, aside from discrete emission and thermal absorption features, a $6^\circ \times 2^\circ$ source of diffuse nonthermal emission centered on the GC (note that the NRFs detected at higher frequencies are resolved out in this image). The source of diffuse emission is also clearly visible in the lower-resolution 330 MHz image. estimated the integrated 74 MHz and 330 MHz flux densities and the spectral index of the (supposedly synchrotron) diffuse emission, and from this they derived a minimum-energy field strength (on spatial scales $\gtrsim 5$ pc) $\simeq (6~\mu{\rm G})~(\phi / f)^{2/7}$, where $\phi$ is the proton-to-electron energy ratio and $f$ the filling factor of the synchrotron-emitting gas. They noted that even the combination of extreme values $\phi \simeq 100$ and $f \simeq 0.01$ leads to a minimum-energy field strength $\lesssim 100~\mu$G. 
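Since the minimum-energy scaling quoted at the end of the preceding paragraph recurs below, a short evaluation may be useful; the $(\phi, f)$ pairs used here are the extreme combination mentioned in the text plus two milder ones added purely for orientation.

```python
# Minimum-energy field strength from the scaling B_min ~ (6 uG) * (phi/f)^(2/7)
# quoted above, where phi is the proton-to-electron energy ratio and f the
# filling factor of the synchrotron-emitting gas.
def b_min_uG(phi, f):
    return 6.0 * (phi / f) ** (2.0 / 7.0)

for phi, f in [(1.0, 1.0), (100.0, 0.1), (100.0, 0.01)]:
    print(f"phi = {phi:5.0f}, f = {f:5.2f}  ->  B_min ~ {b_min_uG(phi, f):3.0f} uG")
# Even the extreme combination phi ~ 100, f ~ 0.01 yields only ~80 uG (< 100 uG).
```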
Although the large-scale magnetic field could easily be stronger than the minimum-energy value, advanced a number of cogent arguments against a large-scale field as strong as 1 mG. With a 1 mG field, they argued, the radiative lifetime of synchrotron-emitting electrons would only be $\sim 10^5$ yr, “shorter than any plausible replenishment timescale”. In addition, the measured synchrotron flux would imply a cosmic-ray electron energy density of only $\sim 0.04$ eV cm$^{-3}$, which is about 5 times lower than in the local ISM, whereas several lines of evidence (in particular, from the observed Galactic diffuse $\gamma$-ray emission) point to comparable values. Because of the uncertainties inherent in the equipartition/minimum-energy assumption made to exploit radio synchrotron emission data, there have been attempts to derive the relativistic-electron density by separate means, for instance, based on X-ray emission spectra. detected X-ray emission from the radio filament G359.90$-$0.06, which they tentatively attributed to inverse Compton scattering of far-infrared photons from dust by the relativistic electrons that produce the radio synchrotron emission. Relying on dust thermal emission observations to constrain the dust parameters, they were able to infer the relativistic-electron density from the X-ray flux and then the magnetic field strength from the radio synchrotron flux. In this manner, they estimated the field strength in the considered NRF at $\sim (30-130)~\mu$G. Quite unexpectedly, this loose range of values falls below the equipartition field strength, estimated here at $\sim (140-200)~\mu$G.

\[Faraday\]Faraday rotation measurements
----------------------------------------

The first radio polarization studies of the Radio Arc by and included measurements at four different frequencies around 10 GHz ($\lambda = 3$ cm), which enabled the authors to examine the wavelength-dependence of both the polarization position angle and the polarization degree of the incoming radiation. focused on two strongly polarized regions along the Radio Arc and its extensions, which roughly coincide with the peaks in polarized intensity in the southern and northern radio lobes. They derived unusually large rotation measures (RMs) $\simeq -1660~{\rm rad~m^{-2}}$ for the southern region (Source A) and $\simeq +800~{\rm rad~m^{-2}}$ for the northern region (Source B). With their assumed values of the thermal-electron density, $n_{\rm e} \sim 100~{\rm cm}^{-3}$, and of the path length through the Faraday-rotating medium, $L \sim 5$ pc, the derived RM of Source A translates into a line-of-sight magnetic field $B_\parallel \sim -4~\mu$G.[^3][^4] However, if Faraday rotation occurs within the synchrotron-emitting region itself (rather than in the foreground ISM), as suggested by the steep decrease in the polarization degree with increasing wavelength, then the implied value of $B_\parallel$ is in fact twice as high. , who mapped a whole $0\fdg6 \times 1\fdg2$ area straddling the Radio Arc, obtained positive RMs almost everywhere in the northern lobe, with a maximum value $\simeq +600~{\rm rad~m^{-2}}$ near the peak in polarized intensity, and negative RMs around the peak in polarized intensity of the southern lobe, with a minimum value $\simeq -2000~{\rm rad~m^{-2}}$ and an average value $\simeq -1500~{\rm rad~m^{-2}}$.
They also found negative RMs near the northeastern edge of the northern lobe (a marginal detection, later confirmed by ) and positive RMs in the extended, patchy tail of the southern lobe (not confirmed by ). For the region around the peak of the southern lobe, they assumed $n_{\rm e} \sim 30~{\rm cm}^{-3}$ and $L \sim 5$ pc, thereby arriving at $B_\parallel \sim -10~\mu$G. Again, this value should be multiplied by a factor of 2 if Faraday rotation is internal to the source of synchrotron emission. Evidently, the line-of-sight fields inferred here from Faraday RMs can only be regarded as rough estimates, given the large uncertainties in the nature of the Faraday-rotating medium, in its location with respect to the GC (can it truly be localized to the GC?) and with respect to the synchrotron-emitting region (are both domains spatially coincident or separate?), in its line-of-sight depth, and in its thermal-electron density. But even so, the above studies bring to light a clear magnetic field reversal between the northern and southern radio lobes as well as two possible reversals near the northeastern edge of the northern lobe and between the peak and the tail of the southern lobe. To explain the main field reversal, invoked the presence of a magnetic flux tube running through the Galactic plane and bent somewhere near the midplane in such a way that the magnetic field globally points toward (away from) the observer above (below) the midplane. This bending could be due either to a local interaction with a nearby molecular cloud or to Galactic rotation. The secondary reversals, if confirmed, could be explained by a propagating Alfvén wave, which would also account for the waving pattern observed in the distribution of ${\bf B}_\perp$. Let us emphasize that ’s interpretation does not contradict the notion that the NRFs composing the Radio Arc are nearly straight, as the bending implied by $|B_\parallel| \sim 10~\mu$G is indeed very modest if $B \sim 1~$mG. One might even see here an independent piece of evidence in support of a mG field, or at least, of a field $\gg 10~\mu$G – unless the bending fortuitously occurs in the vertical plane containing the line of sight. Further, high-resolution observations at $\lambda \, 6$ cm and $\lambda \, 20$ cm by toward a segment of the Radio Arc comprising Source A of yielded negative RMs down to $\simeq -5500~{\rm rad~m^{-2}}$ in the vicinity of Source A and positive RMs up to $\simeq +350~{\rm rad~m^{-2}}$ farther south. Similar observations by toward a small area of the northern lobe comprising Source B yielded positive RMs up to $\simeq +1450~{\rm rad~m^{-2}}$. Both sets of RMs are in reasonably good agreement with the results obtained at $\lambda \, 3$ cm by and . However, in contrast to these authors, attributed their observed RMs to a combination of internal and foreground Faraday rotation, while , noticing locations with large RMs and only weak depolarization, suggested that Faraday rotation takes place outside the filaments, possibly in a helical magnetic structure surrounding them and having $L \sim 0.3$ pc, $n_{\rm e} \sim 2200~{\rm cm}^{-3}$ and $B_\parallel \sim -10~\mu$G. On the other hand, observed the Radio-Arc + polarized-lobes complex at 42.5 GHz ($\lambda = 0.7$ cm), a higher frequency at which Faraday rotation and Faraday depolarization should be nearly negligible. Not only did they confirm the RMs derived by and , but they also made their case for internal Faraday rotation stronger. 
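The $B_\parallel$ values discussed in this subsection all derive from the standard relation ${\rm RM} = 0.812 \int n_{\rm e}\,B_\parallel\,ds$, with $n_{\rm e}$ in ${\rm cm^{-3}}$, $B_\parallel$ in $\mu$G and $s$ in pc. As a minimal sketch, assuming a uniform Faraday screen in each case, the snippet below reproduces (to within rounding) the Source A and southern-lobe estimates and checks that the proposed helical-sheath parameters do yield RMs of several thousand ${\rm rad~m^{-2}}$.

```python
# Uniform-slab Faraday rotation: RM = 0.812 * n_e * B_par * L
# (RM in rad m^-2, n_e in cm^-3, B_par in microgauss, L in pc).
def b_parallel(rm, n_e, L_pc):
    """Line-of-sight field [uG] implied by a rotation measure."""
    return rm / (0.812 * n_e * L_pc)

def rotation_measure(n_e, b_par, L_pc):
    """RM [rad m^-2] produced by a uniform slab."""
    return 0.812 * n_e * b_par * L_pc

# Source A: RM ~ -1660 rad m^-2, n_e ~ 100 cm^-3, L ~ 5 pc.
print(f"Source A       : B_par ~ {b_parallel(-1660, 100, 5):.1f} uG")  # ~ -4 uG
# Southern-lobe peak: RM ~ -1500 rad m^-2, n_e ~ 30 cm^-3, L ~ 5 pc.
print(f"Southern lobe  : B_par ~ {b_parallel(-1500, 30, 5):.1f} uG")   # ~ -12 uG
# (Both numbers double if Faraday rotation is internal to the emitting region.)

# Proposed helical sheath: n_e ~ 2200 cm^-3, B_par ~ -10 uG, L ~ 0.3 pc.
print(f"Helical sheath : RM ~ {rotation_measure(2200, -10, 0.3):.0f} rad m^-2")
# ~ -5400 rad m^-2, comparable to the RMs of up to ~ -5500 rad m^-2 measured
# near Source A.
```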
Clearly, the contradictory interpretations proposed by different authors for the Radio Arc and polarized lobes attest to the difficulty of locating the Faraday screen with respect to the synchrotron source. Let us now turn to the Snake (see Sect. \[radio\]), which was observed by at five different frequencies ranging between 0.84 GHz and 8.64 GHz. By analyzing the wavelength-dependence of its polarization properties, they found that the Snake experiences both internal Faraday rotation, with Faraday depth $\sim +1400~{\rm rad~m^{-2}}$, and foreground Faraday rotation, with RM $\sim +5500~{\rm rad~m^{-2}}$. For the external Faraday-rotating medium, they estimated $n_{\rm e} \sim 10~{\rm cm}^{-3}$, $L \sim 100$ pc and thus $B_\parallel \sim 7~\mu$G, while for the Snake itself, they mentioned that the measured internal Faraday depth could be reproduced with $n_{\rm e} \sim 10~{\rm cm}^{-3}$, $L \sim 1$ pc (approximate width of the Snake) and $B_\parallel \sim 90~\mu$G (minimum-energy field strength; see Sect. \[radio\]). Note that the latter value is not particularly relevant, as there is no reason why $B_\parallel$ (which is probably only a minor component of ${\bf B}$) would equal the minimum-energy field strength. Another instructive NRF is G359.54+0.18, observed by at $\lambda \, 3.6$ cm and $\lambda \, 6$ cm. The authors obtained RMs down to $\simeq -4200~{\rm rad~m^{-2}}$, with typical values in the range $\sim -3000~{\rm rad~m^{-2}}$ to $-1500~{\rm rad~m^{-2}}$. They argued that Faraday rotation occurs in a foreground screen, which they suggested could be the $10^8$ K gas present in the inner 100 pc of the Galaxy. With $n_{\rm e} \sim 0.03~{\rm cm}^{-3}$ , this gas would require $B_\parallel \sim -1$ mG to explain the typical RM values. And if, as suggested by the small-scale RM variations along the filament, the $10^8$ K gas is clumped with a filling factor $\Phi \sim 0.01$, the values of $n_{\rm e}$ and $B_\parallel$ should both be divided by $\sqrt{\Phi} \sim 0.1$, which would make $|B_\parallel|$ even more unrealistically high. However, we note that the $10^8$ K gas encloses but a small fraction of the interstellar free electrons on the line of sight, the vast majority of them residing in the warm $\sim 10^4$ K phase of the ISM (see Paper I). Accounting for all the interstellar free electrons, supposed to be distributed according to ’s model, we find that their integrated density from the GC to Galactic radius $r$ rises steeply with $r$ out to $\sim 200$ pc, with $\int_{0}^{200\,{\rm pc}} n_{\rm e} \ ds \simeq 1230~{\rm cm^{-3}~pc}$, then gradually levels off to $\int_{0}^{r_\odot} n_{\rm e} \ ds \simeq 1960~{\rm cm^{-3}~pc}$. It then follows that the above typical RMs translate into $\overline{B}_\parallel \sim -\,(1-2)~\mu$G, where the overbar denotes a line-of-sight average weighted by $n_{\rm e}$, i.e., in the present context, a line-of-sight average heavily weighted toward the region $r \lesssim 200$ pc. Obviously, $\overline{B}_\parallel$ could significantly underestimate the typical local $B_\parallel$ in this region if the magnetic field reverses along the line of sight. For the Pelican, observed by at $\lambda \, 3.6$ cm, $\lambda \, 6$ cm and $\lambda \, 20$ cm, the measured RMs vary smoothly from $\simeq -500~{\rm rad~m^{-2}}$ at the western end to $\simeq +500~{\rm rad~m^{-2}}$ at the eastern end, with a peak value $\simeq -1000~{\rm rad~m^{-2}}$. Faraday rotation is most probably external, as implied by the particularly high polarization degrees. 
If so, the fact that the Pelican has somewhat lower RMs than the other NRFs could indicate that it lies slightly closer to the Sun, and the sign reversal in RM could reflect the presence of a magnetic perturbation in front of the Pelican, with field lines bent out of the plane of the sky. Otherwise, the authors made no attempt to convert their measured RMs into estimates for $B_\parallel$. More RM studies toward other NRFs have been carried out in the last decade . The derived RMs typically range from a few hundred to a few thousand ${\rm rad~m^{-2}}$. When these RMs are used to estimate $B_\parallel$ near the GC, values of a few $\mu$G are usually obtained. As already mentioned above, these values are extremely uncertain, due to our poor knowledge of the precise free-electron density distribution along the considered line of sight. Furthermore, their exact significance remains unclear, first because they refer to line-of-sight averages (with possible cancellations if $B_\parallel$ reverses sign) through ionized regions only, and second because $B_\parallel$ is likely to represent but a small component of the total magnetic field. In this respect, the inferred values of $B_\parallel$ might tell us more about the rigidity of the interstellar magnetic field near the GC than about its strength. RM studies also provide valuable information on the magnetic field geometry near the GC. By collecting all the available RMs toward NRFs within $1^\circ$ of the GC, were able to uncover a definite pattern in the sign of RM, such that ${\rm RM} > 0$ in the quadrants $(l>0,b>0)$ and $(l<0,b<0)$ and ${\rm RM} < 0$ in the quadrants $(l>0,b<0)$ and $(l<0,b>0)$. This pattern, they explained, could be understood as the result of an initially axial magnetic field (pointing north) being sheared out in the azimuthal direction by the Galactic differential rotation (dense gas near the Galactic plane tends to rotate faster than diffuse gas higher up). The RM results described in this section show that the actual situation is not as clear-cut, as we came across several filaments that exhibit both signs of RM in the same quadrant. For the filaments of the Radio Arc, one sign clearly dominates, and the other sign could be attributed to an Alfvén wave traveling along the filaments . For the Pelican (which lies slightly outside $1^\circ$ of the GC), both signs are equally important, and the sign reversal could be attributed to a foreground magnetic perturbation (see above). ’s conclusions are not supported by the recent work of and , who considered the more extended area $(|l|<6^\circ,|b|<2^\circ)$. observed 59 background extragalactic sources through this area, at $\lambda \, 3.6$ cm and $\lambda \, 6$ cm, and obtained RMs in the range $\simeq -1180~{\rm rad~m^{-2}}$ to $+4770~{\rm rad~m^{-2}}$. remarked that these RMs are predominantly positive, with a mean value $\simeq +413~{\rm rad~m^{-2}}$, and they found no evidence for a RM sign reversal either across the rotation axis or across the midplane. As discussed by the authors, this observed RM distribution is consistent with either the large-scale Galactic magnetic field having a bisymmetric spiral configuration or the magnetic field in the central region of the Galaxy being oriented along the Galactic bar. Interestingly, the 4 extragalactic sources of the sample that fall closer than $1^\circ$ of the GC conform neither to ’s pattern (only 2 have the expected RM sign) nor to the general pattern of the more extended area (only 2 have ${\rm RM} > 0$). 
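Before leaving the subject of rotation measures, it may help to make explicit how RMs are converted into the $n_{\rm e}$-weighted averages introduced above for the G359.54+0.18 sight line: one simply divides the observed RM by the free-electron column along the line of sight. The short sketch below uses the typical RM range and the columns quoted in the previous paragraphs; as stressed there, such averages can hide sign reversals of $B_\parallel$.

```python
# n_e-weighted mean line-of-sight field, <B_par> = RM / (0.812 * int n_e ds),
# for the G359.54+0.18 sight line, using the free-electron columns quoted above.
N_e_200pc = 1230.0   # int n_e ds from the GC out to r ~ 200 pc  [cm^-3 pc]
N_e_sun = 1960.0     # int n_e ds all the way to the Sun         [cm^-3 pc]

print(f"fraction of the column within ~200 pc of the GC: {N_e_200pc / N_e_sun:.0%}")
for rm in (-1500.0, -3000.0):   # typical RM range along the filament
    print(f"RM = {rm:6.0f} rad m^-2  ->  <B_par> ~ {rm / (0.812 * N_e_sun):.1f} uG")
# -> roughly -(1-2) uG, an average heavily weighted toward the inner ~200 pc;
#    the local B_par may be larger if the field reverses along the line of sight.
```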
It is clear that a much broader sample of RM data toward the GC would be needed to draw firm conclusions on the magnetic field geometry near the GC. It would also be necessary to get a better handle on how much of the observed RMs can truly be attributed to the GC region. argued that their RMs have negligible contributions from the sources themselves, they estimated the contribution from the Galactic disk to be small and, based on the measured RMs of 7 pulsars located within the survey area (with distances between 1.5 kpc and 7.7 kpc and with a mean RM of only $\simeq (-7 \pm 46)~{\rm rad~m^{-2}}$), they ruled out the possibility that a single high-RM object (H[ii]{} region or supernova remnant) could bias their entire sample. From this, they concluded that their RMs arise mainly from the central $\sim 2$ kpc of the Galaxy. Relying on ’s observations, they further estimated the free-electron column density across the central 2 kpc at $\sim 800~{\rm cm^{-3}~pc}$, which they combined with the mean RM $\simeq +413~{\rm rad~m^{-2}}$ to derive a mean $\overline{B}_\parallel \sim 0.6~\mu$G. Again, this value by itself is not very meaningful, as it represents an $n_{\rm e}$-weighted average, over a $12^\circ \times 4^\circ$ cone opening up from the Sun toward the GC, of a quantity that undergoes repeated sign reversals. More useful is the estimate of the turbulent (or random) component of $B_\parallel$. From the RM structure function and the r.m.s RM of their sample, estimated $\delta B_\parallel \sim 6~\mu$G in the region $r \lesssim 1$ kpc and $\delta B_\parallel \sim 20~\mu$G in the region $r \lesssim 150$ pc. \[IR\_Smm\]Infrared and sub-millimeter polarimetry -------------------------------------------------- Interstellar dust grains tend to spin about their short axes and to align the latter along the local magnetic field. As a result, the dust thermal emission, at far-infrared (FIR) and sub-millimeter (Smm) wavelengths, is linearly polarized perpendicular to the ambient magnetic field. It then follows that FIR/Smm polarimetry makes it possible to map out the direction of the interstellar magnetic field on the plane of the sky, in regions of strong dust emissivity, i.e., in high-density regions [e.g., @hildebrand_88]. The first successful applications of this technique to the GC area were by . [@davidson_96] reviewed the existing FIR polarization measurements toward dense regions located within $\sim 100$ pc of the GC. In the four regions that she discussed (certainly in the Circumnuclear Disk[^5] (CND) and associated Northern Streamer, in the Arched Filaments and in the Sickle, and possibly in Sgr B2), the measured field direction is roughly parallel to the Galactic plane. [@davidson_96] argued that this field direction could be explained by the dense gas moving relative to the surrounding diffuse gas and either distorting the local poloidal magnetic field or dragging its own distorted field from another Galactic location. She also quoted a mean magnetic field strength $\sim 6$ mG in the Arched Filaments, as inferred from the Chandrasekhar-Fermi relation . Compared to FIR polarimetry, which probes the warmer parts of molecular clouds, Smm polarimetry applies to their colder parts. Polarized Smm emission from the GC region was first detected by , who carried out $350~\mu$m polarimetric observations of three separate $2' \times 2'$ areas, centered on the CND and on the peaks of the M$-$0.02$-$0.07 and M$-$0.13$-$0.08 molecular clouds, respectively. 
In all three areas, the measured field directions are more-or-less inclined to the Galactic plane. In the CND, they are similar to the field directions given by FIR polarimetry; in the curved ridge of M$-$0.02$-$0.07, they are everywhere aligned with the ridge, consistent with the field being compressed together with the gas by the expansion of Sgr A East; and in M$-$0.13$-$0.08, they are on average parallel to the cloud long axis, consistent with the field being stretched out by the tidal forces that gave the cloud its elongated shape. These first results already show that the magnetic field configuration near the GC is governed by a combination of different factors. A much larger area about the GC, extending 170 pc in longitude and 30 pc in latitude, was observed in $450~\mu$m polarized emission by . Their polarization map clearly shows that the magnetic field threading molecular clouds is, on the whole, approximately parallel to the Galactic plane. To reconcile the horizontal field measured in molecular clouds with the poloidal field traced by NRFs, suggested that the large-scale magnetic field near the GC is predominantly poloidal in the diffuse ISM and predominantly toroidal in dense regions along the Galactic plane, where it was sheared out in the azimuthal direction by the differential rotation of the dense gas. reported further $350~\mu$m polarimetric observations of the central 50 pc of the Galaxy, which led them to refine the conclusions of . They found that the measured field direction depends on the molecular gas density in such a way that it is generally parallel to the Galactic plane in high-density regions and generally perpendicular to it in low-density regions. They proposed two possible scenarios to explain their results. In their preferred scenario, the large-scale magnetic field was initially poloidal everywhere, but in dense molecular clouds, where the gravitational energy density exceeds the magnetic energy density, it became sheared out into a toroidal field by the clouds’ motions. In the alternative, less likely scenario, the large-scale magnetic field was initially toroidal everywhere, but outside dense molecular clouds, it became distorted into a poloidal field by winds due to supernova explosions. If the first scenario is correct, a characteristic field strength inside GC molecular clouds can be estimated by assuming that clouds where the field is half-way between toroidal and poloidal (i.e., inclined by $45^\circ$ to the vertical) are those where gravitational and magnetic energy densities are equal. A crude estimation of the gravitational energy then yields a characteristic field strength $\sim 3$ mG inside molecular clouds.[^6] While dust grains [*emit*]{} polarized thermal radiation at FIR/Smm wavelengths, they [*absorb*]{} starlight at optical and near-infrared (NIR) wavelengths. Optical starlight from the GC region becomes completely extinct before reaching us, but NIR starlight suffers only partial extinction, such that it reaches us linearly polarized in the direction of the magnetic field (the dust-weighted average field along the line of sight to the observed stars). For this reason, NIR polarimetry toward the GC has become a new tool to trace the average magnetic field direction on the plane of the sky, in the GC region. obtained a NIR polarization map of a $50~{\rm pc} \times 50~{\rm pc}$ area centered on the GC. 
Compared to earlier NIR polarimetric observations toward the GC , they were able, for the first time, to separate out the contribution from foreground dust and to isolate the polarization arising within $\sim (1-2)$ kpc of the GC. Their inferred distribution of polarization position angles exhibits a strong peak in a direction nearly parallel to the Galactic plane, in good agreement with the results of FIR/Smm polarimetry. However, in contrast to , found no indication that the magnetic field direction depends on gas density – the field appears to be everywhere horizontal, including in the diffuse ISM. These first NIR results are preliminary, but they demonstrate the potential of NIR polarimetry to probe the GC magnetic field. This potential will probably prove particularly valuable in directions where the FIR/Smm emission flux is too weak to perform FIR/Smm polarimetry. \[Zeeman\]Zeeman splitting measurements --------------------------------------- The line-of-sight magnetic field in dense, neutral (atomic or molecular) regions can in principle be measured directly through Zeeman splitting of radio spectral lines. In practice, though, near the GC, the task is made difficult by the very broad linewidths of GC clouds and by the line-of-sight blending of physically unrelated spectral features. The first factor sets a stringent observational limit ($B_\parallel \sim {\rm a~few}\ 0.1$ mG), below which magnetic fields cannot be detected with this method. The first Zeeman splitting measurements toward the GC date back to the early 1990s and pertain to the CND. measured the Zeeman splitting of the H[i]{} 21-cm absorption line and reported the tentative detection of $B_\parallel \sim +0.5$ mG[^7] near the northern edge of the CND as well as a $3 \sigma$ upper limit $\sim 1.5$ mG near the southern edge. From the Zeeman splitting of the OH 1667-MHz absorption line, derived $B_\parallel \sim -2$ mG both in the southern part (firm detection) and in the northern part (marginal detection) of the CND; for other OH clouds within the central $\sim 200$ pc, they obtained $3 \sigma$ upper limits $\sim (1-2)$ mG. and searched further for Zeeman splitting in H[i]{} 21-cm absorption over different areas of the CND. While the former found only an upper limit $\sim 0.5$ mG for each of the northern and southern parts of the CND, the latter reported 7 detections (1 positive and 6 negative values of $B_\parallel$) ranging between $-4.7$ mG and $+1.9$ mG toward the northern part of the CND; the 6 strongest detections were argued to arise from the Northern Streamer rather than from the CND itself. The disparate Zeeman results obtained for the CND are not necessarily contradictory; they can be reconciled if $B_\parallel$ varies substantially – especially if $B_\parallel$ changes sign – across the CND. In that case, averaging over broad portions of the CND lowers the overall Zeeman signal, and sometimes does so to below the detectability threshold . Zeeman splitting measurements outside the CND were performed by , who observed 13 selected positions within a few degrees of the GC, in the OH 1665-MHz and 1667-MHz absorption lines. The large velocities and broad linewidths of the absorption features together with the relatively high molecular densities required to produce the observed absorption made the authors confident that the absorbing clouds are located close to the GC. All the measurements led to non-detections, with $3 \sigma$ upper limits to $B_\parallel$ $\sim (0.1-1)$ mG. 
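To see why the broad GC linewidths set the detection limit mentioned at the beginning of this subsection, it helps to compare the frequency splitting produced by a plausible line-of-sight field with the width of the line itself. The sketch below does this for the H[i]{} 21-cm line, using the standard splitting coefficient of $\simeq 2.8$ Hz per $\mu$G; the $20~{\rm km~s^{-1}}$ linewidth is an illustrative value assumed here, not one quoted in the text.

```python
# Zeeman splitting of the HI 21-cm line compared with a typical GC linewidth.
# Splitting coefficient: ~2.8 Hz per microgauss of line-of-sight field.
# The 20 km/s linewidth is an illustrative value, not a measurement from the text.
c_kms = 2.998e5                     # speed of light [km s^-1]
nu_0 = 1420.406e6                   # HI rest frequency [Hz]
dnu_line = nu_0 * 20.0 / c_kms      # 20 km/s linewidth in frequency units [Hz]

for b_par_mG in (0.1, 0.5, 1.0):
    dnu_zeeman = 2.8 * (b_par_mG * 1e3)   # splitting [Hz]
    print(f"B_par = {b_par_mG:.1f} mG: splitting ~ {dnu_zeeman:4.0f} Hz "
          f"= {dnu_zeeman / dnu_line:.1%} of a {dnu_line / 1e3:.0f} kHz linewidth")
# Even for B_par of a few 0.1 mG the splitting is only ~1% of the linewidth,
# which is why such broad lines make Zeeman detections so demanding.
```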
Here, beam dilution of the Zeeman signal, added to line-of-sight averaging, could be partly responsible for the absence of detections. Nonetheless, ’s negative results provide evidence that the mean (or uniform) magnetic field through GC molecular clouds is either weaker than the mG field believed to thread NRFs or nearly perpendicular to the line of sight. The latter possibility seems ruled out for some of the considered clouds, notably, for the cloud associated with the Arched Filaments, whose observed line-of-sight–velocity gradient should be accompanied by field-line shearing along the line of sight. More generally, FIR/Smm polarimetry indicates that the magnetic field inside molecular clouds is approximately horizontal (see Sect. \[IR\_Smm\]), which makes it unlikely that all the clouds studied by would have their magnetic field nearly perpendicular to the line of sight. From this, one might conclude that the mean magnetic field through GC molecular clouds is generally below the mG level. On the other hand, mapped the Zeeman effect in H[i]{} 21-cm absorption toward the Main and North cores of Sgr B2, and they reported values of $B_\parallel$ at 4 different locations, all between $\simeq -0.1$ mG and $-0.8$ mG. Sgr B2 corresponds to one of the positions examined by , for which they managed to derive neither a value of $B_\parallel$ nor an upper limit to $B_\parallel$. This failure to obtain information on $B_\parallel$ was due to a combination of coarse angular resolution ($\sim 8'$, as opposed to $\sim 10''$ for ’s map), small-scale magnetic-field structure and blending of the OH absorption lines with OH maser emission lines. Zeeman splitting of OH (1720 MHz) maser emission from the Sgr A region was measured by . The preliminary analysis of yielded very strong magnetic fields, with $B_\parallel \sim +\,(2.0 - 3.7)$ mG in the Sgr A East shell and $B_\parallel \sim -\,(3.0 - 4.0)$ mG in the CND. Follow-up, higher-resolution observations by confirmed the presence of very strong fields, with $B_\parallel$ reaching $\sim +3.7$ mG in Sgr A East and $\sim -4.8$ mG in the CND. One should, however, keep in mind that OH masers arise in very special environments, for instance, in highly compressed regions behind interstellar shock waves. Therefore, field strengths inferred from OH maser emission lines may [*a priori*]{} not be considered typical of the ISM near the GC. Like in Faraday rotation studies, Zeeman splitting results have to be taken with caution. When a true detection is made, it provides only a lower limit to the local $B_\parallel$ in the observed region, insofar as the Zeeman signal is reduced (sometimes severely) by averaging over the observed area and along the line of sight. In addition, $B_\parallel$ itself represents only a fraction of the total field strength, though not necessarily a small fraction as in Faraday rotation studies, given that the magnetic field in dense regions is probably nearly horizontal (see Sect. \[IR\_Smm\]). It should also be emphasized that the magnetic field in dense regions might not be representative of the magnetic field in the general ISM: like in the Galactic disk, molecular clouds near the GC are likely to harbor much stronger fields than their surroundings. \[discussion\] Discussion ========================= A fragmentary picture of the interstellar magnetic field in the GC region (over $\sim 300$ pc along the Galactic plane and $\sim 150$ pc in the vertical direction) emerges from the overview given in Sect. \[observ\]. 
In the diffuse intercloud medium, sampled by the observed NRFs, the field appears to be approximately poloidal on average, with considerable scatter (see Sect. \[radio\]), whereas in dense interstellar clouds, probed by FIR/Smm polarimetry, the field appears to be approximately horizontal (see Sect. \[IR\_Smm\]). The field strength, $B$, is still a matter of controversy: in NRFs, $B$ lies somewhere between $\sim 100~\mu$G (equipartition/minimum-energy field strength, supported by inverse Compton scattering data) and $\gtrsim 1$ mG (dynamical condition $P_{\rm mag} \gtrsim P_{\rm ram}$); in the general intercloud medium, $B$ lies between $\sim 10~\mu$G (minimum-energy field strength) and $\sim 1$ mG (assumption of pressure balance with the NRFs); and for dense interstellar clouds, some Zeeman splitting measurements yield $|B_\parallel| \sim (0.1-1)$ mG (with values up to a few mG in the Sgr A region), while others lead only to upper limits $\sim (0.1-1)$ mG. \[field\_direction\]Magnetic field direction: theoretical interpretation ------------------------------------------------------------------------ The approximately poloidal direction of the large-scale magnetic field in the intercloud medium near the GC can be explained in different ways. The simplest explanation, originally proposed by , is that the inflow of Galactic disk matter that presumably created the CMZ dragged along the primordial Galactic magnetic field and compressed its vertical component into the CMZ, while letting its horizontal component diffuse away perpendicular to the disk. This idea was elaborated upon by , who emphasized the importance of vertical ambipolar diffusion in removing horizontal magnetic flux, especially in the CMZ where the vertical ambipolar diffusion velocity (which is proportional to $B^2$) reaches particularly high values. Neglecting outward turbulent diffusion, estimated that a pregalactic vertical field $\sim 0.2~\mu$G was required to account for the present-day CMZ field, which they supposed to be $\sim 1$ mG. A pregalactic magnetic field as strong as $0.2~\mu$G might be unrealistic, unless field amplification occurred during the very process of galaxy formation – for instance, through turbulent dynamo action in the protogalaxy . But even if the pregalactic field was weaker, the scenario of inward field advection might still be adequate provided that the large-scale vertical field was rapidly amplified (to nearly its present-day value) by a dynamo operating in the Galactic disk and that its direction remained constant both in time and in space (at least out to $\sim 10$ kpc from the GC). To show this quantitatively, we now derive a rough upper limit to the CMZ field strength that might be expected under these conditions. At the Galactic position of the Sun ($r_\odot = 8.5$ kpc), the present-day large-scale vertical field can be inferred from Faraday rotation measures of high-latitude Galactic pulsars and extragalactic radio sources; thus derived $B_z \simeq 0.37~\mu$G, while Sui et al. (2009) recently obtained $B_z \simeq 0~\mu$G toward the North Galactic Pole and $B_z \simeq 0.5~\mu$G toward the South Galactic Pole. Here, we adopt $B_z \simeq 0.25~\mu$G as a conservative estimate. We further assume that the interstellar plasma, together with the frozen-in field lines, slowly drift inward, with a radial velocity at the solar circle $v_r \simeq -(0.5 - 1.0)~{\rm km~s}^{-1}$ . 
If these values apply over most of the $\sim 10^{10}$ yr lifetime of the Galaxy and if the inflowing vertical magnetic flux accumulates inside the CMZ at roughly the same rate as it crosses the solar circle, then the resulting present-day vertical field in the CMZ (whose projection onto the Galactic plane can be approximated as a $500~{\rm pc} \times 200~{\rm pc}$ ellipse; see Sect. \[gas\_distri\]) is $\sim (0.85 - 1.7)$ mG. This estimate is in surprisingly close agreement with the $\sim 1$ mG observational value assumed by . However, the agreement may be a little fortuitous, and one has to bear in mind that our estimate is contingent upon a number of important and uncertain hypotheses, such as rapid dynamo amplification in the Galactic disk, no sign reversals in the large-scale vertical field, steady-state inflow and accumulation into the CMZ, negligible turbulent diffusion out of the CMZ... If one or more of these hypotheses fail, the actual CMZ field is less than estimated here. Another category of scenarios rely on outflows from the Galactic nucleus. In one such scenario, a magnetic field was generated in the accretion disk around the central black hole, through creation of a seed field by a Biermann battery and subsequent amplification of this seed field either by the large-scale shear alone or by a standard dynamo; the generated field was then expelled into the surrounding ISM by Galactic winds or collimated outflows .[^8] [*A priori*]{}, the strong shear in the accretion disk must have caused the generated field to be nearly toroidal. Therefore, if the accretion disk was parallel to the Galactic plane and if the field was expelled horizontally, this process alone could be a good candidate to explain a [*horizontal*]{} large-scale magnetic field in the GC region, but not a [*poloidal*]{} field. This horizontal field would only be a few $\mu$G, unless, for some reason, the expelled field remained confined to the CMZ. In this case, the horizontal large-scale magnetic field in the CMZ (approximated again as a $500~{\rm pc} \times 200~{\rm pc}$ ellipse) would be $\sim (0.25 - 0.6)$ mG, as can be found by using the expression of the nuclear field provided by , together with a central black hole mass $\simeq 4 \times 10^6~M_\odot$ . In reality, magnetic activity in the immediate vicinity of the central black hole is certainly more complex than described above. As it turns out, recent NIR polarimetric imaging observations of Sgr A$^*$, in conjunction with earlier radio, infrared and X-ray data, provide loose evidence that the accretion disk might indeed be nearly parallel to the Galactic plane . However, they also suggest that the nuclear field switches back and forth between toroidal (as expected from strong shear) and poloidal (consistent with recurrent vanishing of the accretion disk). And even if the nuclear field were strictly toroidal, it could, after ejection, be twisted by the Coriolis force into a more poloidal field. Still in the category of outflow scenarios, a poloidal large-scale magnetic field in the GC region could find its roots in stellar activity very close to the GC [e.g., @sofue_84; @chevalier_92] – for instance, in a relatively recent nuclear starburst . To start with, stellar fields are continuously injected into the ISM via stellar winds and supernova explosions; stellar fields alone cannot account for the present-day interstellar magnetic field, but one could conceive that a very early generation of them provided the seed field for a local dynamo (see below). 
More relevant here, the energy release at the GC must have driven the interstellar matter into expanding motions and created a vertically elongated shell structure, or wall, into which interstellar field lines were pushed back and compressed. This mechanism was first invoked by [@sofue_84] to explain the $\Omega$-shaped radio lobe detected by , and it was further studied by and by [@chevalier_92]. The result is that a pre-existing horizontal field could have turned into a seemingly poloidal configuration (as seen from the Sun), while a pre-existing vertical field could have been substantially enhanced at the position of the shell. Finally, with the strong shear flows and the highly turbulent motions existing near the GC, dynamo action there is almost inescapable. argued that dynamo amplification in the CMZ is unlikely, on the grounds that the magnetic field there is way above equipartition with the turbulence. However, their argument is based on a supposed mG field, which we now regard as a probable overestimate (see Sect. \[field\_strength\]). If we assume that the CMZ is characterized by a turbulent velocity $v_{\rm turb} \sim 15~{\rm km~s}^{-1}$ (value adopted by for the cloud random velocity), a space-averaged hydrogen density $\langle n_{\rm H} \rangle \sim 300~{\rm cm}^{-3}$ (see Paper I) and a total-to-hydrogen mass ratio of 1.453 (see Sect. \[gas\_distri\]), we find that turbulent pressure corresponds to the magnetic pressure of a 0.2 mG field. In consequence, the saturated field strength expected from dynamo action in the CMZ is $\sim 0.2$ mG, and any field $\lesssim 0.2$ mG would be dynamically consistent with dynamo amplification in the CMZ – or, more generally, in a region encompassing the CMZ and the ionized gas around it. If the large-scale magnetic field in the GC region was indeed amplified by a local dynamo, its final configuration must depend on the shape of the dynamo domain. showed that in astrophysical objects composed of a flattened disk-like structure and a quasi-spherical halo-like structure (such as spiral galaxies), a dynamo generates a quadrupolar (i.e., symmetric in $z$) field if the disk-like structure is more dynamo-active and a dipolar (i.e., anti-symmetric in $z$) field if the halo-like structure is more dynamo-active. This theoretical result applied to the GC region implies that a dynamo field there must be quadrupolar if the disk-like CMZ dominates dynamo action and dipolar if the surrounding halo-like volume of ionized gas dominates. The latter possibility seems [*a priori*]{} more likely, given that the halo of ionized gas represents most of the volume, encloses most of the interstellar mass and is highly turbulent, but one cannot draw any definite conclusions without resorting to detailed numerical dynamo calculations. It is also noteworthy that, if the magnetic field in interstellar clouds is largely decoupled from that in the intercloud medium , a local dynamo should naturally produce a dipolar field in the intercloud medium. Either way, a poloidal (and hence dipolar) large-scale magnetic field in the GC region could very well be explained by a local dynamo. The different scenarios described above refer to the large-scale component of the interstellar magnetic field in the intercloud medium near the GC. We emphasize that they are not mutually exclusive. In particular, it is quite possible (even likely, in our view) that inward advection from the Galactic disk, outflows from the Galactic nucleus and local dynamo amplification all came into play.
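Two of the order-of-magnitude estimates invoked in this subsection are easy to reproduce numerically: the vertical field obtained by accumulating the flux advected across the solar circle into the CMZ, and the saturation field set by equipartition with the turbulent pressure. The sketch below is a minimal rendering of both, under the assumptions stated above (steady inflow over $10^{10}$ yr into a $500~{\rm pc} \times 200~{\rm pc}$ ellipse; $v_{\rm turb} \sim 15~{\rm km~s^{-1}}$, $\langle n_{\rm H} \rangle \sim 300~{\rm cm^{-3}}$ and a total-to-hydrogen mass ratio of 1.453); it is not meant as a substitute for the more careful discussion given in the text.

```python
import math

# (1) Vertical magnetic flux advected across the solar circle over the lifetime
#     of the Galaxy and accumulated inside the CMZ (500 pc x 200 pc ellipse).
pc, kpc, yr = 3.086e18, 3.086e21, 3.156e7      # [cm], [cm], [s]
B_z = 0.25e-6                                  # vertical field at the Sun [G]
r_sun = 8.5 * kpc                              # solar Galactocentric radius [cm]
t = 1.0e10 * yr                                # accumulation time [s]
area_cmz = math.pi * (250.0 * pc) * (100.0 * pc)

for v_r in (0.5e5, 1.0e5):                     # inward radial drift [cm s^-1]
    flux = B_z * (2.0 * math.pi * r_sun) * v_r * t
    print(f"v_r = {v_r / 1e5:.1f} km/s  ->  B_CMZ ~ {flux / area_cmz * 1e3:.2f} mG")
# -> ~(0.9-1.7) mG, in line with the ~(0.85-1.7) mG quoted above.

# (2) Dynamo saturation field from equipartition with the turbulent pressure,
#     rho * v_turb^2 = B^2 / (8 pi), with <n_H> ~ 300 cm^-3 and v_turb ~ 15 km/s.
m_p = 1.6726e-24                               # proton mass [g]
rho = 1.453 * m_p * 300.0                      # total mass density [g cm^-3]
B_sat = math.sqrt(8.0 * math.pi * rho) * 15.0e5
print(f"B_sat ~ {B_sat * 1e3:.2f} mG")         # ~0.2 mG
```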
We also note that the wide scatter observed in the orientations of NRFs, especially the smaller ones, is almost certainly due to interstellar turbulence, regardless of the exact mechanism responsible for the appearance of filaments. We now turn to the magnetic field inside GC molecular clouds. According to , the reason why the field within dense clouds is approximately horizontal can be understood in two alternative ways. A first possibility would be that the GC region was originally pervaded by a large-scale azimuthal magnetic field, as could be the case if a strong toroidal field was generated in the accretion disk around the central black hole and then expelled horizontally into the surrounding ISM or if the field was amplified by a local dynamo dominated by the disk-like CMZ (see above). In this scenario, the original field direction would have been preserved only in the dense and massive molecular clouds; outside of them, it would have been distorted by winds due to supernovae. However, for the reasons outlined above, it is more likely that the GC region was originally pervaded by a large-scale poloidal magnetic field. In molecular clouds, this field would have been sheared out in a horizontal direction, either by the cloud bulk motions (from differential rotation or from turbulence) with respect to the diffuse intercloud medium or by the forces (of compressive or tidal nature) that created and/or shaped the clouds . Along different lines, suggested that the magnetic field within molecular clouds is only loosely coupled to the intercloud field [see also @morris_07]. Decoupling between cloud and intercloud fields is easily achieved if, as expected, clouds are rotating . Initially, clouds must be magnetically connected to the diffuse intercloud medium from which they form. However, as they contract and spin up, they rapidly wind up the field lines that thread them, until, following magnetic reconnection or ambipolar diffusion, the wound-up field inside them becomes detached from the external field. Since clouds generally rotate about roughly vertical axes, their final internal fields will be predominantly horizontal. \[field\_strength\]Magnetic field strength: critical discussion --------------------------------------------------------------- At the present time, the most uncertain aspect of the large-scale interstellar magnetic field in the GC region is its strength, with current estimates ranging from $\sim 10~\mu$G to $\gtrsim 1$ mG in the diffuse ISM. The low values are observational estimates obtained for a supposed equipartition/minimum-energy state, while the high values are theoretical estimates based on pressure balance considerations. Both types of estimates involve some questionable assumptions and raise a number of problems, which we now briefly review and discuss, starting with the high-$B$ estimates. The reasoning leading up to the picture of a pervasive mG magnetic field hinges on two separate arguments (see beginning of Sect. \[observ\]). First, the apparent resistance of NRFs to collisions with ambient molecular clouds implies $P_{\rm mag} \gtrsim P_{\rm ram}$, which translates into $B \gtrsim 1$ mG, inside NRFs. Second, pressure confinement of NRFs requires $B \gtrsim 1$ mG in the external ISM as well. Let us examine both arguments in turn. The first argument already presents some weaknesses. 
It is true that the majority of NRFs are nearly straight, even though a few of them appear to be significantly distorted and a few others could potentially have deformations that escape detection from Earth because of projection effects. A moot point, however, is whether NRFs are truly colliding with molecular clouds. pointed out that all the well-studied NRFs at the time of their writing showed definite signs (e.g., slight bending, brightness discontinuity) of physical interactions with adjacent molecular clouds. On the other hand, [@yusef_03] noted that dynamical collisions with molecular clouds should produce OH (1720 MHz) maser emission, yet none had been detected at the apparent interaction sites. In addition, some NRFs do not seem to have any molecular cloud associated with them . Another issue concerns the exact constraint imposed by the rigidity of a truly colliding NRF. derived the condition $P_{\rm mag} \gtrsim P_{\rm ram}$, and they adopted $n_{\rm H_2} \sim 10^4~{\rm cm}^{-3}$ to estimate $P_{\rm ram}$. This value of $n_{\rm H_2}$ is probably too high, as the filling factor of molecular clouds with $n_{\rm H_2} \gtrsim 10^4~{\rm cm}^{-3}$ is only $\lesssim 3\%$ (see Paper I) or even as low as $\sim 1\%$ , so that the likelihood that a given NRF be colliding with such a high-density cloud is rather low. Most of the NRFs undergoing a cloud collision could actually be colliding with a lower-density cloud or with the low-density envelope of a dense cloud (Boldyrev, private communication). More importantly, the condition $P_{\rm mag} \gtrsim P_{\rm ram}$ itself may in fact be too stringent. [@chandran_01] showed that a filament interacting with a cloud at a single location along its length has only small distortions so long as the Alfvén speed in the intercloud medium, $V_{\rm A} = B / \sqrt{4 \pi \rho}$, is much greater than the cloud velocity, $v_{\rm cloud}$. With $\rho = 1.453 \, m_{\rm p} \, n_{\rm H}$, $\langle n_{\rm H} \rangle \sim (1-10)~{\rm cm}^{-3}$ in the intercloud medium (see Paper I) and $v_{\rm cloud} \sim 15~{\rm km~s}^{-1}$ , the easier condition $V_{\rm A} \gg v_{\rm cloud}$ is equivalent to $B \gg (8-26)~\mu$G. The bottom line is that for some NRFs, like those of the Radio Arc, which remain nearly straight despite multiple physical interactions with molecular clouds [@morris_90], the original conclusion that $B \gtrsim 1$ mG seems reasonably solid. For other NRFs, the evidence for a mG field is less compelling, or even completely absent. The second argument is even more questionable. [@morris_90] reasoned that NRFs must be pressure-confined and, considering the possibility of plasma confinement, he estimated the thermal pressure of the very hot gas at $n \, T \sim 10^7~{\rm cm}^{-3}~{\rm K}$. Since this is much less than the pressure of a 1 mG magnetic field, $P_{\rm mag} / k \simeq 3 \times 10^8~{\rm cm}^{-3}~{\rm K}$, he concluded that the confining pressure cannot be thermal, but instead must be magnetic. This line of reasoning raises several important issues, some of which might possibly invalidate it. First, [@morris_90] might have considerably underestimated the very hot gas thermal pressure. presented X-ray spectroscopic imaging observations of a $1^\circ \times 1^\circ$ area around the GC, which point to a much higher value. 
Attributing the diffuse X-ray continuum emission to thermal bremsstrahlung, they derived an electron temperature $k T_{\rm e} \gtrsim 10$ keV ($T_{\rm e} \gtrsim 10^8$ K) and an electron density $n_{\rm e} \sim (0.3 - 0.4)~{\rm cm}^{-3}$. If ions and electrons are in collisional equilibrium, the corresponding total thermal pressure is $n \, T \gtrsim (6-8) \times 10^7~{\rm cm}^{-3}~{\rm K}$. But found evidence that the thermal pressure of the ions might be at least one order of magnitude higher than that of the electrons. If this were to be confirmed, the very hot gas would be able to supply the required confining pressure for NRFs with mG fields. Interestingly, the $\simeq 270$ pc radial extent of the very hot gas region would then naturally explain why NRFs are observed over a $\sim 300$ pc region only. However, we have to emphasize that the plasma pressure inferred from ’s work might be a little extreme. More recently, observed a smaller, $17' \times 17'$ area around the GC, with better signal-to-noise ratio and finer angular resolution. To explain the observed diffuse X-ray emission, they had to appeal to two plasma components: a soft component, with $k T \simeq 0.8$ keV and $n_{\rm e} \sim (0.1 - 0.5~{\rm cm}^{-3}) \, (d / 50~{\rm pc})^{-1/2}$, and a hard component, with $k T \simeq 8$ keV and $n_{\rm e} \sim (0.1 - 0.2~{\rm cm}^{-3}) \, (d / 50~{\rm pc})^{-1/2}$, where $d$ is the line-of-sight depth of the X-ray emitting region. Both components were assumed to be in collisional-ionization equilibrium, consistent with the measured energies and flux ratios of the spectral lines. Under these conditions and with $d \simeq 270$ pc , the total thermal pressure of the plasma is $\sim (1-2) \times 10^7~{\rm cm}^{-3}~{\rm K}$, close to [@morris_90]’s original estimate and too low to confine NRFs. To be sure, the above pressure estimates make sense only to the extent that the observed diffuse X-ray emission is truly produced by a thermal plasma. These estimates do not allow us to conclude either way on the potential ability of hot plasma to confine NRFs. Another interesting possibility is that NRFs could be confined by magnetic tension forces . noticed that the linear filaments of the Radio Arc appear to be surrounded by a helical magnetic structure winding about them. Having estimated that magnetic pressure in the filaments is much higher than the ambient gas pressure, they advanced the view that the magnetic field is in a local force-free state. Accordingly, the electric current flows along field lines, and the field-aligned current induces a locally toroidal field component, in agreement with the observed helical structure. This toroidal component, in turn, provides a confining magnetic tension force. Observational evidence for helical or twisted magnetic fields has been found in filaments other than those of the Radio Arc . From a theoretical point of view, field-aligned currents and helical fields can be explained in various ways. [@benford_88] proposed an electrodynamic model for the Radio Arc, wherein the motion of a partially ionized molecular cloud across a strong poloidal magnetic field drives electric currents around a closed-loop circuit originating at the cloud and continuing along the poloidal field lines . For the Snake, envisioned the picture of a magnetic flux tube having both ends anchored in rotating molecular clouds; in this picture, the differential rotation between both clouds generates a locally toroidal field component and, hence, gives rise to a helical pattern.
Each of these two models could potentially be applied to other NRFs, bearing in mind that twisted magnetic fields are subject to the kink instability. In any event, even if the operation of magnetic tension forces could be firmly established, it would still be difficult to quantify their actual contribution to the confinement of NRFs without a detailed knowledge of the exact magnetic configuration in and around NRFs. Finally, and most importantly, NRFs do not need to be confined at all; they could very well be transient or dynamic structures out of mechanical balance with their surroundings. As a matter of fact, several plausible models for their origin describe them in these non-equilibrium terms. In the cometary-tail model of ,[^9] NRFs are the long and thin magnetic wakes produced by a weakly magnetized Galactic wind impinging on GC molecular clouds. The advected magnetic field drapes around the clouds and stretches out behind them, growing to the point where its pressure balances the ram pressure of the Galactic wind. For typical wind parameters, the field inside the wakes thus reaches $\sim 1$ mG, independent of the ambient field strength, which could be as low as $\sim 10~\mu$G (estimate based on the predicted wake lengths and on stability considerations). The overall orientation of the wakes is governed by the direction of the wind, which, the authors claim, is roughly vertical. Hence, the observed tendency of NRFs to run vertical would reflect the direction of the wind rather than that of the large-scale magnetic field (as usually presumed). An advantage of this dynamical model is that it avoids the MHD stability problems faced by static equilibrium models. It also provides an efficient mechanism to accelerate electrons to relativistic energies, through wave-particle interactions in the turbulent cascades driven near the current sheets of the wakes. Still in the spirit of a dynamic ISM, NRFs could be regarded as direct manifestations of the intense turbulent activity characterizing the GC region . Turbulence there naturally leads to a highly intermittent magnetic field distribution, with strongly magnetized filamentary structures arising in an otherwise weak-field (e.g., $B \sim 10~\mu$G) background. Because the turbulent intensity varies on a scale comparable to or only slightly larger than the outer scale of the turbulence, magnetic flux is expelled from the region of intense turbulence (diamagnetic pumping) before the field has a chance to grow strong everywhere. In this context, the observed NRFs would be nothing else than the strongly magnetized filaments, and their field strength, set by the external turbulent pressure, would typically be $\gtrsim 0.1$ mG. The preferentially vertical direction of the longest NRFs could be related to the $\Omega$-shaped radio lobe detected by ; more generally, it could be a direct consequence of the roughly poloidal geometry of the large-scale magnetic field. As for the synchrotron-emitting electrons, they could be either pervasive or concentrated near localized sources. Aside from the possible shortcomings in the derivation of a mG field, the claims for such a strong field have elicited a number of important criticisms. A frequently voiced objection is that the ensuing synchrotron lifetimes, $t_{\rm syn}$, are too short.
The synchrotron lifetime is indeed a strongly decreasing function of field strength, given by $t_{\rm syn} \propto E^{-1} \, B_\perp^{-2}$ for a relativistic electron of energy $E$ and by $t_{\rm syn} \propto \nu^{-1/2} \, B_\perp^{-3/2}$ at the corresponding synchrotron frequency, $\nu \propto B_\perp \, E^2$. At the 74 MHz and 330 MHz frequencies of the diffuse nonthermal emission detected by , the synchrotron lifetimes in a $B_\perp = 1$ mG field would only be $t_{\rm syn} \simeq 1.2 \times 10^{5}$ yr and $t_{\rm syn} \simeq 0.6 \times 10^{5}$ yr, respectively. Unless the source of nonthermal emission is short-lived, relativistic electrons would need to be injected or re-accelerated on such short timescales. considered this to be implausible, whereas [@morris_07] estimated that the local supernova rate is more than sufficient to meet the required timescales. At the higher 1.5 GHz and 5 GHz frequencies of many NRF observations, the synchrotron lifetimes in a 1 mG field would be even shorter: $t_{\rm syn} \simeq 2.7 \times 10^{4}$ yr and $t_{\rm syn} \simeq 1.5 \times 10^{4}$ yr, respectively (supposing $B_\perp \simeq B$, a reasonable assumption in the case of NRFs). Here, the trouble is not so much that these short lifetimes would impose similarly short injection/re-acceleration timescales – as explained above, NRFs could be short-lived structures, and if they are not, one could easily imagine that they are fueled by local, long-lived sources of relativistic electrons. The real concern is that electrons might not be able to travel far enough [e.g., @yusef_03; @morris_07]. Presumably, relativistic electrons stream along field lines at about the Alfvén speed in the ionized gas, $V_{\rm A,i} = B / \sqrt{4 \pi \rho_i}$, such that, over their lifetimes $t_{\rm syn}$, they travel distances $d \simeq V_{\rm A,i} \, t_{\rm syn} \propto \nu^{-1/2} \, \rho_i^{-1/2} \, B^{-1/2}$. The ionized gas density, $\rho_i$, is quite uncertain, as witnessed by the widely different values adopted by different authors. Since the ionized phase of the ISM is largely dominated by its warm component, where helium is only weakly ionized, we may let $\rho_i \simeq m_{\rm p} \, n_{\rm e}$. If we further approximate the free-electron density, $n_{\rm e}$, by its space-averaged value from ’s model, we find that $n_{\rm e} \simeq 10~{\rm cm}^{-3}$ close to the GC and $n_{\rm e} \gtrsim 1~{\rm cm}^{-3}$ out to $\simeq 220$ pc along the Galactic plane and up to $\simeq 40$ pc along the vertical, i.e., in most of the region of interest. For a 1 mG field, the Alfvén speed is $V_{\rm A,i} \simeq (2200~{\rm km~s}^{-1}) \, (n_{\rm e} / 1~{\rm cm}^{-3})^{-1/2}$ and the traveled distances at 1.5 GHz and 5 GHz are $d \simeq (60~{\rm pc}) \, (n_{\rm e} / 1~{\rm cm}^{-3})^{-1/2}$ and $d \simeq (33~{\rm pc}) \, (n_{\rm e} / 1~{\rm cm}^{-3})^{-1/2}$, respectively. These calculated distances are compatible with the observed lengths of most NRFs, but they are difficult to reconcile with some NRFs being as long as $\sim 60$ pc. This difficulty can be circumvented if the longest NRFs lie in a particularly low-$n_{\rm e}$ environment, or if particle injection occurs at more than one location along them , or else if re-acceleration takes place more-or-less continuously along them [e.g., @morris_07]. 
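The lifetimes and streaming distances quoted in this paragraph follow from the standard synchrotron cooling relation; as a minimal sketch, the snippet below evaluates $t_{\rm syn} \simeq 1.06 \times 10^{9}~{\rm yr}~(B_\perp/\mu{\rm G})^{-3/2}\,(\nu/{\rm GHz})^{-1/2}$ (the conventional coefficient, which recovers the values given above) together with $d \simeq V_{\rm A,i}\,t_{\rm syn}$ for $n_{\rm e} = 1~{\rm cm^{-3}}$, the normalization used in the text.

```python
import math

# Synchrotron lifetime at observing frequency nu for electrons in a transverse
# field B_perp, and the distance covered while streaming at the Alfven speed of
# the ionized gas (rho_i ~ m_p * n_e, as adopted in the text).
m_p = 1.6726e-24                    # proton mass [g]
pc, yr = 3.086e18, 3.156e7          # [cm], [s]

def t_syn_yr(B_perp_uG, nu_GHz):
    """Synchrotron lifetime [yr]; conventional coefficient 1.06e9 yr."""
    return 1.06e9 * B_perp_uG ** -1.5 * nu_GHz ** -0.5

B_uG = 1000.0                       # 1 mG expressed in microgauss
n_e = 1.0                           # ionized-gas density [cm^-3]
v_A = (B_uG * 1e-6) / math.sqrt(4.0 * math.pi * m_p * n_e)
print(f"V_A ~ {v_A / 1e5:.0f} km/s")                   # ~2200 km/s

for nu in (0.074, 0.33, 1.5, 5.0):                     # frequencies [GHz]
    print(f"nu = {nu:5.3f} GHz: t_syn ~ {t_syn_yr(B_uG, nu):.1e} yr")
for nu in (1.5, 5.0):
    d_pc = v_A * t_syn_yr(B_uG, nu) * yr / pc
    print(f"nu = {nu:.1f} GHz: d ~ V_A * t_syn ~ {d_pc:.0f} pc")
# -> ~1.2e5 and ~0.6e5 yr at 74 and 330 MHz, ~2.7e4 and ~1.5e4 yr at 1.5 and
#    5 GHz, and d ~ 61 and 33 pc, matching the values quoted above.
```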
Incidentally, re-acceleration also provides a possible explanation for the observed constancy of the radio spectral index along the lengths of several NRFs , although it is not immediately clear why re-acceleration would precisely counteract the spectral steepening due to synchrotron losses. The above discrepancy between the calculated distances traveled by relativistic electrons in a 1 mG field and the observed lengths of the longest NRFs is only marginal. Furthermore, there exist viable ways, other than decreasing the field strength, to bring them into agreement. But more fundamentally, the discrepancy poses a problem only to the extent that NRFs are “illuminated flux tubes”, i.e., flux tubes into which relativistic electrons have been injected. Should they instead be regions of compressed magnetic field , the mismatch would become totally irrelevant. For all these reasons, we feel that the short synchrotron lifetimes and the associated short distances traveled by relativistic electrons may not be held up against a mG field inside NRFs. To conclude our discussion of the high-$B$ estimates, the claim that NRFs have a mG field may have to be toned down. While there is good evidence that a fraction of them do, others could possibly have a weaker field. As for the general ISM, we see no cogent reason to believe that it is magnetized at the mG level. We now turn to the low-$B$ estimates and discuss the validity of the equipartition/minimum-energy assumption, which, for brevity, we will refer to as the equipartition assumption. This hypothesis admits no rigorous theoretical justification, yet it seems to yield acceptable results in large-scale regions of the Galactic disk as well as in external galaxies as wholes . In these objects, the large-scale magnetic field is predominantly horizontal, the total field strength is typically a few $\mu$G and cosmic-ray sources (e.g., supernova shocks) likely abound. As a result, vast amounts of cosmic rays are injected into the ISM, where the horizontal field lines tend to keep them confined. Cosmic-ray pressure can then build up until it reaches a value, presumably $\sim P_{\rm mag}$, that is high enough to break the magnetic confinement – for instance, via the Parker instability. Not only does this instability enable cosmic rays to escape from the galactic disk through the formation and rupture of giant magnetic loops, but it also leads to magnetic field amplification through enhanced dynamo action . Both effects conspire to maintain cosmic-ray pressure comparable to magnetic pressure. The situation near the GC is completely different. Cosmic-ray sources are undoubtedly much more abundant there than in the Galactic disk. On the other hand, the large-scale magnetic field is approximately vertical, so that cosmic rays streaming along field lines naturally follow the shortest path out of the Galaxy . In addition, if the field is as strong as $\sim 1$ mG, cosmic rays stream away at a very high speed ($\sim V_{\rm A,i}$).[^10] Thus, both the injection and the escape of cosmic rays proceed at a much faster rate than in the disk. More to the point, the two processes are not related by a self-regulating mechanism, such as the “cosmic-ray valve” operating in the disk, which would, at the same time, keep cosmic-ray and magnetic pressures close to equipartition. Observationally, the obvious rigidity and organized structure of NRFs strongly support the idea that they are magnetically dominated . The case of the general ISM is not so self-evident. 
Observations of the diffuse $\gamma$-ray emission from the GC region reveal a pronounced $\gamma$-ray excess within $\sim 0\fdg6$ of the GC , which could speak in favor of an elevated cosmic-ray pressure there. However, it is unlikely that this $\gamma$-ray excess can entirely be attributed to cosmic-ray interactions with interstellar matter. In fact, showed that their $\gamma$-ray data could be reproduced with a combination of unresolved compact sources (such as pulsars) and a truly diffuse interstellar contribution from cosmic rays having the same density as in the inner Galaxy. This result, combined with ’s study of the diffuse nonthermal radio emission from the GC region, lends some credence to the equipartition assumption. To conclude our discussion of the low-$B$ estimates, NRFs probably have super-equipartition magnetic fields, whereas the general ISM might have its magnetic field in rough equipartition with cosmic rays. New observations with the Fermi gamma-ray space telescope will hopefully shed more light on this topic, although using the Fermi data to obtain better constraints on the cosmic-ray electron energy density near the GC might prove rather tricky. In principle, the task will require (1) removing the foreground $\gamma$-ray emission along the long line of sight toward the GC region, (2) isolating the diffuse interstellar emission produced by cosmic rays, (3) separating the contributions from cosmic-ray nuclei ($\pi^0$ decay) and electrons (bremsstrahlung and inverse Compton scattering) and (4) having a good knowledge of both the interstellar matter distribution (for the $\pi^0$ decay and bremsstrahlung components) and the interstellar radiation field (for the inverse Compton component). One difficulty for cosmic-ray electrons is that their contribution to $\gamma$-ray emission in the energy range covered by Fermi (ideally $\sim 30~{\rm MeV} - 300~{\rm GeV}$) is overshadowed by the contribution from cosmic-ray nuclei. On the other hand, the inverse Compton component, observed slightly above or below the Galactic plane, is expected to be less contaminated by foreground emission, insofar as the photon density is much higher in the Galactic bulge than along the line of sight. Altogether, there is good hope that Fermi observations of the inverse Compton emission from the GC region will help to determine the cosmic-ray electron density with better accuracy, and, when coupled with the low-frequency radio continuum data, help to constrain the magnetic field strength near the GC. Faraday rotation studies do not add any significant constraints on the interstellar magnetic field strength. The inferred values of $|B_\parallel|$ are typically a few $\mu$G, with considerable uncertainty (see Sect. \[Faraday\]). These values of $|B_\parallel|$ indicate that the large-scale magnetic field in the diffuse ionized medium is $\gtrsim$ a few $\mu$G, which is compatible with both an equipartition field $\sim 10~\mu$G and a dynamically dominant field $\sim 1$ mG. However, if the large-scale field has a roughly poloidal geometry, with only a small component along the line of sight, the equipartition estimate might be a little too low to explain the Faraday rotation results. For dense neutral clouds, Zeeman splitting studies lead to a mixture of positive detections, with $|B_\parallel| \sim (0.1-1)$ mG (outside the Sgr A region), and non-detections, with $|B_\parallel| \lesssim (0.1-1)$ mG, again subject to important uncertainty as well as possible dilution of the Zeeman signal (see Sect. 
\[Zeeman\]). These mixed results can be understood if the magnetic field inside dense clouds is roughly horizontal, such that its line-of-sight component may lie anywhere between 0 and the total field strength. The latter could then be $\sim 1$ mG or loosely range from a few 0.1 mG to a few mG. A less direct, and even cruder, estimation of the field strength inside molecular clouds was made by , based on their Smm polarimetric observations. Interpreting the measured dependence of field direction on gas density in terms of shearing of an initially poloidal field, they came up with a characteristic field strength $\sim 3$ mG (see Sect. \[IR\_Smm\]). \[additional\]Additional input ============================== \[connection\]Connection with the rest of the Galaxy ---------------------------------------------------- How does our view of the interstellar magnetic field in the GC region fit in with what we know about the magnetic field in the Galaxy at large? Are both magnetic systems connected in any way or are they completely independent? Our current knowledge of the overall distribution and morphology of the Galactic magnetic field away from the GC relies primarily on synchrotron emission and Faraday rotation studies. Synchrotron intensity measurements give access to the total field strength distribution, subject to the assumption of equipartition between magnetic fields and cosmic rays (discussed in Sect. \[field\_strength\]). Based on the synchrotron map of , [@ferriere_98] thus found that the total field has a value $\simeq 5~\mu$G near the Sun, a radial scale length $\simeq 12$ kpc and a local vertical scale height $\simeq 4.5$ kpc. In addition, synchrotron polarimetry indicates that the local ratio of ordered (regular + anisotropic random) to total fields is $\simeq 0.6$ [@beck_01], implying an ordered field $\simeq 3~\mu$G near the Sun. Faraday rotation measures of Galactic pulsars and extragalactic radio sources also provide valuable information, more specifically relevant to the uniform (or regular) magnetic field, ${\bf B}_{\rm u}$, in the ionized ISM. Here is a summary of what we have learnt from them, first on the strength and second on the direction of ${\bf B}_{\rm u}$. From pulsar RMs, we now know that $B_{\rm u} \simeq 1.5~\mu$G near the Sun and that $B_{\rm u}$ increases toward the GC, to $\gtrsim 3~\mu$G at $r = 3$ kpc , i.e., with an exponential scale length $\lesssim 7.2$ kpc. $B_{\rm u}$ also decreases away from the Galactic plane, albeit at a very uncertain rate – extragalactic-source RMs suggest an exponential scale height $\sim 1.4$ kpc . In the Galactic disk, ${\bf B}_{\rm u}$ is nearly horizontal and generally dominated by its azimuthal component. Near the Sun, ${\bf B}_{\rm u}$ points toward $l \simeq 82^\circ$, corresponding to a magnetic pitch angle $p \simeq -8^\circ$ and implying a clockwise direction about the $z$-axis (see Fig. \[fig:coordinates\]). However, ${\bf B}_{\rm u}$ reverses several times with decreasing radius, the number and radial locations of the reversals being still highly controversial . These reversals have often been interpreted as evidence that the uniform field is bisymmetric (azimuthal wavenumber $m \!=\! 1$), although an axisymmetric ($m \!=\! 0$) field would be expected from dynamo theory. Recently, showed that neither the axisymmetric nor the bisymmetric picture is consistent with the existing pulsar RMs, and they concluded that the uniform field must have a more complex pattern. 
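For reference, the exponential scale length quoted above follows directly from the two RM-based field values; the short estimate below is an editorial illustration and assumes an exponential profile $B_{\rm u} \propto e^{-r/L_B}$ together with a Galactocentric radius of the Sun $r_\odot \simeq 8$ kpc: $$L_B \simeq \frac{r_\odot - 3~{\rm kpc}}{\ln\left[B_{\rm u}(3~{\rm kpc})/B_{\rm u}(r_\odot)\right]} \simeq \frac{5~{\rm kpc}}{\ln(3~\mu{\rm G}/1.5~\mu{\rm G})} \simeq 7.2~{\rm kpc},$$ with $B_{\rm u}(3~{\rm kpc}) \gtrsim 3~\mu$G translating into $L_B \lesssim 7.2$ kpc.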
Along the vertical, ${\bf B}_{\rm u}$ is roughly symmetric in $z$,[^11] at least close enough to the midplane . In the Galactic halo, ${\bf B}_{\rm u}$ could have a significant vertical component, with $(B_{\rm u})_z \sim 0.37~\mu$G or $(B_{\rm u})_z \sim 0.25~\mu$G on average between the North and South Poles (Sui et al. 2009) at the position of the Sun. In contrast to the situation in the disk, the azimuthal component of ${\bf B}_{\rm u}$ shows no sign of reversal with decreasing radius. Along the vertical, ${\bf B}_{\rm u}$ is roughly antisymmetric in $z$ (counterclockwise at $z>0$ and clockwise at $z<0$) inside the solar circle and roughly symmetric (clockwise at all $z$) outside the solar circle . developed comprehensive 3D models of the Galactic magnetic field, constrained by an all-sky map of extragalactic-source RMs together with observations of the Galactic total and polarized emission over a wide range of radio frequencies. They obtained a good fit to the data for axisymmetric models where the disk field is purely horizontal, has constant pitch angle $p = -12^\circ$, reverses inside the solar circle and is symmetric in $z$ (clockwise near the Sun), while the halo field is purely azimuthal and antisymmetric in $z$ (counterclockwise/clockwise above/below the midplane at all radii). They also came to the conclusion that bisymmetric models are incompatible with the RM data. Finding ${\bf B}_{\rm u}$ to have quadrupolar parity in the disk and dipolar parity in the halo is consistent with the predictions of dynamo theory and with the results of galactic dynamo calculations, even if, in the very long run (i.e., on timescales longer than current galactic ages), the field may eventually evolve toward a single-parity state . Moreover, the signs of $(B_{\rm u})_r/(B_{\rm u})_\theta$ in the disk (negative throughout) and $(B_{\rm u})_z/(B_{\rm u})_\theta$ in the halo (negative/positive above/below the midplane) are those expected from azimuthal shearing of radial and vertical fields, respectively, by the large-scale differential rotation of the Galaxy. Let us now look into the possible connections between the Galactic magnetic field described above and the GC field discussed in the previous sections. Here, we are only interested in the uniform component of the field, which, for brevity, we will simply refer to as the field. As explained at the beginning of Sect. \[observ\], the observed orientations of NRFs suggest that the GC field is approximately poloidal, and hence dipolar, in the diffuse intercloud medium. However, they do not tell us whether the field is pointing north or south. Faraday rotation studies could, under certain conditions, provide the sought-after sign information, but unfortunately they lead to conflicting conclusions. noted that the available RMs toward NRFs within $1^\circ$ of the GC exhibit a sign pattern indicative of an antisymmetric field running counterclockwise above the midplane and clockwise below it (see Sect. \[Faraday\]) – exactly as in the Galactic halo. They further suggested that this pattern could result from azimuthal shearing by the Galactic differential rotation of an initially vertical field pointing north ($B_z > 0$) – again as in the halo. In contrast, found that the RMs of background extragalactic sources seen through the area $(|l|<6^\circ,|b|<2^\circ)$ are mostly positive in each of the four quadrants, consistent with a symmetric field pointing toward us. 
Such a symmetric field cannot be produced by large-scale shearing, or by any symmetric distortion, of a poloidal field. It is, therefore, probably unrelated to the dominant poloidal field and may not be used to constrain its (north or south) direction. Zeeman splitting studies are not of great help here. The only true detections outside the Sgr A region and away from OH masers pertain to Sgr B2, where measured $B_\parallel < 0$ (see Sect. \[Zeeman\] and footnote \[note\]). If the line-of-sight field in the dense clouds sampled by Zeeman measurements has the same sign as in the surrounding intercloud medium, ’s results suggest ${\rm RM} > 0$ toward Sgr B2. Since Sgr B2 lies in the $(l>0,b>0)$ quadrant, this RM sign is in agreement with the general RM patterns of both and . Irrespective of the exact RM pattern and of the sign of $B_z$ near the GC, the predominantly poloidal, and hence dipolar, GC field could naturally connect with the dipolar halo field. Both fields together could actually form a single magnetic system, which, in turn, could be the outcome of a large-scale quasi-spherical dynamo – corresponding, for instance, to an A0 (antisymmetric & $m \!=\! 0$) dynamo mode . However, one may not jump to the conclusion that the poloidal field is a pure dipole, as proposed by [@han_02] and often assumed in the cosmic-ray propagation community . A pure dipole can be expected in a current-free medium, but not inside the highly conducting ISM. Even the superposition of a pure dipole and an azimuthal field is unlikely here, for several reasons. From a theoretical point of view, this particular combination would rule out any ring current. Moreover, numerical simulations of galactic dynamos always yield more complex magnetic geometries. From an observational point of view, if the poloidal field near the GC were a pure dipole, NRFs crossing the midplane would be curved inward, whereas, on the whole, they exhibit a slight outward curvature [@morris_90]. Besides, the NRF spatial distribution displayed in Fig. 29 of does not at all convey the sense of a pure dipole. Finally, as noted by [@han_02] himself, a global dipole with $B_z > 0$ at the position of the Sun should have $B_z < 0$ near the GC, which is exactly opposite to ’s finding. The matter is in fact a little more subtle, as a dipole field may have either sign of $B_z$ near the GC, depending on Galactic polar angle, $\Theta$ (for reference, $B_z \begin{array}{c} > \\ \noalign{\vspace{-10pt}} < \end{array} 0$ for $\Theta \begin{array}{c} > \\ \noalign{\vspace{-10pt}} < \end{array} \Theta_{\rm crit}$, where $\Theta_{\rm crit} = \arccos \, (1/\!\sqrt{3}) = 54\fdg7$). Anyway, a more realistic guess would be that the poloidal field, out to beyond the solar circle, is everywhere pointing north and slightly curved outward, which would be consistent with the inward advection scenario. The azimuthal component of the GC field, revealed through Faraday rotation, does not have a well-established parity. If it has dipolar parity , it is probably directly coupled to the dominant poloidal component and, together with it, connected to the dipolar halo field. On the other hand, if the azimuthal component has quadrupolar parity , it is probably decoupled from the poloidal component, but it could connect with the quadrupolar disk field if the latter is bisymmetric, or at least possesses a bisymmetric mode. 
If the quadrupolar disk field is axisymmetric or contains an axisymmetric mode, the associated poloidal field lines cannot remain horizontal all the way in to the GC, but they have to diverge vertically somewhere before reaching the GC region. They can then, on each side of the Galactic plane, turn around in the halo and arc back to the disk, thereby forming one large or several smaller magnetic loops . To close up the discussion, let us mention the possibility that the GC region could harbor its own magnetic system, independent of the magnetic field pervading the Galaxy at large. The origin of such a separate magnetic system could be a GC dynamo, possibly modified by outflows from the nucleus. \[external\]Clues from external galaxies ---------------------------------------- Could observations of external galaxies shed some light on the properties of the interstellar magnetic field near the GC? External galaxies do not give access to the wealth of details that can be detected in our own Galaxy, but they provide a global view which can help fill in some of the gaps in the picture of the Galactic magnetic field. The observational status of interstellar magnetic fields in external spiral galaxies was recently reviewed by [@beck_08] and [@krause_08]. For practical purposes, external galaxies are generally considered as either face-on (if they are not or mildly inclined) or edge-on (if they are strongly inclined). While both groups bring along complementary pieces of information, the tools used to study them are identical and the same as for our Galaxy, namely, synchrotron emission and Faraday rotation. Face-on galaxies are ideally suited to determine the strength and the horizontal structure of magnetic fields in galactic disks. Total field strengths, derived from measurements of the synchrotron total intensity together with the equipartition assumption, are $\sim 10~\mu$G on average – more precisely, $\sim 5~\mu$G in radio-faint galaxies, $\sim 15~\mu$G in more active (higher star-formation rate) galaxies and up to $\sim (50-100)~\mu$G in starburst galaxies. Ordered field strengths, derived from measurements of the synchrotron polarized intensity, are $\sim (1-5)~\mu$G on average. The total field is always strongest in the optical spiral arms, where it can reach up to $\sim (20-30)~\mu$G (for normal galaxies), whereas the ordered field is generally somewhat stronger in the interarm regions, where it can reach up to $\sim (10-15)~\mu$G. Finally, according to synchrotron polarization maps, the ordered field tends to follow the orientation of the optical spiral arms, such that magnetic pitch angles are typically $\sim 10^\circ - 40^\circ$ (in absolute value). Edge-on galaxies are best suited to determine the vertical structure of magnetic fields in galactic disks and halos. Most of them appear to possess extended synchrotron halos, the vertical size of which implies that the total field has a vertical scale height $\sim 7$ kpc, with little scatter amongst galaxies. Since the polarization degree increases away from the midplane, the ordered field probably has an even greater vertical scale height. Polarization maps indicate that the ordered field is generally nearly horizontal close to the midplane. For galaxies with high-sensitivity measurements (e.g., NGC891, NGC5775, NGC253, M104), the ordered field becomes more vertical in the halo, with $|B_z|$ increasing with both $|z|$ and $r$. 
The resulting X-shaped field is extremely different from the dipole-like field that might be expected from dynamo theory (see Sect. \[connection\]), not only along the rotation axis, where it goes horizontal instead of vertical, but also at large distances from the center, where it diverges from the midplane instead of curving back to it. One possible theoretical explanation for the existence of such X-shaped halo fields involves galactic winds with roughly radial streamlines . The one notable exception amongst edge-on galaxies is NGC4631. In the disk of this galaxy, the ordered field runs nearly vertical throughout the innermost $\sim 5$ kpc, and it is only at larger radii that it turns roughly horizontal. In the halo, the ordered field has a more radial appearance, which, away from the rotation axis, bears some resemblance to the X shape observed in other edge-on galaxies. The unusual magnetic pattern of NGC4631 could perhaps be related to the existence of a central starburst and/or to the large-scale rotation being almost rigid inside $\sim 5$ kpc. As in our Galaxy, the measured synchrotron polarization angles only give the orientation of the ordered magnetic field, not its direction. The field direction can sometimes be gathered from Faraday rotation measures (of either the galactic synchrotron emission itself or background radio sources). This is how, by combining the observed polarization angles and rotation measures, it has been possible to identify well-defined azimuthal dynamo modes in a (small) number of galaxies. For instance, a dominating axisymmetric spiral mode was inferred in the disks of M31 and IC342, while a dominating bisymmetric spiral mode was suggested for the disk of M81. Somewhat intriguingly, though, very few field reversals have been detected in the disks of external galaxies [@beck_01], in contrast to what might be expected from the situation in our Galaxy. Regarding the vertical parity, evidence was found for a symmetric field in the disks of NGC891 and IC342. Observations of external galaxies have a typical spatial resolution of a few 100 pc to a few kpc (Marita Krause, private communication). Evidently, this resolution is way too low to enable detection of strongly magnetized filaments similar to the NRFs observed near the GC. Even a large-scale vertical field confined to the innermost $\sim 300$ pc would probably escape detection. In other words, external galaxies could possibly host vertical fields near their centers, but (with the exception of NGC4631) such vertical fields could not extend beyond the central kpc or so. Moreover, in none of the edge-on galaxies observed so far (neither in those featuring an X-shaped field nor in NGC4631) does the halo field resemble a dipole. Ultimately, what can be retained from observations of external galaxies is the following: If our Galaxy does not differ too much from the nearby galaxies whose magnetic fields have been mapped out, one may conclude that the vertical field detected close to the GC is only local, i.e., restricted to a region smaller than $\sim 1$ kpc. One can then imagine two different possibilities: either the GC field is completely separate from the general Galactic field, or else it merges smoothly with the poloidal halo field, though not in the form of an approximate dipole. \[conclu\]Conclusions ===================== Based on all the observational evidence presented in Sect. 
\[observ\], we may be reasonably confident that the interstellar magnetic field in the GC region is approximately poloidal on average in the diffuse intercloud medium and approximately horizontal in dense interstellar clouds. Direct measurements of field strengths are scanty and altogether not very informative. However, our critical discussion of the existing observational and theoretical estimations in Sect. \[field\_strength\] prompts us to conclude, at least tentatively, that the general ISM is pervaded by a relatively weak magnetic field ($B \sim 10~\mu$G), close to equipartition with cosmic rays ($B_{\rm eq} \sim 10~\mu$G), and that it contains a number of localized filamentary structures (the observed NRFs) with much stronger fields (up to $B \sim 1$ mG), clearly above equipartition ($B_{\rm eq} \sim 100~\mu$G). In dense interstellar clouds, the field would be a few 0.1 mG to a few mG strong. In our view, the high-$B$ filamentary structures are probably dynamic in nature. They could, for instance, have a turbulent origin, as suggested by . If the GC region is the open magnetic system envisioned by these authors, turbulent dynamo action there should produce strongly magnetized filaments, with $P_{\rm mag} \sim P_{\rm turb}$, i.e., according to our estimation in Sect. \[field\_direction\], $B \sim 0.2$ mG. Of course, there would be a whole range of field strength, allowing some filaments to have $B \gtrsim 1$ mG. Besides, other mechanisms could also come into play, such as the formation of magnetic wakes behind molecular clouds embedded in a Galactic wind . The higher ram pressure associated with the wind could then account for a larger number of filaments with mG fields. It is interesting to note that if all NRFs have $B \sim 1$ mG, the synchrotron data can be reproduced with a roughly uniform (within a factor $\sim 10$) relativistic-electron density, $n_{\rm e}$. Indeed, the factor $\sim 10$ between the equipartition field strengths inside NRFs ($B_{\rm eq} \sim 100~\mu$G) and in the general ISM ($B_{\rm eq} \sim 10~\mu$G) translates into a factor $\sim 100$ between their respective equipartition electron densities. Now, since synchrotron emissivity is $\propto n_{\rm e} \, B_\perp^{1-\alpha} \, \nu^\alpha$, the synchrotron emission of NRFs can equally be explained by the equipartition values $B \sim B_{\rm eq} \sim 100~\mu$G and $n_{\rm e} \sim n_{\rm e,eq}$ or by the magnetically dominated values $B \sim 1~{\rm mG} \sim 10 \, B_{\rm eq}$ and $n_{\rm e} \sim 10^{\alpha-1} \, n_{\rm e,eq}$. The synchrotron spectral index, $\alpha$, varies considerably amongst NRFs, from $\sim -2$ to $\sim +0.4$, with $\alpha \sim -0.6$ being quite typical . For NRFs with decreasing spectra ($\alpha < 0$), a mG field then implies an electron density $\sim 10 - 1000$ times lower than the equipartition value, i.e., within a factor $\sim 10$ of the equipartition electron density in the general ISM. The situation with highly variable magnetic field and roughly uniform relativistic-electron density is in stark contrast with the conventional picture of a uniformly strong magnetic field. Qualitatively, the former could be understood as the net result of two antagonistic effects: On the one hand, when a high-$B$ filament forms by compression, the attached electrons are compressed together with the field lines. 
On the other hand, once the electrons find themselves in a high-$B$ filament, they cool off (via synchrotron radiation) more rapidly and they stream away along field lines (at about the Alfvén speed) faster than electrons in the surrounding medium. In reality, both the picture of a uniformly strong magnetic field and the picture of a roughly uniform relativistic-electron density are probably too extreme. It is much more likely that the GC region resides in an intermediate state, where both the field strength and the electron density are higher in NRFs than in the general ISM. Such a state is automatically achieved if all NRFs have a field strength comprised between the equipartition value ($B_{\rm eq} \sim 100~\mu$G) and $B \sim 1$ mG. The author would like to thank J. Ballet, R. Beck, S. Boldyrev, R. Crutcher, M. Hanasz, J.  Knödlseder, M. Krause, A. Marcowith, F. Martins, I. Moskalenko, G. Novak, W. Reich, A. Shukurov, A. Strong, X. Sun and the referee, T. LaRosa, for their valuable comments and detailed answers to her questions. [^1]: At the present time, NRFs are observed over a larger area, extending $\sim 300$ pc along the Galactic plane and $\sim 150$ pc in the vertical direction . The magnetic energy of a space-filling mG field in this larger region is a huge $\sim 10^{55}$ ergs. [^2]: The minimum-energy state corresponds almost, but not exactly, to energy equipartition between the magnetic field and the energetic particles [@miley_80], which, in turn, differs from pressure equipartition by a factor $\simeq 3$. Both equipartition and minimum-energy field strengths are difficult to estimate, because they depend on a number of parameters whose values are quite uncertain. In particular, they depend on the proton-to-electron energy ratio (often set to unity in this context), on the lower and upper cutoff frequencies of the synchrotron spectrum, on the spectral index, and on the line-of-sight depth of the filaments (usually taken equal to their plane-of-sky width). [^3]: The exact formula for the rotation measure is ${\rm RM} = 0.81 \ \int n_{\rm e} \ B_\parallel \ ds$, with ${\rm RM}$ expressed in ${\rm rad~m^{-2}}$, $n_{\rm e}$ in ${\rm cm^{-3}}$, $B_\parallel$ in $\mu{\rm G}$ and $s$ in ${\rm pc}$. [^4]: By convention in the Faraday rotation community, a positive (negative) value of $B_\parallel$ corresponds to a magnetic field pointing toward (away from) the observer. [^5]: The Circumnuclear Disk is a compact torus of neutral (mainly molecular) gas and dust surrounding Sgr A West (the H[ii]{} region centered on the point-like source Sgr A$^*$) and extending between $\sim 1.5$ pc and 7 pc from the GC . [^6]: This rough estimate of $B$ was necessarily going to be larger than the mG estimate derived for the diffuse ISM by from the dynamical condition $P_{\rm mag} \gtrsim P_{\rm ram}$ (see beginning of Sect. \[observ\]), since the orbital velocity of molecular clouds in the Galactic gravitational potential ($\sim 150~{\rm km~s}^{-1}$) exceeds their turbulent velocity ($\sim 15~{\rm km~s}^{-1}$). [^7]: \[note\] The convention for the sign of $B_\parallel$ in the Zeeman splitting community is opposite to that adopted in the Faraday rotation community. Here, a positive (negative) value of $B_\parallel$ corresponds to a magnetic field pointing away from (toward) the observer. Note that in their original paper, mistakenly quoted a negative $B_\parallel$; the error was later corrected by . 
[^8]: In a related model , the central accretion disk generates magnetic loops which break off and expand away from their source. Although this model provides a possible explanation for the NRF phenomenon, it does not discuss the origin of the large-scale magnetic field in the GC region. [^9]: Neither the cometary-tail model of nor the turbulent model of is able to explain the Radio Arc, which differs from the other isolated NRFs in that it is clearly organized on a large scale. [^10]: Regardless of the field strength, it is likely that the powerful winds emanating from the GC or produced by supernova explosions also contribute to cosmic-ray escape. However, since these winds tend to evacuate magnetic fields as well, it is not clear that their net effect is to reduce cosmic-ray pressure relative to magnetic pressure. [^11]: A magnetic field is said to be symmetric (antisymmetric) in $z$, or, equivalently, quadrupolar (dipolar), if its horizontal component is an even (odd) function of $z$ and its vertical component an odd (even) function of $z$.
--- abstract: 'We study a nonlinear Robin problem driven by the $p$-Laplacian and with a reaction term depending on the gradient (convection term). Using the theory of nonlinear operators of monotone-type and the asymptotic analysis of a suitable perturbation of the original equation, we show the existence of a positive smooth solution.' address: - 'National Technical University, Department of Mathematics, Zografou Campus, Athens 15780, Greece' - 'Faculty of Applied Mathematics, AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Kraków, Poland & Institute of Mathematics “Simion Stoilow" of the Romanian Academy, P.O. Box 1-764, 014700 Bucharest, Romania' - 'Faculty of Education and Faculty of Mathematics and Physics, University of Ljubljana, SI-1000 Ljubljana, Slovenia' author: - 'Nikolaos S. Papageorgiou' - 'Vicenţiu D. Rădulescu' - 'Dušan D. Repovš' title: Positive solutions for nonvariational Robin problems --- Introduction ============ Let $\Omega\subseteq{\mathbb R}^N$ be a bounded domain with a $C^2$-boundary $\partial\Omega$. In this paper we deal with the following nonlinear Robin problem with gradient dependence: $$\label{eq1} \left\{\begin{array}{ll} -\Delta_pu(z)=f(z,u(z),Du(z))& \mbox{in}\ \Omega,\\ \frac{\partial u}{\partial n_p}+\beta(z)|u|^{p-2}u=0&\mbox{on}\ \partial\Omega. \end{array}\right\}$$ In this problem, $\Delta_p$ denotes the $p$-Laplacian differential operator defined by $$\Delta_pu={\rm div}\,(|Du|^{p-2}Du)\ \mbox{for all}\ u\in W^{1,p}(\Omega),\ 1<p<\infty.$$ The reaction term $f(z,x,y)$ is gradient dependent (a convection term) and it is a Carathéodory function (that is, for all $(x,y)\in{\mathbb R}\times{\mathbb R}^N$ the mapping $z\mapsto f(z,x,y)$ is measurable and for almost all $z\in\Omega$ the map $(x,y)\mapsto f(z,x,y)$ is continuous). In the boundary condition, $\frac{\partial u}{\partial n_p}$ denotes the conormal derivative defined by extension of the map $$C^1(\overline{\Omega})\ni u\mapsto |Du|^{p-2}\frac{\partial u}{\partial n}=|Du|^{p-2}(Du,n)_{{\mathbb R}^N},$$ with $n(\cdot)$ being the outward unit normal on $\partial\Omega$. The boundary coefficient $\beta(\cdot)$ is nonnegative and it can be identically zero, in which case we recover the Neumann problem. We are looking for positive solutions of problem (\[eq1\]). The presence of the gradient in the reaction term precludes the use of variational methods. In this paper, our approach is based on the nonlinear operator theory and on the asymptotic analysis of a perturbation of problem (\[eq1\]). Positive solutions for elliptic problems with convection were obtained by de Figueiredo, Girardi and Matzeu [@3], Girardi and Matzeu [@6] (semilinear equations driven by the Dirichlet Laplacian), Ruiz [@13], Faraci, Motreanu and Puglisi [@2], and Huy, Quan and Khanh [@7] (nonlinear Dirichlet problems). For Neumann problems we refer to the works of Gasinski and Papageorgiou [@5], and Papageorgiou, Rădulescu and Repovš [@12], where the differential operator is of the form ${\rm div}\,(a(u)Du)$. In all the above papers, the method of proof is different and is based either on the fixed point theory (the Leray-Schauder alternative principle), on the iterative techniques, or on the method of upper-lower solutions. Mathematical Background and Hypotheses ====================================== Let $X$ be a reflexive Banach space. We denote by $X^*$ its topological dual and by $\left\langle \cdot,\cdot\right\rangle$ the duality brackets for the dual pair $(X,X^*)$. 
Suppose that $V:X\rightarrow X^*$ is a nonlinear operator which is bounded (that is, it maps bounded sets to bounded sets) and everywhere defined. We say that $V(\cdot)$ is “pseudomonotone", if the following property holds: $$\begin{aligned} &&u_n\stackrel{w}{\rightarrow}u\ \mbox{in}\ X,V(u_n)\stackrel{w}{\rightarrow}u^*\ \mbox{in}\ X^*\ \mbox{and}\ \limsup\limits_{n\rightarrow\infty}\left\langle V(u_n),u_n-u\right\rangle{\leqslant}0\\ &&\hspace{4cm}\Downarrow\\ &&\hspace{1cm}u^*=V(u)\ \mbox{and}\ \left\langle V(u_n),u_n\right\rangle\rightarrow\left\langle V(u),u\right\rangle.\end{aligned}$$ Pseudomonotonicity is preserved by addition and any maximal monotone everywhere defined operator is pseudomonotone. Moreover, as is the case with maximal monotone operators, pseudomonotone maps exhibit remarkable surjectivity properties. \[prop1\] If $V:X\rightarrow X^*$ is pseudomonotone and strongly coercive (that is,\ $\frac{\left\langle V(u),u\right\rangle}{||u||} \rightarrow+\infty$ as $||u||\rightarrow\infty$), then $V$ is surjective. From the above remarks we see that if $A:X\rightarrow X^*$ is maximal monotone everywhere defined and $K:X\rightarrow X^*$ is completely continuous (that is, if $u_n\stackrel{w}{\rightarrow}u$ in $X$, then $K(u_n)\rightarrow K(u)$ in $X^*$), then $u\rightarrow V(u)=A(u)+K(u)$ is pseudomonotone. A nonlinear operator $A:X\rightarrow X^*$ is said to be of type $(S)_+$, if the following property holds: $$u_n\stackrel{w}{\rightarrow}u\ \mbox{in}\ X\ \mbox{and}\ \limsup\limits_{n\rightarrow\infty}\left\langle A(u_n),u_n-u\right\rangle{\leqslant}0\Rightarrow u_n\rightarrow u\ \mbox{in}\ X.$$ For further details on these notions and related issues, we refer to Gasinski and Papageorgiou [@4]. In the analysis of problem (\[eq1\]) we will use the Sobolev space $W^{1,p}(\Omega)$, the Banach space $C^1(\overline{\Omega})$ and the boundary Lebesgue space $L^p(\partial\Omega)$. We denote by $||\cdot||$ the norm of the Sobolev space $W^{1,p}(\Omega)$ defined by $$||u||=[||u||^p_p+||Du||^p_p]^{1/p}\ \mbox{for all}\ u\in W^{1,p}(\Omega).$$ The Banach space $C^1(\overline{\Omega})$ is an ordered Banach space with positive (order) cone defined by $$C_+=\{u\in C^1(\overline{\Omega}):u(z){\geqslant}0\ \mbox{for all}\ z\in\overline{\Omega}\}.$$ This cone has a nonempty interior given by $${\rm int}\, C_+=\{u\in C_+:u(z)>0\ \mbox{for all}\ z\in\Omega,\left.\frac{\partial u}{\partial n}\right|_{\partial\Omega\cap u^{-1}(0)}<0\ \mbox{if}\ \partial\Omega\cap u^{-1}(0)\neq\emptyset\}.$$ This interior contains the open set $$D_+=\{u\in C_+:u(z)>0\ \mbox{for all}\ z\in\overline{\Omega}\}.$$ In fact, $D_+$ is the interior of $C_+$ when $C^1(\overline{\Omega})$ is endowed with the $C(\overline{\Omega})$-norm topology. On $\partial\Omega$ we consider the $(N-1)$-dimensional Hausdorff (surface) measure $\sigma(\cdot)$. Using this measure on $\partial\Omega$ we can define in the usual way the “boundary" Lebesgue spaces $L^q(\partial\Omega)$ ($1{\leqslant}q{\leqslant}\infty$). From the theory of Sobolev spaces, we know that there exists a unique continuous linear map $\gamma_0:W^{1,p}(\Omega)\rightarrow L^p(\partial\Omega)$, known as the “trace map", such that $\gamma_0(u)=u|_{\partial\Omega}$ for all $u\in W^{1,p}(\Omega)\cap C(\overline{\Omega})$. So, the trace operator extends the notion of “boundary values" to all Sobolev functions. 
We have $${\rm im}\,\gamma_0= W^{\frac{1}{p'},p}(\partial\Omega)\left(\frac{1}{p}+\frac{1}{p'}=1\right)\ \mbox{and}\ {\rm ker}\,\gamma_0=W^{1,p}_0(\Omega).$$ The trace map is compact into $L^q(\partial\Omega)$ for all $q\in\left[1,\frac{(N-1)p}{N-p}\right)$ if $p<N$ and for all $q{\geqslant}1$ if $N{\leqslant}p$. In the sequel, for the sake of notational simplicity, we drop the use of the trace map $\gamma_0$. All restrictions of Sobolev functions on $\partial\Omega$ are understood in the sense of traces. Let $A:W^{1,p}(\Omega)\rightarrow W^{1,p}(\Omega)^*$ be the nonlinear operator defined by $$\left\langle A(u),h\right\rangle=\int_{\Omega}|Du|^{p-2}(Du,Dh)_{{\mathbb R}^N}dz\ \mbox{for all}\ u,h\in W^{1,p}(\Omega).$$ \[prop2\] The operator $A:W^{1,p}(\Omega)\rightarrow W^{1,p}(\Omega)^*$ is bounded, continuous, monotone (hence also maximal monotone) and of type $(S)_+$. Given $x\in{\mathbb R}$, we define $x^{\pm}=\max\{\pm x,0\}$. Then for $u\in W^{1,p}(\Omega)$ we set $u^{\pm}(\cdot)=u(\cdot)^{\pm}$. We have $$u^{\pm}\in W^{1,p}(\Omega),\ u=u^+-u^-,\ |u|=u^++u^-.$$ Given a measurable function $g:\Omega\times{\mathbb R}\times{\mathbb R}^N\rightarrow{\mathbb R}$ (for example, a Carathéodory function), we denote by $N_g(\cdot)$ the Nemitsky (superposition) map defined by $$N_g(u)(\cdot)=g(\cdot,u(\cdot),Du(\cdot))\ \mbox{for all}\ u\in W^{1,p}(\Omega).$$ Evidently, $z\mapsto N_g(u)(z)$ is measurable. We denote by $|\cdot|_N$ the Lebesgue measure on ${\mathbb R}^N$. Consider the following nonlinear eigenvalue problem $$\label{eq2} \left\{\begin{array}{ll} -\Delta_p u(z)=\hat{\lambda}|u(z)|^{p-2}u(z)&\mbox{in}\ \Omega,\\ \frac{\partial u}{\partial n_p}+\beta(z)|u|^{p-2}u=0&\mbox{on}\ \partial\Omega. \end{array}\right\}$$ We make the following hypothesis concerning the boundary coefficient $\beta(\cdot)$: $H(\beta):$ $\beta\in C^{0,\alpha}(\partial\Omega)$ with $\alpha\in(0,1)$ and $\beta(z){\geqslant}0$ for all $z\in\partial\Omega$. If $\beta\equiv 0$, then we recover the Neumann boundary condition. An “eigenvalue" is a real number $\hat{\lambda}$ for which problem (\[eq2\]) admits a nontrivial solution $\hat{u}\in W^{1,p}(\Omega)$, known as the “eigenfunction" corresponding to the eigenvalue $\hat{\lambda}$. From Papageorgiou and Rădulescu [@11] (see also Winkert [@14]), we have that $$\hat{u}\in L^{\infty}(\Omega).$$ So, we can apply Theorem 2 of Lieberman [@8] and infer that $$\hat{u}\in C^1(\overline{\Omega}).$$ From Papageorgiou and Rădulescu [@10] we know that problem (\[eq2\]) admits a smallest eigenvalue $\hat{\lambda}_1\in{\mathbb R}$ with the following properties: - $\hat{\lambda}_1{\geqslant}0$, in fact $\hat{\lambda}_1=0$ if $\beta\equiv 0$ (Neumann problem) and $\hat{\lambda}_1>0$ if $\beta\not\equiv 0$. - $\hat{\lambda}_1$ is isolated in the spectrum $\hat{\sigma}(p)$ of (\[eq2\]) (that is, we can find $\epsilon>0$ such that $(\hat{\lambda}_1,\hat{\lambda}_1+\epsilon)\cap\hat{\sigma}(p)=\emptyset$). - $\hat{\lambda}_1$ is simple (that is, if $\hat{u},\hat{v}\in C^1(\overline{\Omega})$ are eigenfunctions corresponding to $\hat{\lambda}_1$, then $\hat{u}=\xi\hat{v}$ for some $\xi\in{\mathbb R}\backslash\{0\}$). $$\label{eq3} \bullet\ \hat{\lambda}_1=\inf{\Bigg\{}\frac{||Du||^p_p+\int_{\partial\Omega}\beta(z)|u|^{p}d\sigma}{||u||^p_p}:u\in W^{1,p}(\Omega),u\neq 0{\Bigg\}}.\hspace{3.2cm}$$ The infimum in (\[eq3\]) is realized on the corresponding one-dimensional eigenspace. From the above property it follows that the elements of this eigenspace do not change sign. 
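To make the variational characterization (\[eq3\]) concrete, the following sketch (an editorial illustration, not part of the paper) evaluates the Rayleigh quotient numerically in the simplest setting: $p=2$, $\Omega=(0,1)$ and a constant boundary coefficient $\beta_0{\geqslant}0$; the domain, the value of $p$ and the trial functions are all assumptions made only for this example. Every trial function yields an upper bound for $\hat{\lambda}_1$, and the constant function shows at once that $\hat{\lambda}_1=0$ when $\beta\equiv 0$.

```python
# Illustrative sketch only (not from the paper): evaluate the Rayleigh
# quotient defining hat{lambda}_1 for trial functions on Omega = (0,1),
# with p = 2 and a constant boundary coefficient beta0.
import numpy as np

def trap(f, x):
    """Trapezoidal rule for samples f on the grid x."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def rayleigh_quotient(u, du, beta0, x):
    """(||u'||_2^2 + beta0*(u(0)^2 + u(1)^2)) / ||u||_2^2 on the grid x."""
    return (trap(du**2, x) + beta0 * (u[0]**2 + u[-1]**2)) / trap(u**2, x)

x = np.linspace(0.0, 1.0, 2001)
trials = {
    "u = 1":         (np.ones_like(x), np.zeros_like(x)),
    "u = cos(pi*x)": (np.cos(np.pi * x), -np.pi * np.sin(np.pi * x)),
}
for beta0 in (0.0, 1.0):
    for name, (u, du) in trials.items():
        q = rayleigh_quotient(u, du, beta0, x)
        print(f"beta0 = {beta0:.1f}, {name:13s}: quotient = {q:.4f}")
# With beta0 = 0 (Neumann case) the constant trial function gives 0, hence
# hat{lambda}_1 = 0; with beta0 > 0 it gives the upper bound 2*beta0.
```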
Let $\hat{u}_1$ be the $L^p$-normalized (that is, $||\hat{u}_1||_p=1$) positive eigenfunction corresponding to $\hat{\lambda}_1$. We know that $\hat{u}_1\in C_+$. In fact, the nonlinear strong maximum principle (see, for example, Gasinski and Papageorgiou [@4 p. 738]) implies that $\hat{u}_1\in D_+$. An eigenfunction $\hat{u}$ corresponding to an eigenvalue $\hat{\lambda}\not=\hat{\lambda}_1$ is necessarily nodal (that is, sign changing). For more on the spectrum of (\[eq2\]) we refer to Papageorgiou and Rădulescu [@11]. The next lemma is an easy consequence of the above properties of the eigenpair $(\hat{\lambda}_1,\hat{u}_1)$ (see Mugnai and Papageorgiou [@9 Lemma 4.11]). \[lem3\] If $\vartheta\in L^{\infty}(\Omega),\ \vartheta(z){\leqslant}\hat{\lambda}_1$ for almost all $z\in\Omega$, $\vartheta\not\equiv \hat{\lambda}_1$, then there exists $c_0>0$ such that $$||Du||^p_p+\int_{\partial\Omega}\beta(z)|u|^pd\sigma-\int_{\Omega}\vartheta(z)|u|^pdz{\geqslant}c_0||u||^p$$ for all $u\in W^{1,p}(\Omega).$ Our hypotheses on the reaction term $f(z,x,y)$ are the following: $H(f):$ $f:\Omega\times{\mathbb R}\times{\mathbb R}^N\rightarrow{\mathbb R}$ is a Carathéodory function such that $f(z,0,y)=0$ for almost all $z\in\Omega$, for all $y\in{\mathbb R}^N$, and - $|f(z,x,y)|{\leqslant}a(z)[1+x^{p-1}+|y|^{p-1}]$ for almost all $z\in\Omega$, all $x{\geqslant}0$, all $y\in{\mathbb R}^N$, with $a\in L^{\infty}(\Omega)$; - there exists a function $\vartheta\in L^{\infty}(\Omega)$ such that $$\begin{aligned} &&\vartheta(z){\leqslant}\hat{\lambda}_1\ \mbox{for almost all}\ z\in\Omega,\ \vartheta\not\equiv \hat{\lambda}_1,\\ &&\limsup\limits_{x\rightarrow+\infty}\frac{f(z,x,y)}{x^{p-1}}{\leqslant}\vartheta(z)\ \mbox{uniformly for almost all}\ z\in\Omega,\ \mbox{and all}\ y\in{\mathbb R}^N; \end{aligned}$$ - for every $M>0$, there exists $\eta_M\in L^{\infty}(\Omega)$ such that $\eta_M(z){\geqslant}\hat{\lambda}_1$ almost everywhere in $\Omega$, $\eta_M\not\equiv\hat{\lambda}_1$ and $$\liminf\limits_{x\rightarrow 0^+}\frac{f(z,x,y)}{x^{p-1}}{\geqslant}\eta_M(z)\ \mbox{uniformly for almost all}\ z\in\Omega,\ \mbox{and all}\ |y|{\leqslant}M.$$ Since we are looking for positive solutions and the above hypotheses concern only the positive semiaxis ${\mathbb R}_+=\left[0,+\infty\right)$, we may assume without loss of generality that $$\label{eq4} f(z,x,y)=0\ \mbox{for almost all}\ z\in\Omega,\ \mbox{all}\ x{\leqslant}0, \ \mbox{and all}\ y\in{\mathbb R}^N.$$ The following function satisfies hypotheses $H(f)$ (for the sake of simplicity we drop the $z$-dependence): $$f(x,y)=\left\{\begin{array}{ll} \eta x^{p-1}+x^{r-1}|y|^{p-1}&\mbox{if}\ 0{\leqslant}x{\leqslant}1\\ \vartheta x^{p-1}+(\eta-\vartheta)x^{q-1}+x^{\tau-1}|y|^{p-1}&\mbox{if}\ 1<x \end{array}\right.$$ with $1<\tau,q<p<r<\infty$ and $\vartheta<\hat{\lambda}_1<\eta$. Positive solution ================= We introduce the following perturbation of $f(z,x,y)$: $$\hat{f}(z,x,y)=f(z,x,y)+(x^+)^{p-1}.$$ Also, let $\epsilon>0$ and $e\in D_+$. We consider the following auxiliary Robin problem: $$\label{eq5} \left\{\begin{array}{ll} -\Delta_pu(z)+|u(z)|^{p-2}u(z)=\hat{f}(z,u(z),Du(z))+\epsilon e(z)&\mbox{in}\ \Omega,\\ \frac{\partial u}{\partial n_p}+\beta(z)|u|^{p-2}u=0&\mbox{on}\ \partial\Omega. \end{array}\right\}$$ \[prop4\] If hypotheses $H(\beta),H(f)$ hold and $\epsilon>0$, then problem (\[eq5\]) has a solution $u_{\epsilon}\in D_+$. Let $N_{\hat{f}}$ be the Nemitsky map corresponding to the function $\hat{f}(z,x,y)$. 
We have $N_{\hat{f}}:W^{1,p}(\Omega)\rightarrow L^{p'}(\Omega)$ $\left(\frac{1}{p}+\frac{1}{p'}=1\right)$ (see hypothesis $H(f)(i)$). By Krasnoselskii’s theorem (see, for example, Gasinski and Papageorgiou [@4 Theorem 3.4.4, p. 407]) we deduce that $$\label{eq6} N_{\hat{f}}(\cdot)\ \mbox{is continuous}.$$ Also let $\psi_p:W^{1,p}(\Omega)\rightarrow L^{p'}(\Omega)$ be defined by $$\psi_p(u)(\cdot)=|u(\cdot)|^{p-2}u(\cdot).$$ This map is bounded, continuous, monotone, hence also maximal monotone (recall that also $L^{p'}(\Omega)\hookrightarrow W^{1,p}(\Omega)^*$). Finally, let $\hat{A}:W^{1,p}(\Omega)\rightarrow W^{1,p}(\Omega)^*$ be defined by $$\left\langle \hat{A}(u),h\right\rangle=\left\langle A(u),h\right\rangle+\int_{\partial\Omega}\beta(z)|u|^{p-2}uhd\sigma,$$ where, as before, $$\left\langle A(u),h\right\rangle=\int_{\Omega}|Du|^{p-2}(Du,Dh)_{{\mathbb R}^N}dz\quad \mbox{for all}\ u,h\in W^{1,p}(\Omega).$$ Evidently, $\hat{A}(\cdot)$ is bounded, continuous, monotone, hence also maximal monotone. We introduce the operator $V:W^{1,p}(\Omega)\rightarrow W^{1,p}(\Omega)^*$ defined by $$V(u)=\hat{A}(u)+\psi_p(u)-N_{\hat{f}}(u)-\epsilon e.$$ Clearly, $V(\cdot)$ is bounded. $V(\cdot)$ is pseudomonotone. We need to show that the properties $$\label{eq7} u_n\stackrel{ w}{\rightarrow} u\ \mbox{in}\ W^{1,p}(\Omega)\ \mbox{and}\ \limsup\limits_{n\rightarrow\infty}\left\langle V(u_n),u_n-u\right\rangle{\leqslant}0$$ imply that $$V(u_n)\stackrel{w}{\rightarrow} V(u)\ \mbox{in}\ W^{1,p}(\Omega)^*\ \mbox{and}\ \left\langle V(u_n),u_n\right\rangle\rightarrow\left\langle V(u),u\right\rangle.$$ We have $$\begin{aligned} \label{eq8} &&\left\langle V(u_n),u_n-u\right\rangle\nonumber\\ &=&\left\langle \hat{A}(u_n),u_n-u\right\rangle+\int_{\Omega}|u_n|^{p-2}u_n(u_n-u)dz-\int_{\Omega}\hat{f}(z,u_n,Du_n)(u_n-u)dz-\nonumber\\ &&-\epsilon\int_{\Omega}e(u_n-u)dz. \end{aligned}$$ Note that since $W^{1,p}(\Omega)\hookrightarrow L^p(\Omega)$ compactly, we have $$\label{eq9} u_n\rightarrow u\ \mbox{in}\ L^p(\Omega).$$ Also, we have $$\{|u_n|^{p-2}u_n\}_{n{\geqslant}1}\subseteq L^{p'}(\Omega)\ \mbox{is bounded}.$$ Hence, because of Hölder’s inequality and (\[eq9\]), we have $$\label{eq10} \int_{\Omega}|u_n|^{p-2}u_n(u_n-u)dz\rightarrow 0\ \mbox{as}\ n\rightarrow\infty.$$ Also, hypothesis $H(f)(i)$ implies that $$\{N_{\hat{f}}(u_n)\}_{n{\geqslant}1}\subseteq L^{p'}(\Omega)\ \mbox{is bounded}.$$ Therefore we also have $$\label{eq11} \int_{\Omega}\hat{f}(z,u_n,Du_n)(u_n-u)dz\rightarrow 0\ \mbox{as}\ n\rightarrow\infty.$$ Finally, we clearly have $$\label{eq12} \int_{\Omega}e(u_n-u)dz\rightarrow 0\ \mbox{as}\ n\rightarrow\infty\ (\mbox{see (\ref{eq9})}).$$ Thus, if in (\[eq8\]) we pass to the limit as $n\rightarrow\infty$ and use (\[eq7\]), (\[eq10\]), (\[eq11\]), and (\[eq12\]) we obtain $$\limsup\limits_{n\rightarrow\infty}\left\langle \hat{A}(u_n),u_n-u\right\rangle{\leqslant}0.$$ By the compactness of the trace map, we have $$\begin{aligned} &&\int_{\partial\Omega}\beta(z)|u_n|^{p-2}u_n(u_n-u)d\sigma\rightarrow 0,\\ &\Rightarrow&\limsup\limits_{n\rightarrow\infty}\left\langle A(u_n),u_n-u\right\rangle{\leqslant}0,\\ &\Rightarrow&u_n\rightarrow u\ \mbox{in}\ W^{1,p}(\Omega)\ (\mbox{see Proposition \ref{prop2}}). 
\end{aligned}$$ On account of this convergence, we have $$\begin{aligned} &&\psi_p(u_n)\rightarrow\psi_p(u)\ \mbox{and}\ N_{\hat{f}}(u_n)\rightarrow N_{\hat{f}}(u)\ \mbox{in}\ L^{p'}(\Omega)\ \mbox{as}\ n\rightarrow\infty\ \mbox{(see (\ref{eq6}))},\\ &&\hat{A}(u_n)\rightarrow\hat{A}(u)\ \mbox{in}\ W^{1,p}(\Omega)^*\ \mbox{as}\ n\rightarrow\infty. \end{aligned}$$ So, we can finally assert that $$\begin{aligned} &&V(u_n)\rightarrow V(u)\ \mbox{in}\ W^{1,p}(\Omega)^*\ \mbox{and}\ \left\langle V(u_n),u_n\right\rangle\rightarrow\left\langle V(u),u\right\rangle,\\ &\Rightarrow&V(\cdot)\ \mbox{is pseudomonotone}. \end{aligned}$$ This proves the claim. For all $u\in W^{1,p}(\Omega)$ we have $$\begin{aligned} \label{eq13} &&\left\langle V(u),u\right\rangle\nonumber\\ &=&||Du||^p_p+\int_{\partial\Omega}\beta(z)|u|^pd\sigma+||u^-||^p_p-\int_{\Omega}f(z,u,Du)udz- \epsilon\int_{\Omega}eudz. \end{aligned}$$ Hypotheses $H(f)(i),(ii)$ imply that given $\epsilon>0$, we can find $c_1=c_1(\epsilon)>0$ such that $$\label{eq14} f(z,x,y)x{\leqslant}(\vartheta(z)+\epsilon)x^p+c_1\ \mbox{for almost all}\ z\in\Omega,\ \mbox{all}\ x{\geqslant}0,\ \mbox{and all}\ y\in{\mathbb R}^N.$$ Using (\[eq14\]) in (\[eq13\]), we obtain $$\begin{aligned} &&\left\langle V(u),u\right\rangle\\ &{\geqslant}&||Du^-||^p_p+||u^-||^p_p+||Du^+||^p_p+\int_{\partial\Omega}\beta(z)(u^+)^pd\sigma-\int_{\Omega}\vartheta(z)(u^+)^pdz-\epsilon||u^+||^p\\ &&-c_2||u||-c_1|\Omega|_N\ \mbox{for some}\ c_2>0,\\ &\Rightarrow&\left\langle V(u),u\right\rangle{\geqslant}||u^-||^p+(c_0-\epsilon)||u^+||^p-c_2||u||-c_1|\Omega|_N\ (\mbox{see Lemma \ref{lem3}}). \end{aligned}$$ Choosing $\epsilon\in(0,c_0)$, we see that $$\begin{aligned} \label{eq15} &&\left\langle V(u),u\right\rangle{\geqslant}c_3||u||^p-c_4\ \mbox{for some}\ c_3,c_4>0,\nonumber\\ &\Rightarrow&V(\cdot)\ \mbox{is strongly coercive (recall that $p>1$)}. \end{aligned}$$ Then the claim and (\[eq15\]) permit the use of Proposition \[prop1\]. So, we can find $u_{\epsilon}\in W^{1,p}(\Omega),u_{\epsilon}\neq 0$ (since $e\neq 0$) such that $$\begin{aligned} \label{eq16} &&V(u_{\epsilon})=0\ \mbox{in}\ W^{1,p}(\Omega)^*\nonumber\\ &\Rightarrow&\left\langle A(u_{\epsilon}),h\right\rangle+\int_{\partial\Omega}\beta(z)|u_{\epsilon}|^{p-2}u_{\epsilon}hd\sigma-\int_{\Omega}(u^-_{\epsilon})^{p-1}hdz\nonumber\\ &&=\int_{\Omega}f(z,u_{\epsilon},Du_{\epsilon})hdz+\epsilon\int_{\Omega}ehdz\ \mbox{for all}\ h\in W^{1,p}(\Omega). \end{aligned}$$ In (\[eq16\]) we choose $h=-u^-_{\epsilon}\in W^{1,p}(\Omega)$ and use (\[eq4\]) and hypothesis $H(\beta)$. Then $$\begin{aligned} &&||Du^-_{\epsilon}||^p_p+||u^-_{\epsilon}||^p_p{\leqslant}0\ (\mbox{recall that}\ e\in D_+),\\ &\Rightarrow&u_{\epsilon}{\geqslant}0,\ u_{\epsilon}\neq 0. \end{aligned}$$ Then from (\[eq16\]) we have $$\begin{aligned} \label{eq17} &&\left\langle A(u_{\epsilon}),h\right\rangle+\int_{\partial\Omega}\beta(z)u^{p-1}_{\epsilon}hd\sigma=\int_{\Omega}f(z,u_{\epsilon},Du_{\epsilon})hdz+\epsilon\int_{\Omega}ehdz\ \mbox{for all}\ h\in W^{1,p}(\Omega)\nonumber\\ &\Rightarrow&-\Delta_pu_{\epsilon}(z)=f(z,u_{\epsilon}(z),Du_{\epsilon}(z))+\epsilon e(z)\ \mbox{for almost all}\ z\in\Omega,\nonumber\\ &&\frac{\partial u_{\epsilon}}{\partial n_p}+\beta(z)u^{p-1}_{\epsilon}=0\ \mbox{on}\ \partial\Omega\ (\mbox{see Papageorgiou and R\u adulescu \cite{10}}). 
\end{aligned}$$ By Winkert [@14] and Papageorgiou and Rădulescu [@11], we have $$u_{\epsilon}\in L^{\infty}(\Omega).$$ Applying Theorem 2 of Lieberman [@8], we obtain $$u_{\epsilon}\in C_+\backslash\{0\}.$$ Let $M=||u_{\epsilon}||_{C^1(\overline{\Omega})}$. Hypotheses $H(f)(i),(iii)$ imply that we can find $\hat{\xi}_M>0$ such that $$f(z,x,y)+\hat{\xi}_Mx^{p-1}{\geqslant}0$$ for almost all $z\in\Omega$, all $x\in[0,M]$, and all $|y|{\leqslant}M$. Using this in (\[eq17\]), we have $$\begin{aligned} &&\Delta_pu_{\epsilon}(z){\leqslant}\hat{\xi}_Mu_{\epsilon}(z)^{p-1}\ \mbox{for almost all}\ z\in\Omega,\\ &\Rightarrow&u_{\epsilon}\in D_+ \end{aligned}$$ (by the nonlinear strong maximum principle, see [@4 p. 738]). Next, we show that for some $\mu\in(0,1)$ and all $0<\epsilon{\leqslant}1$, we have $u_{\epsilon}\in C^{1,\mu}(\overline{\Omega})$ and $$\{u_{\epsilon}\}_{0<\epsilon{\leqslant}1}\subseteq C^{1,\mu}(\overline{\Omega})\ \mbox{is bounded}.$$ Using this fact and letting $\epsilon\rightarrow 0^+$, we will generate a positive solution for problem (\[eq1\]). \[prop5\] If hypotheses $H(\beta),H(f)$ hold, then there exist $\mu\in(0,1)$ and $c^*>0$ such that for all $0<\epsilon{\leqslant}1$ we have $$u_{\epsilon}\in C^{1,\mu}(\overline{\Omega})\ \mbox{and}\ ||u_{\epsilon}||_{C^{1,\mu}(\overline{\Omega})}{\leqslant}c^*.$$ Let $\epsilon\in\left(0,1\right]$ and let $u_{\epsilon}\in D_+$ be a solution of (\[eq5\]) produced in Proposition \[prop4\]. We have $$\begin{aligned} \label{eq18} &&\left\langle A(u_{\epsilon}),h\right\rangle+\int_{\partial\Omega}\beta(z)u^{p-1}_{\epsilon}hd\sigma=\int_{\Omega}f(z,u_{\epsilon},Du_{\epsilon})hdz+\epsilon\int_{\Omega}ehdz\ \mbox{for all}\ h\in W^{1,p}(\Omega). \end{aligned}$$ Hypothesis $H(f)(ii)$ implies that given $\epsilon>0$, we can find $M_1=M_1(\epsilon)>0$ such that $$\label{eq19} f(z,x,y)x{\leqslant}(\vartheta(z)+\epsilon)x^p\ \mbox{for almost all}\ z\in\Omega,\ \mbox{all}\ x{\geqslant}M_1,\ \mbox{and all}\ y\in{\mathbb R}^N.$$ Also, hypothesis $H(f)(i)$ implies that $$\begin{aligned} \label{eq20} &&f(z,x,y)x{\leqslant}c_5(1+|y|^{p-1})\ \mbox{for almost all}\ z\in\Omega,\nonumber\\ && \mbox{all}\ 0{\leqslant}x{\leqslant}M_1,\ \mbox{all}\ y\in{\mathbb R}^N,\ \mbox{some}\ c_5>0. \end{aligned}$$ Then from (\[eq19\]), (\[eq20\]) and since $\vartheta\in L^{\infty}(\Omega)$, it follows that $$\begin{aligned} \label{eq21} &&f(z,x,y)x{\leqslant}(\vartheta(z)+\epsilon)x^p+c_6|y|^{p-1}+c_6\ \mbox{for almost all}\ z\in\Omega,\nonumber\\ && \mbox{all}\ x{\geqslant}0,\ \mbox{all}\ y\in{\mathbb R}^N,\ \mbox{and for some}\ c_6>0. \end{aligned}$$ In (\[eq18\]) we choose $h=u_{\epsilon}\in W^{1,p}(\Omega)$. We obtain $$\begin{aligned} &&||Du_{\epsilon}||^p_p+\int_{\partial\Omega}\beta(z)u^p_{\epsilon}d\sigma{\leqslant}\int_{\Omega}[\vartheta(z)+\epsilon]u^p_{\epsilon}dz+c_7[||Du_{\epsilon}||^{p-1}_p+||u_{\epsilon}||+1]\\ &&\mbox{ for some}\ c_7>0,\\ &\Rightarrow&||Du_{\epsilon}||^p_p+\int_{\partial\Omega}\beta(z)u^p_{\epsilon}d\sigma-\int_{\Omega}\vartheta(z)u^p_{\epsilon}dz-\epsilon||u_{\epsilon}||^p{\leqslant}c_8[||u_{\epsilon}||^{p-1}+1]\ \mbox{for some}\ c_8>0,\\ &\Rightarrow&[c_0-\epsilon]||u_{\epsilon}||^p{\leqslant}c_8[||u_{\epsilon}||^{p-1}+1]\ (\mbox{see Lemma \ref{lem3}}). 
\end{aligned}$$ Choosing $\epsilon\in(0,c_0)$, we infer that $$\label{eq22} \{u_{\epsilon}\}_{0<\epsilon{\leqslant}1}\subseteq W^{1,p}(\Omega)\ \mbox{is bounded}.$$ From (\[eq18\]) we have $$\label{eq23} \left\{\begin{array}{ll} -\Delta_pu_{\epsilon}(z)=f(z,u_{\epsilon}(z),Du_{\epsilon}(z))+\epsilon e(z)\ \mbox{for almost all}\ z\in\Omega,&\\ \frac{\partial u_{\epsilon}}{\partial n_p}+\beta(z)u^{p-1}_{\epsilon}=0\ \mbox{on}\ \partial\Omega& \end{array}\right\}$$ (see Papageorgiou and Rădulescu [@10]). From (\[eq22\]), (\[eq23\]) and Winkert [@14] (see also Papageorgiou and Rădulescu [@11]), we see that we can find $c_9>0$ such that $$||u_{\epsilon}||_{\infty}{\leqslant}c_9\ \mbox{for all}\ 0<\epsilon{\leqslant}1.$$ Invoking Theorem 2 of Lieberman [@8], we know that there exist $\mu\in(0,1)$ and $c^*>0$ such that $$u_{\epsilon}\in C^{1,\mu}(\overline{\Omega})\ \mbox{and}\ ||u_{\epsilon}||_{C^{1,\mu}(\overline{\Omega})}{\leqslant}c^*\ \mbox{for all}\ \epsilon\in\left(0,1\right].$$ This completes the proof. Now letting $\epsilon\rightarrow 0^+$, we will produce a positive solution for problem (\[eq1\]). \[th6\] If hypotheses $H(\beta),H(f)$ hold, then problem (\[eq1\]) has a positive solution $\hat{u}\in D_+$. Let $\{\epsilon_n\}_{n{\geqslant}1}\subseteq\left(0,1\right]$ and assume that $\epsilon_n\rightarrow 0^+$. We set $u_n=u_{\epsilon_n}$ for all $n\in{\mathbb N}$. On account of Proposition \[prop5\] and since $C^{1,\mu}(\overline{\Omega})$ is embedded compactly into $C^1(\overline{\Omega})$, by passing to a subsequence if necessary, we may assume that $$\label{eq24} u_n\rightarrow \hat{u}\ \mbox{in}\ C^1(\overline{\Omega})\ \mbox{as}\ n\rightarrow\infty.$$ Suppose that $\hat{u}=0$. Let $M=\sup\limits_{n{\geqslant}1}||u_n||_{C^1(\overline{\Omega})}$. Hypothesis $H(f)(iii)$ implies that given $\epsilon>0$, we can find $\delta=\delta(\epsilon)>0$ such that $$\label{eq25} f(z,x,y){\geqslant}[\eta_M(z)-\epsilon]x^{p-1}\ \mbox{for almost all}\ z\in\Omega,\ \mbox{and all}\ 0{\leqslant}x{\leqslant}\delta,\ \mbox{all}\ |y|{\leqslant}M.$$ Consider the function $$R(\hat{u}_1,u_n)(z)=|D\hat{u}_1(z)|^p-|Du_n(z)|^{p-2}(Du_n(z),D\left(\frac{\hat{u}_1^p}{u_n^{p-1}}\right)(z))_{{\mathbb R}^N}.$$ By the nonlinear Picone identity of Allegretto and Huang [@1], we have $$\begin{aligned} \label{eq26} &0&{\leqslant}\int_{\Omega}R(\hat{u}_1,u_n)dz\nonumber\\ &&=||D\hat{u}_1||^p_p-\int_{\Omega}|Du_n|^{p-2}(Du_n,D\left(\frac{\hat{u}_1^p}{u_n^{p-1}}\right))_{{\mathbb R}^N}dz\nonumber\\ &&=||D\hat{u}_1||^p_p-\int_{\Omega}(-\Delta_pu_n)\left(\frac{\hat{u}_1^p}{u_n^{p-1}}\right)dz+\int_{\partial\Omega}\beta(z)u_n^{p-1}\frac{\hat{u}^p_1}{u_n^{p-1}}d\sigma\nonumber\\ &&\mbox{(by the nonlinear Green's identity, see Gasinski and Papageorgiou \cite[ p. 211]{4})}\nonumber\\ &&=||D\hat{u}_1||^p_p+\int_{\partial\Omega}\beta(z)\hat{u}_1^{p}d\sigma-\int_{\Omega}f(z,u_n,Du_n)\frac{\hat{u}_1^p}{u_n^{p-1}}dz-\epsilon_n\int_{\Omega}e\frac{\hat{u}_1^p}{u_n^{p-1}}dz\nonumber\\ &&(\mbox{see (\ref{eq23}) with}\ u_{\epsilon}\ \mbox{replaced by}\ u_n)\nonumber\\ &&{\leqslant}\hat{\lambda}_1-\int_{\Omega}\eta_M(z)u_n^{p-1}\frac{\hat{u}_1^p}{u_n^{p-1}}dz+\epsilon\ \mbox{for all}\ n{\geqslant}n_0\nonumber\\ &&(\mbox{see (\ref{eq25}), (\ref{eq24}) and recall that}\ \hat{u}=0\ \mbox{and}\ ||\hat{u}_1||_p=1)\nonumber\\ &&=\hat{\lambda}_1-\int_{\Omega}\eta_M(z)\hat{u}_1^pdz+\epsilon\nonumber\\ &&=\int_{\Omega}[\hat{\lambda}_1-\eta_M(z)]\hat{u}_1^pdz+\epsilon\ \mbox{for all}\ n{\geqslant}n_0\ (\mbox{recall that}\ ||\hat{u}_1||_p=1). 
\end{aligned}$$ Let $\xi^*=\int_{\Omega}[\eta_M(z)-\hat{\lambda}_1]\hat{u}_1^pdz$. Since $\hat{u}_1\in D_+$, hypothesis $H(f)(iii)$ implies that $$\xi^*>0.$$ Then from (\[eq26\]) and by choosing $\epsilon\in(0,\xi^*)$ we have $$0{\leqslant}\int_{\Omega}R(\hat{u}_1,u_n)dz<0\ \mbox{for all}\ n{\geqslant}n_0,$$ a contradiction. So, $\hat{u}\neq 0$. Passing to the limit as $n\rightarrow\infty$ in (\[eq18\]) (with $\epsilon=\epsilon_n$ and $u_{\epsilon}=u_n$) and using (\[eq24\]), we see that $\hat{u}$ solves problem (\[eq1\]). Therefore, $\hat{u}{\geqslant}0$ is a positive solution of (\[eq1\]) and, as before, via the nonlinear strong maximum principle, we have $\hat{u}\in D_+$. This research was supported in part by the Slovenian Research Agency grants P1-0292, J1-7025, J1-8131, and N1-0064. V.D. Rădulescu acknowledges the support through a grant of the Romanian Ministry of Research and Innovation, CNCS–UEFISCDI, project number PN-III-P4-ID-PCE-2016-0130, within PNCDI III. W. Allegretto, Y.X. Huang, A Picone’s identity for the $p$-Laplacian and applications, [*Nonlinear Anal.*]{} [**32**]{} (1998), 819-830. F. Faraci, D. Motreanu, D. Puglisi, Positive solutions of quasilinear elliptic equations with dependence on the gradient, [*Calc. Var.*]{} [**54**]{} (2015), 525-538. D. de Figueiredo, M. Girardi, M. Matzeu, Semilinear elliptic equations with dependence on the gradient via mountain-pass techniques, [*Diff. Integral Equations*]{} [**17**]{} (2004), 119-126. L. Gasinski, N.S. Papageorgiou, [*Nonlinear Analysis*]{}, Chapman & Hall/CRC, Boca Raton, FL, 2006. L. Gasinski, N.S. Papageorgiou, Positive solutions for nonlinear elliptic problems with dependence on the gradient, [*J. Differential Equations*]{} [**263**]{} (2017), 1451-1476. M. Girardi, M. Matzeu, Positive and negative solutions of a quasilinear elliptic equation by a mountain pass method and truncature techniques, [*Nonlinear Anal.*]{} [**59**]{} (2004), 199-210. N.B. Huy, B.T. Quan, N.H. Khanh, Existence and multiplicity results for generalized logistic equations, [*Nonlinear Anal.*]{} [**144**]{} (2016), 77-92. G. Lieberman, Boundary regularity for solutions of degenerate elliptic equations, [*Nonlinear Anal.*]{} [**12**]{} (1988), 1203-1219. D. Mugnai, N.S. Papageorgiou, Resonant nonlinear Neumann problems with indefinite weight, [*Ann. Sc. Norm. Super. Pisa Cl. Sci.*]{} (5) [**11**]{} (2012), no. 4, 729-788. N.S. Papageorgiou, V.D. Rădulescu, Multiple solutions with precise sign information for nonlinear parametric Robin problems, [*J. Differential Equations*]{} [**256**]{} (2014), 393-430. N.S. Papageorgiou, V.D. Rădulescu, Nonlinear nonhomogeneous Robin problems with superlinear reaction term, [*Adv. Nonlinear Studies*]{} [**16**]{} (2016), 737-764. N.S. Papageorgiou, V.D. Rădulescu, D.D. Repovš, Nonlinear elliptic inclusions with unilateral constraint and dependence on the gradient, [*Appl. Math. Optim.*]{}, to appear (DOI: 10.1007/s00245-016-9392-y). D. Ruiz, A priori estimates and existence of positive solutions for strongly nonlinear problems, [*J. Differential Equations*]{} [**199**]{} (2004), 96-114. P. Winkert, $L^{\infty}$-estimates for nonlinear elliptic Neumann boundary value problems, [*Nonlin. Diff. Equations Appl. (NoDEA)*]{} [**17**]{} (2010), 289-302.
--- abstract: 'We define a new class of binary matrices by maximizing the peak-sidelobe distances in the aperiodic autocorrelations. These matrices can be used as robust position marks for in-plane spatial alignment. The optimal square matrices of dimensions up to 7 by 7 and optimal diagonally-symmetric matrices of 8 by 8 and 9 by 9 were found by exhaustive searches.' author: - 'Scott A. Skirlo' - Ling Lu - Marin Soljačić title: Binary matrices of optimal autocorrelations as alignment marks --- Introduction ============ Binary sequences [@Barker1953; @Neuman1971] and matrices with good autocorrelation properties have key applications in digital communications (radar, sonar, CDMA and cryptography) [@Golomb2004] and in coded aperture imaging [@Gottesman1989]. Several works have conducted exhaustive searches for the optimal matrices of these applications  [@Alquaddoomi1989; @Costas1984; @Golomb1982rectangle; @Mow2005]. A less developed application of binary matrices with good aperiodic autocorrelations is two-dimensional (2D) translational spatial alignment. For example, it has been shown in electron-beam lithography [@Boegli1990] that position marks based on such binary matrices are immune to noise and manufacturing errors. However, the symbols for these applications have not been optimized [@Anderson2004; @Boegli1990; @ling2010]. In this paper, we define and report the optimal binary matrices as alignment marks. Section \[sec:preliminaries\] sets up the problem. Section \[sec:criteria\] defines the criteria for the optimal matrices. Section \[sec:relatedwork\] discusses previous work related to this problem. Section \[sec:bound\] works out the useful bounds. Section \[sec:search\] explains the exhaustive computer searches and lists the results. Section \[sec:observations\] discusses several key observations of the optimal marks. Section \[sec:acc\] compares the performance of optimal and non-optimal marks through simulations. Section \[sec:applications\] discusses the potential applications of the matrices found. Section \[sec:conclusions\] concludes the paper. Preliminaries {#sec:preliminaries} ============= An alignment mark is made by creating a surface pattern different from the background so that the pattern information transforms into a two-level signal when a digital image is taken. This image can be represented as a binary matrix where 1 represents the (black) pattern pixels and 0 represents the (white) background pixels or vice versa. The 2D aperiodic autocorrelation (A) of an $M$ by $N$ binary matrix with elements $R_{i,j}$ is defined as $$A(\tau_{1},\tau_{2})=\sum\limits_{i=1}^{M}\sum\limits_{j=1}^{N}R_{i,j}R_{i+\tau_{1},j+\tau_{2}}$$ where $\tau_{1},\tau_{2}$ are integer shifts. The peak value is $A(0,0)$ while all other values are sidelobes. $A$ is an inversion-symmetric \[$A(\tau_{1},\tau_{2})=A(-\tau_{1},-\tau_{2})$\] $(2M-1)$ by $(2N-1)$ matrix. The crosscorrelation between $R$ and the data image matrix $D_{i,j}$ is expressed as $$C(\tau_{1},\tau_{2})=\sum_{i=1}^{M}\sum_{j=1}^{N}R_{i,j}D_{i+\tau_{1},j+\tau_{2}} .\\$$ When the data $D$ is a noisy version of the reference $R$, the peak value of the crosscorrelation determines the most probable position of the mark. It is important to note that all the matrices are implicitly padded with 0s for all the matrix elements of indices exceeding their matrix dimensions. A linear transformation of the data matrix results in a linear transformation of the correlation as long as the reference matrix is kept the same. 
This can be seen from $$\begin{aligned} D_{i,j}^{\prime} &=cD_{i,j}+d\\ C^{\prime}(\tau_{1},\tau_{2}) &= cC(\tau_{1},\tau_{2})+d\sum^{M}_{i=1}\sum^{N}_{j=1}R_{i,j}\end{aligned}$$ where the second term of $C^{\prime}$ is a constant. The data matrix can thus be arbitrarily scaled ($c\neq 0$) while keeping the correlation equivalent and the alignment results identical. ![We illustrate an autocorrelation function $A(\tau)$, whose peak value is $p$, highest sidelobe value is $s$, and whose peak-sidelobe distances are $d_{i}$.[]{data-label="fig:illustrate"}](AutoCorr1.eps){width="7cm"} Other works on binary matrices of 1s and 0s with aperiodic autocorrelations have used different criteria selected for applications in radar and sonar. In the Costas-array problem [@Costas1984], only one black pixel is placed per column and row and the maximum sidelobe is fixed to one. In the Golomb-Rectangle problem [@Golomb1982rectangle], the number of black pixels is maximized with the restriction that the sidelobe still be fixed to one [@Robinson1997]. However, our criteria do bear some resemblance to those in some of the works on one dimensional -1 and 1 (three levels) sequences [@Neuman1971]. Two Upper bounds of $d_{1,\textnormal{max}}(p)$, $d_{1,\textnormal{max}}^{\textnormal{upper,I}}(p)$ and $d_{1,\textnormal{max}}^{\textnormal{upper,II}}(p)$ {#sec:bound} =========================================================================================================================================================== ![Lowerbounds of $s_{\textnormal{min}}(p)$, $s_{\textnormal{min}}^{\textnormal{lower,I}}(p)$ and $s_{\textnormal{min}}^{\textnormal{lower,II}}(p)$. $p$ is the autocorrelation peak. The three matrices on top illustrate the methods of filling black pixels for regions I, II and III for the matrix construction of $s_{\textnormal{min}}^{\textnormal{lower,I}}(p)$. The grey pixels show spots to be filled in that region, while the black pixels are spots that have been filled in previous regions. []{data-label="fig:bound"}](1_boundnew.eps){width="7cm"} For a binary matrix $R$, the peak value $p$ of its autocorrelation $A$ equals the number of ones in the matrix ($R$). The largest $d_{1}$ for all matrices with a given $p$, of a fixed dimension, is $d_{1,\textnormal{max}}(p)$. $d_{1,\textnormal{max}}(p)=p-s_{\textnormal{min}}(p)$, where $s_{\textnormal{min}}(p)$ is the minimum highest sidelobe value as a function of $p$. In this section, we construct an upperbound of $d_{1,\textnormal{max}}(p)$, $d_{1,\textnormal{max}}^{\textnormal{upper,I}}(p)$, by maximizing $p-A(\pm 1,0)$. The $A(\pm 1,0)$ computed here forms a lower bound on $s_{\textnormal{min}}(p)$, $s_{\textnormal{min}}^{\textnormal{lower,I}}(p)$. This construction is illustrated in Fig. \[fig:bound\], where we assume the matrix $R$ used to construct our bound is of dimension $M\times N$ with $M\le{}N$. We find: $$d_{1,\textnormal{max}}^{\textnormal{upper,I}}(p) = \left\{ \begin{array}{llc} p, &p \in [0,N_{1}] &\textrm{I} \\ N_{1}, &p \in [N_{1},N_{2}] &\textrm{II} \\ M(N+1)-p, &p \in [N_{2},MN] &\textrm{III} \end{array} \right.$$ where $N_{1}=\frac{MN}{2}, N_{2}=\frac{MN}{2}+M$ when $MN$ is even and $N_{1}=\frac{MN+1}{2}, N_{2}=\frac{MN+1}{2}+M-1$ when $MN$ is odd. This upperbound can be derived by starting out with a matrix $R_{i,j}=0$ for all $(i,j)$ and ‘filling in’ with ones in a particular pattern. In region I, ones can be placed anywhere in $R_{i,j}$ where $i+j$ is odd. When $p=N_{1}$, we have formed a “checkerboard pattern”.
In region II, we place ones wherever $i+j$ is even for $i=1$ or $i=N$. In region III, the remaining locations without ones are filled. The autocorrelation function $A(\tau_1,\tau_2)$ equals the number of black squares that are connected by a displacement vector $(\tau_1,\tau_2)$. We can use this property to construct a second lower bound $s_{\textnormal{min}}^{\textnormal{lower,II}}(p)$. This approach is similar to the method used in Ref. [@Robinson1997]. Since the autocorrelation is invariant under inversion, there are $((2M-1)(2N-1)-1)/2=2NM-N-M$ unique non-zero displacements; a matrix of $p$ ones fills $p(p-1)/2$ of them. As $p$ increases, there are repeated displacements because $p(p-1)/2$ quickly exceeds $2NM-N-M$. We can find a lowerbound $s_{\textnormal{min}}^{\textnormal{lower,II}}(p)$ by assuming that the displacements added to the autocorrelation function distribute uniformly, that is, $|A(\tau_{1},\tau_{2})-A(\tau'_{1},\tau'_{2})|\le{}1$ for nonzero displacements. This gives $s_{\textnormal{min}}^{\textnormal{lower,II}}(p)=ceil[\frac{p(p-1)}{4NM-2N-2M}]$, where $ceil[x]$ is the smallest integer greater than or equal to $x$. Consequently, $d_{1,\textnormal{max}}^{\textnormal{upper,II}}=p-ceil[\frac{p(p-1)}{4NM-2N-2M}]$. As illustrated in Fig. \[fig:bound\], $s_{\textnormal{min}}^{\textnormal{lower,II}}(p)$ is a better bound for small $p$, while $s_{\textnormal{min}}^{\textnormal{lower,I}}(p)$ is a better bound for large $p$. Exhaustive computer searches for the optimal square matrices {#sec:search} ============================================================ ![image](2_2-7new.eps){width="15cm"} ![image](3_symm89new.eps){width="17cm"} Physical in-plane alignment usually requires equal alignment accuracies in both directions; this calls for square matrices ($M=N$). We applied exhaustive searches to find the square matrices with the maximum $d_{1}$ \[$=max(d_{1,\textnormal{max}}(p))$\]. The resulting matrices were ranked using the criteria in Sec. \[sec:criteria\] to obtain the optimal matrices. Backtrack conditions based on symmetries and sidelobes have been found useful in exhaustive searches for binary matrices [@Alquaddoomi1989; @Robinson1997; @Shearer2004]. Matrices related by symmetry operations are considered the same matrix. The symmetry operations for square matrices are horizontal and vertical flips and rotations by multiples of 90 degrees. For this study, a backtrack condition based on eliminating redundant matrices related by horizontal flips was implemented. Backtrack conditions based on sidelobe levels are useful if the sidelobes are being minimized. However, we are maximizing the peak-sidelobe distance $d_{1}$, so the sidelobe backtrack condition was not used. The search algorithm we implemented works by exhaustively generating matrices row by row. The algorithm continues generating rows until a backtrack condition occurs, or a matrix is completely specified. The matrix is stored for later ranking if its $d_{1}$ is greater than or equal to the existing maximum $d_{1}$. Several techniques were implemented to speed up the algorithm. Each matrix row was represented as a binary word so that fast bit-wise operations could be used. In addition, lookup tables were created to calculate the horizontal flips and correlations of rows. For our binary matrices, the maximum sidelobes were typically located near the autocorrelation peak.
Because of this, the sidelobe values were checked in a spiral pattern around the peak to quickly determine if a matrix had a $d_{1}$ less than the stored maximum. The search results for square matrices of size up to 7 by 7 are presented in Fig. \[fig:2-7\]. Fig. \[fig:2-7\]a) gives the optimal matrices for 2 by 2, 3 by 3 and 4 by 4. In Fig. \[fig:2-7\]b), c) and d) we plot, in red, $s_{\textnormal{min}}(p)$ for matrices of sizes 5 by 5, 6 by 6 and 7 by 7. This red curve is indeed bounded from below by the grey $s_{\textnormal{min}}^{\textnormal{lower,I}}(p)$ and $s_{\textnormal{min}}^{\textnormal{lower,II}}(p)$ constructed in Sec. \[sec:bound\]. The number of the matrices having the maximum $d_{1}$ is plotted in blue. This curve peaks around the intersection of the $d_{1,\textnormal{max}}^{\textnormal{upper,I}}$ and $d_{1,\textnormal{max}}^{\textnormal{upper,II}}$ upperbounds. The circle on the blue line specifies the location of the optimal matrix ranked first by the criteria in Sec. \[sec:criteria\]. The optimal matrices and their autocorrelations are shown as insets. The two numbers on the y-axes of the autocorrelation plots are the $p$ and $s$ values of the optimal matrices. The matrices ranked second and third and their distance spectra are listed in \[App:spectra\]. The runtime for 7 by 7 matrices was 3 hours on 1000 Intel EM64T nodes with 2.6 GHz clock speed. Exhaustive searches of square matrices of size 8 by 8 are not accessible to us, since the size of the search space increases exponentially with the number of matrix elements as $2^{N^2}$. Observations on the optimal square matrices {#sec:observations} =========================================== The first interesting observation is that most top-ranked matrices in Fig. \[fig:2-7\] and \[App:spectra\] are *diagonally symmetric*. Because of this, if we restrict our searches to symmetric matrices of larger sizes, we still expect to find top-ranked matrices [@Shearer2004]. The search results for diagonally-symmetric matrices of 8 by 8 and 9 by 9 are presented in Fig. \[fig:symm89\]. The second observation for our optimal matrices, shown in Fig. \[fig:2-7\], is that $d_{1}$ always occurred in the first four neighbors of the autocorrelation peak \[$A(0,\pm{}1), A(\pm{}1,0)$\]. The third observation is that all of the optimal matrices shown in Figs. \[fig:2-7\] and \[fig:symm89\] are connected through their black pixels (1s) and all but the 3 by 3 are connected through their white pixels. A pixel is connected if one or more of its eight neighboring pixels has the same value. *Connectedness* is a preferred topological property for alignment marks; it makes the marks self-supportive, suspendible and robust against disturbances. Alignment accuracies of the optimal matrices {#sec:acc} ============================================ ![The “horizontal” alignment deviation is shown for the four alignment marks under various signal-to-noise ratios. The vertical deviation is almost identical. The color of each plot line borders the corresponding marker. All markers have been expanded to 35 by 35 pixels to illustrate the idea of pixel expansion. The top, black line, on the right edge, corresponds to the 7 by 7 cross, while the second to top, grey line corresponds to the 5 by 5 cross. The second to bottom, blue line corresponds to the optimal 5 by 5 marker, while the bottom, red line corresponds to the optimal 7 by 7 matrix.
[]{data-label="fig:alignment"}](alignment.eps){width="7cm"} We study the performance of the optimal matrices by comparing the optimal alignment marks to the cross patterns. The matrices were embedded in a white “0" background with a size 5 times that of the symbol. Uniform Gaussian noise was added to all pixels to simulate a noisey image. This was correlated with its noise-free version. The alignment accuracy was determined by the deviation of the correlation peak from the center for 10000 trials. In Fig. \[fig:alignment\], we plot the alignment deviation as a function of signal-to-noise ratio for two optimal marks from Fig. \[fig:2-7\] and the crosses. The y-axis is the horizontal alignment deviation in pixels while the x-axis is the signal-to-noise ratio in decibels ($=20log{\frac{S}{N}}$). At a signal-to-noise ratio of 0 dB, the markers are barely discernible by eye. All markers were expanded to the same area, of 70 by 70 total pixels, for direct comparison. Applying the criteria from Section \[sec:criteria\], using the expanded 70 by 70 symbols, the 7 by 7 mark is ranked first, followed by the 5 by 5 mark, and then the crosses. The quality of the optimal alignment marks should improve with increasing size, which provides a motivation to continue the search for larger optimal matrices. Applications {#sec:applications} ============ Correlation detection from a digital image is a simple, efficient and reliable way to determine the position of an alignment mark. In practice, the crosscorrelations can be calculated by fast-Fourier-transforms. The peak of the correlation can further be interpolated to obtain an alignment accuracy better than the distance represented by a single pixel of the image [@Anderson2004]. The matrices reported in this paper are the desirable patterns to use in this context; they can replace the cross-type patterns widely in use today as position markers. Alignment using these matrices is very robust against noise in the imaging system and partial damage of the mark, providing the strongest peak signal for accurate sub-pixel interpolation. The potential applications of the matrices found in this paper include, but are not limited to, electron-beam lithography [@Boegli1990], planar alignment in manufacturing [@Sakou1989], synchronization [@Scholtz1980] and digital watermarking [@Tirkel1998]. Acknowledgments =============== We would like to thank John D. O’Brien, Robert A. Scholtz, Yuan Shen, Moe Win, Ramesh Raskar and Steven G. Johnson for useful discussions. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number OCI-1053575. S.S. was supported by the MIT Undergraduate Research Opportunities program (UROP). This work was supported in part by the U.S.A.R.O. through the ISN, under Contract No. W911NF-07-D-0004. L.L. was supported in part by the MRSEC program of the NSF under Award No. DMR-0819762. L.L. and M.S. were partially supported by the MIT S3TEC Energy Research Frontier Center of the Department of Energy under Grant No. DE-SC0001299. [00]{} S. Alquaddoomi and R. Scholtz, “On the nonexistence of barker arrays and related matters,” *Information Theory, IEEE Transactions on*, vol. 35, no. 5, pp. 1048–1057, Sep 1989. E.H. Anderson, D. Ha, and J.A. Liddle, “Sub-pixel alignment for direct-write electron beam lithography,” *Microelectronic Engineering*, vol. 73-74, p. 74, 2004. R. H. Barker, *Group synchronizing of binary digital systems*, W. J. 
Communication Theory, Ed. Academic Press, New York, 1953. V. Boegli and D. P. Kern, “Automatic mark detection in electron beam nanolithography using digital image processing and correlation,” *Journal of Vacuum Science Technology B: Microelectronics and Nanometer Structures*, vol. 8, no. 6, pp. 1994–2001, Nov 1990. J. Costas, “A study of a class of detection waveforms having nearly ideal range-Doppler ambiguity properties,” *Proceedings of the IEEE*, vol. 72, no. 8, pp. 996–1009, Aug. 1984. S. Golomb and H. Taylor, “Two-dimensional synchronization patterns for minimum ambiguity,” *Information Theory, IEEE Transactions on*, vol. 28, no. 4, pp. 600–604, Jul 1982. S. W. Golomb and G. Gong, *Signal Design for Good Correlation: For Wireless Communication, Cryptography, and Radar*. Cambridge University Press, 2004. S. R. Gottesman and E. E. Fenimore, “New family of binary arrays for coded aperture imaging,” *Appl. Opt.*, vol. 28, no. 20, pp. 4344–4352, Oct 1989. L. Lu, “Photonic Crystal Nanocavity Lasers for Integration”, Ph.D. thesis, Appendix B, University of Southern California, 2010. F. Neuman and L. Hofman, “New pulse sequences with desirable correlation properties,” *Aerospace and Electronic Systems, IEEE Transactions on*, vol. AES7, no. 3, p. 570, 1971. G. Ramakrishna and W. Mow, “A new search for optimal binary arrays with minimum peak sidelobe levels,” *Sequences and Their Applications - SETA 2004*, vol. 3486, pp. 71–93, 2005. J. Robinson, “Golomb rectangles as folded rulers,” *Information Theory, IEEE Transactions on*, vol. 43, no. 1, pp. 290–293, Jan 1997. H. Sakou, T. Miyatake, S. Kashioka, and M. Ejiri, “A position recognition algorithm for semiconductor alignment based on structural pattern matching,” *Acoustics, Speech and Signal Processing, IEEE Transactions on*, vol. 37, no. 12, pp. 2148–2157, Dec 1989. R. Scholtz, “Frame synchronization techniques,” *Communications, IEEE Transactions on*, vol. 28, no. 8, pp. 1204–1213, Aug 1980. J. Shearer, “Symmetric Golomb squares,” *Information Theory, IEEE Transactions on*, vol. 50, no. 8, pp. 1846–1847, Aug. 2004. A. Tirkel, C. Osborne, and T. Hall, “Image and watermark registration,” *Signal Processing*, vol. 66, no. 3, pp. 373–383, 1998. Distance spectra {#App:spectra} ================ In order to provide additional useful matrices and to illustrate our ranking criteria, we tabulated, in Table \[tab:spectra\], part of the peak-sidelobe distance spectra for the top-three ranked square matrices from the exhaustive search results. The values of the first four distances ($d_1,d_2,d_3,d_4$) and the numbers ($n_1,n_2,n_3,n_4$) of the corresponding sidelobes are listed. Those top-three binary square matrices are shown in Fig. \[fig:2-7\] and in Fig. \[fig:2nd3rdMatrices\].
  $\textrm{N}\times\textrm{N}$   $d_1$   $d_2$   $d_3$   $d_4$
  ------------------------------ ------- ------- ------- -------
  $\textrm{Ranking}$             $n_1$   $n_2$   $n_3$   $n_4$
  $3\times 3$                    $4$     $5$     $6$     $7$
  First                          4       4       12      4
  Second                         4       12      6       2
  Third                          6       6       12      0
  $4\times 4$                    $7$     $8$     $9$     $10$
  First                          8       8       22      10
  Second                         10      2       10      18
  Third                          12      0       8       20
  $5\times 5$                    $10$    $11$    $12$    $13$
  First                          4       6       10      6
  Second                         4       12      8       6
  Third                          4       12      16      12
  $6\times 6$                    $14$    $15$    $16$    $17$
  First                          4       16      4       2
  Second                         6       6       12      4
  Third                          6       8       12      6
  $7\times 7$                    $19$    $20$    $21$    $22$
  First                          14      8       6       0
  Second                         16      4       4       4
  Third                          16      4       8       4

  : Peak-sidelobe distance spectra of the top-three ranked square matrices from the exhaustive search results.[]{data-label="tab:spectra"}

![Matrices ranked second and third. The first-ranked optimal matrices are shown in Fig. \[fig:2-7\].[]{data-label="fig:2nd3rdMatrices"}](appendix2a.eps){width="30.00000%"}
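The quantities tabulated above follow directly from the definitions of $A(\tau_1,\tau_2)$, $p$, $s$ and $d_i$ given earlier. The sketch below is a minimal Python/NumPy illustration of how they can be computed for any 0/1 matrix; it is not the search code used in this work, and the example matrix is a hypothetical mark chosen only to show the output format (it is not one of the optimal matrices reported here).

```python
import numpy as np
from scipy.signal import correlate

def distance_spectrum(R, num=4):
    """Aperiodic autocorrelation of a 0/1 matrix R, its peak p, highest
    sidelobe s, and the leading peak-sidelobe distances d_i together with
    the number n_i of sidelobes at each distance."""
    R = np.asarray(R, dtype=int)
    A = correlate(R, R, mode="full", method="direct")  # (2M-1) x (2N-1) array
    p = int(R.sum())                                   # peak value A(0, 0)
    mask = np.ones(A.shape, dtype=bool)
    mask[R.shape[0] - 1, R.shape[1] - 1] = False       # exclude the central peak
    sidelobes = A[mask]
    s = int(sidelobes.max())
    spectrum = [(p - s_i, int(np.count_nonzero(sidelobes == s_i)))
                for s_i in range(s, max(s - num, -1), -1)]  # (d_i, n_i) pairs
    return A, p, s, spectrum

# Hypothetical 3 by 3 mark, for illustration only.
R = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 0]]
A, p, s, spectrum = distance_spectrum(R)
print(p, s, spectrum)   # 5 2 [(3, 4), (4, 12), (5, 8)]
```

As in the table, the sidelobe counts $n_i$ are taken over all $(2N-1)^2-1$ sidelobe positions of the full autocorrelation.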
--- author: - Felix Rühle - Holger Stark bibliography: - 'shorttitles.bib' - 'lit.bib' title: | Emergent collective dynamics of\ bottom-heavy squirmers under gravity --- Introduction {#sec:intro} ============ Active entities consume energy locally in order to self-propel without an external force. When these non-equilibrium objects move collectively, fascinating patterns emerge both on the macroscopic and microscopic scale [@Ramaswamy2010; @RomanczukSchimansky-Geier2012; @ElgetiGompper2015; @BechingerVolpe2016; @ZoettlStark2016], such as flocks of birds [@CavagnaViale2010], motility-induced phase separation [@FilyMarchetti2012; @ButtinoniSpeck2013; @RednerBaskaran2013; @CatesTailleur2015], swarming [@ThutupalliHerminghaus2011; @CopelandWeibel2009; @OyamaYamamoto2016; @JeckelDrescher2019], and active turbulence [@WensinkYeomans2012; @DunkelGoldstein2013]. Of particular interest are microswimmers, *i.e.*, organisms or synthetic particles that self-propel in a fluid at low Reynolds numbers [@Purcell1977; @MarchettiSimha2013]. At higher densities, hydrodynamic interactions between the microswimmers become important and influence their collective dynamics [@IshikawaLocseiPedley2008; @IshikawaPedley2008; @MolinaYamamoto2013; @BlaschkeStark2016; @TheersGompper2018]. Studying microswimmers under gravity is important because often they are not neutrally buoyant [@PalacciBocquet2010; @NashCates2010; @Stark2016; @ShenLintuvuori2019]. In such a setting non-equilibrium sedimentation has been observed [@PalacciBocquet2010; @JungValles2014; @GinotCottin-Bizonne2015] accompanied by polar order along the vertical [@EnculescuStark2011] and convection [@KuhrStark2017]. Numerical hydrodynamic studies also discovered two-dimensional Wigner fluids and swarming under strong gravity [@KuhrStark2019], as well as fluid pumps in a parabolic potential [@HennesStark2014]. In experimental systems swimming under the influence of external fields generates intriguing and surprising phenomena such as the formation of thin layers of motile phytoplankton in coastal regions [@DurhamStocker2009], algae in bound dancing states [@DrescherGoldstein2009], and hovering rafts of active emulsion droplets [@KruegerMaass2016]. Many microswimmers also perform *gravitaxis*, which is the ability to align (anti-)parallel to the direction of gravity. The gravitational torque to achieve this alignment can either result from hydrodynamic drag of a microswimmer with asymmetric morphology [@Roberts2006; @tenHagenBechinger2014; @SenguptaStocker2017] or from bottom heaviness, *i.e.*, when the center of mass is offset relative to the geometrical center [@Kessler1985; @DurhamStocker2009; @CampbellEbbens2013; @WolffStark2013]. Gravitactic swimmers can induce an overturning instability when accumulating with higher density at the top boundary (reminiscent of the Rayleigh-Taylor instability), which then initiates various patterns of bioconvection [@PlessetWinet1974; @ChildressSpiegel1975; @NewellWhitehead1969; @PedleyKessler1992]. However, also *gyrotaxis* clearly plays an important role in such settings [@PedleyKessler1992; @HillKessler1989; @BeesHill1997; @GhoraiHill1999; @CzirokKessler2000; @DesaiArdekani2017]. There, the collective dynamics of microswimmers depends on the combined action of gravity and hydrodynamic flow [@Kessler1985; @PedleyKessler1992]. The involvement of physiological aspects in biotic pattern formation has also been discussed [@MachemerTakahashi1991; @OoyaBaba1992; @Roberts2010]. 
In theory, microswimmer systems under gravity have been investigated in the past using the versatile spherical squirmer model swimmer [@ShenLintuvuori2019; @KuhrStark2017; @KuhrStark2019; @RuehleStark2018; @BrumleyPedley2019; @FaddaYamamoto2020]. Here, we simulate around 900 bottom-heavy squirmers under gravity with full hydrodynamics using the method of multi-particle collision dynamics (MPCD) [@MalevanetsKapral1999; @NoguchiGompper2007]. Varying the ratio of swimming to bulk sedimentation velocity and the gravitational torque due to bottom heaviness, we determine state diagrams for neutral as well as strong pusher and puller squirmers. While for low swimming velocity and torque conventional sedimentation is recovered, the sedimentation profile becomes inverted when their values are increased. For neutral squirmers we discover a rich phenomenology in between both states, including a state where plumes consisting of collectively sinking squirmers feed convective rolls at the bottom of the system, and dense clusters which spawn single squirmers. These two states can also occur transiently starting from a uniform squirmer distribution and then disappear in the long-time limit. For strong pushers and pullers only weak plume formation is observed. We thoroughly characterize all states by different quantities. In the following, in sect. \[sec:methods\] we introduce the squirmer model swimmer and the simulation method of multi-particle collision dynamics. Then, in sect. \[sec:results\] we present all our results. We start with the state diagram of neutral squirmers followed by a detailed characterization of the different states and also look at strong pushers and pullers. Finally, we close with a summary and conclusions. Squirmer model swimmer and simulation method {#sec:methods} ============================================ Spherical squirmer {#sec:squirmer} ------------------ Swimming on the micron scale is dominated by friction [@Taylor1951; @Purcell1977]. Hence, hydrodynamics is captured by the Stokes equations: $$\begin{aligned} \nabla \cdot \mathbf{u} & = & 0 \\ \eta \nabla^2 \mathbf{u} & = & \nabla p \, ,\end{aligned}$$ where $\mathbf{u}$ and $p$ are the fluid velocity and pressure fields, respectively. Biological microswimmers often propel themselves by collective beating patterns of cilia, which create flow fields along their cell surfaces [@ElgetiGompper2015; @LaugaPowers2009]. Artificial microswimmers also exist that either use phoretic self-propulsion mechanisms, such as diffusiophoresis and thermophoresis in the case of active colloids [@ElgetiGompper2015; @PalacciChaikin2013; @ButtinoniBechinger2012], or Marangoni stresses in the case of active emulsion droplets [@ThutupalliHerminghaus2011] in order to generate such surface flow fields. A simple and effective approximation to all these swimming mechanisms is offered by the spherical squirmer model [@Lighthill1952; @Blake1971], where an axisymmetric tangential flow field on the surface is prescribed: $$\label{eq:surface_field} \mathbf{u}(\mathbf{r})\vert_{r=R} = \sum_{n=1}^{\infty}B_n\dfrac{2 P_n^\prime(\mathbf{e}\cdot\mathbf{\hat{r}})}{n(n+1)}\left[ -\mathbf{e} + (\mathbf{e}\cdot \mathbf{\hat{r}})\mathbf{\hat{r}}\right]\, .$$ Here, $\mathbf{e}$ is the swimmer orientation vector, $R$ is the swimmer radius, $P_n$ is the $n$th Legendre polynomial, and $P_n^\prime$ denotes its first derivative. Typically, the expansion is truncated after the second term, leaving the two relevant modes $B_1$ and $B_2$.
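Since $P_1^\prime=1$ and $P_2^\prime(x)=3x$, the truncated slip field of eq. (\[eq:surface\_field\]) reduces to $(B_1+B_2\,\mathbf{e}\cdot\mathbf{\hat{r}})\left[-\mathbf{e}+(\mathbf{e}\cdot\mathbf{\hat{r}})\mathbf{\hat{r}}\right]$. The short Python sketch below evaluates this truncated slip velocity at a surface point; it is our own illustration (the function name and test values are not part of the simulation code used in this work).

```python
import numpy as np

def surface_slip(e, r_hat, B1, B2):
    """Squirmer slip field truncated after n = 2:
    u_s = (B1 + B2 * (e . r_hat)) * (-e + (e . r_hat) * r_hat),
    with e the orientation and r_hat the outward unit normal."""
    e = np.asarray(e, dtype=float)
    r_hat = np.asarray(r_hat, dtype=float)
    er = float(np.dot(e, r_hat))
    return (B1 + B2 * er) * (-e + er * r_hat)

# Sanity check: the slip field is purely tangential, u_s . r_hat = 0.
e = np.array([0.0, 0.0, 1.0])
theta = 0.7
r_hat = np.array([np.sin(theta), 0.0, np.cos(theta)])
u_s = surface_slip(e, r_hat, B1=0.1, B2=0.05)
assert abs(np.dot(u_s, r_hat)) < 1e-12
print(u_s)
```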
Then, the flow field generated by the surface field of eq. (\[eq:surface\_field\]) in the surrounding fluid is [@Blake1971; @PakLauga2014] $$\begin{aligned} \label{eq:squirmer_field} \begin{split} \mathbf{u}(\mathbf{r}) = & \frac{B_1}{2}\Biggl[\left(-\frac{R}{r}\left[\mathbf{e}+(\mathbf{e}\cdot\mathbf{\hat{r}})\mathbf{\hat r}\right]+\frac{R^3}{r^3}\left[-\mathbf{e}+3(\mathbf{e}\cdot\mathbf{\hat{r}})\mathbf{\hat{r}}\right]\right) \\ & - \beta\frac{R^2}{r^2}\left(-\mathbf{\hat{r}}+3\left(\mathbf{e}\cdot\mathbf{\hat{r}}\right)^2\mathbf{\hat{r}}\right) + \mathcal{O}\left(\frac{R^4}{r^4}\right)\Biggr] \, \end{split}\end{aligned}$$ where $\beta=B_2/B_1$ is the squirmer-type parameter. ### Free squirmer The squirmer induces a hydrodynamic source dipole and for $\beta \neq 0$ also a force dipole, the far fields of which decay as $1/r^3$ and $1/r^2$, respectively. Swimmers with $\beta = 0$ are called neutral squirmers, while $\beta > 0$ generates pullers and $\beta < 0$ pushers. Since free squirmers are force-free, a stokeslet term with a flow field decaying as $1/r$ is not allowed but appears in eq. (\[eq:squirmer\_field\]) [@Lighthill1952; @Blake1971]. The reason is that for a moving squirmer also the swimming velocity $v_0\mathbf{e}$ contributes to its surface velocity field, which is not included in eq. (\[eq:surface\_field\]). Thus, following Pak and Lauga [@PakLauga2014] the flow field of eq. (\[eq:squirmer\_field\]) has to be interpreted as the pumping field generated by a squirmer held at a constant position by a force. This stalling force $\mathbf{F}_a$ is given by the balance equation [@PakLauga2014]: $$\label{eq:force_balance} \mathbf{F}_a - 6\pi\eta R\mathbf{v} = 0 \, ,$$ where here $\mathbf{v}$ is the swimming velocity of the freely moving squirmer. One can calculate the stalling force using Lamb’s solution to the Stokes equations [@Lamb1932; @KimKarrila2013; @PakLauga2014], $\mathbf{F}_a = 4\pi \eta R \nabla(\mathbf{e} \cdot \mathbf{r})B_1 = 4\pi \eta R B_1 \mathbf{e}$, and arrive at the known relation $\mathbf{v} = \dfrac{2}{3}B_1\mathbf{e}$ with the swimming speed $v_0:=\frac{2}{3}B_1$. For a freely translating squirmer, the Stokes flow field initiated by the pumping force is no longer present. Thus, in eq. (\[eq:squirmer\_field\]) the stokeslet vanishes and the source-dipole term is modified leading to the flow field of a free squirmer [@Blake1971; @PakLauga2014], $$\begin{aligned} \label{eq:squirmer_free_field} \begin{split} \mathbf{u}_\mathrm{free}(\mathbf{r}) = & B_1\Biggl[\frac{1}{3}\frac{R^3}{r^3}\left[-\mathbf{e}+3(\mathbf{e}\cdot\mathbf{\hat{r}})\mathbf{\hat{r}}\right] \\ & - \frac{\beta}{2}\frac{R^2}{r^2}\left(-\mathbf{\hat{r}}+3\left(\mathbf{e}\cdot\mathbf{\hat{r}}\right)^2\mathbf{\hat{r}}\right) + \mathcal{O}\left(\frac{R^4}{r^4}\right)\Biggr] \, . \end{split}\end{aligned}$$ ### Squirmer under gravity Adding the gravitational force $-mg\mathbf{e}_z$ modifies the force balance of eq. (\[eq:force\_balance\]) and yields for the total squirmer velocity $$\label{eq:velocity_gravity} \mathbf{v} = v_0\mathbf{e} - mg/(6\pi\eta R)\mathbf{e}_z \, .$$ As in our previous publications [@KuhrStark2017; @RuehleStark2018] we introduce the velocity ratio $\alpha := v_0/v_\mathrm{sed}$ to compare the self-propulsion to the bulk sedimentation velocity, $v_\mathrm{sed} = mg/(6\pi\eta R)$. The gravitational force adds a Stokes flow field to the free-squirmer solution of eq.
(\[eq:squirmer\_free\_field\]), which as usual contains a stokeslet and a source-dipole contribution: $$\begin{aligned} \label{eq:grav_flow_st} \mathbf{u}_\mathrm{st}^g & =-\frac{3}{4}v_\mathrm{sed}\dfrac{R}{r} \left(\mathbf {e}_z + \,\frac{z}{r}\, \mathbf{\hat{r}}\right)\\ \label{eq:grav_flow_sd} \mathbf{u}_\mathrm{sd}^g & = \frac{1}{4}v_\mathrm{sed}\dfrac{R^3}{r^3}\left(-\mathbf{e}_z + 3 \,\frac{z}{r}\, \mathbf{\hat{r}}\right) \, ,\end{aligned}$$ where we introduced the coordinate along the vertical $z = \mathbf{r} \cdot \mathbf{e}_z$. Due to their long-range nature it is important to always take stokeslet flow fields into account when they occur. This has been shown in experimental studies of the dancing motion of Volvox algae [@DrescherGoldstein2009] or the Stokesian dynamics of swimmers in a harmonic trap [@HennesStark2014]. ### Squirmer with bottom heaviness In this article we assume the spherical squirmer to be bottom-heavy, *i.e.*, its center of mass has an offset $r_0$ from the geometrical center [@WolffStark2013] such that a torque $mgr_0(-\mathbf{e}_z\times\mathbf{e})$ acts on the swimmer. Balancing external torque and rotational friction torque $- 8\pi \eta R^3 \mathbf{\Omega}$, we find the angular velocity $$\label{eq:angular_velocity_bh} \boldsymbol{\Omega} = \frac{3}{4}\frac{v_0}{R}\frac{r_0}{R\alpha}(-\mathbf{e}_z\times\mathbf{e}) \, .$$ We will later use the dimensionless parameter $$\frac{r_0}{R\alpha} = \frac{R}{v_0} \, \frac{mg r_0}{6 \pi \eta R^3}$$ to quantify the strength of the external torque. It compares - up to the factor 3/4 - the characteristic time scale of self-propulsion, $R/v_0$, to the characteristic time of reorientation by bottom heaviness, $8\pi \eta R^3 / (mgr_0)$. Sometimes it is called the gyrotactic orientation parameter [@PedleyKessler1992; @PedleyKessler1987]. The rotating squirmer generates the flow field of a rotlet [@KimKarrila2013], $$\label{eq:grav_flow_rot} \mathbf{u}_r^\mathrm{bh} = \frac{R^3}{r^2} \boldsymbol{\Omega} \times \mathbf{\hat{r}} = \frac{3}{4}v_0 \frac{r_0}{R\alpha} \frac{R^2}{r^2} \left((\mathbf{e}\cdot\mathbf{\hat{r}}) \mathbf{e}_z - \,\frac{z}{r}\, \mathbf{e}\right).$$ Note that the spatial decay of this flow field is the same as for pusher and puller squirmers but is more long-ranged compared to the neutral squirmer. However, $\mathbf{u}_r^\mathrm{bh}$ vanishes when the squirmer is aligned with the vertical, meaning $\mathbf{e} = \mathbf{e}_z$. Multi-particle collision dynamics {#sec:mpcd} --------------------------------- In the following we present the algorithm for performing simulations with multi-particle collision dynamics that we have already used in the past [@BlaschkeStark2016; @KuhrStark2017; @KuhrStark2019]. Therefore, we only summarize it here. The algorithm is implemented in a massively parallelized code, which runs on a computer cluster. In addition, we also provide the parameters used in our simulations. ### Algorithm We numerically solve the Navier-Stokes equations at low Reynolds number using the mesoscale method of multi-particle collision dynamics (MPCD) [@MalevanetsKapral1999; @PaddingLouis2006; @NoguchiGompper2007; @ZoettlStark2018]. Thermal noise is automatically included in this particle-based solver. Since the Reynolds numbers employed in our simulations are smaller than one, we effectively obtain solutions of the Stokes equations where any inertia is neglected. In the MPCD method the fluid is composed of point particles of mass $m_0$ that are kept at temperature $T_0$. 
A simulation step consists of the fluid particles performing consecutive streaming and collision steps. During the streaming step, which has a duration $\Delta t$, each fluid particle $i$ moves ballistically with its velocity $\mathbf{v}_i$ according to $\mathbf{r}_i(t+\Delta t) = \mathbf{r}_i(t)+\mathbf{v}_i\Delta t$. [^1] The duration $\Delta t$ is a simulation parameter that controls the fluid viscosity [@PaddingLouis2006; @NoguchiGompper2008]. During the streaming step fluid momentum is advected and also transferred to swimmers or absorbed by walls. Furthermore, boundary conditions need to be applied to the squirmer surfaces and to bounding walls. For this we employ the bounce-back rule [@PaddingLouis2005] to implement either the surface slip velocity field of a squirmer from eq. (\[eq:surface\_field\]) or the no-slip boundary condition for walls. The dynamics of the squirmers themselves is also computed during the streaming step. We perform 20 molecular dynamics steps during each streaming step using Velocity-Verlet integration together with the gravitational force and steric interactions between squirmers [@ZoettlStark2018]. The purpose of the collision step is to exchange momentum between fluid particles. To that end, the simulation box is divided into cubical cells of edge length $a_0$. We also use this length to define the fluid particle density as the average particle number $n_\mathrm{fl}$ per collision cell. Within each cell fluid-particle velocities are updated with the help of a collision operator, for which we use the MPC-AT+a rule [@NoguchiGompper2007; @ZoettlStark2018]. Thus, a thermostat is set up and both linear and angular momenta are conserved [@NoguchiGompper2007]. We also apply a grid shift for each new collision step in order to enforce Galilean invariance [@IhleKroll2003]. During the collision step fluid and squirmers/bounding walls interact as well: if a collision cell overlaps with a boundary or a squirmer, the overlapping volume is filled with virtual fluid particles to ensure the fluid density $n_\mathrm{fl}$ remains the same [@LamuraKroll2001; @ZoettlStark2018]. After the collision step the momentum gain of the virtual fluid particles is transferred to the squirmer in which the virtual particles are located. The flow fields calculated with the MPCD method are accurate on length scales larger than the mean free path of the fluid particles. For large enough squirmer radii, the hydrodynamic behaviour of squirmer microswimmers is therefore well reproduced by the MPCD method [@DowntonStark2009; @GoetzeGompper2010]. Thus it has widely been used to simulate a variety of settings [@GoetzeGompper2010; @ZoettlStark2014; @BlaschkeStark2016; @KuhrStark2017; @RuehleStark2018; @TheersWinkler2016Soft]. ### Parameters For the most part, we use the parameters presented below; any deviations are stated in the text. We use the duration $\Delta t = 0.02a_0\sqrt{m_0/k_BT_0}$ for the streaming step and a fluid particle density of $n_\mathrm{fl} = 10$. This implies a viscosity of $\eta = 16.05 \sqrt{m_0k_BT_0}/a_0^2$ [@NoguchiGompper2008; @ZoettlPHD]. For our squirmers we use a radius $R=4a_0$; therefore, the translational and rotational thermal diffusivities in bulk fluid are $D_T = k_BT /(6\pi\eta R) \approx 8\cdot 10^{-4} a_0\sqrt{k_BT_0/m_0}$ and $D_R = k_BT /(8\pi\eta R^3) \approx 4\cdot 10^{-5}\sqrt{k_BT_0/m_0}/a_0$, respectively.
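As a quick consistency check of these derived values (our own sketch, not part of the simulation code), the two diffusivities follow directly from the quoted viscosity in MPCD units, taking $a_0=m_0=k_BT_0=1$:

```python
import numpy as np

# MPCD units: a0 = m0 = kB*T0 = 1; viscosity and radius as quoted in the text.
kBT, eta, R = 1.0, 16.05, 4.0
D_T = kBT / (6.0 * np.pi * eta * R)       # translational diffusivity
D_R = kBT / (8.0 * np.pi * eta * R**3)    # rotational diffusivity
print(f"D_T = {D_T:.2e}, D_R = {D_R:.2e}")  # ~8.3e-04 and ~3.9e-05, as quoted
```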
We choose $B_1=0.1\sqrt{k_BT_0/m_0}$ and thus have the active Péclet number $\mathrm{Pe} = Rv_0 / D_T = 330$ and the Reynolds number $\mathrm{Re} = v_0Rn_\mathrm{fl}/\eta = 0.17$. Typically, the volume density of squirmers was held constant at $10\%$, simulating 914 squirmers in a box with a system size of $108a_0 \times 108a_0 \times 210a_0$, where the latter length is the box height. We use no-slip boundary conditions at the top and bottom walls and periodic boundary conditions in the horizontal plane. For our study of bottom-heavy squirmers we vary the velocity ratio $\alpha$ by changing the gravitational acceleration $g$, which in an experiment depends on the density mismatch between swimmer and fluid. The rescaled torque $r_0/R\alpha$ is varied by changing both $g$ and the center-of-mass offset $r_0$. Note that the torque depends on real mass rather than buoyant mass. In order to account for this difference, in ref. [@WolffStark2013] the parameter $r_0$ was redefined as $r_0m/\Delta m$, with the buoyant mass $\Delta m$. This way we can use the same parameter $\alpha$ in both the torque and force equations. Alternatively, one could also assume that $\Delta m \approx m$ [@WolffStark2013]. To analyze the data of our simulation runs, we save the position, orientation, and the translational and angular velocities for each squirmer every 1000th time step. Note that for most densely packed squirmers a depletion of the MPCD fluid particles is observed between the squirmers due to finite compressibility [@TheersGompper2018]. In the states discussed in sects. \[sec:convective\_roll\] and \[sec:spawning\] the squirmers are not most densely packed. Nevertheless, we ran one simulation for each of the states at higher fluid particle density $n_\mathrm{fl}=80$ and decreased self-propulsion velocity $B_1=0.01$. This leaves the Péclet number almost constant [@TheersGompper2018] while lowering compressibility. In these simulations we could confirm that the phenomenology stays the same as for the lower fluid particle density, which we typically use. Results {#sec:results} ======= ![image](phase_snapshots.jpg){width="95.00000%"} In our hydrodynamic simulations we explored the dynamics of bottom-heavy squirmers under gravity. Depending on the velocity ratio $\alpha := v_0/v_\mathrm{sed}$ and the strength of the gravitational torque, we observed a variety of stable and transient states. In fig. \[fig:neutral\_states\](a) we show the state diagram in the parameter space $\alpha$ versus reduced gravitational torque $r_0/R\alpha$, which we determined in simulations with neutral squirmers. We have marked four exemplary states by orange points, for which we show the density profiles in fig. \[fig:density\_profiles\] and videos M1-M4 in the supplemental material. In the following, we introduce the main phenomenology of the observed states. ![Density profiles for collective squirmer states marked by orange dots in the state diagram of fig. \[fig:neutral\_states\](a). The profiles belong to conventional non-equilibrium sedimentation (red), inverted sedimentation (blue), stable plumes and convection rolls (orange), and the spawning cluster (green).[]{data-label="fig:density_profiles"}](zhist_4in1.eps){width="45.00000%"} The horizontal dotted line at $\alpha=1$ in fig. \[fig:neutral\_states\](a) marks the upper limit, below which isolated squirmers sink down in a bulk fluid and settle at a finite distance from a lower bounding wall [@RuehleStark2018].
Hydrodynamically interacting squirmers show a non-equilibrium sedimentation state, which persists to $\alpha \approx 3$ for weak torques. For zero torque the non-equilibrium sedimentation was already observed in ref. [@KuhrStark2017]. A typical sedimentation profile at $\alpha=1.5$ and $r_0/R\alpha=0.01$ is depicted in fig. \[fig:density\_profiles\]. In contrast, at large $\alpha$ and torques, the orientational bias of the squirmers leads to their enrichment at the top wall and inverted sedimentation occurs, which is even observable for small torques. The corresponding inverted sedimentation profile for $\alpha=6.01$ and $r_0/R\alpha = 0.02$ is shown in fig. \[fig:density\_profiles\]. For dilute systems of bottom-heavy active particles such states were already described in ref. [@WolffStark2013]. In between sedimentation and inverted sedimentation interesting dynamic states occur, which we briefly introduce now. In the region colored in gray in fig. \[fig:neutral\_states\](a) we observe collections of sinking squirmers, which we call plumes. Although oriented upwards on average, they can sink due to the reduced viscous friction of a squirmer cluster. The plumes supply a convective roll at the bottom of the system that is formed and kept running by the self-propelling squirmers. Solitary squirmers escape from the edges of the roll and swim upwards. The formation of rolls and plumes is reminiscent of bioconvection observed in experiments [@PedleyKessler1992; @JanosiHorvath1998; @HosoyaMogami2010; @SatoToyoshima2018]. We show an example of this state at $\alpha=2.31$ and $r_0 / R\alpha =0.11$ in video M3 and also provide a snapshot in fig. \[fig:neutral\_states\](b), left. The density profile is non-monotonic with a broad maximum at the position of the bottom cluster, a minimum in the central region, where plumes pass through, and a sharp maximum at the top wall, where squirmers accumulate. Notably, starting from an initially uniform distribution of squirmers we also observe plumes that form at the top wall, sink down, and then slowly evaporate. Likewise, convection rolls can be merely transient when the bottom cluster eventually disappears. The steady states for these cases are inverted sedimentation profiles with strong layering at the top wall. The transient plumes and rolls occur at higher torques in the blue region of the state diagram of fig. \[fig:neutral\_states\] beyond the dashed line and to the right of the state of stable plumes and convection rolls. An interesting situation arises in the state diagram when $\alpha$ is situated in a narrow stripe above $\alpha = 1$ for torques larger than a threshold value that we show by a dashed line in fig. \[fig:neutral\_states\](a). The clearest representation of this spawning-cluster state arises for large torques, where the squirmer orientation is fixed to the upright direction. A big cluster of squirmers floats above the lower wall \[see fig. \[fig:neutral\_states\](b), right\]. Hydrodynamic interactions between the squirmers increase their mobilities and thereby their sedimentation velocities, which can cancel the swimming velocity even for $\alpha \ge 1$. Figure \[fig:density\_profiles\] shows the sedimentation profile for $\alpha=1.5$ and $r_0 / R \alpha =0.5$. In comparison to the convection roll the bottom cluster has a higher density, visible as a more pronounced broad maximum, and also the depletion in the middle of the cell is stronger by an order of magnitude.
We call this state “spawning cluster” because individual squirmers occasionally escape from the pores within the cluster at high velocity. This can be seen in video M4, as well as in the snapshot in fig. \[fig:neutral\_states\](b), right. In the following we discuss the different states in more detail. We investigate conventional and inverted sedimentation of neutral squirmers in sect. \[sec:sedimentation\] and the state with plumes and convection rolls in sect. \[sec:plumes\]. Transient plumes and rolls in the inverted sedimentation state are discussed in sect. \[sec:transients\] and in sect. \[sec:spawning\] we address the spawning-cluster state. Finally, we show how the state diagram changes for pusher and puller squirmers in sect. \[sec:flow\_fields\]. Conventional and inverted sedimentation {#sec:sedimentation} --------------------------------------- ![Density profiles for the inverted sedimentation state for $\alpha=6.01$ and different torque values. For high torques the steady state is reached via a long-lived transient, where a cluster separates from the top layers and slowly evaporates (not shown). []{data-label="fig:g-005_density_profiles"}](zhist_beta0_g-005_spec.eps){width="45.00000%"} ### Sedimentation Collective sedimentation of squirmers under gravity has been studied extensively in refs. [@KuhrStark2017; @KuhrStark2019; @ShenLintuvuori2019]. It also occurs, of course, for bottom-heavy squirmers. In the state diagram of fig. \[fig:neutral\_states\](a) the sedimentation regime at low torques extends beyond $\alpha =1$, although single squirmers with upright orientation can overcome gravity. For $\alpha=1.5$ and $r_0/R\alpha=0.01$ we already showed the exponential sedimentation profile in fig. \[fig:density\_profiles\]. It occurs because the flow fields of nearby squirmers tilt the orientation of a squirmer away from the upright direction, so that it sinks even for $\alpha > 1$. The necessary flow vorticity $\boldsymbol{\omega} = \mathrm{curl} \, \mathbf{u} / 2$ for the reorientation is provided only by the gravity-induced stokeslets \[see eq. (\[eq:grav\_flow\_st\])\] since the flow field of neutral squirmers has zero vorticity. The situation changes for large torques, where the squirmer orientation is always upright. At $\alpha < 1$ squirmers are confined to clusters sitting on the bottom wall. When crossing $\alpha \approx 1$ they start to float, in the sense that the density at $z=0$ develops a minimum. An exemplary density profile of these spawning clusters for $\alpha = 1.5$ and $r_0/R\alpha = 0.5$ was already discussed in connection with fig. \[fig:density\_profiles\]. ### Inverted sedimentation In fig. \[fig:g-005\_density\_profiles\] we show a set of density profiles for the inverted sedimentation state for $\alpha=6.01$. While at zero torque the profile is nearly uniform, increasing the torque from zero decreases the sedimentation length of the inverted profile, and at high torques the inversion becomes strong enough that layers of squirmers form at the top wall. Note that for the three largest torque values transient plumes are observed. Concretely, a cluster separates from the top layers, sinks as a plume, and slowly evaporates (see video M8 in the supplemental material, and sect. \[sec:transients\]). However, the sedimentation profiles are determined after reaching the steady state in the long-time limit.
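Before turning to the plume state, it is useful to recall the single-squirmer balance behind the threshold orientation $\cos\vartheta_{\mathrm{th}}=1/\alpha$ used below. The short sketch is our own illustration of eq. (\[eq:velocity\_gravity\]) and deliberately ignores all hydrodynamic interactions.

```python
import numpy as np

def vertical_velocity(cos_theta, alpha, v_sed=1.0):
    """Vertical drift of an isolated squirmer, eq. (velocity_gravity):
    v_z = v0 * cos(theta) - v_sed = v_sed * (alpha * cos(theta) - 1)."""
    return v_sed * (alpha * np.asarray(cos_theta) - 1.0)

alpha = 1.5
cos_theta = np.linspace(-1.0, 1.0, 5)
print(vertical_velocity(cos_theta, alpha))       # negative: the squirmer sinks
print("threshold cos(theta)_th =", 1.0 / alpha)  # it rises only above this value
```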
Plumes and convective rolls {#sec:plumes} --------------------------- In the following we discuss squirmer plumes that constantly appear in the bulk and feed a convective roll at the bottom of the system. ![Mean vertical squirmer orientation $\langle\cos\vartheta\rangle_{xy}$ as a function of height $z$ for all squirmers with drift velocity $v_z < 0$. The curves correspond to the states of sedimentation (red), inverted sedimentation (blue), and plumes and convective rolls (orange). Horizontal lines show $\cos\vartheta_{\text{th}}=1/\alpha$, where single squirmers under gravity switch between up- and downwards swimming. []{data-label="fig:meanori"}](meanori_zhist_beta0_4in1_v-.eps){width="49.00000%"} ### Collective sinking and plumes {#sec:collective_sinking} Plumes of squirmers sink with preferentially upright orientation. To characterize this motional state and contrast it with (inverted) sedimentation, we plot in fig. \[fig:meanori\] the mean vertical squirmer orientation $\langle\cos\vartheta\rangle_{xy}$ as a function of height $z$ for all squirmers that drift downwards ($v_z<0$). The average is taken over the horizontal $xy$ plane. In addition, for each $\alpha$ the dashed line indicates the threshold value $\cos\vartheta_\mathrm{th} = 1/\alpha$ for the degree of upright orientation, which a single squirmer must not exceed in order to sink. In the conventional sedimentation state (red curve) the mean upright orientation is always below the threshold value, which explains the downward drifting of the squirmers. The same is true for the inverted sedimentation state (blue curve). However, while the threshold value $\cos\vartheta_\mathrm{th}$ corresponds to a small upright orientation, the almost constant mean orientation is negative here; thus, squirmers with velocity $v_z < 0$ swim downwards (rather than sink). In contrast, the mean orientation of squirmers in the plume state (orange curve) exceeds $\cos\vartheta_\mathrm{th}$ in the region away from the walls, where plumes occur. Thus, their sinking cannot be explained by looking at single squirmers. Instead, it occurs since hydrodynamic friction in clusters of squirmers is reduced such that the mobility of each squirmer is increased. Thus, their sedimentation velocity, which acts against the upwards swimming, is larger compared to single squirmers and the whole cluster can sink. Indeed, for two squirmers the leading order flow field acting on the neighbor is given by the stokeslet contribution in eq. (\[eq:grav\_flow\_st\]), which provides a flow in negative $z$-direction and thus reinforces the gravitational sinking. This hydrodynamically induced mobility increase has already been studied for passive colloids on the basis of Rotne-Prager mobilities, for example, in refs. [@ReichertStark2004; @ReichertStark2004II] and for larger conglomerates in refs. [@CichockiHinsen1995; @LassoWeidman1986]. #### Cluster velocities ![Distribution of vertical velocities of neutral squirmers at $\alpha=2.31$ and $r_0/R\alpha = 0.11$ for different cumulative cluster sizes $N_\mathrm{cl}$. Vertical lines: maximum bulk velocity $v=v_0-mg/\gamma$ (black, right), nearly zero mean velocity of all squirmers (black, left), velocity for the mean vertical orientation of all squirmers from $\rho(v_z)$ for $N_\mathrm{cl}\geq 10$ (orange). Inset: probability density of cluster sizes.[]{data-label="fig:clusters"}](clustered_velhist_beta0g-013_com10_combine.eps){width="49.00000%"} To quantify the collective sinking in the plume state further, we show in fig.
\[fig:clusters\] the distributions of the vertical cluster velocity for different cumulative cluster sizes $N_\mathrm{cl}$. To determine them, every 1000th time step we monitor the clustering in the system by grouping squirmers that have a neighbor distance $d<R/8$ into the same cluster. This identifies clusters of different size $N_\mathrm{cl}$, where solitary squirmers have $N_\mathrm{cl}=1$. We only consider squirmers in the height range $60a_0 < z < 160 a_0$ in order to avoid the densely accumulated squirmers close to the top and bottom walls. The distribution of cluster sizes (excluding solitary swimmers) is shown in the inset of fig. \[fig:clusters\]. Having grouped the squirmers into clusters of different sizes, we can determine the velocity distributions for all squirmers, for solitary squirmers ($N_\mathrm{cl}=1$), for squirmers in clusters with $N_\mathrm{cl} \geq 2$, and for those with $N_\mathrm{cl} \geq 10$ (see fig. \[fig:clusters\]). As references we have indicated three characteristic velocities by vertical lines: the maximum bulk velocity $v=v_0-mg/\gamma$ (right black line) and the zero mean velocity of all squirmers (left black line). The dashed orange line is the vertical velocity, which is calculated with the mean vertical orientation of all squirmers from the orange distribution with $N_\mathrm{cl}\geq 10$. The total distribution of vertical squirmer velocities in the convective plume state has a broad shape symmetric about a mean very close to zero. This makes sense since in steady state there should be no net vertical flux of squirmers. Solitary squirmers ($N_\mathrm{cl}=1$) have a broad distribution as well, but with a positive bias. This illustrates that solitary squirmers contribute more to upwards than to downwards motion. Interestingly, the velocities of solitary squirmers and small clusters can exceed the free-swimming limit $v_z=v_0$. We speculate that this comes from squirmers approaching the convective roll at the bottom, where they are pushed up by hydrodynamic flow fields originating from the convective rolls. Indeed, squirmers with $v_z > v_0$ are hardly present in the velocity distribution of an inverted sedimentation state (not shown) where convective flows at the bottom do not exist. Restricting ourselves to clusters of larger size, $N_\mathrm{cl} \ge k$, the distributions are more and more shifted to negative velocities with increasing $k$ and become narrower compared to both the single-particle and the overall distributions. They also lose the tail with positive drift velocity $v_z$. The squirmer velocities in clusters are determined by the balance of self-propulsion and sedimentation velocities. The latter is increased in clusters of squirmers compared to solitary squirmers due to their hydrodynamic interactions, as discussed above. In contrast, when one calculates for $N_{\text{cl}} \ge 10$ the mean vertical orientation and balances the upwards swimming and sedimentation velocities using the single-squirmer mobility, one obtains a small positive velocity indicated by the dashed orange line. It hardly intersects the distribution at its far end. However, this velocity is close to the mean of the single-squirmer velocity distribution (red curve), because we find that the orientational distributions of squirmers swimming alone or in clusters are very similar. We attribute this to the missing vorticity in the flow field of a neutral squirmer. Finally, for the system displayed in fig.
\[fig:clusters\] at $N_\mathrm{cl}\ge 10$ the mean value of the vertical velocity is close to $-v_0$. It decreases further for even larger clusters and, of course, also depends on the velocity ratio $\alpha$. #### Flow vorticity We have already mentioned how advection by hydrodynamic flow fields from neighboring squirmers (stokeslets in leading order) enhances the sinking velocity and thus the mobility of squirmers in a plume. However, squirmers in the neighborhood of a sinking plume are also reoriented by the vorticity of these flow fields. While sinking, a squirmer reorients and therefore swims towards the plume. It joins the plume and thereby contributes to its vertically extended shape. We note that such vorticities also play a role in the formation of a fluid pump by hydrodynamically interacting active particles moving in a harmonic trap potential [@HennesStark2014]. In that study hydrodynamic torques only compete with rotational noise, whereas in the present case they have to balance the external gravitational torque acting on bottom-heavy squirmers. At high external torques stable plume states do not exist in the state diagram of fig. \[fig:neutral\_states\](a), because bottom heaviness completely dominates the upright squirmer orientations. A reorientation by flow vorticity is not possible. However, when both hydrodynamic and gravitational torques are comparable, a neighboring squirmer tilts towards a plume but can only join it for a vertical alignment with $\cos\vartheta \lessapprox 1/\alpha$, because otherwise it will swim upwards as explained before. This is why plumes persist at higher torques for decreasing $\alpha$ in the state diagram of fig. \[fig:neutral\_states\](a). The mechanism reported here resembles an instability described previously [@PedleyKessler1992], where reorientations by flow vorticity also induce the formation of plumes composed of gyrotactic algae. #### Sizes of sinking clusters ![Mean number of squirmers in sinking clusters ($N_-\geq 2$) *versus* simulation time in units of MPCD time step $\Delta t$. The plumes (orange) are clearly visible via distinct spikes. For transient plumes (purple) the spikes disappear with time. For conventional sedimentation (red), spawning clusters (green), and inverted sedimentation (blue) pronounced spikes are not visible.[]{data-label="fig:mean_sinking"}](clustertime_combine_select.eps){width="\linewidth"} We have already identified the plumes in fig. \[fig:meanori\] via the mean vertical orientation of sinking squirmers. The clustering dynamics offers a further means to characterize and distinguish the stable plume state from the other states. In fig. \[fig:mean\_sinking\] we show the mean number of squirmers $\langle N_- \rangle$ in sinking clusters, *i.e.*, where $N_- \geq 2$, over a period of $10^6$ MPCD time steps $\Delta t$ and within the same vertical region $60 < z/a_0 < 160$ as in fig. \[fig:clusters\]. We can clearly recognize the plume state (orange line) by the high spikes that correspond to sudden events of collectively sinking squirmers passing through the region. Also shown is the transient plume state (purple line) that we discuss further in sect. \[sec:transients\]. Here, the spikes disappear around $1.3 \cdot 10^6 \Delta t$ when the plume has evaporated. In contrast, in the inverted sedimentation state (blue line), the average size of sinking clusters remains low. 
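As an aside, the cluster analysis underlying figs. \[fig:clusters\] and \[fig:mean\_sinking\] can be sketched in a few lines of Python. This is only a minimal illustration: the array names are placeholders for the simulation output, and the reading of the distance criterion as a surface-to-surface gap smaller than $R/8$ is an assumption of the sketch, not a statement about our production analysis code.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def mean_sinking_cluster_size(pos, v_z, R, z_min=60.0, z_max=160.0):
    """Group squirmers into clusters and return the mean size of
    sinking clusters (N_- >= 2).  `pos` is an (N, 3) array of centre
    positions (in units of a_0) and `v_z` the vertical velocities;
    both are assumed to be available from the simulation output."""
    # restrict to the bulk region to exclude the dense wall layers
    keep = (pos[:, 2] > z_min) & (pos[:, 2] < z_max)
    pos, v_z = pos[keep], v_z[keep]
    n = len(pos)
    # neighbour pairs: centre distance below 2R + R/8, i.e. gap < R/8
    pairs = cKDTree(pos).query_pairs(r=2 * R + R / 8, output_type='ndarray')
    adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                     shape=(n, n))
    _, labels = connected_components(adj, directed=False)
    sizes = np.bincount(labels)                        # cluster sizes N_cl
    v_mean = np.bincount(labels, weights=v_z) / sizes  # mean v_z per cluster
    sinking = (sizes >= 2) & (v_mean < 0)              # sinking clusters only
    return sizes[sinking].mean() if sinking.any() else 0.0
```

Evaluating such a function every 1000th time step and averaging the returned value over snapshots yields curves analogous to fig. \[fig:mean\_sinking\]; handling the periodic boundary conditions in the horizontal directions (e.g., via the `boxsize` argument of `cKDTree`) is omitted here for brevity.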
Some small spikes can be seen for the conventional sedimentation state (red line), where small clusters form and induce some convective dynamics, which has been observed before [@KuhrStark2017]. Likewise, some small spikes are visible for the spawning cluster state (green line) but much more rarely. ### Convective roll {#sec:convective_roll} ![image](roll_pics.jpg){width="75.00000%"} ![Heat map of the mean vertical current density $j_z(x,y)$ in the horizontal plane and for three regions $z/a_0\in [0,80]$, $[80,120]$, and $[120,160]$. The time range for averaging is the same as in fig. \[fig:roll\_snapshots\]. We have applied a low-pass filter (provided by python’s scipy package) in order to smoothen the data. []{data-label="fig:current_density"}](flux2dhist_beta0_g-013_com100_t_tailored_filtered.eps){width="\linewidth"} ![Three-dimensional visualization of the trajectory of a spherical squirmer in the convective rolls. The squirmer first sinks as part of a plume (blue dot). Upon reaching the bottom cluster at position I it meanders laterally. Positions II and III indicate failed attempts to leave the cluster at the edge of a roll. Finally, it escapes at position IV and swims further upwards (red dot). Squirmers with their orientations are shown, their radii have been increased for better visibility. Using the periodic boundary condition, the system box has been slightly shifted so that the squirmer does not leave the box. Dashed line: Projection of the trajectory on the $x$-$y$ plane.[]{data-label="fig:highlight_3d_traj"}](highlight3d_traj_g-013_withtop.jpg){width="0.95\linewidth"} In the context of microswimmers, convection rolls are stationary rotational patterns, which are formed by gravitactic swimmers due to their self-propulsion and advection in the self-generated flow fields [@JanosiHorvath1998; @CzirokKessler2000; @KruegerMaass2016]. Here, we use this term for the recirculating motion inside a cluster sitting at the bottom wall. The cluster is visible in the left snapshot of fig. \[fig:neutral\_states\](b) and in video M3. In video M5, we show a 3D view of the system in the region $0 \leq z/a_0 \leq 100$, where we have color-coded the vertical squirmer velocities. A snapshot from the video is depicted in fig. \[fig:roll\_snapshots\](a). A very dynamic situation is visible. Squirmers in a plume sink down on the right and join the cluster. Because of their negative velocities they are colored in blue \[see color bar in fig. \[fig:roll\_snapshots\](c)\]. The plume is also visible on the left due to the periodic boundary condition. At the same time, individual squirmers colored in red swim upwards in a region with low density. In fig. \[fig:roll\_snapshots\](b) we see this depleted region more clearly from the top. All squirmers below $z=70a_0$ are plotted. Furthermore, we recognize that the convective roll has an elongated shape. This will change for larger systems as we demonstrate below. The circulation pattern in the convective roll is visualized in fig. \[fig:roll\_snapshots\](c). Here, we plot all the trajectories of squirmers within a slice of thickness $\Delta y = R$ spanning roughly the time interval $9\cdot 10^5 \Delta t$. Each velocity vector of a squirmer is colored according to the value of its horizontal velocity component $v_x$. We recognize two wavy patterns in the bottom region below $z/a_0 = 100$. 
They originate from squirmers moving down- and upwards while swimming to the left (blue lines) and from squirmers moving up- and downwards while swimming to the right (red lines). This generates the two circular patterns of the rolls. One is clearly visible in the right region, while a second one on the left is not complete due to the periodic boundary condition. In the middle we see squirmers leaving the convective rolls. In order to study the vertical motion in the system, we plot in fig. \[fig:current\_density\] the mean vertical current density of the squirmers $j_z(x,y)$, defined via $j_z(x,y):= \overline{\langle\rho(\mathbf{r})\rangle_\parallel \langle v_z(\mathbf{r})\rangle_\parallel}$ [@KuhrStark2017], where $\rho$ is the squirmer density, $\langle \dots \rangle_\parallel$ means average along the $z$-direction, and $ \overline{ \cdots}$ average over time. Figure \[fig:current\_density\] shows the current density with the vertical average taken in three different regions: at the bottom where the convective rolls are ($0 \le z/a_0 \le 80$), in the middle where the plumes predominantly occur ($80 \le z/a_0 \le 120$), and in the bulk region above it ($120 \le z/a_0 \le 160$). At the bottom (left plot) we immediately recognize the low density region where single squirmers move up (red region), while the two counterrotating rolls meet where the squirmers drift downwards (blue region). In the middle section (center plot) the sinking squirmers are spatially focused due to the plume formation: squirmers swim towards each other and sink collectively. The plumes feed the two counterrotating rolls and, therefore, are found at the right edge. In contrast, rising squirmers move mainly individually, which is why their distribution is more spread out (center and right plot). In the middle height section (center plot) a weak plume is observable close to the center of the plane. In the videos M3 and M6 we see how such plumes move to the side while sinking. This exemplifies the strong hydrodynamic flows that uphold the convective roll. While the two convective rolls convey the picture of a regular structure, the path of a single squirmer inside the dense cluster is irregular, as we show in fig. \[fig:highlight\_3d\_traj\]. We have reoriented the point of view in comparison to fig. \[fig:roll\_snapshots\](a) by $90^{\circ}$, such that we now look at the long side of the rolls. Furthermore, taking into account the periodic boundary condition, we have slightly shifted the system box so that the squirmer does not leave the box. For selected points in time we also display the squirmer’s orientation vector in the figure. The trajectory starts at the blue dot. After joining the rolls at position I, the squirmer meanders inside the dense cluster and eventually escapes from an edge (IV) with an upright orientation. The plotted trajectory ends at the red dot. The squirmer attempts to leave the bottom cluster several times (II, III) but is unsuccessful because the orientation is tilted too strongly against the vertical. It even crosses the low density region close to position IV as the projected trajectory on the horizontal plane shows. The strong variation of the squirmer orientation against the normal is, of course, due to the vorticity of the generated hydrodynamic flow field.
In our case, it results dominantly from the stokeslet due to the gravitational force acting on each squirmer: The source-dipole far-field of the neutral squirmer has zero vorticity, while wall-induced image fields decay with $\mathcal{O}(r^{-4})$ and therefore are weak [@RuehleStark2018; @SpagnolieLauga2012]. Interestingly, the “minuet” dance of a pair of Volvox algae in the experiments of ref. [@DrescherGoldstein2009] could also be reproduced by stokeslet interactions between the two microswimmers. The more chaotic meandering motion observed here inside the convective rolls then occurs due to the interactions with all the surrounding squirmers.

![Snapshots for an array of convective rolls in a system with a horizontal cross-section that has been increased by a factor of four in comparison to fig. \[fig:roll\_snapshots\], while keeping the height constant. Left: Top view showing all squirmers below $z=100a_0$. Right: Side view.[]{data-label="fig:larger_system"}](largerbottom_rolls_topview_dens5.jpg "fig:"){width="0.48\linewidth"} ![Snapshots for an array of convective rolls in a system with a horizontal cross-section that has been increased by a factor of four in comparison to fig. \[fig:roll\_snapshots\], while keeping the height constant. Left: Top view showing all squirmers below $z=100a_0$. Right: Side view.[]{data-label="fig:larger_system"}](larger_rolls_dens5.jpg "fig:"){width="0.48\linewidth"}

We already mentioned that the elongated convective rolls occupy the whole horizontal plane of our simulation box. To investigate how our observations depend on the system size, we double the edge length of the square cross section, keep the height constant, but reduce the squirmer volume fraction by a factor of two to 5%. As the top view of fig. \[fig:larger\_system\], left demonstrates, several compact convective rolls with an island shape appear in contrast to a single elongated cluster. In the side view of fig. \[fig:larger\_system\], right we recognize several sinking plumes. Videos M6 and M7 impressively illustrate the observed dynamics, in particular, how sinking plumes feed the convective rolls, which seem to have a characteristic spacing. When increasing the density to 10%, as in the smaller systems, the islands become larger and partially touch each other.

![Mean vertical current density for the same system size as in fig. \[fig:larger\_system\]. The vertical region for determining the average is $z/a_0\in [0,100]$. The chosen time span of $4.5\cdot 10^5\Delta t$ includes the snapshots of fig. \[fig:larger\_system\]. A low-pass filter (provided by python’s scipy package) was applied in order to smoothen the data.[]{data-label="fig:larger_current"}](flux2dhist_largerbeta0_g-013_com100_t_tailored_filtered_dens5.eps){width="0.85\linewidth"}

![ Squirmer trajectories during a time span of $4\cdot 10^5 \Delta t$ projected onto the horizontal plane. Left: for the bottom vertical region of the convective rolls at $0 \le z/R \le 4$.
Right: for the upper vertical region at $8 \le z/R \le 12$. The horizontal velocity direction is color-coded according to the circular color bar. The faint circles show squirmers from the snapshot in fig. \[fig:larger\_system\], left within the respective vertical regions. \[fig:larger\_vhorizontal\] ](velarrows_horizontal_larger_rolls_h40_dens5_circle.jpg "fig:"){width="0.49\linewidth"} We clearly see the island shape of the convective rolls also in the mean vertical current density illustrated in fig. \[fig:larger\_current\]. The sinking plumes above the rolls and the sinking squirmers inside them are visible by the blue areas, while around these regions squirmers swim upwards. Figure \[fig:larger\_vhorizontal\] completes the picture of the convective roll. We show squirmer trajectories inside horizontal slices of thickness $4R$ either at the bottom (left) or top (right) of the convective rolls. The direction of the in-plane velocity is color-coded. At the bottom squirmers move radially outwards (away from the dense clusters) while at the top they move radially inwards (towards the centers of the dense clusters). Combined with the vertical squirmer current, this implies a toroidal flow pattern for the convective rolls, which is also visible in video M7. Transient plumes and rolls {#sec:transients} -------------------------- Plumes and convective rolls as described in sect. \[sec:plumes\] are not stable at high torques. Instead, the long-term steady state of the system is an inverted exponential sedimentation profile that develops after a long-lived transient. In the following, we distinguish between two different transient states, which we observed starting from an initially uniform distribution of squirmers: evaporating plumes forming at the top wall and unstable rolls. ### Evaporating plumes {#sec:evaporating} First, we consider systems with large $\alpha\gtrapprox 5.6$. In the state diagram of fig. \[fig:neutral\_states\](a) we have identified inverted sedimentation with transient plumes to the right of the dashed line. At such high velocity ratios $\alpha$ stable plumes cannot exist since the vorticity from the gravitational stokeslet is too weak to orient neighboring squirmers towards each other in order to form stable plumes while sinking. Instead, starting from a uniform distribution all squirmers oriented upwards by the large torque also swim upwards and form dense squirmer layers at the top wall. Here, due to flow vorticity the orientations tilt and protrusions of squirmers form that eventually separate from the layers. The sinking squirmer cluster reaches a final height well above the bottom wall. It emits squirmers which join the layers at the top wall. Thus the plumes gradually evaporate and disappear. The whole process can be seen in video M8. In steady state inverted sedimentation profiles with a few layers occur. We have already plotted some of them in fig. \[fig:g-005\_density\_profiles\] for $r_0 / R\alpha \ge 0.04$ and also show such a profile at the end of video M8. Note, gravitational detachment of protrusions from a top layer has been extensively studied in different theoretical and experimental settings in connection with bioconvection [@PlessetWinet1974; @ChildressSpiegel1975; @PedleyKessler1992; @JanosiHorvath1998; @SatoToyoshima2018]. ![Time evolution of the volume fraction $\eta$ in the lower ($0 < z < H/2$) and upper ($H/2 < z < H$) half of the system. The height is $H=210a_0$. The dashed line denotes the global volume fraction $\eta_\mathrm{glob}=0.10$. 
The curves correspond to inverted (red) and conventional sedimentation (blue), stable plumes and convective rolls (orange), transient rolls (green), and transient plumes (purple). []{data-label="fig:volume_fraction"}](positioncount_all_combine.eps){width="\linewidth"}

### Transient convective rolls

We now consider lower values of $\alpha$, where sinking plumes and convective rolls form, starting from the uniform initial distribution. Now, an increasing external torque dominates the orientational dynamics of squirmers, meaning that they experience a stronger vertical bias. This counteracts the hydrodynamic reorientation and the motion of squirmers towards a plume. Thus, at higher torques the plumes become thinner and eventually disappear in the long-time limit. As a result, the convective rolls at the bottom wall do not receive a sufficient influx of squirmers and also evaporate. Again, the system reaches a steady state with an inverted sedimentation profile and with layering at the top wall. The process is visualized in video M9.

Both of the observed transient structures reveal themselves with clear signatures in the time evolution of the spatial squirmer distribution. In fig. \[fig:volume\_fraction\] we plot the volume fraction $\eta$ versus time, for both the lower and upper half of the system ($z \in [0,H/2]$ and $[H/2,H]$, where $H$ is the system height). The curves for the steady states of inverted and conventional sedimentation (red and blue curves), as well as for stable plumes and convective rolls (orange curve) fluctuate around a constant value. However, transient plumes sinking from the top wall (purple curve) or transient convective rolls (green curve) have a steadily decreasing density in the lower half of the system. At the same time, the squirmer layers at the top wall grow at the expense of the shrinking plumes and rolls. Simulating the transients is very time-consuming. For example, the green curve belongs to a transient convective roll and has not equilibrated yet. Initially, the density is high in the lower half of the system and the roll takes a long time to dissolve.

Spawning clusters and transient hovering {#sec:spawning}
----------------------------------------

![Left: Side-view snapshot of a spawning cluster at $\alpha=1.5, r_0/R\alpha=0.5$. The cluster has a height of ca. $50a_0 = 12.5R$. Right: Top-view snapshot of the same system. Vertical velocities are color-coded with the same scale as in fig. \[fig:roll\_snapshots\].[]{data-label="fig:spawning_snapshots"}](snapshot_spawning_com3.jpg "fig:"){width="0.45\linewidth"} ![Left: Side-view snapshot of a spawning cluster at $\alpha=1.5, r_0/R\alpha=0.5$. The cluster has a height of ca. $50a_0 = 12.5R$. Right: Top-view snapshot of the same system. Vertical velocities are color-coded with the same scale as in fig. \[fig:roll\_snapshots\].[]{data-label="fig:spawning_snapshots"}](snapshot_spawning_topcom3.jpg "fig:"){width="0.45\linewidth"}

![Heat map of the mean vertical current density $j_z(x,y)$ in the horizontal plane at $\alpha = 1.5$ in the region $80\leq z/a_0 \leq 120$ and averaged over a time period of $9\cdot 10^5\Delta t$. The torque value $r_0/R\alpha$ corresponds to sedimentation (left) and a spawning cluster (middle and right). We have applied a low-pass filter (provided by python’s scipy package) in order to smoothen the data.[]{data-label="fig:spawning_2dhist"}](flux2dhist_beta0_g-020_select_middle_filtered_last900.eps){width="\linewidth"}

In fig.
\[fig:spawning\_snapshots\] we show a spawning cluster viewed from the side (left) and from the top (right). The snapshots belong to a spawning cluster at $\alpha=1.5$ and large torque situated at the far right of the state diagram in fig. \[fig:neutral\_states\]. We also show this system in video M4. In the top view of the right snapshot we observe a porous structure of the cluster. In contrast to the convective rolls discussed above, holes strongly depleted of squirmers are visible. Thus the clusters are not compact objects. We colored the squirmers in the snapshots according to their vertical velocity $v_z$ using the same color code as in fig. \[fig:roll\_snapshots\]. Hence, the green color of most squirmers shows that they move little in the vertical direction. Single squirmers perform a random walk or meander around within the cluster, and when they reach the edge of a hole, the flow field of the neighboring squirmers strongly drives them upwards with large velocities up to $3 v_0$. They either rejoin the cluster or leave it. This is nicely visible in video M4.

In the spawning-cluster state plumes and convective rolls no longer exist, as videos M4 and M7 demonstrate and as we find when inspecting spawning clusters at different parameters. This also becomes clear from fig. \[fig:spawning\_2dhist\], where we show the mean vertical current densities in the region $z/a_0 \in [80,120]$ above the spawning clusters and for three different torques at $\alpha =1.5$. The left plot for $r_0/R\alpha = 0.08$ represents the sedimentation state. Similar to ref. [@KuhrStark2017], convective patterns in the region with an exponential sedimentation profile are visible, which consist of clearly separated areas with upward- and downward-moving squirmers. Increasing the rescaled torque to $0.17$ (see video M10), these areas start to disintegrate and thereby mark the onset of the spawning-cluster state. Increasing the torque even further to $0.50$, the mean vertical current density is zero everywhere except for some small patches. This is consistent with the very small density in the bulk region of our system as demonstrated by the corresponding density profile in fig. \[fig:density\_profiles\]. Note, however, that compared to the sedimentation state the spawning-cluster state has a much higher density at the top wall, where squirmers leaving the cluster gather. In the state diagram of fig. \[fig:neutral\_states\] we also mention “transient hovering”. For very large external torques, well above the values used in the state diagram, we expect spawning clusters to dissolve with time at $\alpha > 1$. However, close to $\alpha = 1$ they dissolve very slowly.

![image](phase_puller_large_norm_strengthbh.jpg){width="49.00000%"} ![image](phase_pusher_large_norm_strengthbh.jpg){width="49.00000%"}

Influence of squirmer type: pushers and pullers {#sec:flow_fields}
-----------------------------------------------

For strong pullers and pushers the state diagrams, which we show in fig. \[fig:pusher\_puller\_states\], simplify considerably compared to neutral squirmers shown in fig. \[fig:neutral\_states\](a). First of all, we do not observe any stable or transient convective rolls, nor spawning clusters. The main features in the state diagrams of fig. \[fig:pusher\_puller\_states\] are conventional and inverted sedimentation, where the separation line is shifted to higher torques and higher $\alpha$ compared to neutral squirmers.
By visual inspection we observe plumes (see videos M11 and M12), where clusters of squirmers form in the upper region of the simulation cell, sink down, and dissolve. For pullers the plumes are more pronounced, as we explain in sect. \[subsec.plumes\_pullers\_pushers\], and we indicate them by the grey shaded region as part of the sedimentation state.

### Influence of hydrodynamics on sedimentation state

![Density profiles of strong pullers at $\alpha=6.01$ for different rescaled torques $r_0/R\alpha$, which generate exponential, constant and inverted exponential profiles. Inset: Orientational distribution function at $\alpha=6.01$ and $r_0/R\alpha = 0.04$ for different squirmer parameters $\beta=-5,0,5$.[]{data-label="fig:pusher_puller_histograms"}](zhistorihist_betacombine.eps){width="\linewidth"}

![Density profiles at $\alpha=2.00$ for neutral, pusher, and puller squirmers. The rescaled torque $r_0/R\alpha$ is chosen such that collective sinking via plumes occurs in the system (see videos M11, M12 for $\beta=\pm 5$). Inset: Distribution of cluster sizes in the plumes for the same parameters as in the main plot. []{data-label="fig:plumes_comparison"}](zhist_sizedistr_betacombine_g-015_plumes.eps){width="\linewidth"}

For pushers and pullers, which have a non-zero squirmer parameter $\beta$, the flow field of a force dipole is added to the total squirmer velocity field $\mathbf{u}(\mathbf{r})$ as documented by eq. (\[eq:squirmer\_field\]). It decays like $1/r^2$ and in contrast to the pure source-dipole field of neutral squirmers possesses a non-zero vorticity, which acts on the orientation of nearby squirmers. Previous studies have already investigated the consequences of this vorticity field for suspensions of microswimmers. They found that it weakens polar order in clusters, whereby pullers often retain a higher degree of polar order than pushers [@HennesStark2014; @EvansLauga2011; @AlarconPagonabarraga2013; @PessotMenzel2018]. Indeed, in the inset of fig. \[fig:pusher\_puller\_histograms\] we observe for the same parameters $\alpha = 6.01$ and $r_0/R\alpha =0.04$ that the neutral squirmer has a stronger alignment along the vertical than pushers and pullers. As a result, pushers and pullers have a stronger tendency to sink under gravity. Therefore, while neutral squirmers show inverted sedimentation at $\alpha = 6.01$ for all torque values, strong pullers ($\beta = 5$) exhibit a transition from conventional to inverted sedimentation with increasing $r_0/R\alpha$. Examples for the respective sedimentation profiles are plotted in the main graph of fig. \[fig:pusher\_puller\_histograms\] together with a uniform profile right at the transition. Thus, the disturbance of the vertical alignment by the vorticity field of strong pushers and pullers shifts the transition line between conventional and inverted sedimentation to larger torques and swimming velocities. For the same reason the tendency to form layers at the upper wall is strongly reduced. This becomes obvious by comparing the sedimentation profiles for the same parameters $\alpha = 6.01$ and $r_0/R\alpha =0.04$ for neutral squirmers in fig. \[fig:g-005\_density\_profiles\] and strong pullers ($\beta = 5$) in fig. \[fig:pusher\_puller\_histograms\].

### Plumes of pullers and pushers {#subsec.plumes_pullers_pushers}

Unlike for neutral squirmers, we do not observe stable convective rolls for either strong pullers or pushers. This becomes obvious from the density profiles in fig. \[fig:plumes\_comparison\].
The broad and high density peak near the bottom wall is missing for pullers and pushers. Note that we increased the rescaled torque $r_0/R\alpha$ for pullers and pushers to be clearly in the region where plumes occur. Convective rolls possess dense squirmer clusters with some polar order due to bottom heaviness. However, such clusters are hydrodynamically unstable for squirmers with a strong force-dipole contribution [@EvansLauga2011; @AlarconPagonabarraga2013]. The same applies to spawning clusters, which we also do not observe. For neutral squirmers we observed plumes feeding convective rolls. For strong pullers and pushers we can also identify plumes by visual inspection, as demonstrated in videos M11 and M12, respectively. Pullers form visible plumes in the upper region that disband close to the bottom wall. Pusher plumes are very unstable and disintegrate already while they are sinking. Generally, the plume clusters are smaller compared to neutral squirmers. Thus, when plotting the mean number of sinking squirmers in a cluster, $\langle N_- \rangle$, versus time as in fig. \[fig:mean\_sinking\], the plumes cannot clearly be identified by high spikes. Instead, for pullers we determined the mean squirmer number by also averaging over more than $5\cdot 10^5$ time steps. The result is plotted in fig. \[fig:mean\_sinking\_size\] *versus* the rescaled torque for different swimming speeds $\alpha$. For $\alpha = 1.5$, 2.0, and 3.0 we roughly see a sigmoidal shape and locate a transition to a plume state at the inflection point. This is the meaning of the curved dashed line in the schematic state diagram of fig. \[fig:pusher\_puller\_states\], left. For pushers we do not see such a sigmoidal shape but rather a slow and steady increase. For this reason we did not identify a separate region for plumes but only indicate that we see them around the line separating conventional and inverted sedimentation from each other at higher torques.

![Mean size of sinking puller clusters ($\beta=5$) as a function of the rescaled torque for different $\alpha$. The mean $\overline{\langle N_{\mathrm{-}}\rangle}$ is calculated in the lower region $20 \leq z/a_0 \leq 120$ and by averaging over more than $5\cdot 10^5$ time steps. Furthermore, all sinking clusters with $N_{\mathrm{-}} \ge 2$ are considered.[]{data-label="fig:mean_sinking_size"}](cluster_distribution_moment1_beta5_thresh2.png){width="\linewidth"}

Conclusions and outlook
=======================

In this article we have investigated the remarkable features of microswimmer suspensions under gravity using full hydrodynamic simulations of ca. 900 squirmer model swimmers, where we concentrated on the neutral squirmer but also looked at strong pushers and pullers. We have determined the respective state diagrams varying the ratio of swimming to bulk sedimentation velocity and the gravitational torque due to bottom heaviness. The general trend in all three cases reveals conventional sedimentation for low swimming velocity and torque, while the sedimentation profile becomes inverted when increasing both values. In addition, for neutral squirmers we have discovered a rich phenomenology in between both sedimentation states. Squirmers sink collectively in plumes due to reduced hydrodynamic friction and feed fascinating convective roll patterns of elongated or toroidal shape that self-organize at the bottom of the system. The plume formation is supported by squirmer reorientation due to vorticity resulting from the stokeslet contribution to the flow field.
The latter is induced by the gravitational force acting on each squirmer. The combination of upward swimming (gravitaxis) and reorientation by nearby flow fields (rheotaxis) is called gyrotaxis, a mechanism introduced and discussed in connection with bioconvection [@PedleyKessler1992; @HillKessler1989; @BeesHill1997; @GhoraiHill1999; @DesaiArdekani2017]. In our case plumes and convective rolls also form transiently at larger torques. When starting from an initially uniform squirmer distribution, a dense layering of squirmers forms at the top wall. It develops an instability due to the gyrotactic mechanism and then sinking plumes emerge. At increasing torque and moderate speed ratios we also observe dense but porous squirmer clusters that float above the bottom wall and spawn single squirmers. They also become transient for increasing speed ratio. For strong pushers and pullers the transition line between conventional and inverted sedimentation is shifted to higher torques and speed ratios. The reason is the non-zero vorticity of the additional force-dipole flow field, which reorients neighboring squirmer orientations away from the vertical so that they can sink more easily. This is also the reason why strong pushers and pullers do not show such a rich phenomenology. Only weak plume formation is observed without any stable convective rolls occurring. In the case of pullers we could quantify it by the mean size of sinking puller clusters. In systems with biological microswimmers plume formation is often traced back to the gyrotactic mechanism introduced above [@PedleyKessler1992; @BeesHill1997; @GhoraiHill1999]. However, the overturning instability of a dense layer of microswimmers at the top boundary, reminiscent of the Rayleigh-Taylor instability, is also discussed [@ChildressSpiegel1975; @HarashimaFujishiro1988; @MogamiBaba2004]. Indeed, in ref. [@MogamiBaba2004] gyrotaxis is discarded for plumes of the microorganism *Tetrahymena* emanating from a dense top layer. In contrast, in our case plumes for the convective rolls also form in bulk, where clearly gyrotaxis is the relevant mechanism. Even for the transient plumes of neutral squirmers emanating from a dense squirmer layer at the top boundary, we think that gyrotaxis is predominant. In biological systems, microswimmers are often of pusher and puller type but nevertheless, in contrast to our simulations, can form stable plumes and convection cells [@CzirokKessler2000; @JanosiHorvath1998; @BeesHill1997]. Our investigations are performed for strong pushers and pullers, while sufficiently weak pusher and puller squirmers with $\beta$ closer to zero will also show convective rolls. Indeed, estimates for the force-dipole moment of some biological microswimmers show a weak dipole strength [@BerkeLauga2008; @DrescherTuval2010; @DrescherGoldstein2011]. Furthermore, in our simulations we look at a very generic system concentrating on gravitational force, bottom heaviness, and hydrodynamic interactions between the squirmers. Real microswimmers with a non-spherical shape also experience a drag torque [@SenguptaStocker2017; @Roberts2010] and their flagella might act on neighbors by steric forces. Also, for the alga *C. reinhardtii* it has been shown that the flow field induced by the periodic beating pattern varies in time and during a short period is reminiscent of a pusher [@MuellerThiffeault2017; @MathijssenPolin2018].
Thus, real microorganisms show a large variability and it was argued that different organisms could even be distinguished via their bioconvection patterns [@CzirokKessler2000]. Generalizing our simulations in these directions provides opportunities for interesting future research.

Bottom heaviness is not the only source of orientational order that leads to interesting pattern formation under gravity. It can also be induced by additional external fields, in which the microorganism performs taxis. For example, the alga *C. reinhardtii* relies on phototaxis [@SatoToyoshima2018; @SinghFischer2018], while the bacterium *B. subtilis* shows aerotaxis, where it aligns along a gradient of oxygen [@CzirokKessler2000; @JanosiHorvath1998]. The response to chemical fields (chemotaxis) is fascinating on its own [@BergBrown1972; @TheurkauffBocquet2012; @SahaRamaswamy2014; @PohlStark2014; @Stark2018; @StuermerStark2019] and combining it with microswimmers [@HuangKapral2017] moving under gravity opens a new research direction. Finally, microorganisms moving under gravity might also adapt their behavior to further external cues. For example, it has been argued that phytoplankton actively change their morphology to adjust their migration strategy to turbulent flow fields [@SenguptaStocker2017]. This connects to another new and fascinating research direction related to learning in active systems [@ColabreseBiferale2017; @Muinos-LandinCichos2018; @SchneiderStark2019].

We are grateful for stimulating discussions with and valuable input from J.-T. Kuhr, R. Kapral, K. Drescher, and A.J.T.M. Mathijsen. This project was funded by Deutsche Forschungsgemeinschaft through the priority program SPP1726 (grant number STA352/11). The authors acknowledge the North-German Supercomputing Alliance (HLRN) for providing HPC resources that have contributed to the research results reported in this paper.

Appendix
========

Table \[tab:movies\] provides an index of the videos referenced in this paper and made available in the electronic supplemental material. It contains the system parameters and describes the states visualized by the videos.

  movie name   $\beta$   $\alpha$   $r_0/R\alpha$   state
  ------------ --------- ---------- --------------- -------------------------------------------------
  M1           $0$       $6.01$     $0.02$          inverted sedimentation
  M2           $0$       $1.50$     $0.01$          sedimentation
  M3           $0$       $2.31$     $0.11$          plumes and convective rolls
  M4           $0$       $1.50$     $0.50$          spawning cluster
  M5           $0$       $2.31$     $0.11$          plumes and convective rolls (3D)
  M6           $0$       $2.31$     $0.11$          plumes and convective rolls, larger system
  M7           $0$       $2.31$     $0.11$          plumes and convective rolls, larger system (3D)
  M8           $0$       $6.01$     $0.08$          evaporating plume
  M9           $0$       $2.31$     $0.22$          transient roll
  M10          $0$       $1.50$     $0.17$          spawning cluster
  M11          $5$       $2.00$     $0.19$          weak plumes
  M12          $-5$      $2.00$     $0.19$          weak plumes

  \[tab:movies\]

Authors contributions
=====================

All the authors were involved in the preparation of the manuscript. All the authors have read and approved the final manuscript.

[^1]: Our system height is smaller than the sedimentation length of the fluid so that gravity acting on the fluid particles can be neglected.
---
abstract: 'This paper addresses the problem of *segmenting a stream of graph signals*: we aim to detect changes in the mean of the multivariate signal defined over the nodes of a known graph. We propose an offline algorithm that relies on the concept of *graph signal stationarity* and allows the convenient translation of the problem from the original vertex domain to the spectral domain (Graph Fourier Transform), where it is much easier to solve. Although the obtained spectral representation is sparse in real applications, to the best of our knowledge this property has not been much exploited in the existing related literature. Our main contribution is a change-point detection algorithm that adopts a model selection perspective, which takes into account the sparsity of the spectral representation and automatically determines the number of change-points. Our detector comes with a proof of a non-asymptotic oracle inequality, and numerical experiments demonstrate the validity of our method.'
author:
- 'Alejandro de la Concha, Nicolas Vayatis, Argyris Kalogeratos[^1]'
bibliography:
- 'CP\_GS\_2020.bib'
title: |
    Offline detection of change-points in the mean\
    for stationary graph signals
---

Introduction {#sec:intro}
============

One of the most common tasks in Signal Processing is segmentation. Identifying time intervals where a signal is homogeneous is a strategy to uncover latent features of its source. The signal segmentation problem can be restated as a change-point detection task: delimiting a segment consists of fixing the timestamps where it starts and ends. This subject has been extensively investigated, leading to a vast literature and applications in many domains including computer science, finance, medicine, geology, meteorology, [etc.]{} The majority of the work done so far in signal segmentation focuses on temporal signals [@Baseville1993; @Balzano2010; @Chen2012; @Tartakovsky2014; @Aminikhanghahi2016; @truong2020].

In this work we will study a different kind of object: *graph signals appearing as a stream*. In general terms, a graph signal is a function defined over the nodes of a given graph. Intuitively, the graph partially encodes the variability of the function: nodes that are connected will take similar values. This applies to real situations: for instance, contacts in social networks would share similar tastes, and two neighboring sensors in a sensor network would provide similar measurements. Moreover, this behavior is not restricted to the case where the graph is explicitly given. In some applications, the graph itself has to be inferred, and most algorithms are built on this local similarity property, which corresponds to signal *smoothness*. This can be seen in graphical models or networks used to approximate manifolds [@Perraudin2017; @Friedman2007; @LeBars2019arxiv; @Tenenbaum2000].

Although there is a plethora of change-point detectors with many applications in different contexts, the development of detectors specifically designed for graph signals is still limited in the literature [@Balzano2010; @Angelosante2011; @Chen2018; @LeBars2019arxiv]. To the best of our knowledge, the existing methods do not yet take into account the interplay between the signal and the graph structure. The main contribution of this article is an offline change-point detector aiming to spot jumps in the mean of a stream of graph signals (SGS).
Our algorithm leverages many of the techniques developed in Graph Signal Processing (GSP), a relatively new field aiming to generalize the tools commonly used in classical Signal Processing [@Shuman2013; @Ortega2018]. More specifically, our algorithm depends on the concept of the Graph Fourier Transform (GFT), which, similarly to the usual Fourier Transform, induces a spectral domain and a sparse representation of the signal. The main idea behind our approach is to translate the problem from the vertex domain to the spectral domain, and to design a change-point detector operating in this space that accounts for the sparsity of the data and automatically infers the number of change-points. This is done by adding two penalization terms: an $\ell_1$ penalization term aiming to recover the sparsity, and another one penalizing models with a high number of change-points. The performance of the algorithm and the design of these penalization terms are based on the framework introduced in [@Birge2001] and the innovative perspective of the $\ell_1$ norm analyzed in [@Massart2011].

The organization of the paper is as follows. In [Sec.\[sec:basic\_definitions\]]{} we present basic definitions and tools that will be used in the rest of the paper. In [Sec.\[sec:our\_method\]]{} we formulate the change-point detection problem in the context of graph signals and we propose a Lasso-based change-point detection algorithm. In [Sec.\[sec:Model\_selection\]]{} we provide theoretical guarantees for the algorithms we introduce in the previous section and, finally, in [Sec.\[sec:exps\]]{} we test our method in experiments on properly generated synthetic data.

Basic concepts and notations
============================

[\[sec:basic\_definitions\]]{}

In this section we introduce notations and key concepts. Let $A_i$ and $A^{(j)}$ denote the $i$-th row and $j$-th column of matrix $A$, respectively. $A^{{{\mkern-1.5mu\mathsf{T}}}}$ and $A^*$ stand for the transpose and the conjugate transpose ([i.e.]{} the transpose with negated imaginary parts) of matrix $A$. $x^{(i)}$ denotes the $i$-th entry of vector $x$, $x_t$ represents the observed vector $x$ at time $t$, and $\tilde{x}$ stands for the GFT of $x$, which is introduced in [Definition\[GFT\]]{}. A graph is defined by an ordered tuple $G=(V,E)$, where $V$ and $E$ stand for the vertex and edge sets, respectively, and $p := |V|$ is the number of graph nodes.

[\[Graph Signal\]]{} A **graph signal** is a tuple $(G,y)$, where $G=(V,E)$ and $y$ is a function $y:V \rightarrow \mathbb{R}$.

[\[GSO\]]{} A **graph shift operator** (GSO) $S$ associated with a graph $G=(V,E)$ is a $p \times p$ matrix whose entry $S_{i,j} \neq 0 $ iff $i=j$ or $(i,j)\in E$, and it admits an eigenvector decomposition $S=U\Theta U^*$.

[\[GFT\]]{} For a given GSO $S=U\Theta U^*$ associated with a graph $G$, the **Graph Fourier Transform** (GFT) of a graph signal $y: V \rightarrow \operatorname*{\mathbb{R}}$ is defined as $\hat{y}= U^*y$.

The frequencies of the GFT correspond to the elements of the diagonal matrix $\Theta$, that is $\{\theta_{i,i}\}_{i=1}^p$. Moreover, the eigenvectors $\{u_i\}_{i=1}^p$ also provide an orthogonal basis for the graph signals defined over the graph $G$. Finally, the GFT is the basic tool that allows us to translate operations from the vertex domain to the spectral domain [@Sandryhaila2013].
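As a concrete illustration of [Definition\[GFT\]]{}, the following minimal Python sketch computes the GFT basis from a GSO and applies the transform to a toy graph signal. The choice of the combinatorial Laplacian as GSO and the small path graph are purely illustrative assumptions; any GSO in the sense of [Definition\[GSO\]]{} could be plugged in instead.

```python
import numpy as np

def graph_fourier_basis(S):
    """Eigendecomposition S = U Theta U* of a graph shift operator;
    the columns of U define the GFT basis (np.linalg.eigh for a
    symmetric S, np.linalg.eig for a general normal GSO)."""
    theta, U = np.linalg.eigh(S)
    return U, theta

def gft(U, y):
    """Graph Fourier Transform y_hat = U* y of a graph signal y."""
    return U.conj().T @ y

def igft(U, y_hat):
    """Inverse transform back to the vertex domain."""
    return U @ y_hat

# toy example: combinatorial Laplacian of a path graph on 4 nodes as GSO
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
S = np.diag(A.sum(axis=1)) - A            # Laplacian L = D - A
U, theta = graph_fourier_basis(S)
y = np.array([1.0, 2.0, 2.0, 1.0])        # a smooth signal on the path
y_hat = gft(U, y)                         # spectral representation of y
assert np.allclose(igft(U, y_hat), y)     # U is orthogonal, so the GFT inverts
```

Here the GFT is simply an orthogonal change of basis, which is what allows translating the change-point problem from the vertex domain to the spectral domain.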
The *graph signal stationarity* over the vertex domain is a property aiming to formalize the notion that the graph structure explains to a large degree the inter-dependencies observed in a graph signal. For the rest of the paper, stationarity is always meant with respect to a GSO $S$. The definitions and the properties that are listed below can be found in [@Marques2017; @Perraudin2017].

[\[Stationarity2\]]{} **Stationarity with respect to the vertex domain:** Given a normal GSO $S$, a zero-mean graph signal $y:V \rightarrow \mathbb{R}$ with covariance matrix $\Sigma_y$ is stationary with respect to the vertex domain encoded by $S$, iff $\Sigma_y$ and $S$ are simultaneously diagonalizable, [i.e.]{} $\Sigma_y=U {\operatorname{diag}}({P}_y) U^{{{\mkern-1.5mu\mathsf{T}}}}$. The vector ${P}_y \in \operatorname*{\mathbb{R}}^p$ is known as the **graph power spectral density** (PSD).

The following two properties are used in the derivation of our change-point detection algorithm, in the generation of the synthetic scenarios, and in the estimation of ${P}_y$.

[\[Property:independence\]]{} Let $y$ be a stationary graph signal with respect to $S$; then its GFT $\tilde{y}=U^* y$ has covariance matrix $\Sigma_{\tilde{y}}= {\operatorname{diag}}({P}_y)$.

[\[prop:GF\_stationarity\]]{} Let $y$ be a stationary graph signal with covariance matrix $\Sigma_y$ and PSD $P_y$. The output of a graph filter $H=U {\operatorname{diag}}(h) U^*$, with a frequency response $h$, applied to the graph signal is $z = Hy$ and has the following properties:

1. It is stationary on $S$ with covariance $\Sigma_z=H \Sigma_y H^*$.

2. ${P}_z^{(i)}= |h^{(i)}|^2 P_y^{(i)}$.

Change-point detection for a stream of graph signals {#sec:our_method}
====================================================

[***Problem formulation*[.]{}**]{} Suppose we observe a multivariate time series $Y=\{y_t\}_{t=0}^{T}$, where $\forall t\!: y_t \in \operatorname*{\mathbb{R}}^{p}$, and let its mean value be $\mu_t = {\mathbb{E}[y_t]}$. We suppose that there is an ordered set $\tau=\{\tau_0,...,\tau_{D_{\tau}}\} \subset \{0,...,T\}$ of $D_{\tau}$ *change-points*, with $\tau_0=0$ and $\tau_{D_{\tau}}=T$. The elements of $\tau$ define the following set of matrices: $$F_{\tau} = \{\mu \in \operatorname*{\mathbb{R}}^{T \times p} \ |\ \mu_{\tau_{l-1}+1}=...=\mu_{\tau_{l}}, \ \forall l \in \{1,...,D_{\tau}\} \}.$$ We additionally suppose that the elements of the time series $Y=\{y_t\}_{t=0}^{T}$ are graph signals defined over the same graph $G=(V,E,S)$, that is a stream of graph signals (SGS). Our goal is to infer the set of change-points $\tau$ and the set of parameters $\{\mu_{\tau_l}\}_{l \in \{1,...,D_{\tau}\}}$. We make the following hypotheses on the SGS:

1. The graph signals are [i.i.d.]{} with respect to the temporal domain.

2. The graph signals follow a multivariate normal distribution.

3. If $t \in \{\tau_{j-1}+1,\tau_{j-1}+2,...,\tau_{j}\}$, then $y_t-\mu_{\tau_j}$ is stationary with respect to the GSO $S$. This derives from the stationarity of $y_t$ itself.

4. The graph signals admit a sparse representation with respect to the basis defined by the columns of $U$ ([i.e.]{} the eigenvectors of $S$). That is, there exists $I \subsetneq \{1,...,p\}$ such that $\hat{y}_t^{(j)}=0$ for all $j \in I^\mathsf{c}$, where $I^\mathsf{c}$ is the complement of set $I$, $\forall t>0$.

5. $S$ is a normal matrix with all its eigenvalues distinct, and $S$ remains constant throughout the observation time-horizon.
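To make the two properties and hypotheses 1-4 concrete, the sketch below generates a toy SGS that is stationary with respect to a given GSO by filtering white Gaussian noise ([Property\[prop:GF\_stationarity\]]{}) and adding a sparse spectral mean; it then recovers an empirical PSD as the variance of the GFT coefficients ([Property\[Property:independence\]]{}). It reuses the matrix $U$ from the GFT snippet above; the frequency response and the spectral mean are arbitrary illustrative values, not the ones used in our experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_stationary_signals(U, h, n_samples, mu_tilde=None):
    """Draw graph signals (rows of the returned array) that are
    stationary w.r.t. the GSO whose eigenvector matrix is U, by
    filtering white noise with frequency response h (graph-filter
    property above); the resulting PSD is h**2.  mu_tilde is an
    optional spectral-domain mean, sparse under hypothesis 4.
    U is assumed real and orthogonal here."""
    p = U.shape[0]
    H = U @ np.diag(h) @ U.T                  # graph filter H = U diag(h) U*
    W = rng.standard_normal((n_samples, p))   # white noise, covariance I_p
    Y = W @ H.T                               # y_t = H w_t, stored row-wise
    if mu_tilde is not None:
        Y = Y + U @ mu_tilde                  # constant mean, vertex domain
    return Y

# toy usage, reusing U from the GFT example above (values are illustrative)
h = np.array([1.0, 0.5, 0.0, 0.0])            # sparse frequency response
mu_tilde = np.array([2.0, -1.0, 0.0, 0.0])    # sparse spectral mean
Y = sample_stationary_signals(U, h, n_samples=5000, mu_tilde=mu_tilde)
Y_tilde = Y @ U                               # row-wise GFT of the stream
P_hat = Y_tilde.var(axis=0)                   # empirical PSD, approx. h**2
```

The empirical `P_hat` is approximately `h**2`, and the GFT coefficients outside the common support of `h` and `mu_tilde` stay at zero; this is precisely the sparsity that the detector introduced below exploits.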
The problem is illustrated in [Fig.\[fig:CP\_GS\]]{} through an example where we can identify four different segments, [i.e.]{} $D_{\tau}=|\tau|-1=4$ change-points.

![An example stream of graph signals (SGS) with four change-points in the mean (according to our problem formulation, the end of the sequence counts as a change-point). Successive segments have different color. The color of the graph nodes represents the mean of the signal during the first segment of the observed signal. The signal observed at each node evolves through time as shown in the line plots next to the nodes. At some timestamps the mean of the graph signal exhibits a change in a subset of the nodes. The change-points signify changes in the spectral representation of the signals. []{data-label="fig:CP_GS"}](Change_point_Graph_Signal.pdf){width="1\linewidth"}

As we suppose that $S$ does not change over time, the stationarity of the graph signals with respect to the graph implies that the covariance matrix $\Sigma_y=U {\operatorname{diag}}({P}_y) U^*$ remains unchanged too. Then the average log-likelihood of the SGS can be written as: $${\label{eq:loglikelihood}} \begin{aligned} L(\mu,\tau) =-\sum_{l=1}^{D_{\tau}} \sum_{t=\tau_{l-1}+1}^{\tau_{l}} \sum_{i=1}^{p} \left[{ \frac{(\hat{y}^{(i)}_t-\hat{\mu}^{(i)}_{\tau_l})^2}{2 T P^{(i)}_y}+\frac{\log P^{(i)}_y}{2 T}}\right]. \end{aligned}$$ This formulation can be seen as a way to *translate the signal from the vertex domain to the spectral domain*, where the sample becomes independent of the graph structure according to [Property\[Property:independence\]]{}.

[***Penalized cost function for an SGS with sparse GFT representation*[.]{}**]{} The log-likelihood of the SGS can be used to define the cost function to minimize in order to detect the change-points. Since many graph signals observed in real applications can be accurately approximated by a subset of Graph Fourier frequencies, it is necessary to further account for this feature in the means of each segment [@Perraudin2017; @Marques2017; @Huang2016]. This justifies adding an $\ell_1$ penalization term in the formulation of the problem. Furthermore, since the number of change-points is also unknown, we add a penalization term $pen(d)$. The overall optimization problem for the change-point detection is written as: $${\label{eq:Change-point_detection_problem}} \begin{aligned} \!\!\!\!\!\!\!\!\!(\hat{d}, \hat{\tau}(\hat{d}), {\hat{\tilde{\mu}}_{\hat{\tau}}}(\hat{d}))&= (\hat{d}, \{\hat{\tau}_0,\hat{\tau}_1,...,\hat{\tau}_{d} \}, \{\hat{{\tilde{\mu}}}_0,\hat{{\tilde{\mu}}}_1,...,\hat{{\tilde{\mu}}}_{d} \} ) \\ & := \operatorname*{arg\,min}_{d \in \{1,...,T\}} \operatorname*{arg\,min}_{\tau \in {\mathcal{T}}_{T}^{d} } \operatorname*{arg\,min}_{{\tilde{\mu}}_{\tau_1},...,{\tilde{\mu}}_{l}} C_{T} (\tau,{\tilde{\mu}},\tilde{Y}) + pen(d) \\ &= \operatorname*{arg\,min}_{d \in \{1,...,T\}} \operatorname*{arg\,min}_{\tau \in {\mathcal{T}}_{T}^{d}} \sum_{l=1}^{d} \left\{ \operatorname*{arg\,min}_{{\tilde{\mu}}_{\tau_1},...,{\tilde{\mu}}_{\tau_{d}}} \!\!\left[ \sum_{t=\tau_{l-1}+1}^{\tau_{l}} \sum_{i=1}^{p} {\frac{(\tilde{y}^{(i)}_t-{\tilde{\mu}}^{(i)}_{\tau_l})^2}{T({P}_y^{(i)})}} \right. \right. \\ & \left. \left.
+ \lambda_{l} \frac{\sum_{i=1}^{p} I_l | {\tilde{\mu}}^{(i)}_{\tau_l}|}{T} \right] \right\} +\frac{d}{T}\left(c_1+c_2 \log(\frac{T}{d})\right) ,\\ \end{aligned}$$ where $C_{T} (\tau,{\tilde{\mu}},\tilde{Y})$ represents the $\ell_1$ penalized least squares cost function, $\lambda_{l}$ is the penalization constant leading to the desired sparsity of the GFT, which a priori is segment-specific, $I_{l}=\tau_{l}-\tau_{l-1}$ denotes the length of the $l$-th segment, and ${\mathcal{T}}_{T}^{d}$ is the set of all possible segmentations of the set $\{0,...,T\}$ into $d$ segments.

[Problem\[eq:Change-point\_detection\_problem\]]{} requires estimating the GFT of the mean of the graph signals, which remains segment-wise constant. The separability of the cost function implies that this parameter depends just on the observations belonging to each of the segments delimited by the change-points. Moreover, this formulation leads to a closed-form solution for ${\bar{{\tilde{\mu}}}}_{\tau_l}^{(i)}$: $${\label{eq:mean}} {\bar{{\tilde{\mu}}}}_{\tau_{l}}^{(i)} = \operatorname{sign}\left({\frac{\sum_{t=\tau_{l-1}+1}^{\tau_{l}}\tilde{y}_{t}^{(i)}}{I_l}}\right)\max\left( {\left|\frac{\sum_{t=\tau_{l-1}+1}^{\tau_{l}}\tilde{y}_{t}^{(i)}}{I_l}\right|}- \frac{\lambda_l {P}_y^{(i)}}{2} ,0 \right)\!\!.$$ Thanks to this formulation, it is easy to see how we can find the precise change-points using dynamic programming. The final method can be found in Alg. \[alg:l1changepointdetector\]:

1. Estimate the GFT of the dataset $\tilde{Y}=Y U$.

2. Compute an estimation of ${P}_y$ using $w$ observations.

3. Choose ${\displaystyle\hat{d}:=\operatorname*{\arg\!\min}_{d=\{1,...,D_{\max}\}}} C_T(\hat{\tau}(d),{\hat{\tilde{\mu}}_{\hat{\tau}}}(d),\hat{Y})+\frac{d}{T}\left(c_1+c_2 \log(\frac{T}{d})\right)$.

4. Return $\hat{\tau}(\hat{d})$ and ${\hat{\tilde{\mu}}_{\hat{\tau}}}(\hat{d})$.

[***Choosing the right constants $\lambda$, $c_1$, $c_2$*[.]{}**]{} Even if [Problem\[eq:Change-point\_detection\_problem\]]{} is easy to solve, it requires setting the $\lambda_{l}$ parameter, related to the sparsity of the graph signals, and a penalization term $pen(d)$ that allows us to infer the number of change-points. This problem is not trivial since the number of possible solutions depends on the time-horizon $T$ and the number of nodes $p$; this feature hinders an asymptotic analysis. We require penalization terms that have good performance in practice and depend on $p$ and $T$. Following the model selection approach, we can obtain an oracle-type inequality for the estimators $\hat{\tau}(\hat{d})$ and ${\hat{\tilde{\mu}}_{\hat{\tau}}}(\hat{d})$. Nevertheless, such an analysis only allows us to infer the shape of $pen(d)$, which depends on unknown constants $c_1$ and $c_2$, and a lower bound for $\lambda$. These elements are not enough to use the method in practice; therefore, we propose the alternative [Alg.\[alg:modelselectionchangepointdetector\]]{}, which is further detailed in [Sec.\[sec:Model\_selection\]]{}:

1. Estimate the GFT of the dataset $\tilde{Y}=Y U$.

2. Compute an estimation of ${P}_y$ using $w$ observations.
3. Find $K_1,K_2,K_3$ using the slope heuristic.

4. Solve: $${\label{eq:op_problem_2}} (\hat{\lambda},\hat{d}):=\operatorname*{\arg\!\min}_{ (\lambda \in \Lambda,d=\{1,...,D_{\max}\})} {\textstyle C_{\text{LSE}} (\hat{\tau}(d,D_{m_{\lambda}}),{\hat{\tilde{\mu}}_{\hat{\tau}}}^{\text{LSE}}(d,D_{m_{\lambda}}))+K_1\frac{D_{m_{\lambda}}}{T}+\frac{d}{T}\left(K_2+K_3 \log\left(\frac{T}{d}\right)\right)}$$

5. Keeping the segmentation $\hat{\tau}(\hat{d},\hat{D}_{m_{\hat{\lambda}}})$ and $\hat{\lambda}$, recover ${\hat{\tilde{\mu}}_{\hat{\tau}}}(\hat{d})$ via [Eq.\[eq:mean\]]{}.

6. Return $\hat{\tau}(\hat{d},\hat{D}_{m_{\hat{\lambda}}})$ and ${\hat{\tilde{\mu}}_{\hat{\tau}}}(\hat{d})$.

Both algorithms require the knowledge of the PSD of the SGS. We can estimate it via a maximum likelihood approach on observations belonging to a segment where the graph signals are known to share the same mean. However, it has been shown that the maximum likelihood estimator has a high variance and requires too many observations before achieving a good approximation. The estimator proposed by [@Perraudin2017] requires a smaller number of samples and its computation scales with the number of connections in the graph (which is sparse in most applications). The idea of the estimator is based on [Property\[prop:GF\_stationarity\]]{}: once the vertex domain of a stationary graph signal is known, it is possible to use different filters to focus on different regions of the graph, and then use this information to reconstruct the PSD.

Model selection approach {#sec:Model_selection}
========================

The problem of detecting a change in the mean of an SGS can be written as a generalized linear Gaussian model after preprocessing the data and under the hypothesis of normality. With regard to the preprocessing, we detect the change-points on the GFT of the SGS, that is, on $\tilde{Y}$ instead of $Y$, and we suppose that we have standardized $\tilde{Y}$ such that the variance of all the GFT coefficients is $\epsilon = 1$. Under the aforementioned conditions, we define as follows an isonormal process $(W({\tilde{\mu}}))_{{\tilde{\mu}}\in \operatorname*{\mathbb{R}}^{T \times p}}:W({\tilde{\mu}}):=\frac{\operatorname{tr}({{\eta}^{{\mathsmaller {{\mkern-1.5mu\mathsf{T}}}}}\tilde{\mu}})}{T}$, where $\eta \in \operatorname*{\mathbb{R}}^{T \times p}$ is a matrix whose rows follow a centered multivariate Gaussian distribution with covariance matrix $\mathbb{I}_p$. The generalized Gaussian process related to the SGS can be written as: $${\label{eq:GGP}} \hat{Y}_{\epsilon}({\tilde{\mu}})=\frac{{\operatorname{tr}({\tilde{\mu}}^{*{{\mathsmaller {{\mkern-1.5mu\mathsf{T}}}}}} {\tilde{\mu}})}}{T}+\epsilon W({\tilde{\mu}}).$$ This formulation enables us to use techniques from the model selection literature [@Massart2003] in order to design the penalization term $pen(d)$ related to the number of change-points and to derive oracle-type inequalities for the performance of the proposed estimators described in [Alg.\[alg:l1changepointdetector\]]{} and [Alg.\[alg:modelselectionchangepointdetector\]]{}.

[Theorem\[Th:main\_result\]]{} is an *oracle inequality* that provides insight into how [Alg.\[alg:modelselectionchangepointdetector\]]{} will behave with respect to the time-horizon $T$ and the size of the network $p$.
Furthermore, it gives us a guideline towards choosing $\lambda_l$ and the number of change-points in order to minimize the penalized mean-squared criteria, which is one of the differences of our work to the change-point detection algorithms analyzed in [@Lebarbier2003; @Arlot2019] that are based in model selection too, but they focus on mean-squared criteria. [\[Th:main\_result\]]{} Assume that: $${\label{eq:constants}} \begin{aligned} \lambda_{l}=\lambda \geq \frac{(3\sqrt{2}) \epsilon \sqrt{\log p + L}}{T}\ \ \ and \ \ \ pen(D_{\tau})=\frac{D_{\tau}}{T}\left(c_1+c_2 \log\left(\frac{T}{D_{\tau}}\right)\right), \end{aligned}$$ where $c_1 \geq 6 \sqrt{2} \epsilon^2$, $c_2 \geq 3 \sqrt{2}\epsilon^2$ and $L$ is such that $L > \log 2 $. Then, there exists an absolute constant $C>0$ such that $${\label{inq:oracle_main_result}} \begin{aligned} \!\!\!\!\!{\mathbb{E}}\left[\frac{{\left\lVert{\hat{\tilde{\mu}}_{\hat{\tau}}}-{\tilde{\mu}}^*\right\rVert}^2_F}{T}+\lambda {\left\lVert{\hat{\tilde{\mu}}_{\hat{\tau}}}\right\rVert}_{[\hat{\tau}]}\!\!+pen(\hat{d})\right] \leq \,C(K) \!\!\left[\Bigg(\!\inf_{\tau \in {\mathcal{T}}} \bigg(\!\inf_{\substack{{\tilde{\mu}}\in F_{\tau} \\ {\left\lVert{\tilde{\mu}}\right\rVert}_{[\tau]}<+\infty}}\!\!\!\! \frac{{\left\lVert{\tilde{\mu}}-{\tilde{\mu}}^*\right\rVert}^2_F}{T} +\lambda {\left\lVert{\tilde{\mu}}\right\rVert}_{[\tau]}\bigg)\right. \\ \!\!\!\!\!\!\! +\,pen(D_\tau) \Bigg) + 2 \lambda \epsilon + \left(\!\! 1+\frac{1}{(e^{\gamma}-1)(e-1)}\right)\epsilon^2 \Bigg]\!, \end{aligned}$$ where ${\left\lVert{\tilde{\mu}}\right\rVert}_{[\tau]}:=\frac{1}{T}\sum_{l=1}^{D_{\tau}} I_{\tau_l} {\left\lVert{\tilde{\mu}}_{\tau_l}\right\rVert}_1$, ${\mathcal{T}}$ is the set of all possible segmentations of the SGS of length $T$, $\gamma= \frac{1}{K}(\sqrt{\log p+ L}-\sqrt{\log p + \log{2}})$ and $K>1$ is a given constant. The proof, which can be found in the supplementary material, follows similar arguments to [@Massart2011]. Specifically, it requires first to define the set of models of our interest. In this case, the list of candidate models is a list indexed by the possible segmentations and $\ell_1$ balls of length $m \epsilon$, where $m \in \mathbb{N}^*$. The following lemma is a direct consequence of Corollary 4.3 in [@Giraud2015]. [\[lem:sparsity\]]{} For any $L > 0$, the solution to [Problem\[eq:Change-point\_detection\_problem\]]{} estimator with tuning parameter $$\!\! \lambda = 3 \epsilon \sqrt{2(\log p+L)},$$ fulfills with probability at least $1-e^{-L}$ the risk bound: $${\label{ineq:sparsity}} \begin{aligned} \!\!\frac{{\left\lVert{\hat{\tilde{\mu}}_{\hat{\tau}}}- {\tilde{\mu}}^*\right\rVert}_F^2}{T} \leq {\sum_{l=1}^{D_{\tau}} \sum_{t=\tau_{l-1}+1}^{\tau_l} \inf_{\substack{{\tilde{\mu}}\neq 0,\\ {\tilde{\mu}}\in R^p}} \frac{{\left\lVert{\tilde{\mu}}- {\tilde{\mu}}_t^*\right\rVert}_2^2}{T}+\frac{18\epsilon^2(L+\log p)}{ T \Phi({\tilde{\mu}})^2} {\left\lVert{\tilde{\mu}}\right\rVert}_0}, \end{aligned}$$ where $\Phi({\tilde{\mu}})$ is known in the literature as compatibility constant. Both results, [Theorem\[Th:main\_result\]]{} and [Lemma\[lem:sparsity\]]{}, provide details of the performance of the algorithm, when applied in practice, with respect to $\lambda$. [Theorem\[Th:main\_result\]]{} concludes that *the value of $\lambda_l$ should be the same for all the segments*, while [Lemma\[lem:sparsity\]]{} relates the value of $\lambda$ with the sparsity of the signal. 
We can see that there is a trade-off between the performance of the estimator and its ability to recover the sparsity of the signal: on the one hand, we need a low value of $\lambda$ in order to reduce the bias of the estimator (see [Ineq.\[inq:oracle\_main\_result\]]{}), while on the other hand we need a higher value of $\lambda$ to recover the sparsity of the signal with higher probability ([Ineq.\[ineq:sparsity\]]{}). [Theorem\[Th:main\_result\]]{} provides lower bounds for the values of $c_1$ and $c_2$. Nevertheless, in practice, when fixing $c_1$ and $c_2$ at these values, the obtained results were not satisfactory. Finding the right constants in model selection is a common and difficult problem [@Arlot2019_b]. In some cases, it is possible to use a technique called the slope heuristic, which recovers the constants using a linear regression of the empirical risk against the elements of a penalization term. However, the curve defined by the cost function including the $\ell_1$ term does not tend to remain constant as the number of change-points increases, a feature that the slope heuristic relies on. For that reason, we propose the alternative [Alg.\[alg:modelselectionchangepointdetector\]]{}. The idea is to replace the $\ell_1$ penalization term by a Variable Selection penalization term. For each element of a given set of penalization parameters $\Lambda$, we solve a Lasso problem over the whole stream of graph signals. This allows us to keep all the relevant frequencies. Then, we solve multiple change-point detection problems for different levels of sparsity. We can deduce the right level of sparsity as well as two constants $K_2$ and $K_3$ related to the number of change-points via the slope heuristic. This last statement is validated by [Theorem\[Th:variable\_selection\_oracle\]]{} and the experiments in [Sec.\[sec:exps\]]{}. [\[Th:variable\_selection\_oracle\]]{} Let us denote by $S_{D_m}$ the space generated by $m$ specific elements of the standard $\operatorname*{\mathbb{R}}^p$ basis, and let us define the set $S_{(D_m,\tau)}$ as: $${\label{def:set_variable_selection}} S_{(D_m,\tau)}:=\{ {\tilde{\mu}}\in F_{\tau}\ |\ {\tilde{\mu}}_{\tau_l} \in S_{D_m}, \forall l \in \{0,...,D_{\tau}\} \}.$$ Let $\hat{\tau}$ and ${\hat{\tilde{\mu}}_{\hat{\tau}}}$ be solutions to the optimization problem \[eq:op\_problem\_2\]. Then, there exist constants $K_1$, $K_2$, $K_3$ such that, if the penalty is defined for all $(m,\tau) \in M$, where $M \subset \{1,...,p\} \times {\mathcal{T}}$, as $${\label{eq:constants_model_selection}} \begin{aligned} pen(m,\tau)=K_1 \frac{D_m}{T}+\frac{D_{\tau}}{T}\left(K_2+K_3 \log \left(\frac{T}{D_{\tau}}\right)\right)\!, \end{aligned}$$ then there exist a positive constant $C(K)$ and $K>1$ such that: $$\begin{aligned} {\mathbb{E}}\left[\frac{{\left\lVert{\hat{\tilde{\mu}}_{\hat{\tau}}}-{\tilde{\mu}}^*\right\rVert}^2_F}{T}\right] \leq & \ C(K) \left[\inf_{(m,\tau) \in M } \bigg(\inf_{{\tilde{\mu}}\in S(D_m,\tau)} \frac{{\left\lVert{\tilde{\mu}}-{\tilde{\mu}}^*\right\rVert}^2_F}{T} +pen(m,\tau) \bigg) \right. \\ & + \left. \left(1+\left(\frac{1}{(e^\gamma-1)(e-1)}\right)\right) \epsilon^2 \right]\!. \end{aligned}$$ [Theorem\[Th:variable\_selection\_oracle\]]{} is proved in the supplementary material and is a consequence of Theorem 4.18 of [@Massart2003]. 
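Concretely, the two-stage procedure that [Theorem\[Th:variable\_selection\_oracle\]]{} validates can be sketched as follows. The code below is an illustrative reading of [Alg.\[alg:modelselectionchangepointdetector\]]{}, not the authors' implementation: the per-$\lambda$ Lasso over the whole stream is realized here by soft-thresholding the global GFT mean (a simplification that assumes standardized coefficients), the segmentation costs $C_{\text{LSE}}$ are assumed to be supplied by a change-point routine such as the one sketched earlier, and the slope-heuristic helper uses a crude reweighted least squares where the paper uses a robust regression.

```python
import numpy as np

def lasso_support(Y_tilde, lam):
    """Stage 1: Lasso over the whole stream (treated as one segment) -> retained frequencies."""
    m = Y_tilde.mean(axis=0)
    mu = np.sign(m) * np.maximum(np.abs(m) - lam / 2.0, 0.0)
    return np.flatnonzero(mu)            # indices of the GFT coefficients that are kept

def slope_heuristic(costs, penalty_shapes, complexities, T, iters=5):
    """Recover penalty constants (e.g. K1, K2, K3) by regressing the costs of the
    high-complexity models on the penalty shape terms and multiplying the slopes by -2."""
    costs, X = np.asarray(costs, float), np.asarray(penalty_shapes, float)
    keep = np.asarray(complexities) > 0.6 * T / np.log(T)
    Xk = np.column_stack([np.ones(keep.sum()), X[keep]])
    ck, w = costs[keep], np.ones(keep.sum())
    for _ in range(iters):               # simple reweighting as a stand-in for robustness
        beta, *_ = np.linalg.lstsq(Xk * w[:, None], ck * w, rcond=None)
        r = ck - Xk @ beta
        w = 1.0 / (1.0 + np.abs(r) / (np.median(np.abs(r)) + 1e-12))
    return -2.0 * beta[1:]               # intercept discarded

def select_model(lse_costs, supports, T, K1, K2, K3):
    """Stage 2: minimize the penalized criterion over (lambda, d); lse_costs[(lam, d)]
    holds C_LSE of the best d-change-point segmentation restricted to supports[lam]."""
    def crit(lam, d):
        D_m = len(supports[lam])
        return lse_costs[(lam, d)] + K1 * D_m / T + d / T * (K2 + K3 * np.log(T / d))
    return min(lse_costs, key=lambda key: crit(*key))
```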
It is important to mention that the results stated in both theorems do not apply only to detecting change-points in the SGS, but to any case where the problem can be restated as in [Eq.\[eq:GGP\]]{}. Numerical experiments ===================== [\[sec:exps\]]{} As mentioned earlier, a key hypothesis in our approach is the stationarity of the graph signals. An alternative definition of graph stationarity says that if we apply a graph filter $H$ with a frequency response $h(\theta)$ to white noise following a standard normal distribution, we get a stationary signal. This definition and the one given in [Definition\[Stationarity2\]]{} are equivalent when the GSO is normal and all its eigenvalues are different [@Perraudin2017]. In this case we use the Laplacian of the graph as the GSO. The distance between two adjacent change-points is generated as an observation of an exponential distribution with expectation $20$; we add $30$ to this result to guarantee a minimum distance of $30$ time stamps. We generate the SGS over a graph of $500$ nodes. We generate $50$ different random instances of each scenario. The particularities of each scenario are described below: *Scenario I:* We generate Erd[ő]{}s–Rényi (ER) graphs with a fixed link creation probability $p_{_{\operatorname{ER}}}=0.3$ and the frequency response of the filter defined as: $h(\theta) \propto \frac{1}{\log{\theta+10}+1}$. We generate the number of change-points via a Poisson distribution with expectation 5. Before the first change-point, the mean of the graph signals is a linear combination of the first $100$ eigenvectors of the Laplacian matrix; after each of the change-points, $20$ random coefficients of the Graph Fourier transform are changed, and the mean is then this new linear combination of eigenvectors. In all cases the coefficients of the linear combinations were generated uniformly at random in the interval $[-5,5]$. *Scenario II:* The graph structure is generated by a Barabasi-Albert (BA) model in which each incoming node is connected to $m$ nodes, $m=4$. The spectral profile of the filter is proportional to the density function of a Gamma distribution, $h(\theta) \propto p_{_{\Gamma(20,5)}}(\theta)$. Then, $4$ change-points are generated. Before the first change-point, the mean of the graph signals is a linear combination of the first $20$ eigenvectors of the Laplacian matrix; after the first change, the node with the highest degree and all its neighbors change their mean; after the second change-point the first $5$ nodes with the highest degrees modify their mean; after the third change-point, $20$ nodes of the graph chosen at random get their mean changed. In all cases, the mean is generated uniformly at random in the interval $[-5,5]$. In this section, we analyze the performance of our algorithm. We also analyze the differences in performance with the kernel-based detector introduced in [@Harchaoui2008], which also uses model selection and the slope heuristic to identify the number of change-points [@Arlot2019]. As we are interested in detecting changes in the mean, we show the results obtained by using the linear kernel $k(x,y) = {\langle x,y \rangle}$, the Laplacian-based kernel $k(x,y)= {x^{{{\mathsmaller {{\mkern-1.5mu\mathsf{T}}}}}}} S y$ and the Gaussian kernel $k(x,y) = \exp\left(- \frac{{\left\lVert x-y\right\rVert}^2}{2h}\right)$, where $h$ is chosen according to the median heuristic. The detectors built using these kernels will be referred to as Linear, Laplacian and Gaussian. 
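For reference, Scenario I can be generated along the following lines (an illustrative sketch with a fixed seed; it mirrors the description above but is not the exact experimental code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500                                   # number of nodes (as in the experiments)
p_er = 0.3                                # ER link-creation probability

# Erdos-Renyi graph and its Laplacian, used here as the GSO
A = (rng.random((n, n)) < p_er).astype(float)
A = np.triu(A, 1); A = A + A.T
Lap = np.diag(A.sum(1)) - A
theta, U = np.linalg.eigh(Lap)            # graph frequencies and GFT basis

# stationarity: filter white noise with h(theta) proportional to 1/(log(theta+10)+1)
h = 1.0 / (np.log(theta + 10.0) + 1.0)

# change-point locations: exponential gaps (mean 20) shifted by 30
n_cp = rng.poisson(5)
gaps = 30 + rng.exponential(20.0, size=n_cp + 1).astype(int)
taus = np.cumsum(gaps); T = taus[-1]

# segment means: start from the first 100 Laplacian eigenvectors, then change
# 20 random GFT coefficients after each change-point
coef = np.zeros(n)
coef[:100] = rng.uniform(-5, 5, 100)
means, starts = [], np.concatenate(([0], taus[:-1]))
for _ in range(n_cp + 1):
    means.append(U @ coef)
    idx = rng.choice(n, 20, replace=False)
    coef = coef.copy(); coef[idx] = rng.uniform(-5, 5, 20)

# assemble the stream of graph signals
Y = np.empty((T, n))
for mean, s, e in zip(means, starts, taus):
    noise = rng.standard_normal((e - s, n))
    Y[s:e] = mean + noise @ (U * h) @ U.T  # graph-filtered white noise
```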
[Alg.\[alg:modelselectionchangepointdetector\]]{} is referred to as Variable Selection when we use the real values of $P_y$ and as Approx. Variable Selection when we approximate it. In order to estimate the $P_y$ of the signal, a parameter required for our proposed algorithms, we follow the technique described in [@Perraudin2017]. We use $300$ graph Gaussian filters over the observed signal and $10$ over white noise. In the synthetic scenarios, we use the first $50$ observations of the SGS, that is $1/10$ of the number of nodes. We implement the slope heuristic described in [@Arlot2019] to recover the parameters $K_1$, $K_2$, and $K_3$: that is, we perform a robust linear regression of the cost functions of the models with high complexity against the penalization terms, and then we multiply the linear coefficients by $-2$. By high complexity we refer to the models whose inferred number of change-points is bigger than $0.6\!\cdot\!\frac{T}{\log{T}}$. [***Results*[.]{}**]{} In both considered scenarios our method performs very well and is not affected by the estimation of the PSD: the distance with respect to the real change-points (Hausdorff distance) is small given the minimum gap between change-points. Almost all the points are correctly classified as change-points or not (Rand Index close to $1$). All the change-points are recovered (Recall equals $1$). However, the method tends to slightly overestimate the number of change-points (Precision around $0.75$). These spurious change-points could be easily filtered out as they define segments of very small length, as clearly indicated by the Hausdorff distance. For the kernel-based detectors, we estimate the equivalent of the parameters $K_2$ and $K_3$ via the slope heuristic and we obtain a slightly better performance. Nevertheless, this method does not allow us to extract any information about the mean of the signal in each of the segments, let alone its sparse GFT representation. Conclusion ========== In this work we presented an offline change-point detection approach for shifts in the mean of a stream of graph signals that automatically infers the number of change-points and the level of sparsity of the signal in its Graph Fourier representation. The formulation has the advantage of being easy to solve via dynamic programming and enjoys interesting theoretical guarantees, such as an oracle-type inequality. The performance of our algorithm is comparable to that of the state-of-the-art kernel-based methods for changes in the mean of a multivariate signal, with the advantage that we can also recover the coefficients of the Graph Fourier Transform, which can be used to interpret the change. The techniques and results of this paper can be generalized to similar situations where we aim to spot change-points in a stream of multivariate signals that supports a sparse representation in a given basis. Proving the consistency of the detected change-points is among our plans for future work. Supplementary material {#supplementary-material .unnumbered} ====================== Proof of Theorems 1 and 2 ========================= In this appendix, we present the proofs of [Theorem\[Th:main\_result\]]{} and [Theorem\[Th:variable\_selection\_oracle\]]{}. 
For the sake of completeness, we introduce basic concepts of the model selection literature and we restate some results which are key components in proving the oracle inequalities presented in this work. The model selection framework offers an answer to the question: how to choose the function $pen(d)$ and the parameter $\lambda$ so that we recover the right number of change-points and the sparsity of the signal in its Graph Fourier representation at the same time. Given a separable Hilbert space ${\mathbb{H}}$, a generalized linear Gaussian model is defined as: $${\label{eq:GLGM}} Y_{\epsilon}(g)= {\langle f,g \rangle}_{{\mathbb{H}}}+ \epsilon W(g), \ \text{ for all } g \in {\mathbb{H}},$$ where $W$ is an isonormal process (Definition \[def:Isonormal\]). [\[def:Isonormal\]]{} A Gaussian process $(W(g))_{g \in {\mathbb{H}}}$ is said to be isonormal if it is centered with covariance given by ${\mathbb{E}}[W(h) W(g)]= {\langle h,g \rangle}_{{\mathbb{H}}}$. An isonormal process is the natural extension of the notion of a standard normal random vector to the infinite-dimensional case. As stated in the main text, the change-point detection problem can be restated as a generalized linear Gaussian model, where ${\mathbb{H}}= \operatorname*{\mathbb{R}}^{T \times p}$: the dot product ${\langle h,g \rangle}_{{\mathbb{H}}}$ is the one inducing the Frobenius norm divided by $T$. Finally, the isonormal process $(W({\tilde{\mu}}))_{{\tilde{\mu}}\in \operatorname*{\mathbb{R}}^{T \times p}}$ is defined as: $$W({\tilde{\mu}}):=\frac{\operatorname{tr}({{\eta}^{{\mathsmaller {{\mkern-1.5mu\mathsf{T}}}}}\tilde{\mu}})}{T},$$ where $\eta \in \operatorname*{\mathbb{R}}^{T \times p}$ is a matrix whose rows follow a centered multivariate Gaussian distribution with covariance matrix $\mathbb{I}_p$. It is easy to show that $W({\tilde{\mu}})$ satisfies [Definition\[def:Isonormal\]]{}. [Theorem\[Theorem:model\_selection\]]{}, which can be found as Theorem 4.18 in [@Massart2003], details the model selection procedure and provides us with an oracle-type inequality for this kind of estimator. The result applies to a more general model selection procedure, which allows us to deal with non-linear models. Both [Theorem\[Th:main\_result\]]{} and [Theorem\[Th:variable\_selection\_oracle\]]{} are direct consequences of this result. [\[Theorem:model\_selection\]]{} Let $\{S_m\}_{m \in M}$ be some finite or countable collection of closed convex subsets of ${\mathbb{H}}$. It is assumed that for any $m \in M$, there exists some a.s. continuous version $W$ of the isonormal process on $S_m$. Assume furthermore the existence of some positive and non-decreasing continuous function $\phi_m$ defined on $(0, + \infty)$ such that $\phi_m(x)/x$ is non-increasing and $${\label{ineq:gaussian_process}} 2 {\mathbb{E}}[ \sup_{g \in S_m} \left( \frac{W(g)-W(h)}{{\left\lVert g-h\right\rVert}^2 + x^2} \right)] \leq x^{-2} \phi_m(x)$$ for any positive $x$ and any point $h$ in $S_m$. 
Let define $D_m>0$ such that $${\label{ineq:penalization_penm}} \phi_m( \epsilon \sqrt{D_m})= \epsilon D_m.$$ and consider some family of weights $\{ x_m \}_{m \in M}$ such that $${\label{ineq:sum_weights}} \sum_{m \in M} e^{-x_m} = \Sigma < \infty.$$ Let $K$ be some constant with $K>1$ and take $${\label{ineq:penalty_theorem_1}} pen(m) \geq K \epsilon^2 \left( \sqrt{D_m} + \sqrt{2 x_m} \right)^2.$$ Set for all $g \in H$, $\gamma (g)={\left\lVertg\right\rVert}^2 - 2 Y_{\epsilon}(g)$ and consider some collection of $p_m-$approximate penalized least squares estimator $\{\hat{f}_m\}_{m \in M}$ i.e, for any $m \in M$, $$\gamma \left( \hat{f}_m \right) \leq \gamma (g) + \rho, \text{ for all } g\in S_m.$$ Defining a penalized $\rho-$LSE as $\hat{f}=\hat{f}_{\hat{m}}$, the following risk bounds holds for all $f \in {\mathbb{H}}$ $${\mathbb{E}}\left[ {\left\lVert\hat{f}-f\right\rVert}^2 \right] \leq C(K) \left[ \inf_{m \in M} ( d(f,S_m)^2 + pen(m) ) + \epsilon ( \Sigma+1) + \rho \right].$$ [Theorem\[Theorem:model\_selection\]]{} require us to have a predefined list of estimators that will be related with a list of closed convex subsets of ${\mathbb{H}}$. It states that we are able to recover a penalization term $pen(m)$ which allow us to find a model satisfying an oracle kind inequality if we manage to control a kind of standardized version of the isonormal process and to design a set of weights for the elements in our list of candidate models. [Theorem\[Theorem:Massart\]]{} is a restricted version of [Theorem\[Theorem:model\_selection\]]{} which is more handy when dealing with the $\ell_1$ penalization term. This version of the theorem appears as Theorem A.1 in [@Massart2011]. [\[Theorem:Massart\]]{} Let $\{S_m\}_{m \in M}$ be a countable collection of convex and compact subsets of a Hilbert space ${\mathbb{H}}$: lets define for any $m \in M$, $$\Delta_m={\mathbb{E}}\left[{\sup_{h \in S_m} W(h)}\right],$$ and consider weights $\{x_m\}_{m \in M}$ such that $$\Sigma:= \sum_{m \in M} e^{-x_m}< \infty.$$ Let $K>1$ and assume that, for any $m \in M$, $${\label{eq:pen_m}} pen(m) \geq 2K \epsilon (\Delta_m + \epsilon x_m + \sqrt{\Delta_m \epsilon x_m}).$$ Given non negative $\rho_m,m \in M$,define a $\rho_m$-approximate penalized least squares estimator as any $\hat{f} \in S_{\hat{m}},\hat{m} \in M$, such that $$\gamma(\hat{f})+pen(\hat{m}) \leq \inf_{m \in M} \Big( \inf_{h \in S_m} \gamma(h) + pen(m)+ \rho_m\Big).$$ Then, there is a positive constant C(K) such that for all $f \in {\mathbb{H}}$ and $z>0$, with probability larger that $1-\Sigma e^{-z}$, $$\begin{aligned} \!\!\!\!\!\!\!\!\!\!\!\!\!\!{\left\lVertf-\hat{f}\right\rVert}^2+pen(\hat{m}) \leq C(K) \left[ \inf_{m \in M} \Big( \inf_{h \in S_m} {\left\lVertf-h\right\rVert}^2 + pen(m) + \rho_m\Big) +(1+z) \epsilon^2 \right]\!\!. \end{aligned}$$ After integrating the inequality with respect to $z$ leads to the following risk bound: $${\label{eq:oracle_inequality}} \begin{aligned} {\mathbb{E}}\left[{\left\lVertf-\hat{f}\right\rVert}^2+pen(\hat{m})\right] & \leq C(K)\! \left[ \inf_{m \in M} \Big( \inf_{h \in S_m} {\left\lVertf-h\right\rVert}^2 +pen(m) + \rho_m\Big) + (1+\Sigma) \epsilon^2\right]\!. \end{aligned}$$ Finally, we will make use of the following lemma that can be found as Lemma 2.3 in [@Massart2011], a concentration inequality for real valued random variables. [\[lem:inequality\]]{} Let $\{Z_i, i \in I\}$ be a finite family of real valued random variables. 
Let $\psi$ be some convex and continuously differentiable function on $[0,b)$, with $0 < b \leq \infty$, such that $\psi(0)=\psi'(0)=0$. Assume that $\forall \gamma \in (0,b)$ and $\forall i \in I$, $\psi_{Z_i}(\gamma) \leq \psi(\gamma)$. Then, using any measurable set $B$ with ${\mathbb{P}}[B>0]$ we have: $$\frac{{\mathbb{E}}[\sup_{i \in I} Z_i {\mathbf{1}}_B] }{{\mathbb{P}}[B]} \leq \psi^{*-1} \left(\log \frac{|I|}{{\mathbb{P}}[B]}\right)\!.$$ In particular, if one assumes that for some non-negative number $\epsilon$, $\psi({\gamma})= \frac{\gamma^2 \epsilon^2}{2} \forall \gamma \in (0,\infty)$, then: $$\frac{{\mathbb{E}}[\sup_{i \in I} Z_i {\mathbf{1}}_B] }{{\mathbb{P}}[B]} \leq \epsilon \sqrt{ 2 \log\frac{|I|}{{\mathbb{P}}(B)}} \leq \epsilon \sqrt{2 \log|I|}+ \epsilon \sqrt{2 \log\frac{1}{{\mathbb{P}}(B)}}.$$ [***Proof of Theorem 2*[.]{}**]{} Let us define the set $S_{(m,\tau)}$: $${\label{expr:lasso_set}} S_{(m,\tau)}:=\{{\tilde{\mu}}\in F_{\tau}, {\left\lVert{\tilde{\mu}}\right\rVert}_{[\tau]} \leq m \epsilon \},$$ where ${\left\lVert{\tilde{\mu}}\right\rVert}_{[\tau]}=\frac{\sum_{l=1}^{D_{\tau}} I_{\tau_l} {\left\lVert{\tilde{\mu}}_{\tau_l}\right\rVert}_1}{T} $. And $M:= \mathbb{N}^* \times {\mathcal{T}}$, where ${\mathcal{T}}$ is the set of all possible ways the segmentation of a stream of length T. We denote by $\hat{\tau}$ and ${\hat{\tilde{\mu}}_{\hat{\tau}}}$ the estimators obtained by solving the problem of [Eq.\[eq:Change-point\_detection\_problem\]]{}. Denote by $\hat{m}$ the smallest integer such that ${\hat{\tilde{\mu}}_{\hat{\tau}}}$ belongs to $S_{\hat{m}}$, [i.e.]{}$$\hat{m}=\left\lceil\frac{{|\!|}{\hat{\tilde{\mu}}_{\hat{\tau}}}{|\!|}_{[\tau]}}{\epsilon}\right\rceil\!,$$ then, $$\begin{aligned} \gamma{({\hat{\tilde{\mu}}_{\hat{\tau}}})} +\lambda \hat{m} \epsilon +pen(D_{\hat{\tau}}) & \leq \gamma{({\hat{\tilde{\mu}}_{\hat{\tau}}})} + \lambda {|\!|}{\hat{\tilde{\mu}}_{\hat{\tau}}}{|\!|}_{[\hat{\tau}]} + \lambda \epsilon + pen(D_{\hat{\tau}}) \\ & \leq \inf_{\tau \in {\mathcal{T}}} \inf_{{\tilde{\mu}}\in S_{(m,\tau)}} [ \gamma{({\tilde{\mu}})} + \lambda {\left\lVert{\tilde{\mu}}\right\rVert_{[\tau]}} + pen(D_{\tau})] + \lambda \epsilon \ \ \ \ \text{\tiny{Definition of ${\hat{\tilde{\mu}}_{\hat{\tau}}}$ and $\hat{\tau}$ }}\\ & \leq \inf_{(m,\tau) \in M} \inf_{{\tilde{\mu}}\in S_{(m,\tau)}} [ \gamma{({\tilde{\mu}})} + \lambda m \epsilon + pen(D_{\tau})] + \lambda \epsilon. \end{aligned}$$ In conclusion, we have the following result: $${\label{ineq:p_approximate}} \gamma{({\hat{\tilde{\mu}}_{\hat{\tau}}})}+pen(\hat{m},\hat{\tau}) \leq \inf_{(m,\tau) \in M} [\inf_{\hat{\mu} \in S_{(m,\tau)}} \gamma({\tilde{\mu}}) + pen(m,\tau)+ \rho],$$ where $\rho=\lambda\epsilon>0$ and $pen(m,\tau)= \lambda m \epsilon + pen(D_{\tau}) >0$, $D_{\tau}$ is the number of change-points with the exception of $\tau_0=0$. [Ineq.\[ineq:p\_approximate\]]{} implies ${\hat{\tilde{\mu}}_{\hat{\tau}}}$ is a $\rho$-approximated least squares estimator. Then, the only hypothesis that remains to be proved is Expression \[eq:pen\_m\]. We start by getting an upper bound for $\Delta_m$. By the definition of the isonormal process $(W({\tilde{\mu}}))_{{\tilde{\mu}}\in \operatorname*{\mathbb{R}}^{T \times p}}$, we know it is continuous. 
This implies it achieves its maximum at $S_{(m,\tau)}$, a compact set, let call $\hat{g}$ this point, then: $${\label{eq:W(h)}} \begin{aligned} {\mathbb{E}}[|W(\hat{g})|]=\ \ {\mathbb{E}}\left[\left|\frac{{\operatorname{tr}(\zeta^{{{\mathsmaller {{\mkern-1.5mu\mathsf{T}}}}}}\hat{g})}}{T}\right|\right]= & {\mathbb{E}}\left[ \left| \sum_{i=1}^{p} \sum_{t=1}^{T} \frac{\zeta^{(i)}_t \hat{g}^{(i)}_t}{T}\right| \right]\\ \leq & \sum_{t=1}^{T} \sum_{i=1}^{p} \left| \frac{\hat{g}^{(i)}_t}{T} \right| {\mathbb{E}}\left[\max_{\{i=1,..,p\}} |\zeta^{(i)}_t| \right] \\ \leq & \sum_{l=1}^{D_{\tau}} \frac{I_{\tau_l}}{T}{\left\lVert\hat{g}_{\tau_l}\right\rVert}_1 {\mathbb{E}}\left[ \max_{ \{i=1,..,p\}} \{ \zeta^{(i)}_t,-\zeta^{(i)}_t\} \right] \\ \leq & {\left\lVert\hat{g}\right\rVert}_{[\tau]} \sqrt{2 \log{2p}} \ \ \ \ \text{\tiny{{Lemma\,\ref{lem:inequality}}}} \\ \leq & \sqrt{2} m \epsilon \sqrt{\log{2}+\log{p}}. \ \ \ \ \text{\tiny{{Eq.\,\ref{expr:lasso_set}}}} \end{aligned}$$ Let us define the $x_{(m,\tau)}= \gamma m + D_{\tau}L(D_{\tau})$, where $\gamma>0$. $L(D_{\tau})=2+\log\frac{T}{D_{\tau}}$ is a constant that just depends on the cardinality of the segmentation induced by $\tau$. Then: $${\label{eq:weights_lasso}} \begin{aligned} \Sigma = & \left(\sum_{m \in N^*} e^{-\gamma m}\right)\left(\sum_{\tau \in {\mathcal{T}}}e^{-D_{\tau}L(D_{\tau})}\right) \\ = & \left(\frac{1}{e^{\gamma}-1}\right) \left( \sum_{d=1}^{T} e^{-d L(d)} | \{\tau \in {\mathcal{T}}, |D_{\tau}|=d\} |\right) \\ \leq & \left(\frac{1}{e^{\gamma}-1}\right) \left( \sum_{d=1}^{T} e^{-d L(d)} \binom{T}{d}\right) \\ \leq & \left(\frac{1}{e^\frac{\gamma}{T}-1}\right) \left( \sum_{d=1}^{T} e^{-\frac{d L(d)}{T}} \left(\frac{eT}{d}\right)^d\right) \\ \leq & \left(\frac{1}{e^{\gamma}-1}\right) \left( \sum_{d=1}^{T} e^{-d \left(L(d)-1- \log\frac{T}{d} \right) } \right) \\ \leq & \left(\frac{1}{e^{\gamma}-1}\right) \left(\frac{1}{e-1}\right) < \infty. \end{aligned}$$ Finally, let us fix $\eta=(3\sqrt{2}-2)^{-1}>0$, $K=\frac{3 }{2+\eta}>1$, $\gamma= \frac{\sqrt{\log p + L}-\sqrt{\log p +\log 2}}{K}$. It is clear $\gamma>0$ since $L>\log(2)$. Then by the expressions of [Eq.\[eq:W(h)\]]{} and [Eq.\[eq:weights\_lasso\]]{}, and the useful inequality $2\sqrt{ab} \leq a \eta^{-1} + b\eta $, we have: $${\label{eq:ineq_th}} \begin{aligned} 2\frac{K\epsilon}{T} \left[ \Delta_{(m,\tau)} +\epsilon x_{(m,\tau)} + \sqrt{\Delta_{(m,\tau)} \epsilon x_{(m,\tau)}} \right] & \leq 2 \frac{K \epsilon}{T} [ (1+\frac{\eta}{2}) \Delta_{(m,\tau)} + (1+\frac{\eta^{-1}}{2}) x_{(m,\tau)} \epsilon] \\ & \leq 2 \frac{K \epsilon^2}{T} \left[ (1+\frac{\eta}{2})\left( \sqrt{2} m (\sqrt{\log{p}+\log{2}} )\right) \right. + \\ & \left. (1+\frac{\eta^{-1}}{2}) \left( \gamma m + D_{\tau}L(D_{\tau}) \right) \right] \\ & \leq 3 \sqrt{2} \frac{\epsilon^2}{T} \left[ (\sqrt{\log p + \log 2} + K \gamma)m + D_{\tau}L(D_{\tau}) \right] \\ & \leq 3 \sqrt{2} \frac{\epsilon^2}{T} \left[ (\sqrt{\log p+ L})m+ D_{\tau}L(D_{\tau}) \right] \\ & \leq 3 \sqrt{2} \frac{\epsilon^2}{T}(\sqrt{\log p+ L})m + \frac{D_{\tau}}{T}\left(c_1+c_2 \log(\frac{T}{D_{\tau}})\right) \\ & \leq \lambda m \epsilon + pen(D_{\tau})= pen(m,\tau). \end{aligned}$$ Then [Eq.\[eq:pen\_m\]]{} is satisfied. 
We can conclude by [Eq.\[ineq:p\_approximate\]]{}, \[eq:weights\_lasso\] and \[eq:ineq\_th\] that, if the hypotheses of [Theorem\[Theorem:Massart\]]{} are satisfied, then there exists a positive constant $C(K)$ such that $\mu^* \in \operatorname*{\mathbb{R}}^{T \times p}$ and $z>0$, with probability larger that $1-\Sigma e^{-z}$, $$\begin{aligned} \frac{{\left\lVert{\hat{\tilde{\mu}}_{\hat{\tau}}}-\mu^*\right\rVert}_F^2}{T}+pen(\hat{m})+pen(D_{\hat{\tau}}) & \leq C(K) \left[ \inf_{(\tau,m) \in M} \inf_{{\tilde{\mu}}\in S_{(m,\tau)}} \left( \frac{{\left\lVert{\tilde{\mu}}-\mu^*\right\rVert}_F^2}{T} + \lambda m \epsilon + pen(D_{\tau}) \right) + \lambda\epsilon + (1+z)\epsilon^2 \right] \\ & \leq C(K) \left[ \inf_{\tau \in {\mathcal{T}}} \inf_{{\tilde{\mu}}\in F_{\tau}} \left( \frac{{\left\lVert{\tilde{\mu}}-\mu^*\right\rVert}_F^2}{T} + \lambda {\left\lVert{\tilde{\mu}}\right\rVert}_{[\tau]} + pen(D_{\tau}) \right) + 2 \lambda \epsilon + (1+z)\epsilon^2 \right]. \end{aligned}$$ Thanks to the last expression, we have that: $$\begin{aligned} \frac{{\left\lVert{\hat{\tilde{\mu}}_{\hat{\tau}}}-\mu^*\right\rVert}_F^2}{T} + \lambda {\left\lVert{\hat{\tilde{\mu}}_{\hat{\tau}}}\right\rVert}_{[\tau]}+pen(D_{\hat{\tau}}) & \leq C(K) \left[ \inf_{\tau \in {\mathcal{T}}} \inf_{{\tilde{\mu}}\in F_{\tau}} \left( \frac{{\left\lVert{\tilde{\mu}}-\mu^*\right\rVert}_F^2}{T} + \lambda {\left\lVert{\tilde{\mu}}\right\rVert}_{[\tau]}+ pen(D_{\tau}) \right) + 2 \lambda \epsilon + (1+z)\epsilon^2 \right]. \\ \end{aligned}$$ After integrating this inequality, we get the desired result. [***Proof of Theorem 3*[.]{}**]{} We will call $S_{D_m}$ the space generated by $m$ specific elements of the standard basis of $\operatorname*{\mathbb{R}}^p$ and let us define the set $S_{(D_m,\tau)}$ as: $$S_{(D_m,\tau)}:=\{ {\tilde{\mu}}\in F_{\tau}| {\tilde{\mu}}_{\tau_l} \in S_{D_m} \text{ for all } l \in \{0,...,D_{\tau}\} \},$$ This implies that we restrict the means defined in each of the segments to be elements of $S_{D_m}$. Let define $M \subset \{1,...,p\} \times {\mathcal{T}}$ and let denote ${\hat{\tilde{\mu}}^{\text{LSE}}_{\hat{\tau}^{\text{LSE}}}}$ and ${\hat{\tau}^{\text{LSE}}}$ the solutions to the following optimization problem: $${\label{Eq:optimization_problem_2}} \begin{aligned} ({\hat{\tilde{\mu}}^{\text{LSE}}_{\hat{\tau}^{\text{LSE}}}},{\hat{\tau}^{\text{LSE}}}):= & \operatorname*{\arg\!\min}_{ (\tau \in {\mathcal{T}},{\tilde{\mu}}\in S_{(D_m,\tau)})} \left\{ \sum_{l=1}^{D_{\tau}} \left( \sum_{t=\tau_{l-1}+1}^{\tau_{l}} \sum_{i=1}^{p} {\frac{(\tilde{y}^{(i)}_t-{\tilde{\mu}}^{(i)}_{\tau_l})^2}{T}}\right)+K_1\frac{D_{m_{\lambda}}}{T} \right. \\ & \left. +\frac{D_{\tau}}{T}\left(K_2+K_3 \log\left(\frac{T}{D_{\tau}}\right)\right) \right\} \end{aligned}$$ In order to obtain a oracle inequality for this estimator, we will rely on the result stated in [Theorem\[Theorem:model\_selection\]]{}. This means that we need to verify [Ineq.\[ineq:gaussian\_process\]]{} and [Ineq.\[ineq:penalty\_theorem\_1\]]{} for a set of weights satisfying [Ineq.\[ineq:sum\_weights\]]{}. We will begin by proving [Ineq.\[ineq:gaussian\_process\]]{}. 
Let $\hat{g},\hat{f} \in S_{(D_m,\tau)}$, then we have: $${\label{ineq:dif_gp}} \begin{aligned} W(\hat{g})-W(\hat{h}) & = \frac{\operatorname{tr}({{\eta}^{{\mathsmaller {{\mkern-1.5mu\mathsf{T}}}}}\hat{g}})}{T} - \frac{\operatorname{tr}({{\eta}^{{\mathsmaller {{\mkern-1.5mu\mathsf{T}}}}}\hat{h}})}{T} \\ & \leq \sum_{i \in Supp_m} \sum_{t=1}^{T} \frac{\zeta^{(i)}_t (\hat{g}^{(i)}_t-\hat{h}^{(i)}_t)}{T} \\ & \leq \sum_{i \in Supp_m} \sqrt{\sum_{t=1}^{T} \frac{(\zeta^{(i)}_t)^2}{T}} \sqrt{ \sum_{t=1}^{T} \frac{(\hat{g}^{(i)}_t-\hat{h}^{(i)}_t)^2}{T}} \ \ \ \ \text{\tiny{Cauchy-Schwarz Ineq.}} \\ & \leq \sqrt{\sum_{i \in Supp_m} \sum_{t=1}^{T} \frac{(\zeta^{(i)}_t)^2}{T}} \sqrt{ \sum_{i \in Supp_m} \sum_{t=1}^{T} \frac{(\hat{g}^{(i)}_t-\hat{h}^{(i)}_t)^2}{T}} \ \ \ \ \text{\tiny{Cauchy-Schwarz Ineq.}} \\ & = {\left\lVert\hat{g}-\hat{h}\right\rVert}_{{\mathbb{H}}} \sqrt{\sum_{i \in Supp_m} \sum_{t=1}^{T} \frac{(\zeta^{(i)}_t)^2}{T}}. \end{aligned}$$ Thanks to this inequality and the fact that $\zeta^{(i)}_t$ follows a standard Gaussian distribution, we derive the following expression for each $h \in S_{(D_m,\tau)}$: $$\begin{aligned} 2 {\mathbb{E}}\left[ \sup_{\hat{g} \in S_{(D_m,\tau)}} \left( \frac{W(\hat{g})-W(\hat{h})}{{\left\lVert\hat{g}-\hat{h}\right\rVert}_{{\mathbb{H}}}^2 + x^2} \right)\right] & \leq x^{-1} {\mathbb{E}}\left[ \sup_{\hat{g} \in S_{(D_m,\tau)}} \left( \frac{W(\hat{g})-W(\hat{h})}{{\left\lVert\hat{g}-\hat{h}\right\rVert}_{{\mathbb{H}}} } \right)\right] \\ & \leq x^{-1} \left[ {\mathbb{E}}\left[ \sup_{g \in S_{(D_m,\tau)}} \left( \frac{W(\hat{g})-W(\hat{h})}{{\left\lVert\hat{g}-\hat{h}\right\rVert}_{{\mathbb{H}}} } \right)^2\right] \right]^{1/2} \ \ \ \ \text{\tiny{Jensen's ineq.} } \\ & \leq x^{-1} \left[ {\mathbb{E}}\left[ \sum_{i \in Supp_m} \frac{\sum_{t=1}^{T} (\zeta^{(i)}_t)^2 }{T} \right] \right]^{1/2} \\ & = x^{-1} \sqrt{D_m}. \ \ \ \ \text{\tiny{ $(\zeta^{(i)}_t)$ follows a standard Gaussian distribution}}. \end{aligned}$$ We can conclude that [Ineq.\[ineq:gaussian\_process\]]{} with $\phi_m(x)= x\sqrt{D_m}$, from which is straightforward to derive $D_m$. Next, we define $x_{(m,\tau)}= \gamma D_m + D_{\tau}L(D_{\tau})$, where $\gamma>0$ and $L(D_{\tau})=2+\log\frac{T}{D_{\tau}}$, that is a constant that just depends on the cardinality of the segmentation induced by $\tau$. Then: $${\label{eq:x_m}} \begin{aligned} \Sigma= \sum_{(m,\tau) \in M} e^{-x_{(m,\tau)}} = & \left(\sum_{m \in N^*, m \leq p} e^{-\gamma D_m}\right)\left(\sum_{\tau \in {\mathcal{T}}}e^{-D_{\tau}L(D_{\tau})}\right) \\ \leq & \left(\frac{1}{e^\gamma-1}\right) \left( \sum_{D=1}^{T} e^{-D L(D)} | \{\tau \in {\mathcal{T}}, |D_{\tau}|=D\} |\right) \\ \leq & \left(\frac{1}{e^\gamma-1}\right) \left( \sum_{D=1}^{T} e^{-D L(D)} \binom{T}{D}\right) \\ \leq & \left(\frac{1}{e^\gamma-1}\right) \left( \sum_{D=1}^{T} e^{-D L(D)} \left(\frac{eT}{D}\right)\right)^D \\ \leq & \left(\frac{1}{e^\gamma-1}\right) \left( \sum_{D=1}^{T} e^{-D \left(L(D)-1- \log\frac{T}{D}\right)} \right) \\ \leq & \left(\frac{1}{e^\gamma-1}\right) \left(\frac{1}{e-1}\right) < \infty. \end{aligned}$$ Let fix $\eta > 0$, $C> 2 + \frac{2}{\eta}$, then $K=\frac{C \eta}{2(1+\eta)}>1$. And fix $0<\delta<1$ such that $\gamma=1-\delta>0$. By using the useful inequality $2\sqrt{ab} \leq a \eta^{-1} + b\eta$. 
$$\begin{aligned} \frac{K \epsilon^2}{T} \left( \sqrt{D_m} + \sqrt{2 ( \gamma D_m +D_{\tau} L(D_{\tau})} \right)^2 & \leq \frac{K \epsilon^2}{T} \left( \sqrt{(1+\gamma) D_m}+ \sqrt{2 D_{\tau} L(D_{\tau})} \right)^2 \ \ \ \ \text{\tiny{Triangle inequality}}\\ & \leq \frac{K \epsilon^2}{T} \left( (1+\gamma) D_m + 2 \sqrt{2 (1+\gamma) D_m D_{\tau} L(D_{\tau}) } \right. \\ & \left. + 2 D_{\tau} L(D_{\tau}) \right) \\ & \leq \frac{K \epsilon^2}{T} \left( (1+\gamma)D_m+ 2D_{\tau} L(D_{\tau}) \right. \\ & \left. + (1+\gamma)D_m \eta + 2D_{\tau} L(D_{\tau}) \eta^{-1} \right) \\ & \leq \frac{K \epsilon^2}{T} \left( (1+\gamma)(1+\eta)D_m + (2+ \eta^{-1}) D_{\tau} L(D_{\tau}) \right) \\ & \leq \left( C\eta(2-\delta) \epsilon^2 \frac{D_m}{T} + C \epsilon^2 \frac{D_{\tau}}{T} L(D_{\tau}) \right) \\ & = K_1 \frac{D_m}{T} + \frac{D_{\tau}}{T}\left(c_1+c_2 log\left(\frac{T}{D_{\tau}}\right)\right) \\ & = pen(m,\tau). \end{aligned}$$ As the hypotheses of [Theorem\[Theorem:model\_selection\]]{} are satisfied, we obtain the desired result. [^1]: Corresponding author. The authors are members of the Center Borelli, ENS Paris-Saclay, France. Emails: `{alejandro.de_la_concha_duarte, vayatis, kalogeratos}@cmla.ens-cachan.fr`.
--- abstract: 'The tensor model can be regarded as theory of dynamical fuzzy spaces, and gives a way to formulate gravity on fuzzy spaces. It has recently been shown that the low-lying fluctuations around the Gaussian background solutions in the tensor model agree correctly with the metric fluctuations on the flat spaces with general dimensions in the general relativity. This suggests that the local gauge symmetry (the symmetry of local translations) is also emergent around these solutions. To systematically study this possibility, I apply the BRS gauge fixing procedure to the tensor model. The ghost kinetic term is numerically analyzed, and it has been found that there exist some massless trajectories of ghost modes, which are clearly separated from the other higher ghost modes. Comparing with the corresponding BRS gauge fixing in the general relativity, these ghost modes forming the massless trajectories in the tensor model are shown to be identical to the reparametrization ghosts in the general relativity.' author: - | Naoki [Sasakura]{}[^1]\ [*Yukawa Institute for Theoretical Physics, Kyoto University,*]{}\ [*Kyoto 606-8502, Japan*]{} title: | \ Gauge fixing in the tensor model\ and emergence of local gauge symmetries --- =17.5pt plus 0.2pt minus 0.1pt \#1[(\[\#1\])]{} Introduction {#sec:intro} ============ Various thought experiments considering quantum gravitational fluctuations have shown that the classical concept of smooth spacetime in the general relativity is not appropriate in some extreme cases [@Garay:1994en; @Sasakura:1999xp], and should be replaced in some way by a novel concept of quantum spacetime. Fuzzy space[^2] is one of such candidates of quantum space [@Connes:1994yd; @Madore:2000aq; @Balachandran:2005ew]. A fuzzy space is defined by an algebra of functions on it, unlike a classical spacetime being described by a coordinate system. This kind of algebraic definition of spaces has some physically interesting advantages over the classical description. For example, in quantum gravity, the changes of topologies and dimensions of space are believed to be the vital processes of quantum fluctuations. However, it is generally hard or tightly constrained to describe these processes without encountering singularities in the classical description [@Carlip:1998uc]. On the contrary, in general, one will have much more freedom to describe such processes in fuzzy spaces through interpolation between algebraic structures of fuzzy spaces approximating classical spaces with distinct topologies and/or dimensions. This kind of thoughts suggest an interesting research direction; considering theory of dynamical fuzzy spaces as a model of quantum gravity. In the recent years, there have been numerous discussions about gravity on fuzzy spaces. A class of approaches discuss analogues of the general relativity on fuzzy spaces. In this class of approaches, however, the dynamical variable is a fuzzy analogue of the metric tensor, and a fuzzy space itself is assumed to be fixed. Therefore these approaches do not take the full advantages of the notion of fuzzy space as explained above. On the contrary, a more interesting kind of approaches were initiated by the matrix models [@Banks:1996vh; @Ishibashi:1996xs]. These approaches consider spaces as dynamical objects generated as classical solutions or vacua, and fluctuations of matrices around such vacua are regarded as field fluctuations on background fuzzy spaces. 
Then an extremely interesting possibility is that gravity may appear as one of these emergent fields. So far this is yet an open issue under active investigations [@Kawai:2007zz; @Steinacker:2009mb]. In view of this present status, it might be meaningful to study another kind of model of dynamical fuzzy spaces, which is similar to but distinct from the matrix models. The model studied in this paper is the tensor model, which has a rank-three tensor as its dynamical variable, instead of matrices in the matrix models. The tensor model was originally proposed to describe the simplicial quantum gravity in dimensions greater than two [@Ambjorn:1990ge; @Sasakura:1990fs; @Godfrey:1990dt; @Boulatov:1992vp; @Ooguri:1992eb; @DePietri:1999bx; @DePietri:2000ii][^3]. The tensor model has not yet been successful for the analysis of the simplicial quantum gravity itself, in part because of the absence of the analytical methods to solve the tensor model. However, it was recently proposed by the present author that the tensor model may be reinterpreted as theory of dynamical fuzzy spaces [@Sasakura:2005js; @Sasakura:2005gv; @Sasakura:2006pq]. This is based on the fact that a fuzzy space can be characterized by a rank-three tensor $C_{ab}{}^c$ which determines the algebraic relations among all the functions $f_a$ on a fuzzy space through the product $f_a \star f_b=C_{ab}{}^cf_c$. From this point of view, it may not be necessary to analytically solve the tensor model to make relations to physics. In analogy with the matrix model mentioned above, a classical solution in the tensor model may be regarded as a background fuzzy space, and the fluctuations of the tensor about it as field fluctuations. Then the question is whether gravity appears in such fluctuations. In fact, in a class of tensor models which have the classical solutions with Gaussian forms, it has been shown that the low-lying fluctuations about these solutions at low momenta match correctly with the metric fluctuations on flat spaces in the general relativity in general dimensions [@Sasakura:2007sv; @Sasakura:2007ud; @Sasakura:2008pe]. The above agreement is very interesting, but it is merely classical and obviously not enough quantum mechanically. The main purpose of this paper is to show the agreement a step further to include the gauge degrees of freedom, which are the local translations in the present case. The tensor model has the symmetry of the orthogonal group $O(N)$, where $N$ is the number of all the functions or more physically “points" forming a fuzzy space. A background solution of the tensor model breaks this $O(N)$ symmetry down to some remaining symmetries of the solution or the background space, and the broken symmetries are realized non-linearly around it[^4]. Since the broken symmetries permute the “points" of the background fuzzy space, they are intrinsically local symmetries, and it is tempting to insist that these are emergent local gauge symmetries (the local translation symmetry)[^5]. In this paper, to make this statement more precise and systematic, I will apply the BRS gauge fixing procedure to the tensor model and numerically analyze the ghost kinetic term. Then I will compare the results of the numerical analysis with the corresponding BRS gauge fixing in the general relativity. This paper is organized as follows. In the following section, I will apply the BRS gauge fixing procedure to the tensor model. In Sec.\[sec:BRSrelativity\], I will discuss the corresponding BRS gauge fixing procedure in the general relativity. 
In Sec.\[sec:numanaly\], I will numerically study the eigenvalues and eigenmodes of the ghost kinetic term in the tensor model at the Gaussian backgrounds with dimensions $D=1,2,3$, and compare with the ghost kinetic term in the general relativity on the flat spaces in these dimensions. The final section is devoted to a summary and discussions. BRS gauge fixing procedure in the tensor model {#sec:BRStensor} ============================================== Direct computation of the gauge volume {#sec:direct} -------------------------------------- Let me start with the direct computation of the gauge volume. The dynamical variable of the tensor model in this paper is given by a real rank-three tensor $C_{abc}$, which is totally symmetric, C\_[abc]{}=C\_[bca]{}=C\_[cab]{}=C\_[bac]{}=C\_[acb]{}=C\_[cba]{}. The index takes values $1,2,\ldots,N$. There is also a [*nondynamical*]{} symmetric real tensor $g^{ab}$, which is basically taken to be $g^{ab}=\delta^{ab}$. Therefore, by following the standard pairwise index contractions, the tensor model is invariant under the orthogonal group transformation $O(N)$, C\_[abc]{}(MC)\_[abc]{}M\_a\^[a’]{} M\_b\^[b’]{} M\_c\^[c’]{}C\_[a’b’c’]{}, where $M_{a}{}^{a'}\in O(N)$. This is the gauge symmetry of the tensor model. The $O(N)$ symmetric metric in the space of the dynamical variable $C$ is defined by[^6] \[eq:cmeasure\] ds\_C\^2=dC\_[abc]{}dC\^[abc]{}. The inner product associated with the metric between two rank-three totally symmetric tensors is defined by A, B =A\_[abc]{}B\^[abc]{}. The infinitesimal $SO(N)$ transformation of $C$ is given by (T\^i C)\_[abc]{}T\^i\_[a]{}\^[a’]{} C\_[a’bc]{}+T\^i\_b\^[b’]{}C\_[ab’c]{}+T\^i\_c\^[c’]{}C\_[abc’]{}, where $T^i{}_a{}^{a'}$ ($i=1,2,\ldots,N(N-1)/2$) are the real antisymmetric matrices forming the Lie-algebra $so(N)$ in the vector representation. The volume measure in the space of $C$ is defined from the metric . Dividing an infinitesimal region into the gauge directions and the others, the infinitesimal volume $dV_C$ can be expressed as \[eq:dvcdg\] dV\_C=dg dV\_C\^ , where $dg$ is the Haar measure of $SO(N)$ and $dV_C^\perp$ denotes the infinitesimal volume normal to the gauge directions. Here Det($\cdots$) are the determinants of the matrices with components, TC,TC\_[ij]{}&=&T\^iC, T\^jC,\ T,T\_[ij]{}&=& h\_0 (T\^i T\^j), where Tr denotes the trace in the vector representation, and $h_0$ is a coefficient related to the normalization of the Haar measure. Since the integrand in is invariant along the $SO(N)$ gauge directions, the partial integration over the gauge directions is trivially performed as \_[SO(N)]{} dV\_C=dV\_C\^ , where $n$ is the possible symmetry factor becoming larger than 1 if there exists a non-trivial $M\in SO(N)$ which satisfies $C=MC$. This factor $n$ can practically be ignored, since the regions of such symmetric values of $C$ have generally vanishing volumes in the space of $C$. Thus, ignoring all the factors independent of $C$, one finally obtains \[eq:dvc\] \_[SO(N)]{} dV\_C=dV\_C\^ . BRS gauge fixing procedure in the tensor model {#brs-gauge-fixing-procedure-in-the-tensor-model} ---------------------------------------------- I apply the general BRS gauge fixing scheme with the so-called $B$ field presented in [@Kugo:1981hm] to the $SO(N)$ symmetry in the tensor model. The BRST transformation of $C$ is given by (\_B C)\_[abc]{}=c\_i(T\^i C)\_[abc]{}, where $c_i$ are the ghost variables, which are assumed to be real. 
The BRST transformation of the ghost variables is given by \_B c\_k=12 f\^[ij]{}\_k c\_i c\_j, where $f^{ij}{}_k$ is the structure constant of $so(N)$, defined by $[T^i,T^j]=f^{ij}{}_k T^k$. There are also the anti-ghost and the B-variables, the BRST transformations of which are given by \_B |[c]{}\_i&=&i B\_i. \_B B\_i&=&0. These $\bar c_i$ and $B_i$ are also assumed to be real. The nilpotency $\delta_B^2=0$ can easily be shown by explicit computations. The interest of the present paper is in the small fluctuations around certain backgrounds of $C$. Let me denote a background by $C^0$ and the fluctuations by $A$, C\_[abc]{}=C\^0\_[abc]{}+A\_[abc]{}. Then the dynamical variable is shifted to $A$, and its BRST transformation is given by (\_B A)\_[abc]{}=(\_B C)\_[abc]{}=c\_i(T\^i C)\_[abc]{}=c\_i(T\^i C\^0)\_[abc]{}+c\_i(T\^iA)\_[abc]{}. The general scheme implies that the BRST exact action corresponding to the sum of the Faddeev-Popov and the gauge fixing terms can generally be given by \[eq:fppgf\] S\_[GF+FP]{}=\_B(|c\_i F\^i(A,c,|c,B)), where $F^i$ are the (almost arbitrary) gauge-fixing functions with vanishing ghost number. A natural choice in the present case is \[eq:fi\] F\^i=T\^iC\^0,A, since the gauge fixing conditions ($F^i=0$) only allow $A$ to be normal to the gauge directions at the background $C^0$. Computing with , $S_{GF+FP}$ is explicitly given by \[eq:expfpgf\] S\_[GF+FP]{}=i B\_i T\^iC\^0,A- |c\_i T\^iC\^0,T\^j C c\_j. The path integral measure, which is just a usual integration in the present case, can be defined by \[eq:BRSvolmeasure\] \_i dB\_i dc\_i d|c\_i, where $[dA]$ is the volume measure of $A$ defined from the metric $ds^2_A=dA_{abc}dA^{abc}$, which is identical to the $O(N)$ symmetric metric . Here a possible overall factor is not taken care of. From the $O(N)$ invariance of the volume measure $[dA]$, one can easily prove the BRST invariance of the integral, \_i dB\_i dc\_i d|c\_i \_B(…)=0, which guarantees the independence of physics from the choice of the gauge-fixing functions. Comparison between the direct and the BRS expressions ----------------------------------------------------- In the following, let me compare the BRS result , with the direct computation . To do this, let me introduce a normalized orthogonal basis which divides the space about $C^0$ into the subspaces tangent $\{v^{0||}_i\}$ and normal $\{v_l^{0\perp}\}$ to the gauge directions, \[eq:c0basis\] T\^i C\^0,v\_l\^[0]{}&=&0,v\^[0]{}\_l,v\^[0]{}\_m &=&\_[lm]{},v\^[0]{}\_l,v\^[0||]{}\_i &=&0, v\^[0||]{}\_i,v\^[0||]{}\_j&=&\_[ij]{}. In general, $A$ can be expanded in terms of these vectors as \[eq:Aexpand\] A=\^i v\^[0||]{}\_i+\^l v\^[0]{}\_l. From the definition of the basis , the volume measure is $[dA]=\prod_{i} d\alpha^i \prod_l d\beta^l$. Putting into , and integrating over $c_i,\bar c_i,B_i$ and finally over $\alpha^i$, one obtains \[eq:FPexp\] \_i dB\_i dc\_i d|c\_i e\^[-S\_[GF+FP]{}-S(C)]{}= \_l d\^l   e\^[-S(C\^0+\^l v\^[0]{}\_l)]{}, where an overall numerical constant is ignored, $S(C)$ is the original unfixed action, and the matrices in the determinants are defined by \[eq:tcmatrix\] TC\^0,TC\_[ij]{}&=&T\^iC\^0,T\^jC,TC\^0,v\^[0||]{}\_[ij]{}&=&T\^iC\^0,v\_j\^[0||]{}. The result does not look like , but they are actually identical. To see this, let me introduce a similar normalized orthogonal basis $\{v^{||}_i\}$, $\{v_l^{\perp}\}$ around $C=C^0+A$ as , \[eq:cbasis\] T\^i C,v\_l\^&=&0,v\^\_l,v\^\_m &=&\_[lm]{},v\^\_l,v\^[||]{}\_i &=&0, v\^[||]{}\_i,v\^[||]{}\_j&=&\_[ij]{}. 
Then the square of the determinants in can be computed as \[eq:squareFP\] ()\^2&=& (TC,TC\^0 (TC\^0,v\^[0||]{} v\^[0||]{},TC\^0)\^[-1]{} TC\^0,TC) &=& ( TC,v\^[0||]{}v\^[0||]{},TC) &=& ( TC,TC) \^2 &=& ( TC,TC) \^2, where similar shorthand notations like are used to denote the matrices. In the above derivation, I have used the completeness of the bases $\left\{v^{||}\right\},\left\{ v^{0||}\right\}$ in the subspaces tangent to the gauge directions, and \^2=( [cc]{} v\^[||]{},v\^[||]{} & v\^[||]{},v\^[0]{}\ v\^[0]{},v\^[||]{}& v\^[0]{},v\^[0]{} )=\^2, which can be shown from the identity, (D) (A-B D\^[-1]{} C)= ( [cc]{} A & B\ C & D )= (A) (D-C A\^[-1]{} B), and the properties of the orthogonal normalized bases. From the definition and that $dV_C^\perp$ in is the infinitesimal volume normal to the gauge directions at $C$, one obtains \[eq:jacobi\] dV\^\_C=\_l d\_l | (v\^,v\^[0]{})|. Thus and are actually identical, because of , . BRS gauge fixing in the general relativity {#sec:BRSrelativity} ========================================== In this subsection, I will discuss the BRS gauge fixing procedure in the general relativity [@Nakanishi:1977gt; @Nishijima:1978wq; @Kugo:1978rj; @Nakanishi:1990qm] corresponding to that of the tensor model in the previous section. By rewriting the coordinate transformation of the metric tensor with the ghost fields, the BRST transformation of the metric tensor is given by \_B g\_=\_c\_+\_c\_, where $\nabla_\mu$ is the covariant derivative, and $c_\mu$ is the ghost vector field. Then the nilpotency of the BRST transformation requires that the ghost field be transformed by \_B c\_&=&-c\^\_c\_, (\_B c\^&=&c\^\_c\^=c\^\_c\^). The anti-ghost field and the $B$-field are introduced with the BRST transformations, \_B |c\_&=&i B\_, \_B B\_&=&0. The nilpotency $\delta_B^2=0$ can be checked by explicit computations[^7]. All the fields above are assumed to be real. In the numerical analysis of the following section, I will take $C^0$ to be the Gaussian backgrounds [@Sasakura:2007sv; @Sasakura:2007ud; @Sasakura:2007ud], which correspond to the fuzzy flat spaces with arbitrary dimensions. Correspondingly, flat backgrounds are considered in the general relativity as g\_=\_+h\_, where $h_{\mu\nu}$ is the new dynamical field with $\delta_B h_{\mu\nu}=\nabla_{\mu}c_\nu+\nabla_\nu c_\mu$. In [@Sasakura:2007ud], it was argued that the metric corresponds to the DeWitt supermetric [@DeWitt:1962ud], \[eq:dewitt\] ds\^2\_g=d\^Dx (g\^g\^+4 g\^g\^) g\_ g\_. Thus the inner product associated to between two rank-two symmetric tensor fields is defined by k,l \_g=d\^Dx (g\^g\^+4 g\^g\^) k\_ l\_. In the following, I want to obtain the BRS gauge fixing in the general relativity which is analogous to , . Since $C^0$ corresponds to the flat background, the analogy of $T^i C^0$ are the infinitesimal local translations of the flat background. Therefore $\bar c_i T^i C^0$ in the tensor model should correspond to the field $\partial_\mu \bar c_\nu + \partial_\nu \bar c_\mu$. The deviation $A$ from the background in the tensor model corresponds to $h_{\mu\nu}$. Thus the action corresponding to , is obtained as \[eq:gfpfpg\] S\^g\_[GF+FP]{}&=&\_B \_|c\_+\_|c\_, h\_\_g &=& 2 \_B (d\^Dx (g\^g\^+4 g\^g\^) (\_|c\_) h\_). In the quadratic order of the fields around the flat background, becomes \[eq:gf2\] S\^[g(2)]{}\_[GF+FP]{}= 2 d\^Dx. The partial derivative of with respect to $B_\mu$ gives the gauge fixing condition as \_h\^\_+4 \^h\_=0. 
One can check that this gauge fixing condition is actually satisfied by all the metric fluctuation modes corresponding to the low-lying fluctuation modes in the tensor model which were reported previously in [@Sasakura:2007ud; @Sasakura:2008pe]. Putting the form $c_\mu,\bar c_\mu = n_\mu e^{ipx}$, the kinetic term of the ghost fields in can be shown to have the spectra, \[eq:spectracon\] { [ll]{} 20 p\^2 & n\_p\_,\ 8 p\^2 & n\_p\^=0. . Thus the longitudinal mode has no degeneracy, while the normal modes have the degeneracy $D-1$ in $D$ dimensions. Numerical analysis of the ghost kinetic term in the tensor model {#sec:numanaly} ================================================================ In the papers [@Sasakura:2007sv; @Sasakura:2007ud; @Sasakura:2008pe], a class of tensor models possessing the classical solutions with Gaussian forms have been constructed and analyzed. In this section, I will take $C^0$ to be such Gaussian backgrounds, and numerically study the ghost kinetic term in the tensor model. In all the dimensional cases to be studied ($D=1,2,3$), some massless trajectories of ghost modes, which are clearly separated from the other higher ghost modes, will be found, and they will be identified with the reparametrization ghosts in the general relativity. The coefficient matrix of the ghost kinetic term ------------------------------------------------ In the momentum basis, such a gaussian background $C^0$ has the form [@Sasakura:2007sv; @Sasakura:2007ud; @Sasakura:2008pe; @Sasai:2006ua] \[eq:c0p\] C\^0\_[p\_1,p\_2,p\_3]{}=( -(p\_1\^2+p\_2\^2+p\_3\^2)) \_[p\_1+p\_2+p\_3,0]{}, where $\alpha$ is a positive parameter, and each momentum is assumed to take integer vales bounded by $L$: \[eq:pregion\] p=(p\^1,p\^2,…,p\^D),    \_[i=1]{}\^D (p\^i)\^2 L\^2. Since the delta function in implies the momentum conservation, there remains the $D$-dimensional translational symmetry on this background. From , the ghost kinetic term is given by \[eq:sgh\] S\_[gh]{}=-|c\_i T\^iC\^0,T\^jC\^0c\_j. Because of the momentum conservation of the background , it is most convenient to take a momentum basis for the generators $T^i$. Namely, the indices of the generators are given by pairs of distinct momenta $i=[p\,q]\ (p\neq q)$, and the generators are antisymmetric matrices defined by \[eq:tpq\] (T\^[\[pq\]]{})\_[rs]{}=\_[p,r]{}\_[q,s]{}-\_[p,s]{}\_[q,r]{}. Then, putting into , the coefficient matrix of the ghost kinetic term is given by M\^[gh]{}\_[\[p\_1q\_1\],\[p\_2q\_2\]]{}&& T\^[\[p\_1q\_1\]]{}C\^0,T\^[\[p\_2q\_2\]]{}C\^0&=& 3 \_[p\_1+p\_2,0]{}\_[q\_1+q\_2,0]{} \_[r,s]{}C\^0\_[q\_1, r, s]{}C\^0\_[q\_2,-r,-s]{}+ 6 \_[r]{} C\^0\_[p\_1,q\_2,r]{}C\^0\_[q\_1,p\_2,-r]{} &&  -(p\_1q\_1)-(p\_2q\_2)+(p\_1q\_1,p\_2q\_2), where the summations of the momenta $r,s$ are over the range , and the simplified notations for anti-symmetrization have been used. Because of the momentum conservation of the background, the matrix $M^{gh}_{[p_1q_1],[p_2q_2]}$ is divided into the block matrices with each value of the ghost momentum $p_1+q_1=-(p_2+q_2)$. Therefore, the analysis of $M^{gh}$ can be performed independently at each momentum sector. In the following subsections, I will study the spectra and the properties of the eigenmodes of $M^{gh}$, for dimensions $D=1,2,3$, and compare with the continuum theory. The numerical facility was a Windows XP 64 workstation containing two Opteron 275 processors and 8 GB memories. The C++ codes[^8] were compiled by Intel C++ compiler 10.1 with ACML 4.2 for Lapack/Blas routines. 
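As an illustration, the assembly of $M^{gh}$ from the formula above can be reproduced for $D=1$ with a toy cutoff in a few lines (the momenta, the Gaussian background and the antisymmetrization are exactly as defined above; the cutoff and $\alpha$ are illustrative values, and this sketch is not the original C++ code):

```python
import numpy as np
from itertools import combinations

Lc = 4                          # momentum cutoff (toy value; the paper uses L up to 1500)
alpha = 0.5 / Lc**2
mom = list(range(-Lc, Lc + 1))  # D = 1 momenta, |p| <= L

def c0(p1, p2, p3):
    """Gaussian background C0_{p1 p2 p3} with momentum conservation."""
    if p1 + p2 + p3 != 0 or max(abs(p1), abs(p2), abs(p3)) > Lc:
        return 0.0
    return np.exp(-alpha * (p1**2 + p2**2 + p3**2))

def base(p1, q1, p2, q2):
    """The two contractions of the ghost kinetic matrix before antisymmetrization."""
    t1 = 0.0
    if p1 + p2 == 0 and q1 + q2 == 0:
        t1 = 3.0 * sum(c0(q1, r, s) * c0(q2, -r, -s) for r in mom for s in mom)
    t2 = 6.0 * sum(c0(p1, q2, r) * c0(q1, p2, -r) for r in mom)
    return t1 + t2

def mgh(p1, q1, p2, q2):
    """M^gh_{[p1 q1],[p2 q2]}, antisymmetrized in both index pairs."""
    return (base(p1, q1, p2, q2) - base(q1, p1, p2, q2)
            - base(p1, q1, q2, p2) + base(q1, p1, q2, p2))

pairs = list(combinations(mom, 2))            # ghost labels [p q], p < q
M = np.array([[mgh(p1, q1, p2, q2) for (p2, q2) in pairs] for (p1, q1) in pairs])

eigvals, eigvecs = np.linalg.eigh(M)
ghost_p = np.array([abs(p + q) for (p, q) in pairs])
for lam, v in zip(eigvals[:8], eigvecs.T[:8]):
    # label each low-lying eigenvalue by the ghost momentum of its dominant component
    print(ghost_p[np.argmax(np.abs(v))], round(float(lam), 6))
# According to the text, a zero eigenvalue is expected at ghost momentum 0 (the unbroken
# translation), and the lowest nonzero eigenvalues at small momenta form the massless
# trajectory identified with the reparametrization ghost.
```

For such a small cutoff the low-lying spectrum is only qualitatively indicative; as discussed below, the trajectories approach the continuum behavior only for the much larger values of $L$ used in the figures.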
The output were analyzed in Mathematica 6.0. The D=1 case ------------ In Fig.\[fig:fig1\], the eigenvalues of $M^{gh}$ are plotted for two cases. A zero spectrum exists at $p=0$, as expected from the fact that there remains a translational symmetry on the background. There clearly exist a series of spectra which form a massless trajectory and are clearly separated from the other higher modes at low momenta. This series can be identified with the reparametrization ghost of the continuum theory, as explained below. In fact, the trajectory contains only one mode at each momentum value, which is consistent with the result of the continuum theory. In the left figure with $L=15$, $\alpha=0.5/L^2$, the trajectory looks to have a linear momentum dependence near the origin, which contradicts . But, as can be seen in the right figure with $L=1500$, $\alpha=3/L^2$, the trajectory tends to become reasonably smooth near the origin in the cases with larger $L$ and $\alpha$, which is consistent with . This is in agreement with the natural expectation that the continuum theory will be obtained only in large $L$ and at low momenta. ![The eigenvalue plots for $D=1$. The horizontal axis is the momentum of the ghosts. The left figure shows the whole spectra for $L=15$, $\alpha=0.5/L^2$. The right figure shows the low part of the spectra for $L=1500$, $\alpha=3/L^2$. The fitting line is $8.1\times 10^{-4} p^2 -2.1\times 10^{-7}p^4$.[]{data-label="fig:fig1"}](dim1al05L15.eps "fig:") ![The eigenvalue plots for $D=1$. The horizontal axis is the momentum of the ghosts. The left figure shows the whole spectra for $L=15$, $\alpha=0.5/L^2$. The right figure shows the low part of the spectra for $L=1500$, $\alpha=3/L^2$. The fitting line is $8.1\times 10^{-4} p^2 -2.1\times 10^{-7}p^4$.[]{data-label="fig:fig1"}](dim1al3L1500.eps "fig:") The D=2 case ------------ The result of the continuum theory implies that there should exist two massless trajectories of spectra with no degeneracy, and that the ratios of the spectra in the two trajectories should be given by $\frac{20}{8}=2.5$. In fact, in the left figure of Fig.\[fig:fig2\], one can find that there exist two massless trajectories linked to the two zero spectra at $p=0$, which come from the unbroken translational symmetries. The numerical data also show that each trajectory has only one mode at each momentum value. The right figure shows that the ratios of the two trajectories at each momentum are actually in good agreement with $\frac{20}{8}$. ![The left figure shows the low part of the spectra for $D=2$, $L=100$, $\alpha=2/L^2$. The right figure shows the ratios of the two trajectories. The horizontal axis is the momentum size $\sqrt{(p^1)^2+(p^2)^2}$.[]{data-label="fig:fig2"}](dim2al2L100.eps "fig:") ![The left figure shows the low part of the spectra for $D=2$, $L=100$, $\alpha=2/L^2$. The right figure shows the ratios of the two trajectories. The horizontal axis is the momentum size $\sqrt{(p^1)^2+(p^2)^2}$.[]{data-label="fig:fig2"}](dim2ratio.eps "fig:") It will also be a good check to see whether the mode profiles are consistent with the continuum theory. To do this, I follow the strategy in the previous works [@Sasakura:2007ud; @Sasakura:2008pe]. Let me define a tensor, K\_[ab]{}=C\_[acd]{}C\_b\^[cd]{}. Under small fluctuations around $C^0$, this tensor fluctuates as K\_[ab]{}=C\_[acd]{}C\^0\_b\^[cd]{}+C\^0\_[acd]{}C\_b\^[cd]{}. The present interest is in the fluctuations in the gauge directions $T^iC^0$. 
For the gauge direction determined by an eigenvector $v$ of $M^{gh}$, $\delta K$ is given by
$$\delta K^v_{ab}=(v_iT^iC^0)_{acd}\, C^0{}_b{}^{cd}+C^0_{acd}\, (v_iT^iC^0)_b{}^{cd}.$$
In Fig.\[fig:fig3\], $\delta K$ is plotted for the eigenmodes contained in the two trajectories.

![The contour plots of $\delta K_{q,-q+p}$ for the modes contained in the two massless trajectories for $D=2$, $L=20$, $\alpha=2/L^2$. The horizontal axes are $(q^1,q^2)=q$. The left and right figures are shown for the modes in the lower and upper trajectories, respectively, at the momentum $p=(1,0)$.[]{data-label="fig:fig3"}](trans.eps "fig:") ![The contour plots of $\delta K_{q,-q+p}$ for the modes contained in the two massless trajectories for $D=2$, $L=20$, $\alpha=2/L^2$. The horizontal axes are $(q^1,q^2)=q$. The left and right figures are shown for the modes in the lower and upper trajectories, respectively, at the momentum $p=(1,0)$.[]{data-label="fig:fig3"}](long.eps "fig:")

On the other hand, the correspondence between the tensor model and the general relativity implies [@Sasakura:2007ud; @Sasakura:2008pe]
$$\delta K_{p_1p_2}=h_{\mu\nu}(p_1+p_2)\, (p_1-p_2)^\mu(p_1-p_2)^\nu \exp\!\left( -\alpha\, (p_1-p_2)^2\right).$$
Substituting the gauge transformation $h_{\mu\nu}(p)= n_\mu p_\nu+n_\nu p_\mu$ into this expression, one obtains the two contour plots in Fig.\[fig:fig4\] for the normal ($p_\mu n^\mu=0$) and the longitudinal ($p_\mu\propto n_\mu$) modes, respectively.

![The contour plots of $\delta K_{q,-q+p}$ for $p=(1,0)$ expected from the continuum theory. The left and right figures are for the normal ($p_\mu n^\mu=0$) and the longitudinal ($p_\mu\propto n_\mu$) modes, respectively. []{data-label="fig:fig4"}](transgen.eps "fig:") ![The contour plots of $\delta K_{q,-q+p}$ for $p=(1,0)$ expected from the continuum theory. The left and right figures are for the normal ($p_\mu n^\mu=0$) and the longitudinal ($p_\mu\propto n_\mu$) modes, respectively. []{data-label="fig:fig4"}](longgen.eps "fig:")

These figures are in full agreement with Fig.\[fig:fig3\] in view of the mode assignment above.

The $D=3$ case
--------------

The result of the continuum theory implies that the lower trajectory should contain two modes at each momentum. In the left figure of Fig.\[fig:fig5\], the low part of the spectra for $D=3$, $L=15$, $\alpha=1.5/L^2$ is shown. There exist two massless trajectories, and in fact the numerical data show that the lower trajectory contains two modes at each momentum value. In the right figure, the ratios of the two trajectories are shown, which are consistent with $\frac{20}{8}$.

![The left figure shows the low part of the spectra for $D=3$, $L=15$, $\alpha=1.5/L^2$. The right figure shows the ratio of the two trajectories.[]{data-label="fig:fig5"}](dim3spec.eps "fig:") ![The left figure shows the low part of the spectra for $D=3$, $L=15$, $\alpha=1.5/L^2$. The right figure shows the ratio of the two trajectories.[]{data-label="fig:fig5"}](dim3ratio.eps "fig:")

Summary and discussions
=======================

In this paper, I have applied the BRS gauge fixing procedure to the tensor model, and have numerically analyzed the ghost kinetic term around the Gaussian backgrounds, which correspond to the fuzzy flat spaces with arbitrary dimensions. Then it has been found that there exist some massless trajectories of the ghost modes, which are clearly separated from the other higher ghost modes.
By examining the properties of the modes in these massless trajectories, it has been shown that these modes can be identified with the reparametrization ghosts in the BRS gauge fixing of the general relativity. This means physically that the local gauge symmetry (the local translation symmetry) is emergent around these backgrounds in the tensor model. Combined with the results of the previous works, this paper has shown that the low-lying fluctuations around the Gaussian backgrounds in the tensor model correctly generates the general relativity, including its gauge symmetry, on the flat spaces in general dimensions. However, this has only been shown in the quadratic order of the fluctuations around the backgrounds, but not for any of the higher non-linear terms. On the other hand, the general relativity (possibly with modification of the action) is the unique interacting field theory of the rank-two symmetric tensor field with the gauge symmetry. Therefore there exists a good chance for the tensor model to correctly generate also the non-linear terms. This should be studied in future works. The above agreement between the tensor model and the general relativity including the gauge symmetry also suggests that the quantization of the general relativity can be realized by that of the tensor model, and thus a kind of quantum gravity can be defined by the tensor model. There exist a lot of questions to be addressed by quantum gravity, the most phenomenologically interesting of which would be the cosmological constant problem [@Weinberg:1988cp]. In the conventional approaches, one needs an extreme fine-tuning of the cosmological constant to stabilize a flat space from quantum corrections. Therefore it would be interesting to study how the Gaussian backgrounds, which represent fuzzy flat spaces, respond to quantum corrections in the tensor model. So far, the correspondence between the tensor model and the general relativity has been shown only for a limited class of tensor models, which have the Gaussian solutions. It is also an interesting future problem to investigate whether such correspondence holds in more general tensor models. Acknowledgments {#acknowledgments .unnumbered} =============== The author would like to thank T. Kugo for the knowledge about the references on the BRS gauge fixing in the general relativity and those on the viewpoints of gauge symmetry as spontaneously broken symmetry. The author was supported in part by the Grant-in-Aid for Scientific Research No.18340061(B) from the Japan Society for the Promotion of Science (JSPS). [99]{} L. J. Garay, “Quantum gravity and minimum length,” Int. J. Mod. Phys.  A [**10**]{}, 145 (1995) \[arXiv:gr-qc/9403008\]. N. Sasakura, “An uncertainty relation of space-time,” Prog. Theor. Phys.  [**102**]{}, 169 (1999) \[arXiv:hep-th/9903146\]. Y. Sasai and N. Sasakura, “One-loop unitarity of scalar field theories on Poincare invariant commutative nonassociative spacetimes,” JHEP [**0609**]{}, 046 (2006) \[arXiv:hep-th/0604194\]. A. Connes, “Noncommutative geometry,” [*Academic Press (1994) 661 p*]{}. J. Madore, “An Introduction To Noncommutative Differential Geometry And Itsphysical Applications,” Lond. Math. Soc. Lect. Note Ser.  [**257**]{}, 1 (2000). A. P. Balachandran, S. Kurkcuoglu and S. Vaidya, “Lectures on fuzzy and fuzzy SUSY physics,” arXiv:hep-th/0511114. S. Carlip, “Quantum gravity in 2+1 dimensions,” [*Cambridge, UK: Univ. Pr. (1998) 276 p*]{}. T. Banks, W. Fischler, S. H. Shenker and L. 
Susskind, “M theory as a matrix model: A conjecture,” Phys. Rev.  D [**55**]{}, 5112 (1997) \[arXiv:hep-th/9610043\]. N. Ishibashi, H. Kawai, Y. Kitazawa and A. Tsuchiya, “A large-N reduced model as superstring,” Nucl. Phys.  B [**498**]{}, 467 (1997) \[arXiv:hep-th/9612115\]. H. Kawai, “Curved space-times in matrix models,” Prog. Theor. Phys. Suppl.  [**171**]{}, 99 (2007). H. Steinacker, “Matrix Models, Emergent Gravity, and Gauge Theory,” arXiv:0903.1015 \[hep-th\]. J. Ambjorn, B. Durhuus and T. Jonsson, “Three-Dimensional Simplicial Quantum Gravity And Generalized Matrix Models,” Mod. Phys. Lett. A [**6**]{}, 1133 (1991). N. Sasakura, “Tensor Model For Gravity And Orientability Of Manifold,” Mod. Phys. Lett. A [**6**]{}, 2613 (1991). N. Godfrey and M. Gross, “Simplicial Quantum Gravity In More Than Two-Dimensions,” Phys. Rev. D [**43**]{}, 1749 (1991). D. V. Boulatov, “A Model of three-dimensional lattice gravity,” Mod. Phys. Lett. A [**7**]{}, 1629 (1992) \[arXiv:hep-th/9202074\]. H. Ooguri, “Topological lattice models in four-dimensions,” Mod. Phys. Lett. A [**7**]{}, 2799 (1992) \[arXiv:hep-th/9205090\]. R. De Pietri, L. Freidel, K. Krasnov and C. Rovelli, “Barrett-Crane model from a Boulatov-Ooguri field theory over a homogeneous space,” Nucl. Phys. B [**574**]{}, 785 (2000) \[arXiv:hep-th/9907154\]. R. De Pietri and C. Petronio, “Feynman diagrams of generalized matrix models and the associated manifolds in dimension 4,” J. Math. Phys.  [**41**]{}, 6671 (2000) \[arXiv:gr-qc/0004045\]. S. Imai and N. Sasakura, “Scalar field theories in a Lorentz-invariant three-dimensional noncommutative space-time,” JHEP [**0009**]{}, 032 (2000) \[arXiv:hep-th/0005178\]. L. Freidel and E. R. Livine, “Effective 3d quantum gravity and non-commutative quantum field theory,” Phys. Rev. Lett.  [**96**]{}, 221301 (2006) \[arXiv:hep-th/0512113\]. Y. Sasai and N. Sasakura, “Massive particles coupled with 2+1 dimensional gravity and noncommutative field theory,” arXiv:0902.3502 \[hep-th\]. D. Oriti, “Group field theory and simplicial quantum gravity,” arXiv:0902.3903 \[gr-qc\]. N. Sasakura, “An invariant approach to dynamical fuzzy spaces with a three-index variable,” Mod. Phys. Lett.  A [**21**]{}, 1017 (2006) \[arXiv:hep-th/0506192\]. N. Sasakura, “An invariant approach to dynamical fuzzy spaces with a three-index variable - Euclidean models,” in the proceedings of 4th International Symposium on Quantum Theory and Symmetries (QTS-4), Varna, Bulgaria, 15-21 Aug 2005 \[arXiv:hep-th/0511154\]. N. Sasakura, “Tensor model and dynamical generation of commutative nonassociative fuzzy spaces,” Class. Quant. Grav.  [**23**]{}, 5397 (2006) \[arXiv:hep-th/0606066\]. N. Sasakura, “The fluctuation spectra around a Gaussian classical solution of a tensor model and the general relativity,” Int. J. Mod. Phys.  A [**23**]{}, 693 (2008) \[arXiv:0706.1618 \[hep-th\]\]. N. Sasakura, “The lowest modes around Gaussian solutions of tensor models and the general relativity,” Int. J. Mod. Phys.  A [**23**]{}, 3863 (2008) \[arXiv:0710.0696 \[hep-th\]\]. N. Sasakura, “Emergent general relativity on fuzzy spaces from tensor models,” Prog. Theor. Phys.  [**119**]{}, 1029 (2008) \[arXiv:0803.1717 \[gr-qc\]\]. R. Ferrari and L. E. Picasso, “Spontaneous breakdown in quantum electrodynamics,” Nucl. Phys.  B [**31**]{}, 316 (1971). R. A. Brandt and W. C. Ng, “Gauge Invariance And Mass,” Phys. Rev.  D [**10**]{}, 4198 (1974). A. B. Borisov and V. I. 
Ogievetsky, “Theory Of Dynamical Affine And Conformal Symmetries As Gravity Theory Of The Gravitational Field,” Theor. Math. Phys.  [**21**]{}, 1179 (1975) \[Teor. Mat. Fiz.  [**21**]{}, 329 (1974)\]. T. Kugo and S. Uehara, “General Procedure Of Gauge Fixing Based On Brs Invariance Principle,” Nucl. Phys.  B [**197**]{}, 378 (1982). N. Nakanishi, “Indefinite-Metric Quantum Field Theory Of General Relativity,” Prog. Theor. Phys.  [**59**]{}, 972 (1978). K. Nishijima and M. Okawa, “The Becchi-Rouet-Stora Transformation For The Gravitational Field,” Prog. Theor. Phys.  [**60**]{}, 272 (1978). T. Kugo and I. Ojima, “Subsidiary Conditions And Physical S Matrix Unitarity In Indefinite Metric Quantum Gravitational Theory,” Nucl. Phys.  B [**144**]{}, 234 (1978). N. Nakanishi and I. Ojima, “Covariant operator formalism of gauge theories and quantum gravity,” World Sci. Lect. Notes Phys.  [**27**]{}, 1 (1990). B. S. DeWitt, “Quantization of fields with infinite-dimensional invariance groups. III. Generalized Schwinger-Feynman theory,” J. Math. Phys.  [**3**]{}, 1073 (1962). S. Weinberg, “The cosmological constant problem,” Rev. Mod. Phys.  [**61**]{}, 1 (1989). [^1]: sasakura@yukawa.kyoto-u.ac.jp [^2]: In this paper, this term is used as its widest meanings. It includes noncommutative spaces as well as nonassociative ones [@Sasai:2006ua]. [^3]: A tightly related kind of models, called the group field theories, have been being discussed mainly in the context of the loop quantum gravity. It is known that a certain group field theory can be considered to be a field theory on a noncommutative spacetime [@Imai:2000kq] and can also be derived as effective field theory of three-dimensional quantum gravity [@Freidel:2005me; @Sasai:2009az]. See also [@Oriti:2009nd] for more and the recent developments. [^4]: These modes of broken symmetries appeared as vanishing spectra of fluctuations in the previous works [@Sasakura:2007sv; @Sasakura:2007ud]. [^5]: The idea to consider local gauge symmetries to be non-linearly realized broken symmetries is rather old. For example, see [@Ferrari:1971at; @Brandt:1974jw; @Borisov:1974bn]. [^6]: There exists an ambiguity to add $dC_{ab}{}^{b}dC^{ac}{}_c$ to this metric. The addition will change some details of the analysis of both the tensor model and the continuum theory, but the mutual agreement should be obtained anyway. [^7]: For example, $\delta_B^2 c_\mu=0$ can be shown from $\delta_B \Gamma_{\mu\nu}{}^\rho=c^\sigma R_{\mu\sigma\nu}{}^\rho+\nabla_\mu\nabla_\nu c^\rho$ and the Bianchi identities for the Riemann tensor. [^8]: The codes are downloadable from http://www2.yukawa.kyoto-u.ac.jp/\~sasakura/codes/ghostcpp.zip.
--- abstract: | Let $M$ be a compact Riemannian manifold and let $\mu ,d$ be the associated measure and distance on $M$. Robert McCann obtained, generalizing results for the Euclidean case by Yann Brenier, the polar factorization of Borel maps $S:M\rightarrow M$ pushing forward $\mu $ to a measure $\nu $: each $S$ factors uniquely a.e. into the composition $S=T\circ U$, where $% U:M\rightarrow M$ is volume preserving and $T:M\rightarrow M$ is the optimal map transporting $\mu $ to $\nu $ with respect to the cost function $d^{2}/2$. In this article we study the polar factorization of conformal and projective maps of the sphere $S^{n}$. For conformal maps, which may be identified with elements of $O_{o}\left( 1,n+1\right) $, we prove that the polar factorization in the sense of optimal mass transport coincides with the algebraic polar factorization (Cartan decomposition) of this Lie group. For the projective case, where the group $GL_{+}\left( n+1\right) $ is involved, we find necessary and sufficient conditions for these two factorizations to agree. author: - | Yamile Godoy and Marcos Salvai [^1]\ \ \ title: Polar factorization of conformal and projective maps of the sphere in the sense of optimal mass transport --- 2010 MSC: 49Q20, 53A20, 53A30, 53C20, 53D12, 58E40 Keywords and phrases: optimal mass transport, conformal map, projective map, $c$-convex potential, Lagrangian submanifold Introduction ============ **Optimal mass transport and polar factorization.** Given two spatial distributions of mass, the problem of Monge and Kantorovich (see for instance [@Villani]) is to transport the mass from one distribution to the other as efficiently as possible. Here efficiency is measured against a cost function $c(x,y)$ specifying the transportation tariff per unit mass. More precisely, let $X$ be a topological space, let $c:X\times X\rightarrow \mathbb{R}$ be a nonnegative cost function and let $\mu ,\,\nu $ be finite Borel measures on $X$ with the same total mass. A map $T:X\rightarrow X$ that minimizes the functional $$T\mapsto \int_{X}c(x,T(x))~d\mu (x)$$under the constraint that $T$ pushes forward $\mu $ onto $\nu $ (that is, $% \nu (B)=\mu (T^{-1}(B))$ for any Borel set in $X$, which is denoted by $% T_{\#}\mu =\nu $) is called an* optimal transportation map between* $% \mu $ *and* $\nu $. In the following, when it is clear from the context, we call it simply optimal. An important particular case is the following: Let $M$ be a compact oriented Riemannian manifold and let $\mu =$ vol be the Riemannian measure on $M$ and $d$ the associated distance and consider the cost $c\left( p,q\right) =\frac{% 1}{2}d\left( p,q\right) ^{2}$. Let $S:M\rightarrow M$ be a Borel map pushing forward $\mu $ to a measure $% \nu $. Robert McCann proved in [@McCann], generalizing results for the Euclidean case by Brenier [@Brenier], that $S$ factors uniquely a.e.into the composition $S=T\circ U$, where $U:M\rightarrow M$ is volume preserving and $T:M\rightarrow M$ is the optimal map transporting $\mu $ to $% \nu $. This is called the *polar factorization of* $S$* in the sense of optimal mass transport*. For the sake of brevity we call it the Brenier-McCann polar factorization of $S$. **Polar factorization of conformal maps of the sphere.** An orientation preserving diffeomorphism $F$ of an oriented Riemannian manifold $\left( M,g\right) $ of dimension $n\geq 2$ is said to be *conformal* if $F^{\ast }g=fg$ for some positive function $f$ on $M$. 
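Before we turn to the sphere, let us record an elementary example, for illustration only, of the Brenier–McCann factorization recalled above; it is not taken from the results below and plays no role in them. Let $\mu$ be the Lebesgue measure on the Euclidean interval $[0,1]$, let $c(x,y)=\frac{1}{2}\left\vert x-y\right\vert ^{2}$, and let $S(x)=(1-x)^{2}$, so that $\nu =S_{\#}\mu$ has cumulative distribution function $F_{\nu }(t)=\sqrt{t}$. The optimal map transporting $\mu$ to $\nu$ is the monotone rearrangement
$$T=F_{\nu }^{-1}\circ F_{\mu },\qquad T(x)=x^{2}=\frac{d}{dx}\left( \frac{x^{3}}{3}\right) ,$$
the derivative of a convex function, while
$$U(x)=T^{-1}\left( S(x)\right) =\sqrt{(1-x)^{2}}=1-x$$
preserves $\mu$; hence $S=T\circ U$ is the polar factorization of $S$ in the sense described above.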
Let $S^{n}$ be the unit sphere centered at the origin of $\mathbb{R}^{n+1}$ and let $S:S^{n}\rightarrow S^{n}$ be a conformal transformation of $S^{n}$. Let $G=O_{o}\left( 1,n+1\right) $ be the identity component of the group preserving the symmetric bilinear form of signature $\left( 1,n+1\right) $ on $\mathbb{R}^{n+2}$. The map $S$ can be canonically identified with an element of $G$, thinking of $S^{n}$ as the projectivization of the light cone in the Lorentz space $\mathbb{R}^{1,n+1}$. For $A\in G$ and $u\in S^{n}$ (in particular $\left( 1,u\right) $ is a null vector) one defines $A\cdot u$ as the unique $u^{\prime }\in S^{n}$ such that$$A\left( \begin{array}{c} 1 \\ u% \end{array}% \right) \in \mathbb{R}\left( \begin{array}{c} 1 \\ u^{\prime }% \end{array}% \right) \text{.} \label{APuntoU}$$This is well known; we refer for instance to [@Udo] (see also [Salvai02]{}). By definition, the conformal transformations of the circle $% S^{1}$ are given by the above action of $O_{o}\left( 1,2\right) $ on it. They coincide with the Moebius maps of the circle, that is, the restrictions to $S^{1}$ of the Moebius maps of $\mathbb{C}\cup \left\{ \infty \right\} $ preserving the unit disc. The (algebraic) polar factorization of $G$ is $\exp \left( \mathfrak{p}% \right) SO\left( n+1\right) $, where $\mathfrak{p}$ is the vector space of symmetric matrices in the Lie algebra $o\left( 1,n+1\right) $ of $G$, that is$$\mathfrak{p}=\left\{ \left( \begin{array}{cc} 0 & v^{t} \\ v & 0% \end{array}% \right) \mid v\in \mathbb{R}^{n}\right\} \text{.} \label{p}$$In this context, it is usually called the Cartan decomposition of $G$. We have the following result: \[conformes\] Let $S$ be a conformal transformation of the sphere $S^{n}$. Then the Brenier-McCann polar factorization of $S$ coincides with the algebraic polar factorization of $S$. **Polar factorization of projective maps of the sphere.** An orientation preserving diffeomorphism $F$ of an oriented Riemannian manifold $M$ of dimension $n\geq 2$ is said to be *projective* if for any geodesic $\gamma $ of $M$, $F\circ \gamma $ is a reparametrization (not necessarily of constant speed) of a geodesic of $M$. For $n\geq 2$, the projective transformations of $S^{n}$ are exactly those of the form $$p\longmapsto Ap/\left\Vert Ap\right\Vert \label{Aproyectiva}$$for some $A\in GL_{+}(n+1)$, the group of linear automorphisms of $\mathbb{R}% ^{n+1}$ with positive determinant. By definition, the projective transformations of the circle $S^{1}$ are given by the action of $% GL_{+}\left( 2\right) $ on it as above. \[proyectivas\]** **Let $S$ be a projective transformation of the sphere $S^{n}$. Then the Brenier-McCann polar factorization of $S$ coincides with the algebraic polar factorization $PO$ of $S$ *(*that is, with positive definite self adjoint $P$ and orthogonal $O$*)* if and only if $P$ has at most two distinct eigenvalues. We would like very much to know explicitly, if possible, the Brenier-McCann polar factorization of the projective map of $S^{2}$ induced by, say, diag $% \left( 1,2,3\right) $. Next we give the main arguments in the proofs of the theorems. A projective map of $S^{n}$ induced by a positive definite self adjoint transformation of $\mathbb{R}^{n+1}$ with at most two distinct eigenvalues preserves meridians of the sphere through points lying in two fixed orthogonal subspaces. 
It turns out to be a particular case of the optimal maps considered in Theorem \[CorderoProyectiva\] in the preliminaries, that we prove using the characterization by McCann of the optimal maps on Riemannian manifolds involving $c$-convex potentials. These are a powerful theoretical tool, in general arduous to deal with in concrete cases, but the fact that the optimal maps of the circle are well known and the symmetries of our problem allowed us to apply them. Conformal maps on $S^{n}$ induced by symmetric transformations of Euclidean space behave similarly. We prove Theorem \[conformes\] by verifying that the conformal map induced by the symmetric part of the polar algebraic decomposition satisfies the hypotheses of Theorem \[CorderoConforme\], which is analogous to Theorem \[CorderoProyectiva\]. We comment on the proof of Theorem \[proyectivas\]. When $P$ has at most two distinct eigenvalues we check that the projective map induced by $P$ satisfies the hypotheses of Theorem \[CorderoProyectiva\] using that conformal maps of the circle double cover the projective maps of the circle. In the case that $P$ has at least three distinct eigenvalues, we resort to a necessary condition for a map on a Riemannian manifold $M$ to be optimal: that its graph be a.e. a Lagrangian submanifold with respect to a certain symplectic form defined a.e. on $M\times M$ in terms of the cost function. In [@Salvai02] and [@salvai07] (see also [@Emmanuele]) the second author et al. studied force free conformal and projective motions of $S^{n}$. In particular, they found some geodesics $% \sigma $ of the groups $SL\left( n+1,\mathbb{R}\right) $ and $O_{o}\left( 1,n+1\right) $ endowed with the not invariant (as it happens with non-rigid motions) Riemannian metric given by the kinetic metric. The curves $\sigma $ induce curves of measures $t\mapsto \left( \sigma _{t}\right) _{\#}\left( \text{vol}\right) $. We wonder whether this is related, with conformal or projective constraints, with the survey paper [@Buttazzo] on transportation problems in which a given mass dynamically moves from an initial configuration to a final one (see also [@BenamouB] and [Brasco]{}). Preliminaries ============= Let $\left( M,g\right) $, $\mu =$ vol$_{M}$ and $c=d^{2}/2$ be as in the introduction. We recall the result by McCann in [@McCann], which gives an expression for the optimal map transporting $\mu $ to a measure $\nu $. Given a lower semi-continuous function $\phi :M\rightarrow \mathbb{R}$, the supremal convolution of $\phi $ is the function $\phi ^{c}:M\rightarrow \mathbb{R}$ defined by $$\phi ^{c}(p)=\sup_{q\in M}\{-c(p,q)-\phi (q)\}\text{.} \label{c-transf}$$(We adopt the notation of [@Loeper], using supremal instead of infimal convolution.) \[McCann\] The optimal map $T$ between $\mu $ and $T_{\#}(\mu )$ can be expressed as a gradient map, that is, $$T(p)=\text{\emph{Exp}}_{p}\left( \text{\emph{grad}}_{\,p}\phi \right)$$a.e., where *Exp* is the geodesic exponential map of $M$ and $\phi $ is a $c$-convex potential, that is, $\phi :M\rightarrow \mathbb{R}$ is a lower semi-continuous function satisfying $\phi ^{cc}=\phi $. Cordero-Erausquin characterized in [@Cordero] (see also [@McCann]) the optimal transportation maps on tori. We write below an equivalent statement taken from subsection 6.2 in [@N.Q.Le]. 
\[CE\] Let $\mu $ and $\nu $ be two probability measures on the torus $% T^{n}=\mathbb{R}^{n}/2\pi \mathbb{Z}^{n}$ with the same total mass which are absolutely continuous with respect to the Lebesgue measure and let $% T:T^{n}\rightarrow T^{n}$ be a map pushing forward $\mu $ to $\nu $. Then $T$ is the optimal transportation map between $\mu $ and $\nu $ if and only if it lifts a.e. to a map $\widetilde{T}:\mathbb{R}^{n}\rightarrow \mathbb{R} ^{n}$ of the form $\widetilde{T}=$ *grad* $\psi $, where $\psi $ is a convex function on $\mathbb{R}^{n}$ such that $\psi\left( x\right) -\left\vert x\right\vert ^{2}/2$ descends to $T^{n}$. Conformal or projective maps of $S^{n}$ induced by self adjoint transformations of $\mathbb{R}^{n+1}$ (with at most two distinct eigenvalues, in the projective case) have a particular behavior encompassed in the following two theorems. Besides, for $n=1$, we have a particular case of the theorem above. We denote by $\{e_{0},\cdots ,e_{n}\}$ the canonical basis of $\mathbb{R}^{n+1}$. \[CorderoProyectiva\]Let $f:\mathbb{R}\rightarrow \mathbb{R}$ be a smooth $\pi $-periodic odd function with $f^{\prime }+1>0$ and write $% \mathbb{R}^{n+1}=\mathbb{R}^{k}\oplus \mathbb{R}^{\ell }$. Then $% T:S^{n}\rightarrow S^{n}$, $$T\left( \cos x~u,\sin x~v\right) =\left( \cos \left( x+f\left( x\right) \right) u,\sin \left( x+f\left( x\right) \right) v\right) \text{,} \label{Tproyectiva}$$where $x\in \mathbb{R}$, $u\in \mathbb{R}^{k}$, $v\in \mathbb{R}^{\ell }$ and $\left\Vert u\right\Vert =\left\Vert v\right\Vert =1$, is well defined and is the optimal transportation map among all maps sending *vol*$% _{S^{n}}$ to $T_{\#}\left( \emph{vol}_{S^{n}}\right) $.  First we show that $T$ is well defined. Let $p=\left( \cos x~u,\sin x~v\right) $ and suppose that $p=\left( \cos y~U,\sin y~V\right) $, where $% U\in \mathbb{R}^{k}$ and $V\in \mathbb{R}^{\ell }$ are unit vectors. Then $$\cos y=\varepsilon \cos x\text{,\ \ \ \ }U=\varepsilon u\text{,\ \ \ \ }\sin y=\delta \sin x\text{\ \ \ \ \ and \ \ \ \ \ }V=\delta v$$for some $\varepsilon ,\delta =\pm 1$. If $\varepsilon =1$, then $y=\delta x+2m\pi $ for some $m\in \mathbb{Z}$ and so $$T\left( \cos y~U,\sin y~V\right) =\left( \cos \left( \delta x+f\left( \delta x\right) \right) u,\sin \left( \delta x+f\left( \delta x\right) \right) \delta v\right) \text{,} \label{Tz}$$which coincides with the right hand side of (\[Tproyectiva\]), as desired (we used that $f$ is odd and that a $\pi $-periodic function is in particular $2\pi $-periodic). Next we consider the case $\varepsilon =-1$. We have $y=\pi -\delta x+2m\pi $ for some $m\in \mathbb{Z}$ and so $f\left( y\right) =f\left( \pi -\delta x\right) =f\left( -\delta x\right) =-\delta f\left( x\right) $, since $f$ is $\pi $-periodic and odd. Finally, using that $\cos \left(\pi -\delta t\right) =-\cos( t)$ and $\sin \left(\pi -\delta t\right) =\delta \sin(t)$, for each $t\in \mathbb{R}$, we obtain (\[Tproyectiva\]) again. Now we prove that $T$ is optimal. We consider first the case $k=\ell =1$ ($% n=1$). We may suppose $u=e_{0}$ and $v=e_{1}$. Then, after the canonical identification $\mathbb{R}^{2}=\mathbb{C}$ we have that $T\left( e^{ix}\right) =e^{i\left( x+f\left( x\right) \right) }$, whose lift $% \widetilde{T}:\mathbb{R}\rightarrow \mathbb{R}$ with $\widetilde{T}\left( 0\right) =0$ (notice that $T\left( 1\right) =1$ since $f$ is odd) is given by $\widetilde{T}\left( x\right) =x+f\left( x\right) $. Now, a primitive of $% \widetilde{T}$ is convex since $1+f^{\prime }>0$. 
Also, $\psi \left( x\right) -x^{2}/2$ descends to the circle, since $\int_{0}^{2\pi }f=0$ ($f$ is odd and $\pi $-periodic). By the result of Cordero-Erausquin in Theorem \[CE\] for $n=1$, $T$ is optimal. We return to the general case. We consider the vector field $W$ on $S^n$ defined by$$W\left( p\right) =f\left( x\right) \left( -\sin x~u,\cos x~v\right)$$for $p=\left( \cos x~u,\sin x~v\right) $. One can check as above that $W$ is well defined. We have that $$T\left( p\right) =\text{Exp}_{p}\left( W\left( p\right) \right) \text{.}$$By the result of McCann in Theorem \[McCann\] it suffices to show that $W$ is the gradient of a function $\phi $ with $\phi ^{cc}=\phi $. We propose $$\phi \left( \cos x~u,\sin x~v\right) =\int_{0}^{x}f\left( s\right) ~ds\text{% , }$$ which is well defined since $\int_{a}^{a+\pi }f\left( s\right) ~ds=0$ for any $a\in \mathbb{R}$ (indeed, the fact that $f$ is $\pi $-periodic and continuous implies that $\int_{a}^{a+\pi }f\left( s\right) ~ds=\int_{0}^{\pi }f\left( s\right) ~ds$ for any $a\in \mathbb{R}$, and also as $f$ is odd we have that the last integral vanishes). We compute $$\begin{aligned} d\phi _{p}\left( -\sin x~u,\cos x~v\right) &=&\left. \frac{d}{dt}\right\vert _{0}\phi \left( \left( \cos \left( x+t\right) u,\sin \left( x+t\right) v\right) \right) \\ &=&\left. \frac{d}{dt}\right\vert _{0}\int_{0}^{x+t}f\left( s\right) ~ds=f\left( x\right) \text{.}\end{aligned}$$Hence $\left\langle \text{grad}_{\,p}\phi ,\left( -\sin x~u,\cos x~v\right) \right\rangle =f\left( x\right) $. Let $X\in T_{p}S^{n}=p^{\bot }$ with $X\bot \left( -\sin x~u,\cos x~v\right) $. Suppose that $X=\left( u^{\prime },v^{\prime }\right) $ (in particular, $% u^{\prime }\bot u$ and $v^{\prime }\bot v$). Let $\alpha $ and $\beta $ be smooth curves on $S^{k-1}$ and $S^{\ell -1}$ with $\alpha \left( 0\right) =u,\alpha ^{\prime }\left( 0\right) =u^{\prime }/\cos x,\beta \left( 0\right) =v$ and $\beta ^{\prime }\left( 0\right) =v^{\prime }/\sin x$. We compute $$d\phi _{p}\left( X\right) =\left. \frac{d}{dt}\right\vert _{0}\phi \left( \cos x~\alpha \left( t\right) ,\sin x~\beta \left( t\right) \right) =-\left. \frac{d}{dt}\right\vert _{0}\int_{0}^{x}f\left( s\right) ~ds=0\text{.}$$Therefore grad$~\phi =W$. Finally, we have to verify that $\phi $ is a $c$-convex potential. For this, we resort to the case $n=1$ and use $SO\left( n\right) $-invariance. Since the sphere is compact, given $p\in S^{n}$ there exists $q_{o}\in S^{n}$ such that $$\phi ^{c}(p)=\sup_{q\in S^{n}}\{-c_{p}(q)-\phi (q)\}=-c_{p}(q_{o})-\phi (q_{o}),$$where $c_{p}(q)=c(p,q)$. Next we observe that $p$ and $q_{o}$ lie in the same great circle through $\left( \mathbb{R}^{k}\times \left\{ 0\right\} \right) \cap S^{n}$ and $\left( \left\{ 0\right\} \times \mathbb{R}% ^{k}\right) \cap S^{n}$. In fact, since $q_{o}$ is the maximum of the map $$q\in S^{n}\mapsto -c_{p}(q)-\phi (q)\in \mathbb{R}\text{,}$$it follows that $(dc_{p})_{q_{o}}=-(d\phi )_{q_{o}}$. So, their kernels coincide. Suppose $q_{o}=\left( \xi ,\eta \right) $. We saw above that Ker $% (d\phi )_{q_{o}}=\xi ^{\bot }\times \eta ^{\bot }$. Also, a straightforward computation yields that Ker $% (dc_{p})_{q_{o}}=q_{o}^{\bot }\cap p^{\bot }$. 
If $q_{o}\neq \pm p$, then $p\bot q_{o}^{\bot }\cap p^{\bot }=\xi ^{\bot }\times \eta ^{\bot }$, and so $$p\in \left( \xi ^{\bot }\times \eta ^{\bot }\right) ^{\bot }=\text{span}% ~\left\{ \left( \xi ,0\right) ,\left( 0,\eta \right) \right\} \text{.}$$ We suppose first that $p$ is in the circle $S:=$ span$\left\{ e_{0},e_{k+1}\right\} \cap S^{n}$ and, by the above observation, we see that $$\phi ^{c}(p)=\sup_{q\in S^{n}}\left\{ -c_{p}(q)-\phi (q)\right\} =\max_{q\in S}~\{-c_{p}(q)-\phi (q)\}=\phi _{o}^{c}(p),$$where $\phi _{o}=\left. \phi \right\vert _{S}$. Now, if $p\in S_{p}:=$ span $% \left\{ \left( u,0\right) ,\left( 0,v\right) \right\} \cap S^{n}$ with $u,v$ unit vectors, let $R\in SO\left( k\right) \times SO\left( \ell \right) $ such that $R(u,0)=e_{0}$, $R\left( 0,v\right) =e_{k+1}$. Since $$c(p,q)=c(R(p),R(q))\hspace{1cm}\text{and}\hspace{1cm}\phi \circ R(q)=\phi (q)$$for all $p,q\in S^{n}$, we have that $$\phi ^{c}(p)=\max_{q\in S_{p}}~\{-c_{p}(q)-\phi (q)\}=\max_{q\in S}~\{-c_{R(p)}(q)-\phi (q)\}=\phi _{o}^{c}(R(p)),$$where the first equality holds again by the above observation. In the same way, $\phi ^{cc}(p)=\phi _{o}^{cc}\circ R(p)$. We recall that at the beginning of the proof we saw that $T|_{S}$ is optimal, so $\phi _{o}^{cc}=\phi _{o}$. Then, for $p\in S^{n}$ we obtain that $$\phi ^{cc}(p)=\phi _{o}^{cc}\circ R(p)=\phi _{o}\circ R(p)=\phi (p)\text{.}$$Therefore, $\phi $ is a $c$-convex potential, as we wanted to see, and the proof of the theorem is complete. We have a statement similar to the one of the theorem above. The proof is essentially the same, considering only the case $\varepsilon =1$ and $R\in SO\left( n\right) $ such that $R\left( e_{0}\right) =e_{0}$ and $R\left( 0,v\right) =e_{1}$. \[CorderoConforme\]Let $g:\mathbb{R}\rightarrow \mathbb{R}$ be a smooth $% 2\pi $-periodic odd function with $g^{\prime }+1>0$. Then $% T:S^{n}\rightarrow S^{n}$, $$T\left( \cos x,\sin x~v\right) =\left( \cos \left( x+g\left( x\right) \right) ,\sin \left( x+g\left( x\right) \right) v\right) \text{,} \label{Tconforme}$$where $v$ is a unit vector in $\mathbb{R}^{n}$, is well defined and is the optimal transportation map among all maps sending *vol*$_{S^{n}}$ to $% T_{\#}\left( \text{\emph{vol}}_{S^{n}}\right) $. **Graphs of optimal maps as Lagrangian submanifolds.** Let $M$ be a Riemannian manifold and $c=\frac{1}{2}d^{2}$, as in the introduction. Kim, McCann and Warren found in [@KimWarren] a necessary condition for a map of $M$ to be optimal, which will be useful in the proof of Theorem [proyectivas]{}. They proved that the graph of the optimal map is calibrated a.e. by a certain split special Lagrangian calibration on an open dense subset of $M\times M$ endowed with a certain neutral metric. In particular, the graph turns out to be Lagrangian with respect to the symplectic form $% \omega $ on an open dense subset of $M\times M$ defined by $\omega =d\alpha $, with $\alpha $ the $1$-form on the same subset of $M\times M$ given by $$\alpha \left( Z\right) =dc\left( X,0\right) =\left( X,0\right) \left( c\right) \text{,}$$where $Z_{\left( p,q\right) }=\left( X_{p},Y_{q}\right) $ after the natural identification $T_{\left( p,q\right) }\left( M\times M\right) \approx T_{p}M\times T_{q}M$ (see [@Kim]). It is clear that $\omega \left( \left( u,0\right) ,\left( v,0\right) \right) =\omega \left( \left( 0,u\right) ,\left( 0,v\right) \right) =0$ for any pair of vector fields $u,v$ on $M$. In the proposition below we describe explicitly $\omega $ for the case where $M$ is the sphere $S^{n}$. 
We will use it in Proposition \[Lagrangian\]. \[omega\]Let $p,q$ be a pair of non-antipodal distinct points on $S^{n}$ and let $\gamma :\left[ 0,d\right] \rightarrow S^{n}$ be the unique unit speed shortest geodesic joining $p$ with $q$. Suppose that $\mathcal{B}% =\left\{ u_{1},\dots ,u_{n}\right\} $ is an orthonormal basis of $T_{p}S^{n}$ with $u_{1}=\gamma ^{\prime }\left( 0\right) $ and denote by $v_{i}$ the parallel transport of $u_{i}$ along $\gamma $ from $0$ to $d$ $\mathcal{(}$in particular, $v_{1}=\gamma ^{\prime }\left( d\right) $ and $\mathcal{\bar{B% }}=\left\{ v_{1},\dots ,v_{n}\right\} $ is an orthonormal basis of $% T_{q}S^{n}\mathcal{)}$. Let $\mathcal{C}$ be the oriented basis of $% T_{\left( p,q\right) }\left( S^{n}\times S^{n}\right) \approx T_{p}S^{n}\times T_{q}S^{n}$ obtained by juxtaposing $\mathcal{B}$ with $% \mathcal{\bar{B}}$. Then$$\left[ \omega _{\left( p,q\right) }\right] _{\mathcal{C}}=\left( \begin{array}{cc} 0 & A \\ -A & 0% \end{array}% \right) \text{,}$$where $A=$ $\emph{diag}$ $\left( 1,\frac{d}{\sin d}~I_{n-1}\right) $, with $% I_{k}$ the $k\times k$ identity matrix. Let $U,V$ be vector fields defined on open neighborhoods of $p$ and $q$ in $S^{n}$, respectively. Denote $u=U_{p}$, $v=V_{q}$. If $\sigma $ and $% \tau $ are curves in $S^{n}$ with $\sigma \left( 0\right) =p$, $\sigma ^{\prime }\left( 0\right) =u$, $\tau \left( 0\right) =q $ and $\tau ^{\prime }\left( 0\right) =v$, then$$\begin{aligned} \omega _{\left( p,q\right) }\left( \left( u,0\right) ,\left( 0,v\right) \right) &=&d\alpha _{\left( p,q\right) }\left( \left( u,0\right) ,\left( 0,v\right) \right) =-\left( 0,v\right) _{\left( p,q\right) }\left( dc\left( U,0\right) \right) \label{dsdt} \\ &=&-\left. \frac{d}{dt}\right\vert _{0}dc_{\left( p,\tau \left( t\right) \right) }\left( U_{p},0\right) =-\left. \frac{d}{dt}\right\vert _{0}\left. \frac{d}{ds}\right\vert _{0}c\left( \sigma \left( s\right) ,\tau \left( t\right) \right) \text{,} \notag\end{aligned}$$where the second equality follows from the fact that $\alpha \left( 0,V\right) =0$ and $\left[ \left( U,0\right) ,\left( 0,V\right) \right] =0$. Since $S^{n}$ is two-point homogeneous, we may suppose without loss of generality that $p=e_{0}$, $q=\cos d~e_{0}+\sin d~e_{1}$ with $0<d<\pi $ (so $\gamma \left( t\right) =\cos t~e_{0}+\sin t~e_{1}$) and $u_{i}=e_{i}$ ($% i=1,\dots ,n$). Let $\sigma _{i},\tau _{j}$ be the geodesics in $S^{n}$ with $\sigma _{i}\left( 0\right) =p$ and $\sigma _{i}^{\prime }\left( 0\right) =u_{i}$, $\tau _{j}\left( 0\right) =q$ and $\tau _{j}^{\prime }\left( 0\right) =v_{j}$. We have $\sigma _{1}=\gamma $, $\tau _{1}\left( t\right) =\gamma \left( t+d\right) $ and $$\sigma _{i}\left( s\right) =\cos s~e_{0}+\sin s~e_{i}\text{,\ \ \ \ \ \ \ \ \ }\tau _{2}\left( t\right) =\cos t\left( \cos d~e_{0}+\sin d~e_{1}\right) +\sin t~e_{2}\text{.}$$We have $A_{ij}=\omega \left( \left( u_{i},0\right) ,\left( 0,v_{j}\right) \right) $. By the $SO\left( n-1\right) $-symmetries of $S^{n}$ fixing the trajectory of $\gamma $ it suffices to show that $A_{11}=1$, $% A_{12}=A_{21}=A_{32}=0$ and $A_{22}=\frac{d}{\sin d}$. For any manifold $M$, if $\gamma $ is length minimizing on an open interval containing $\left[ 0,d\right] $ we have $d\left( \gamma \left( s\right) ,\gamma \left( d+t\right) \right) =d+t-s$ for $s,t$ near $0$ and so by ([dsdt]{}) $$A_{11}=-\left. 
\frac{d^{2}}{dsdt}\right\vert _{\left( 0,0\right) }\frac{1}{2}% \left( d+t-s\right) ^{2}=1\text{.}$$On the other hand, since $c\left( \cdot ,\cdot \right) =\frac{1}{2}\arccos ^{2}\left\langle \cdot ,\cdot \right\rangle $, we have by (\[dsdt\]) that $% A_{ij}=f_{ij}^{\prime }\left( 0\right) $, where$$f_{ij}\left( t\right) =-\left. \frac{d}{ds}\right\vert _{0}c\left( \sigma _{i}\left( s\right) ,\tau _{j}\left( t\right) \right) =\frac{\arccos \left( \left\langle \sigma _{i}\left( 0\right) ,\tau _{j}\left( t\right) \right\rangle \right) }{\sqrt{1-\left\langle \sigma _{i}\left( 0\right) ,\tau _{j}\left( t\right) \right\rangle ^{2}}}\left\langle \sigma _{i}^{\prime }\left( 0\right) ,\tau _{j}\left( t\right) \right\rangle \text{.% }$$Since $\left\langle \sigma _{i}^{\prime }\left( 0\right) ,\tau _{j}\left( t\right) \right\rangle $ vanishes for $\left( i,j\right) =\left( 2,1\right) ,\left( 3,2\right) $, $f_{21}=f_{32}\equiv 0$. Also, $f_{12}$ is an even function and so $f_{12}^{\prime }\left( 0\right) =0$. Hence, $% A_{21}=A_{32}=A_{12}\equiv 0$. Now, putting $\cos d\cos t=\cos \left( x\left( t\right) \right) $ for a function $x$ with values in the interval $% \left( 0,\pi \right) $ (in particular $x\left( 0\right) =d$ and $x^{\prime }\left( 0\right) =0$), we have $$A_{22}=f_{22}^{\prime }\left( 0\right) =\left. \frac{d}{dt}\right\vert _{0}% \frac{\arccos \left( \cos x\left( t\right) \right) }{\sqrt{1-\cos ^{2}x\left( t\right) }}\sin t=\left. \frac{d}{dt}\right\vert _{0}\frac{% x\left( t\right) }{\sin x\left( t\right) }\sin t=\frac{d}{\sin d}\text{.}$$ Therefore the coefficients of the matrix $A$ are as desired. Proofs of the theorems ====================== Let $S\in G$. Thus, $S=\exp (A)O$, where $A\in \mathfrak{p}$ (see (\[p\])) and $O\in SO(n+1)$. Clearly, $O$ preserves $\text{vol}_{S^{n}}$. So, by the uniqueness of the Brenier-McCann polar factorization (see the introduction), we have to prove that $\exp (A)$ is the optimal transportation map between $\text{vol}% _{S^{n}}$ and $S_{\#}(\text{vol}_{S^{n}})$. Without lost of generality we can take $A=a\left( \begin{array}{cc} 0 & e_{0}^{t} \\ e_{0} & 0% \end{array}% \right) \in \mathfrak{p}$, where $a>0$. In fact, if $R\in SO\left( n\right) $ satisfies $R\left( ae_{0}\right) =v$, then $A$ and $\left( \begin{array}{cc} 0 & v^{t} \\ v & 0% \end{array}% \right) $ are conjugate by the isometry diag $\left( 1,R\right) $ of $S^{n}$, and isometries preserve optimality. Next we compute $\exp (A)$ as a map of the sphere. Putting $$H_{t}=\left( \begin{array}{cc} \cosh at & \sinh at \\ \sinh at & \cosh at% \end{array}% \right)$$we have that $\exp \left( tA\right) =$ diag $\left( H_{t},I_{n}\right) $. Let $q=\left( u_{0},\dots ,u_{n}\right) \in S^{n}$ and let $\gamma \left( t\right) =\exp \left( tA\right) \left( q\right) $. By (\[APuntoU\]), $$\gamma \left( t\right) =\left( \sinh at+u_{0}\cosh at,u_{1},\dots ,u_{n}\right) /h\left( u_{0},t\right)$$with $h(u,t)=u\sinh at+\cosh at$. Now, $\gamma $ is the integral curve through $q$ of the vector field $V$ on the sphere defined by $$V\left( p\right) =\left. 
\frac{d}{dt}\right\vert _{0}\exp \left( tA\right) \left( p\right) =ae_{0}-\left\langle ae_{0},p\right\rangle p\text{.}$$Hence, $$\left\Vert \gamma ^{\prime }\left( t\right) \right\Vert =\left\Vert V\left( \gamma \left( t\right) \right) \right\Vert ={\frac{a\sqrt{1-u_{0}^{2}}}{% h(u_{0},t)}}\text{.}$$ Since $\gamma $ lies on a meridian through $e_{0}$, which is a geodesic, and $V$ vanishes at $\pm e_{0}$, the distance $D\left( q\right) $ from $q$ to $% \exp \left( A\right) \left( q\right) $ is the length of $\left. \gamma \right\vert _{\left[ 0,1\right] }$. Hence, $\exp (A)(q)=\text{Exp}_{q}(U(q))$, where $U\left( q\right) $ is a tangent vector pointing in the direction of $\gamma ^{\prime }\left( 0\right) $ with $\Vert U(q)\Vert =D\left( q\right) $. We have $$D\left( \cos x,\sin x~v\right) =\int_{0}^{1}{\frac{a\left\vert \sin x\right\vert }{h(\cos x,t)}}\,dt.$$The appropriate choice of sign yields that $\exp \left( A\right) $ equals $T$ as in (\[Tconforme\]) with $$g\left( x\right) =-\int_{0}^{1}{\frac{a\sin x}{h(\cos x,t)}}\,dt\text{.} \label{g}$$ Now we apply Theorem \[CorderoConforme\] to prove that $\exp \left( A\right) $ is optimal. Since $g$ is odd and $2\pi $-periodic, we have to verify only that $g^{\prime }+1>0$. We compute $$g^{\prime }(x)=-\int_{0}^{1}\frac{d}{dx}{\frac{a\sin x}{h(\cos x,t)}}% \,dt=-\int_{0}^{1}\frac{h_{t}(\cos x,t)}{h(\cos x,t)^{2}}~dt=\dfrac{1}{\cos x\sinh a+\cosh a}-1\text{,}$$where $h_{t}=\partial h/\partial t$. Hence $g^{\prime }+1>0$, as desired. The polar decomposition of $S\in GL_{+}\left( n+1\right) $ is $S=PO$, where $O\in SO\left( n+1\right) $ and $P $ is a positive definite self adjoint linear transformation. As in the conformal case, we may suppose that $P$ is diagonal and we have to prove that the induced operator $T\left( q\right) =P\left( q\right) /\left\Vert P\left( q\right) \right\Vert $ on $S^{n}$ is optimal. We consider first the case when $P$ has at least three distinct eigenvalues. As a consequence of the necessary condition for optimality stated in the preliminaries we have that $T$ is not optimal, since otherwise the graph of $% T$ would be Lagrangian a.e., contradicting Proposition \[Lagrangian\] below. So now we assume that $P$ has exactly two distinct eigenvalues, say $\lambda $ and $\mu $, with respective eigenspaces of dimensions $k$ and $\ell $ (the case when $P$ is a multiple of the identity is trivial). We may suppose that $$P=\text{diag}~\left( e^{\frac{a}{2}}I_{k},e^{-\frac{a}{2}}I_{\ell }\right) =\exp B\text{,}$$where $B=\frac{a}{2}$ diag $\left( I_{k},-I_{\ell }\right) $ for some $a\in \mathbb{R}$. In fact, $T$ does not change if we take instead of $P$ a positive multiple $cP$ of it (we chose $c=1/\sqrt{\lambda \mu }$ and $a=\log \left( \lambda /\mu \right) $). We compute $$T\left( \cos x~u,\sin x~v\right) =\frac{\left( e^{a/2}\cos x~u,e^{-a/2}\sin x~v\right) }{\left( e^{a}\cos ^{2}x+e^{-a}\sin ^{2}x\right) ^{1/2}}\text{,} \label{expAmedio}$$where $u\in S^{k-1}$ and $v\in S^{\ell -1}$. We see that $T$ preserves the meridian $S^{n}\cap $ span $\left\{ u,v\right\} $ and moreover it has the form (\[Tproyectiva\]) for some $\pi $-periodic odd function $f$. Now, identifying span $\left\{ u,v\right\} $ with $\mathbb{C}$ ($u=1,v=i$) we have by Lemma \[DoubleC\] below that $$e^{i2\left( x+g\left( x\right) \right) }=e^{i\left( 2x+f\left( 2x\right) \right) }$$where $g$ is the $2\pi $-periodic odd function in (\[g\]). 
Since $f\left( 0\right) =g\left( 0\right) =0$, we have that $f\left( x\right) =2g\left( \frac{1}{2}x\right) $ for all $x$. Now, $g^{\prime }+1>0$ by the proof of Theorem \[conformes\], hence $f^{\prime }+1>0$. Therefore, $T$ is optimal by Theorem \[CorderoProyectiva\]. Next we state the lemma we used in the proof above. It is well known that there is a double covering morphism $PSL\left( 2,\mathbb{R}\right) \rightarrow O_{o}\left( 1,2\right) $. We only need the morphism restricted to some subgroups isomorphic to the circle. Let $$A=\frac{a}{2}\left( \begin{array}{cc} 1 & 0 \\ 0 & -1% \end{array}% \right) \text{\ \ \ \ \ \ and\ \ \ \ \ \ }B=a\left( \begin{array}{cc} 0 & 1 \\ 1 & 0% \end{array}% \right)$$with $a>0$. Then $\exp A$ and $\exp B$ induce a projective and a conformal map of $S^{1}$ as in (\[Aproyectiva\]) and (\[APuntoU\]), respectively. The following lemma asserts that the first one double covers the second one. \[DoubleC\]Let $A$ and $B$ be as above and let $\rho :S^{1}\rightarrow S^{1}$, $\rho \left( x,y\right) =\left( x^{2}-y^{2},2xy\right) $ *(*that is, $\rho \left( z\right) =z^{2}$ after the identification $\mathbb{R}% ^{2}=\mathbb{C}$*)*. Then the following diagram commutes$$\begin{array}{ccc} S^{1} & \overset{\exp A}{\longrightarrow } & S^{1} \\ \downarrow \rho & & \downarrow \rho \\ S^{1} & \overset{\exp B}{\longrightarrow } & S^{1}\text{,}% \end{array}%$$where we are considering the induced conformal and projective maps of the circle. The statement is well-known; we sketch the proof for the sake of completeness. We have that the projective map on $S^{1}$ induced by $\exp \left( A\right) $ applied to $\left( \cos x,\sin x\right) $ equals the right hand side of (\[expAmedio\]) with $u=v=e_{1}=1$. We also have $$\exp \left( B\right) \cdot \left( \cos x,\sin x\right) =\frac{\left( \cosh a\cos x+\sinh a,\sin x\right) }{\sinh a\cos x+\cosh a}\text{.}$$Now, a straightforward computation yields the commutativity of the diagram. We used the following proposition in the proof of Theorem \[proyectivas\]. It involves the symplectic form $\omega $ considered in the preliminaries. \[Lagrangian\]Let $P$ be a positive definite self adjoint operator of $% \mathbb{R}^{n+1}$ with at least three distinct eigenvalues and let $T$ be the projective map induced by $P$ on $S^{n}$. Then there exists an open dense subset $W$ of $S^{n}$ such that *graph*$\left( dT_{p}\right) $ is *not* a Lagrangian subspace of $T_{p}S^{n}\times T_{T\left( p\right) }S^{n}$ with respect to $\omega _{\left( p,T\left( p\right) \right) }$ for any $p\in W$. In particular *graph*$\,\left( T\right) $ is not a Lagrangian submanifold a.e. of $S^{n}\times S^{n}$. We may suppose without lost of generality that $Pe_{i}=\lambda _{i}e_{i}$ for $i=0,\dots ,n$ and that $\lambda _{0},\,\lambda _{1},\,\lambda _{2}$ are positive and pairwise different. Let $W$ be the open subset of $S^{n}$ consisting of all points whose coordinates are all different from zero. Let $p\in W$ and let $q=T\left( p\right) \in W$ (which is different from $p$ and $-p$). Let $\mathcal{B}$ and $\mathcal{\bar{B}}$ be as in Proposition \[omega\]. Let $\mathcal{B}^{\prime }$ be the ordered basis consisting of all the elements of the basis $\mathcal{\bar{B}}$, except that $v_{1}$ is substituted for $V_{1}=\frac{\sin d}{d}v_{1}$, and let $\mathcal{C}^{\prime }$ be the juxtaposition of $\mathcal{B}$ and $% \mathcal{B}^{\prime }$. 
Then the matrix of $\omega _{\left( p,q\right) }$ with respect to $\mathcal{C}^{\prime }$ is a multiple of $\left( \begin{array}{cc} 0 & I_{n} \\ -I_{n} & 0% \end{array}% \right) $. It is well-known that the graph of a linear transformation $% L:T_{p}S^{n}\rightarrow T_{q}S^{n}$ is Lagrangian for $\omega _{\left( p,q\right) }$ if and only if the matrix of $L$ with respect to the bases $% \mathcal{B}$ and $\mathcal{B}^{\prime }$ is symmetric. A straightforward computation yields $$dT_{p}(v)=\frac{1}{\left\Vert P(p)\right\Vert }\left( P(v)-\left\langle P(v),q\right\rangle q\right) =\frac{1}{\left\Vert P(p)\right\Vert }\text{pr}% _{q^{\bot }}(P(v)) \label{dT}$$for any $p\in S^{n}$ and $v\in T_{p}S^{n}$, where pr$_{q^{\bot }}$ is the orthogonal projection onto $q^{\bot }$. The vectors $u_{1}\in T_{p}S^{n}$ and $v_{1}\in T_{q}S^{n}$ as in Proposition \[omega\] are $$u_{1}=\frac{q-\left\langle p,q\right\rangle p}{\Vert q-\left\langle p,q\right\rangle p\Vert }\ \text{\ \ \ \ \ \ \ \ and\ \ \ \ \ \ \ \ \ \ }% v_{1}=\frac{\left\langle p,q\right\rangle q-p}{\Vert \left\langle p,q\right\rangle q-p\Vert }\text{.}$$Now we write $p=\left( x,y\right) $ with $x\in \mathbb{R}^{3}$ and $P\left( x,y\right) =\left( P_{1}(x),P_{2}(y)\right) $, where $P_{1}=$ diag $\left( \lambda _{0},\lambda _{1},\lambda _{2}\right) $. It is easy to verify that we can take vectors $u_{2}\in T_{p}S^{n}$ and $v_{2}\in T_{q}S^{n}$ as in Proposition \[omega\] as follows:$$u_{2}=v_{2}=\left( P_{1}(x)\times x,0\right) /\left\Vert P_{1}(x)\times x\right\Vert \text{.}$$Now we verify that the matrix of $dT_{p}$ with respect to the bases $% \mathcal{B}$ and $\mathcal{B}^{\prime }$ (recall that $V_{1}=\frac{\sin d}{d}% v_{1}$) is not symmetric. We call $$a=\left\Vert P(p)\right\Vert \text{,}\hspace{1cm}b=\Vert q-\left\langle p,q\right\rangle p\Vert =\Vert \left\langle p,q\right\rangle q-p\Vert \ \ \ \ \ \ \ \text{and\ \ \ \ \ \ \ }c=\left\Vert P_{1}(x)\times x\right\Vert \text{% .}$$Straightforward computations using (\[dT\]) yield that $$\begin{aligned} \left\langle dT_{p}(u_{1}),v_{2}\right\rangle &=&\frac{1}{abc}\left\langle P_{1}^{2}\left( x\right) ,P_{1}(x)\times x\right\rangle \\ \left\langle dT_{p}(u_{2}),V_{1}\right\rangle &=&\frac{\sin d}{d}\frac{1}{% abc}\left\langle q,p\right\rangle \left\langle P_{1}^{2}\left( x\right) ,P_{1}(x)\times x\right\rangle \text{.}\end{aligned}$$We compute $\left\langle P_{1}^{2}\left( x\right) ,P_{1}(x)\times x\right\rangle =\left( \lambda _{0}-\lambda _{1}\right) \left( \lambda _{0}-\lambda _{2}\right) \left( \lambda _{1}-\lambda _{2}\right) x_{0}x_{1}x_{2}\neq 0$. Hence $\left\langle dT_{p}(u_{1}),v_{2}\right\rangle =\left\langle dT_{p}(u_{2}),V_{1}\right\rangle $ if and only if $% \left\langle q,p\right\rangle \sin d=d$, or equivalently $$\sin \left( 2d\right) =2\cos d\sin d=2d\text{,}$$which holds only for $d=0$. Therefore the matrix of $dT_{p}$ with respect to the bases $\mathcal{B}$ and $\mathcal{B}^{\prime }$ is not symmetric for all $p\in W$, as desired. [99]{} J. D. Benamou, Y. Brenier, A computational Fluid Mechanics solution to the Monge-Kantorovich mass transfer problem, Numer. Math. 84 (2000) 375–393. L. Brasco, A survey on dynamical transport distances, J. Math. Sci. (N.Y.) 181(6) (2012) 755–781. Y. Brenier, Décomposition polaire et réarrangement monotone des champs de vecteurs, C.R. Acad. Sci. Paris Sér. I Math. 305 (1987), 805–808. G. Buttazzo, Evolution models for mass transportation problems, Milan J. Math. 80(1) (2012) 47–63. D. 
Cordero-Erausquin, Sur le transport de mesures périodiques, Comptes Rendus de l’Académie des Sciences-Series I-Mathematics 329(3) (1999), 199–202. D. Emmanuele, M. Salvai, Force free Moebius motions of the circle, J. Geom. Symmetry. Phys. 27 (2012) 59-65. U. Hertrich-Jeromin, Introduction to Möbius differential geometry. London Mathematical Society Lecture Note Series, 300. Cambridge University Press, Cambridge, 2003. Y. H. Kim, R. McCann, Continuity, curvature, and the general covariance of optimal transportation, J. Eur. Math. Soc. (JEMS) 12(4) (2010) 1009–1040. Y. H. Kim, R. McCann, M. Warren, Pseudo-Riemannian geometry calibrates optimal transportation, Math. Res. Lett. 17(6) (2010) 1183–1197. N. Q. Le, Hölder regularity of the 2D dual semigeostrophic equations via analysis of linearized Monge-Ampère equations, to appear in Comm. Math. Phys. G. Loeper, Regularity of optimal maps on the sphere: The quadratic cost and the reflector antenna, Arch. Rational Mech. Anal. 199(1) (2011) 269-289. R. McCann, Polar factorization of maps on Riemannian manifolds, Geom. Funct. Anal. 11 (2001) 589–608. M. Salvai, Force free conformal motions of the sphere, Differential Geom. Appl. 16 (2002) 285–292. M. M. Lazarte, M. Salvai, A. Will, Force free projective motions of the sphere, J. Geom. Phys. 57 (2007) 2431–2436. C. Villani, Optimal transport. Old and new. Grundlehren der Mathematischen Wissenschaften, 338. Springer-Verlag, Berlin, 2009. [^1]: This work was partially supported by <span style="font-variant:small-caps;">Conicet</span> (PIP 112-2011-01-00670), <span style="font-variant:small-caps;">Foncyt</span> (PICT 2010 cat 1 proyecto 1716) <span style="font-variant:small-caps;">Secyt Univ.Nac.Córdoba</span>
--- abstract: 'The notions of column and row operator space were extended by A. Lambert from Hilbert spaces to general Banach spaces. In this paper, we use column and row spaces over quotients of subspaces of general $L_p$-spaces to equip several Banach algebras occurring naturally in abstract harmonic analysis with canonical, yet not obvious operator space structures that turn them into completely bounded Banach algebras. We use these operator space structures to gain new insights on those algebras.' author: - '*Matthias Neufang*' - '*Volker Runde*' title: | Column and row operator spaces over ${\mathit{QSL}}_p$-spaces\ and their use in abstract harmonic analysis --- column and row operator spaces, Herz–Schur multipliers, operator algebras, operator Connes-amenability, pseudofunctions, pseudomeasures, ${\mathit{QSL}}_p$-spaces. Primary 43A65; Secondary 22D12, 43A30, 46H25, 46J99, 47L25, 47L50. Introduction {#introduction .unnumbered} ============ The Fourier algebra $A(G)$ of a general locally compact group $G$ was introduced by P. Eymard in [@Eym]. If $G$ is abelian with dual group $\hat{G}$, then $A(G)$ is just $L_1(\hat{G})$ via the Fourier transform. As the predual of the group von Neumann algebra, $A(G)$ has a canonical structure as an abstract operator space (see [@ER], [@Pau], or [@Pis] for the theory of operator spaces), turning it into a completely contractive Banach algebra. In the past decade and a half, operator space theoretic methods have given new momentum to the study of $A(G)$ (see [@IS], [@NRS], or [@Rua], for example), yielding new insights, even if the problem in question seemed to have nothing to do with operator spaces ([@FKLS] or [@FR]). The definition of $A(G)$ can be extended to an $L_p$-context: instead of restricting oneself the left regular representation of $G$ on $L_2(G)$, one considers the left regular representation of $G$ on $L_p(G)$ for general $p \in (1,\infty)$. This approach leads to the *Figà-Talamanca–Herz algebras* $A_p(G)$, which were introduced by C. Herz in [@Her1] and further studied in [@Her2]. Ever since, the Figà-Talamanca–Herz algebras have been objects of independent interest in abstract harmonic analysis. At the first glance, it may seem that the passage from $L_2(G)$ to $L_p(G)$ for $p \neq 2$ is of little significance, and, indeed, many (mostly elementary) properties of $A(G)$ can be established for $A_p(G)$ with $p \neq 2$ along the same lines. However, the lack of von Neumann algebraic methods for operator algebras on $L_p$-spaces for $p \neq 2$ has left other problems, which have long been solved for $A(G)$, wide open for $A_p(G)$. For instance, any closed subgroup of $G$ is a set of synthesis for $A(G)$ ([@TT]) whereas the corresponding statement for $A_p(G)$ with $p \neq 2$ is still wide open. As the Figà-Talamanca–Herz algebras have no obvious connections with operator algebras on Hilbert space, it appears at first glance that operator space theoretic methods are of very limited use when dealing with $A_p(G)$ for $p \neq 2$. There is a notion of $p$-completely boundedness for general $p \in (1,\infty)$ with $2$-complete boundedness just being usual complete boundedness, and an abstract theory based on $p$-complete boundedness can be developed—called $p$-operator space theory in [@Daw]—that parallels operator space theory ([@LeM]). There are indeed applications of $p$-complete boundedness to Figà-Talamanca–Herz algebras (see [@Fen] and [@Daw]). 
Alas, as pointed out in [@Daw], there is no suitable Hahn–Banach theorem for $p$-completely bounded maps, so that the duality theory of $p$-operator spaces has to be fairly limited. In [@LNR], A. Lambert and the authors pursued a different approach to putting operator spaces to work on Figà-Talamanca–Herz algebras. In his doctoral thesis [@Lam], Lambert extended the notions of column and row operator space, which are canonical over Hilbert space, to general Banach spaces. This allows, for $p \in (1,\infty)$, to equip ${\cal B}(L_p(G))$ for any $p \in (1,\infty)$ with an operator space structure, which, for $p = 2$, is the canonical one. This, in turn, can be used to equip $A_p(G)$—for any $p \in (1,\infty)$—with an operator space structure in the usual sense, making it a completely bounded Banach algebra. With respect to this operator space structure, [@Rua Theorem 3.6] extends to Figà-Talamanca–Herz algebras: $G$ is amenable if and only if $A_p(G)$ is operator amenable for one—and, equivalently, all—$p \in (1,\infty)$. In the present paper, we continue the work begun in [@LNR] and link it with the paper [@RunBp] by the second author. Most of it is devoted to extending operator space theoretic results known to hold for the Fourier algebra and (reduced) Fourier–Stieltjes algebra of a locally compact group to the suitable generalizations in a general $L_p$-context. In particular, we show that, for any $p \in (1,\infty)$, the Banach algebra $B_p(G)$ introduced in [@RunBp] can be turned into a completely bounded Banach algebra in a canonical manner, and we obtain an $L_p$-generalization of [@RS Theorem 4.4]. Preliminaries ============= In this section, we recall some of background from [@LNR] and [@RunBp]. We shall throughout rely heavily on those papers, and the reader is advised to have them at hand. Column and row operators spaces over Banach spaces -------------------------------------------------- The notions of column and row operator space of Hilbert space are standard in operator space theory ([@ER 3.4]). In [@Lam], Lambert extended these notions to general Banach spaces. As his construction is fairly involved, we will only sketch it very briefly here and refer to [@LNR Sections 2 and 3] instead (and to [@Lam] for more details). Throughout the paper, we adopt the notation from [@LNR]. Lambert introduces a category—called *operator sequence spaces*—that can be viewed as an intermediary between Banach spaces and operator spaces, and defines functors $$\min, \max \!: \{\text{Banach spaces} \} \to \{ \text{operator sequence spaces} \}$$ and $${\operatorname{Min}}, {\operatorname{Max}}\!: \{\text{operator sequence spaces} \} \to \{ \text{operator spaces} \}$$ such that ${\operatorname{Min}}\circ \min = {\operatorname{MIN}}$ and ${\operatorname{Max}}\circ \max = {\operatorname{MAX}}$. He then defines $${\operatorname{COL}}, {\operatorname{ROW}}\!: \{\text{Banach spaces} \} \to \{ \text{operator spaces} \}$$ as $${\operatorname{COL}}:= {\operatorname{Min}}\circ \max \qquad\text{and}\qquad {\operatorname{ROW}}:= {\operatorname{Max}}\circ \min.$$ For any Banach space $E$, the operator spaces ${\operatorname{COL}}(E)$ and ${\operatorname{ROW}}(E)$ are homogeneous and satisfy $${\operatorname{COL}}(E)^\ast = {\operatorname{ROW}}(E^\ast) \qquad\text{and}\qquad {\operatorname{ROW}}(E)^\ast = {\operatorname{COL}}(E^\ast).$$ By [@Math], these definitions coincide with the usual ones in the case of a Hilbert space. 
We would like to point out that column and row operator spaces in a general Banach space context display a behavior quite different from that in the Hilbert space setting, as shown by the following two propositions: \[colmax\] For any subhomogeneous $\cstar$-algebra $\A$, we have canonical completely bounded isomorphisms $${\operatorname{COL}}(\A) \cong {\operatorname{MIN}}(\A) \qquad\text{and}\qquad {\operatorname{ROW}}(\A) \cong {\operatorname{MAX}}(\A).$$ Let $\A$ be a subhomogeneous $\cstar$-algebra. As $\id_\A \!: \min(\A) \to \max(\A)$ is sequentially bounded by [@Lam Satz 2.2.25], $$\id_\A \!: {\operatorname{MIN}}(\A) = {\operatorname{Min}}(\min(\A)) \to {\operatorname{Min}}(\max(\A)) = {\operatorname{COL}}(\A)$$ and $$\id_\A \!: {\operatorname{ROW}}(\A) = {\operatorname{Max}}(\min(\A)) \to {\operatorname{Max}}(\max(\A)) = {\operatorname{MAX}}(\A)$$ are completely bounded. Let $\Hilbert$ be an infinite-dimensional Hilbert space, and let $\iota \!: \Hilbert \to {\cal C}(\Omega)$ be an isometric embedding into the continuous functions on some compact Hausdorff space $\Omega$. As ${\operatorname{COL}}({\cal C}(\Omega))\cong {\operatorname{MIN}}({\cal C}(\Omega))$ by Proposition \[colmax\] whereas $\id_\Hilbert \!: {\operatorname{MIN}}(\Hilbert) \to {\operatorname{COL}}(\Hilbert)$ is not completely bounded, this means that ${\operatorname{COL}}$ does not respect subspaces; in a similar way, we see that ${\operatorname{COL}}$ does not respect quotients either. By duality, the same is true for ${\operatorname{ROW}}$. Given an operator space $E$, we denote its *opposite operator space* by $E^\op$ (see [@Pis0 pp. 43–44]). It is immediate that ${\operatorname{COL}}(\Hilbert)^\op = {\operatorname{ROW}}(\Hilbert)$ and ${\operatorname{ROW}}(\Hilbert)^\op = {\operatorname{COL}}(\Hilbert)$ for any Hilbert space $\Hilbert$. For a general Banach space $E$, we have: \[opposite\] For any Banach space $E$, the identity on $E$ induces a complete contraction from ${\operatorname{COL}}(E)$ to ${\operatorname{ROW}}(E)^\op$, which, in general, fails to have a completely bounded inverse. From the definition of ${\operatorname{ROW}}$, it is obvious that $\id_E \!: \min(E) \to C({\operatorname{ROW}}(E)^\op)$ is a sequential isometry, so that $$\id_E \!: {\operatorname{COL}}(E) = {\operatorname{Max}}(\min(E)) \to {\operatorname{ROW}}(E)^\op$$ is a complete contraction by [@Lam Satz 4.1.12]. On the other hand, if $E = \A$ is a subhomogeneous $\cstar$-algebra, then we have ${\operatorname{ROW}}(\A)^\op \cong {\operatorname{MAX}}(\A)^\op = {\operatorname{MAX}}(\A)$ by Proposition \[colmax\] whereas ${\operatorname{COL}}(\A) \cong {\operatorname{MIN}}(\A)$. Hence, $\id \!: {\operatorname{ROW}}(\A)^\op \to {\operatorname{COL}}(\A)$ cannot be completely bounded unless $\dim \A < \infty$ ([@Pau Theorem 14.3(iii)]). Representations on ${\mathit{QSL}}_p$-spaces -------------------------------------------- By a *representation* of a locally compact group $G$ on a Banach space, we mean a pair $(\pi,E)$, where $E$ is a Banach space and $\pi$ is a homomorphism from $G$ into the group of invertible isometries on $E$ which is continuous with respect to the given topology on $G$ and the strong operator topology on ${\cal B}(E)$. (This is somewhat more restrictive than the usual use of the term; as we will not consider any other kind of representation, however, we prefer to keep our terminology short.)
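A standard example, recalled here only for orientation (it is not made explicit at this point of the paper, but it underlies the algebras ${\mathrm{PF}}_p(G)$ and $\PM_p(G)$ introduced below), is the left regular representation $(\lambda_p, L_p(G))$ of $G$, given by $$(\lambda_p(x)\xi)(y) := \xi(x^{-1}y) \qquad (\xi \in L_p(G), \, x, y \in G).$$ Left invariance of Haar measure shows that each $\lambda_p(x)$ is an invertible isometry, and continuity of translation in $L_p(G)$ shows that $x \mapsto \lambda_p(x)\xi$ is norm continuous for each $\xi \in L_p(G)$, so that $(\lambda_p,L_p(G))$ is indeed a representation in the above sense; its integrated form, as in (\[integ\]) below, is convolution: $\lambda_p(f)\xi = f \ast \xi$ for $f \in L_1(G)$ and $\xi \in L_p(G)$.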
Given two representations $(\pi,E)$ and $(\rho,F)$ of a locally compact group $G$, we - call $(\pi,E)$ and $(\rho,F)$ *equivalent* if there is an invertible isometry $V \!: E \to F$ with $$V\pi(x) V^{-1} = \rho(x) \qquad (x \in G),$$ - call $(\rho,F)$ a *subrepresentation* of $(\pi,E)$ if $F$ is a closed subspace of $E$ and $\rho(x) = \pi(x) |_F$ holds for all $x \in G$, and - say that $(\rho,F)$ is *contained* in $(\pi,E)$ if $(\rho,F)$ is equivalent to a subrepresentation of $(\pi,E)$, in which case, we write $(\rho,F) \subset (\pi,E)$. Throughout, we shall often identify a particular representation with its equivalence class in order to avoid pedantry. Given a locally compact group $G$ and a representation $(\pi,E)$ of $G$, we obtain a representation of the group algebra $L_1(G)$ on $E$, i.e., a contractive algebra homomorphism from $L_1(G)$ into ${\cal B}(E)$, which we denote by $\pi$ as well, via $$\label{integ} \pi(f) := \int_G f(x) \pi(x) \, dx \qquad (f \in L_1(G)),$$ where the integral converges in the strong operator topology. Conversely, if $\pi \!: L_1(G) \to {\cal B}(E)$ is a representation that is non-degenerate, i.e., the span of $\{ \pi(f)\xi : f \in L_1(G), \, \xi \in E \}$ is dense in $E$, then it arises from a representation of $G$ on $E$ via (\[integ\]). In this paper, we are interested in representations on ${\mathit{QSL}}_p$-spaces with $p \in (1,\infty)$, i.e., on Banach spaces that are isometrically isomorphic to quotients of subspaces—or, equivalently, subspaces of quotients—of the usual $L_p$-spaces. By [@Kwa [§]{}4, Theorem 2], these are precisely the $p$-spaces of [@Her2]. For a locally compact group $G$ and $p \in (1,\infty)$, we denote by ${\mathrm{Rep}}_p(G)$ the collection of all (equivalence classes of) representations of $G$ on a ${\mathit{QSL}}_p$-space. For the following definition, recall that, for $p \in (1,\infty)$, any ${\mathit{QSL}}_p$-space $E$ is reflexive, so that ${\cal B}(E)$ is a dual Banach space in a canonical way; in particular, we can speak of the weak$^\ast$ topology on ${\cal B}(E)$. Let $G$ be a locally compact group, let $p \in (1,\infty)$, and let $(\pi,E) \in {\mathrm{Rep}}_p(G)$. Then: the algebra ${\mathrm{PF}}_{p,\pi}(G)$ of *$p$-pseudofunctions associated with $(\pi,E)$* is the norm closure of $\pi(L_1(G))$ in ${\cal B}(E)$; the algebra $\PM_{p,\pi}(G)$ of *$p$-pseudomeasures associated with $(\pi,E)$* is the weak$^\ast$ closure of $\pi(L_1(G))$ in ${\cal B}(E)$. If $(\pi,E) = (\lambda_p,L_p(G))$, i.e., the left regular representation of $G$ on $L_p(G)$, we simply speak of $p$-pseudofunctions and $p$-pseudomeasures, as is standard usage, and write ${\mathrm{PF}}_p(G)$ and $\PM_p(G)$, respectively. ${\mathrm{PF}}_p(G)$ is not an operator algebra for $p \neq 2$ ============================================================== Let $G$ be a locally compact group. Then ${\mathrm{PF}}_2(G)$ and $\PM_2(G)$ are the reduced group $\cstar$-algebra $C^\ast_r(G)$ and the group von Neumann algebra $\VN(G)$, respectively. As $\cstar$-subalgebras of ${\cal B}(L^2(G))$, they are operator spaces in a canonical manner. For any Hilbert space $\Hilbert$, the operator spaces ${\cal B}(\Hilbert)$ and $\CB({\operatorname{COL}}(\Hilbert))$ are completely isometrically isomorphic. We thus define: \[osdef\] Let $G$ be a locally compact group, let $p \in (1,\infty)$, and let $(\pi,E) \in {\mathrm{Rep}}_p(G)$.
Then the canonical operator space structure of ${\mathrm{PF}}_{p,\pi}(G)$ and $\PM_{p,\pi}(G)$, respectively, is the one inherited as a subspace of $\CB({\operatorname{COL}}(E))$. As $\CB(E)$ is a completely contractive Banach algebra for every operator space $E$, Definition \[osdef\] turns ${\mathrm{PF}}_{p,\pi}(G)$ and $\PM_{p,\pi}(G)$ into completely contractive Banach algebras. By an *operator algebra*, we mean a completely contractive Banach algebra that is completely isometrically isomorphic to a—not necessarily self-adjoint—closed subalgebra of ${\cal B}(\Hilbert)$ for some Hilbert space $\Hilbert$. Not every completely bounded Banach algebra is an operator algebra: for instance, for an operator space $E$, the algebra $\CB(E)$ is an operator algebra if and only if $E = {\operatorname{COL}}(\Hilbert)$ for some Hilbert space $\Hilbert$ ([@Ble Theorem 3.4]). Nevertheless, there are operator algebras that arise naturally from $L_p$-spaces for $p \neq 2$ ([@BLeM]). Hence, the following theorem is somewhat less self-evident than one might think at first glance: \[noopalg\] Let $p \in (1,\infty) \setminus \{ 2 \}$, and let $G$ be an amenable, locally compact group containing an infinite abelian subgroup. Then ${\mathrm{PF}}_p(G)$ and $\PM_p(G)$ are not operator algebras. Before we delve into the proof, we establish some more notation and definitions. For a locally compact group $G$ and $p \in (1,\infty)$, we denote its *Figà-Talamanca–Herz algebra* by $A_p(G)$ (see [@Her1], [@Her2], and [@Eym2]). We have a canonical duality $A_p(G)^\ast = \PM_{p'}(G)$, where $p' \in (1,\infty)$ is such that $\frac{1}{p} + \frac{1}{p'} = 1$. Of course, $A_2(G)$ is just Eymard’s Fourier algebra $A(G)$. By a *multiplier* of $A_p(G)$, we mean a continuous function $f$ such that $fg \in A_p(G)$ for all $g \in A_p(G)$; for any multiplier $f$ of $A_p(G)$, multiplication induces a linear map $M_f \!: A_p(G) \to A_p(G)$, which is easily seen to be bounded by the closed graph theorem. As $A_p(G)$ is a closed subspace of $\PM_{p'}(G)^\ast$, it inherits a canonical operator space structure via Definition \[osdef\] (this is the same operator space structure as considered in [@LNR]). We thus define the *completely bounded multipliers* of $A_p(G)$ as $${\mathcal{M}_\mathrm{cb}}(A_p(G)) := \{ f : \text{$f$ is a multiplier of $A_p(G)$ such that $M_f \in \CB(A_p(G))$} \}.$$ It is easy to see that ${\mathcal{M}_\mathrm{cb}}(A_p(G))$ is a closed subalgebra of $\CB(A_p(G))$. In [@LNR], it was shown that multiplication in $A_p(G)$ is completely bounded, even though not necessarily completely contractive, so that we have a canonical completely bounded inclusion $A_p(G) \subset {\mathcal{M}_\mathrm{cb}}(A_p(G))$. We start the proof of Theorem \[noopalg\] with a lemma: \[nol1\] Let $G$ be a locally compact group, and let $p \in (1,\infty)$ be such that ${\mathrm{PF}}_p(G)$ is an operator algebra. Then $A_{p'}(G) \subset {\mathcal{M}_\mathrm{cb}}(A(G))$ holds. Let $f \in A_{p'}(G)$, so that $M_f \in \CB(A_{p'}(G))$ and, consequently, $M_f^\ast \in \CB(\PM_p(G))$. It is easy to see that $M_f^\ast$ leaves ${\mathrm{PF}}_p(G)$ invariant, so that we can view $M_f^\ast$ as an element of $\CB({\mathrm{PF}}_p(G))$. Assume that there is a Hilbert space $\Hilbert$ and a completely isometric algebra homomorphism $\iota \!: {\mathrm{PF}}_p(G) \to {\cal B}(\Hilbert)$. Then $\lambda_p^\# := \iota \circ \lambda_p \!: L_1(G) \to {\cal B}(\Hilbert)$ is a contractive representation of $L_1(G)$ on $\Hilbert$.
Let $( e_\alpha )_\alpha$ be an approximate identity for $L_1(G)$ bounded by one, and let $P \in {\cal B}(\Hilbert)$ be a weak$^\ast$ accumulation point of $(\lambda^\#_p(e_\alpha) )_\alpha$. Then $P$ is a norm one idempotent and thus a projection. Replacing $\Hilbert$ by $P\Hilbert$, we can therefore suppose that $\lambda^\#_p \!: L_1(G) \to {\cal B}(\Hilbert)$ is a non-degenerate representation of $L_1(G)$. Consequently, it arises from a unitary representation of $G$ on $\Hilbert$, which we denote likewise by $\lambda_p^\#$. We can view $M_f^\ast$ as a completely bounded map from ${\mathrm{PF}}_p(G)$ to ${\cal B}(\Hilbert)$. By the Arveson–Wittstock–Hahn–Banach theorem ([@ER Theorem 4.1.5]), we can extend $M_f^\ast$ to a completely bounded map $\widetilde{M^\ast_f} \!: {\cal B}(\Hilbert) \to {\cal B}(\Hilbert)$. By [@Pau Theorem 8.4], there is another Hilbert space $\mathfrak K$ along with a unital $^\ast$-homomorphism $\pi \!: {\cal B}(\Hilbert) \to {\cal B}(\mathfrak{K})$ and bounded operators $V,W \!: \Hilbert \to \mathfrak K$ such that $$\widetilde{M^\ast_f}(T) = V^\ast \pi(T) W \qquad (T \in {\cal B}(\Hilbert))$$ and thus $$\label{factor1} M^\ast_f(\lambda^\#_p(g)) = V^\ast \pi(\lambda^\#_p(g)) W \qquad (g \in L_1(G)).$$ Set $\sigma := \pi \circ \lambda^\#_p$, so that $\sigma$ is a contractive representation of $L_1(G)$ on the Hilbert space $\mathfrak K$. As before for $\lambda_p^\#$, we see that $\sigma$ is non-degenerate and thus is induced via integration by a unitary representation of $G$ on $\mathfrak K$, which we denote likewise by $\sigma$. If $( e_\alpha )_\alpha$ denotes again a bounded approximate identity for $L_1(G)$, we obtain $$\label{factor2} \begin{split} f(x) \lambda_p^\#(x) & = M_f^\ast(\lambda_p^\#(x)) \\ & = \lim_\alpha M_f^\ast(\lambda_p^\#(\delta_x \ast e_\alpha)) \\ & = \lim_\alpha V^\ast \sigma(\delta_x \ast e_\alpha) W, \qquad\text{by (\ref{factor1})}, \\ & = V^\ast \sigma(x) W \qquad (x \in G), \end{split}$$ where the limits are taken in the strong operator topology. Fix a unit vector $\xi \in \Hilbert$, and define $L, R \!: G \to \mathfrak K$ by letting $$L(x) := \sigma(x^{-1})V\lambda_p^\#(x)\xi \quad\text{and}\quad R(x) := \sigma(x^{-1}) W \lambda_p^\#(x)\xi \qquad (x \in G).$$ It follows that $$\begin{split} \langle L(x),R(y) \rangle & = \langle \sigma(x^{-1})V\lambda_p^\#(x)\xi, \sigma(y^{-1})W\lambda_p^\#(y)\xi \rangle \\ & = \langle \lambda_p^\#(x)\xi, V^\ast \sigma(xy^{-1})W\lambda_p^\#(y)\xi \rangle \\ & = \langle f(xy^{-1}) \lambda_p^\#(x)\xi, \lambda_p^\#(xy^{-1})\lambda_p^\#(y)\xi \rangle, \qquad \text{by (\ref{factor2})}, \\ & = f(xy^{-1}) \qquad (x,y \in G). \end{split}$$ By the characterization of completely bounded multipliers of $A(G)$ from [@BF]—see [@Jol] for an alternative proof—, this means that $f \in {\mathcal{M}_\mathrm{cb}}(A(G))$. Our proof of Lemma \[nol1\] is patterned after that of the main result of [@Jol]. Our next proposition seems to be folklore (and likely true for all infinite $G$), but for lack of a suitable reference, we give a proof that was indicated to us by A. Derighetti: \[nop\] Let $G$ be a locally compact group that contains an infinite abelian subgroup, and let $p \in (1,\infty) \setminus \{ 2 \}$. Then $A_p(G)$ is not contained in $A(G)$. Let $H$ be an infinite abelian subgroup of $G$, which we can suppose to be closed, and note that $\PM_p(H) \subsetneq \PM_2(H)$ by [@Lar Theorems 4.5.4 and 4.5.5], so that, by duality, $A(H) \subsetneq A_p(H)$.
Assume that $A_p(G) \subset A(G)$, and let $Q_{p,H} \!: A_p(G) \to A_p(H)$ and $Q_H \!: A(G) \to A(H)$ denote the restriction maps. Since both $Q_{p,H}$ and $Q_H$ are surjective ([@Her2 Theorem 1b]), $A_p(G) \subset A(G)$ thus yields $A_p(H) \subset A(H)$, which contradicts $A(H) \subsetneq A_p(H)$. Groups to which Proposition \[nop\] applies are, in particular, all infinite, compact groups ([@Zel Theorem 2]), all non-metrizable groups (by [@HR (8.7) Theorem] and [@Zel Theorem 2] combined), and all connected groups ([@MZ Theorem in 4.13]). Since ${\mathrm{PF}}_p(G)$ is a closed subalgebra of $\PM_p(G)$, it is enough to prove the claim for ${\mathrm{PF}}_p(G)$. By Lemma \[nol1\], we have $A_{p'}(G) \subset {\mathcal{M}_\mathrm{cb}}(A(G))$. Since $G$ is amenable, $A(G)$ has a bounded approximate identity by Leptin’s theorem ([@Pie Theorem 10.4]). Moreover, $A_{p'}(G)$ is a Banach $A(G)$-module by [@Her1], and the bounded approximate identity for $A(G)$ constructed in the proof of [@Pie Theorem 10.4] is easily seen to also be a bounded approximate identity for the Banach $A(G)$-module $A_{p'}(G)$. From one of the versions of Cohen’s factorization theorem—[@Dal Corollary 2.9.26], for instance—, it then follows that each function in $A_{p'}(G)$ is a product of another function in $A_{p'}(G)$ with a function from $A(G)$. By Lemma \[nol1\], $A_{p'}(G) \subset {\mathcal{M}_\mathrm{cb}}(A(G))$ holds, so that $A_{p'}(G) \subset A(G)$. This contradicts Proposition \[nop\]. ${\mathrm{PF}}_{p'}(G)^\ast$ and $B_p(G)$ as completely bounded Banach algebras ========================================================================== Let $G$ be a locally compact group. Then $A(G) = \VN(G)_\ast$, $B_r(G) = C^\ast_r(G)^\ast$—the *reduced Fourier–Stieltjes algebra* of $G$—, and the *Fourier–Stieltjes algebra* $B(G) = \cstar(G)^\ast$, where $\cstar(G)$ denotes the full group $\cstar$-algebra, all have canonical operator space structures turning them into completely contractive Banach algebras. For $p \in (1,\infty)$, the embedding $A_p(G) \subset \PM_{p'}(G)^\ast$ turns $A_p(G)$ into a *completely bounded Banach algebra*, i.e., turns it into an operator space such that multiplication is completely bounded, albeit not necessarily completely contractive (see [@LNR] for details). For any $p \in (1,\infty)$, the space ${\mathrm{PF}}_{p'}(G)^\ast$ consists of continuous functions on $G$ and is a Banach algebra under pointwise multiplication (see [@Her3] and [@Cow]). Moreover, in [@RunBp], the second author defined a unital, commutative Banach algebra $B_p(G)$ containing ${\mathrm{PF}}_{p'}(G)^\ast$ ([@RunBp Theorem 6.6(i)]), which, for $p = 2$, is just $B(G)$. In this section, we will adapt the construction from [@LNR] to equip both ${\mathrm{PF}}_{p'}(G)^\ast$ and $B_p(G)$ with canonical operator space structures—generalizing those for $B_r(G)$ and $B(G)$ in the $p=2$ case—such that they become completely bounded Banach algebras. We begin with the following proposition: \[tprop\] Let $p \in (1, \infty)$, and let $E$ and $F$ be ${\mathit{QSL}}_p$-spaces.
Then there is a norm $\| \cdot \|_p$ on the algebraic tensor product $E \tensor F$ with the following properties: (i) $\| \cdot \|_p$ is a cross norm dominating the injective tensor norm; (ii) the completion $E \ttensor_p F$ of $(E \tensor F, \| \cdot \|_p)$ is a ${\mathit{QSL}}_p$-space; (iii) if $G$ is a locally compact group with $(\pi,E), (\rho, F) \in {\mathrm{Rep}}_p(G)$, then $(\pi \tensor \rho, E \ttensor_p F) \in {\mathrm{Rep}}_p(G)$; (iv) the bilinear maps $${\operatorname{COL}}(E) \times {\operatorname{COL}}(F) \to {\operatorname{COL}}(E \ttensor_p F), \quad (\xi,\eta) \mapsto \xi \tensor \eta$$ and $${\operatorname{ROW}}(E) \times {\operatorname{ROW}}(F) \to {\operatorname{ROW}}(E \ttensor_p F), \quad (\xi,\eta) \mapsto \xi \tensor \eta$$ are completely bounded with $\cb$-norm at most ${K_{\mathbb G}}$, the *complex Grothendieck constant*. Moreover, if $E = L^p(X)$ for some measure space $X$, we can choose $\| \cdot \|_p$ as the norm $L^p(X) \tensor F$ inherits as a subspace of $L^p(X,F)$. (i), (ii), and (iii) just summarize [@RunBp Theorem 3.1], and the “moreover” part is clear from an inspection of the proof of that theorem. \(iv) follows from [@LNR Theorems 5.5 and 5.8] and the construction of $\| \cdot \|_p$ in [@RunBp]. Given a locally compact group $G$, $p \in (1,\infty)$, and $(\pi,E) \in {\mathrm{Rep}}_{p'}(G)$, let $\PM_{p',\pi}(G)_\ast$ denote the canonical predual of $\PM_{p',\pi}(G)$; we shall consider it with the operator space structure inherited from $\PM_{p',\pi}(G)^\ast$. It is immediate that $\PM_{p',\pi}(G)_\ast$ consists of continuous functions on $G$. We have: \[mullem\] Let $G$ be a locally compact group, let $p \in (1,\infty)$, let $(\pi,E), (\rho, F) \in {\mathrm{Rep}}_{p'}(G)$, and let $(\pi \tensor \rho, E \ttensor_p F)$ be as in Proposition *\[tprop\]*. Then pointwise multiplication induces a completely bounded map from $\PM_{p',\pi}(G)_\ast \Tensor \PM_{p',\rho}(G)_\ast$ into $\PM_{p', \pi \tensor \rho}(G)_\ast$ with $\cb$-norm at most ${K_{\mathbb G}}^2$. It follows from [@RunBp Corollary 3.2] that pointwise multiplication of two functions in $\PM_{p',\pi}(G)_\ast$ and $\PM_{p',\rho}(G)_\ast$, respectively, does indeed yield a function in $\PM_{p', \pi \tensor \rho}(G)_\ast$. A diagram chase just as in the proof of [@LNR Lemma 6.2]—invoking Proposition \[tprop\](iv)—then shows that the induced bilinear map is indeed completely bounded with norm at most ${K_{\mathbb G}}^2$. We can now prove: \[mulprop\] Let $G$ be a locally compact group, let $p \in (1,\infty)$, let $(\pi,E), (\rho, F) \in {\mathrm{Rep}}_{p'}(G)$, and let $(\pi \tensor \rho, E \ttensor_p F)$ be as specified in Proposition *\[tprop\]*. Then pointwise multiplication induces a completely bounded, bilinear map from ${\mathrm{PF}}_{p',\pi}(G)^\ast \times {\mathrm{PF}}_{p',\rho}(G)^\ast$ into ${\mathrm{PF}}_{p', \pi \tensor \rho}(G)^\ast$ with $\cb$-norm at most ${K_{\mathbb G}}^2$. Moreover, this multiplication is separately continuous with respect to the weak$^\ast$ topologies involved. Let $$m \!: \PM_{p',\pi}(G)_\ast \times \PM_{p',\rho}(G)_\ast \to \PM_{p', \pi \tensor \rho}(G)_\ast$$ denote pointwise multiplication and recall from Lemma \[mullem\] that $\| m \|_\cb \leq {K_{\mathbb G}}^2$.
As a bilinear map between Banach spaces, $m$ has two Arens extensions $$\begin{gathered} m_1^{\ast\ast} \!: \PM_{p',\pi}(G)^\ast \times \PM_{p',\rho}(G)^\ast \to \PM_{p', \pi \tensor \rho}(G)^\ast \\ \text{and}\qquad m_2^{\ast\ast} \!: \PM_{p',\pi}(G)^\ast \times \PM_{p',\rho}(G)^\ast \to \PM_{p', \pi \tensor \rho}(G)^\ast.\end{gathered}$$ (This construction is usually carried out only for the product of a Banach algebra—see [@Dal]—, but it works just as well for general bilinear maps: see [@Gro].) It is routinely checked that $m_1^{\ast\ast}$ and $m_2^{\ast\ast}$ are both completely bounded with $\| m_j^{\ast\ast} \|_\cb \leq {K_{\mathbb G}}^2$ for $j=1,2$. For $\sigma \in \{ \pi, \rho, \pi \tensor \rho \}$, let $Q_\sigma \!: \PM_{p',\sigma}(G)^\ast \to {\mathrm{PF}}_{p',\sigma}(G)^\ast$ denote the restriction map, and note that it is a complete quotient map. We claim that $$Q_{\pi \tensor \rho} \circ m_1^{\ast\ast} \!: \PM_{p',\pi}(G)^\ast \times \PM_{p',\rho}(G)^\ast \to {\mathrm{PF}}_{p', \pi \tensor \rho}(G)^\ast$$ drops to a map $$\tilde{m} \!: {\mathrm{PF}}_{p',\pi}(G)^\ast \times {\mathrm{PF}}_{p',\rho}(G)^\ast \to {\mathrm{PF}}_{p', \pi \tensor \rho}(G)^\ast,$$ which is easily seen to be pointwise multiplication and clearly satisfies $\| \tilde{m} \| \leq {K_{\mathbb G}}^2$. (We could equally well work with $m_2^{\ast\ast}$.) For $\sigma \in \{ \pi, \rho, \pi \tensor \rho \}$, let $\iota_\sigma \!: \PM_{p',\sigma}(G)_\ast \to L_\infty(G)$ and $\tilde{Q}_\sigma\!: \PM_{p',\sigma}(G)^\ast \to L_\infty(G) = L_1(G)^\ast$ denote the canonical inclusion and restriction maps, respectively. Also, let $Q \!: L_\infty(G)^{\ast\ast} \to L_\infty(G)$ be the canonical restriction map, and note that it is an algebra homomorphism. As the diagram $$\begin{diagram} \PM_{p',\pi}(G)_\ast & \times & \PM_{p',\rho}(G)_\ast & \rTo^m & \PM_{p', \pi \tensor \rho}(G)_\ast \\ \dTo^{\iota_\pi} & & \dTo_{\iota_\rho} & & \dTo_{\iota_{\pi \tensor \rho}} \\ L_\infty(G) & \times & L_\infty(G) & \rTo & L_\infty(G), \end{diagram}$$ where the bottom row is pointwise multiplication in $L_\infty(G)$, commutes, so does $$\label{dia1} \begin{diagram} \PM_{p',\pi}(G)^\ast & \times & \PM_{p',\rho}(G)^\ast & \rTo^{m_1^{\ast\ast}} & \PM_{p', \pi \tensor \rho}(G)^\ast \\ \dTo^{Q \circ \iota_\pi^{\ast\ast}} & & \dTo_{Q \circ \iota_\rho^{\ast\ast}} & & \dTo_{Q \circ \iota_{\pi \tensor \rho}^{\ast\ast}} \\ L_\infty(G) & \times & L_\infty(G) & \rTo & L_\infty(G). \end{diagram}$$ As $$Q \circ \iota_\sigma^{\ast\ast} = \tilde{Q}_\sigma \qquad (\sigma \in \{ \pi, \rho, \pi \tensor \rho \}),$$ this entails the commutativity of $$\begin{diagram} \PM_{p',\pi}(G)^\ast & \times & \PM_{p',\rho}(G)^\ast & \rTo^{m_1^{\ast\ast}} & \PM_{p', \pi \tensor \rho}(G)^\ast \\ \dTo^{\tilde{Q}_\pi} & & \dTo_{\tilde{Q}_\rho} & & \dTo_{\tilde{Q}_{\pi \tensor \rho}} \\ L_\infty(G) & \times & L_\infty(G) & \rTo & L_\infty(G), \end{diagram}$$ and thus of $$\begin{diagram} \PM_{p',\pi}(G)^\ast & \times & \PM_{p',\rho}(G)^\ast & \rTo^{m_1^{\ast\ast}} & \PM_{p', \pi \tensor \rho}(G)^\ast \\ \dTo^{Q_\pi} & & \dTo_{Q_\rho} & & \dTo_{Q_{\pi \tensor \rho}} \\ {\mathrm{PF}}_{p',\pi}(G)^\ast & \times & {\mathrm{PF}}_{p',\rho}(G)^\ast & \rTo & {\mathrm{PF}}_{p', \pi \tensor \rho}(G)^\ast \end{diagram}$$ with the bottom row being the desired map $\tilde{m}$.
Finally, since the weak$^\ast$ topology of ${\mathrm{PF}}_{p',\sigma}(G)^\ast$ for $\sigma \in \{ \pi, \rho, \pi \tensor \rho \}$ coincides with the weak$^\ast$ topology of $L_\infty(G)$ on norm bounded subsets and since multiplication in $L_\infty(G)$ is separately weak$^\ast$ continuous, the commutativity of (\[dia1\]) and the Kreĭn–Šmulian theorem ([@DS Theorem V.5.7]) establish the separate weak$^\ast$ continuity of pointwise multiplication from ${\mathrm{PF}}_{p',\pi}(G)^\ast \times {\mathrm{PF}}_{p',\rho}(G)^\ast$ to ${\mathrm{PF}}_{p', \pi \tensor \rho}(G)^\ast$. Following [@RS], we call a completely bounded Banach algebra a *dual, completely bounded Banach algebra* if it is a dual operator space such that multiplication is separately weak$^\ast$ continuous. We can finally state and prove the first theorem of this section: \[PFthm\] Let $G$ be a locally compact group, and let $p \in (1,\infty)$. Then ${\mathrm{PF}}_{p'}(G)^\ast$ is a dual, completely bounded Banach algebra with multiplication having $\cb$-norm at most ${K_{\mathbb G}}^2$. By Proposition \[mulprop\], pointwise multiplication $${\mathrm{PF}}_{p'}(G)^\ast \times {\mathrm{PF}}_{p'}(G)^\ast \to {\mathrm{PF}}_{p',\lambda_{p'} \tensor \lambda_{p'}}(G)^\ast$$ is completely bounded with $\cb$-norm at most ${K_{\mathbb G}}^2$ and separately weak$^\ast$ continuous. From [@LNR Theorem 4.6] and [@RunBp Proposition 5.1], we conclude that ${\mathrm{PF}}_{p'}(G)$ and ${\mathrm{PF}}_{p',\lambda_{p'} \tensor \lambda_{p'}}(G)$ are canonically completely isometrically isomorphic. This completes the proof. We shall now turn to the task of turning $B_p(G)$—the $p$-analog of the Fourier–Stieltjes algebra introduced in [@RunBp]—into a completely bounded Banach algebra. As in [@RunBp], a difficulty arises due to the fact that ${\mathrm{Rep}}_{p'}(G)$ is not a set, but only a class; we circumvent the problem by imposing a size restriction on the spaces involved: Let $G$ be a locally compact group, and let $p \in (1,\infty)$. We call $(\pi,E) \in {\mathrm{Rep}}_{p'}(G)$ *small* if $\mathrm{card}(E) \leq \mathrm{card}(L_1(G))^{\aleph_0}$. The left regular representation $(\lambda_p,L_p(G))$ is small, as are the cyclic representations used in [@RunBp]. Unlike ${\mathrm{Rep}}_p(G)$, the class of all small representations in ${\mathrm{Rep}}_p(G)$ is indeed a set. Let $G$ be a locally compact group, let $p \in (1,\infty)$, and let $(\pi,E), (\rho,F) \in {\mathrm{Rep}}_p(G)$ be such that $(\rho,F) \subset (\pi,E)$. Then we have a canonical complete contraction from ${\mathrm{PF}}_{p,\pi}(G)$ to ${\mathrm{PF}}_{p,\rho}(G)$. Consequently, if $( (\rho_\alpha,F_\alpha))_\alpha$ is a family of representations contained in $(\pi,E)$, we have a canonical complete contraction from ${\mathrm{PF}}_{p,\pi}(G)$ to $\text{$\ell_\infty$-}\bigoplus_\alpha {\mathrm{PF}}_{p,\rho_\alpha}(G)$. We note: \[small\] Let $G$ be a locally compact group, let $p \in (1,\infty)$, let $(\pi,E) \in {\mathrm{Rep}}_p(G)$, and let $( (\rho_\alpha,F_\alpha))_\alpha$ be the family of all small representations contained in $(\pi,E)$. Then the canonical map from ${\mathrm{PF}}_{p,\pi}(G)$ to $\text{$\ell_\infty$-}\bigoplus_\alpha {\mathrm{PF}}_{p,\rho_\alpha}(G)$ is a complete isometry.
We need to show the following: for each $n,m \in \posints$, each $n \times n$ matrix $[ f_{j,k} ] \in M_n(L_1(G))$, and each $\epsilon > 0$, there is a closed subspace $F$ of $E$ invariant under $\pi(G)$ with $\mathrm{card}(F) \leq \mathrm{card}(L_1(G))^{\aleph_0}$ such that $$\left\| \left[ \pi(f_{j,k})^{(m)} |_{M_m(F)} \right] \right\|_{{\cal B}(M_m(F),M_{nm}(F))} \geq \left\| \left[ \pi(f_{j,k})^{(m)} \right] \right\|_{{\cal B}(M_m(E),M_{nm}(E))} - \epsilon.$$ Let $n,m \in \posints$, $[ f_{j,k} ] \in M_n(L_1(G))$, and $\epsilon > 0$. Trivially, there is $[ \xi_{\nu,\mu} ] \in M_m(E)$ with $\| [ \xi_{\nu,\mu} ] \|_{M_m(E)} \leq 1$ such that $$\| [ \pi(f_{j,k})\xi_{\nu,\mu} ] \|_{M_{nm}(E)} \geq \left\| \left[ \pi(f_{j,k})^{(m)} \right] \right\|_{{\cal B}(M_m(E),M_{nm}(E))} - \epsilon.$$ Let $F$ be the closed linear span of $\{ \pi(f) \xi_{\nu,\mu} : f \in L_1(G), \, \nu,\mu = 1, \ldots, m \}$; it clearly has the desired properties. \[unidef\] Let $G$ be a locally compact group, and let $p \in (1,\infty)$. We say that $(\pi_u,E_u) \in {\mathrm{Rep}}_p(G)$ is *$p$-universal* if it contains every small representation in ${\mathrm{Rep}}_p(G)$. We write ${\mathrm{UPF}}_p(G)$ instead of ${\mathrm{PF}}_{p,\pi_u}(G)$ and call the elements of ${\mathrm{UPF}}_p(G)$ *universal $p$-pseudofunctions*. Since cyclic representations in the sense of [@RunBp] are small, a $p$-universal representation according to Definition \[unidef\] is also $p$-universal in the sense of [@RunBp Definition 4.5]. We do not know if the converse is also true. There are indeed $p$-universal representations: this can be seen as in the example immediately after [@RunBp Definition 4.5]. If $(\pi_u,E_u) \in {\mathrm{Rep}}_p(G)$ is $p$-universal and $(\rho,F) \in {\mathrm{Rep}}_p(G)$ is arbitrary, then Proposition \[small\] shows that we have a canonical complete contraction from ${\mathrm{UPF}}_p(G)$ to ${\mathrm{PF}}_{p,\rho}(G)$. In particular, the operator space structure of ${\mathrm{UPF}}_p(G)$ does not depend on a particular $p$-universal representation. Let $G$ be a locally compact group, and let $p \in (1,\infty)$. As every $p'$-universal representation of $G$ is also $p'$-universal in the sense of [@RunBp], [@RunBp Theorem 6.6(ii)] remains valid, and we can identify $B_p(G)$ with the Banach space dual of ${\mathrm{UPF}}_{p'}(G)$. As ${\mathrm{UPF}}_{p'}(G)$ is an operator space by virtue of Definition \[osdef\], we define the canonical operator space structure of $B_p(G)$ as the one it has as the dual space of ${\mathrm{UPF}}_{p'}(G)$. \[Bpthm\] Let $G$ be a locally compact group, and let $p \in (1,\infty)$. Then: (i) $B_p(G)$ is a dual, completely bounded Banach algebra; (ii) the canonical image of ${\mathrm{PF}}_{p'}(G)^\ast$ in $B_p(G)$ is an ideal of $B_p(G)$. Let $(\pi_u,E_u) \in {\mathrm{Rep}}_{p'}(G)$ be $p'$-universal. By Proposition \[mulprop\], pointwise multiplication $$\label{muleq} B_p(G) \times B_p(G) \to {\mathrm{PF}}_{p',\pi_u \tensor \pi_u}(G)^\ast$$ is completely bounded. Since $(\pi_u,E_u)$ is $p'$-universal, we have a canonical complete contraction from ${\mathrm{UPF}}_{p'}(G)$ to ${\mathrm{PF}}_{p',\pi_u \tensor \pi_u}(G)$. Composing the adjoint of this map with (\[muleq\]), we obtain pointwise multiplication on $B_p(G)$, which is thus completely bounded. That multiplication in $B_p(G)$ is separately weak$^\ast$ continuous is seen as in the proof of Theorem \[PFthm\]. This proves (i). \(ii) follows from [@RunBp Proposition 5.1].
Unless $p = 2$, it is well possible that the canonical map from ${\mathrm{PF}}_{p'}(G)^\ast$ to $B_p(G)$ fails to be an isometry: see the remark immediately after [@RunBp Corollary 5.3]. We even have that ${\mathrm{PF}}_{p'}(G)^\ast$ is a $B_p(G)$-module with completely bounded module actions. Since the canonical map from ${\mathrm{PF}}_{p'}(G)^\ast$ to $B_p(G)$ need not be a (complete) isometry, this is somewhat stronger than Theorem \[Bpthm\](ii). Herz–Schur and completely bounded multipliers of $A_p(G)$ ========================================================= Let $G$ be a locally compact group, let $p,q \in (1,\infty)$, and—as in [@ER] and [@LNR]—let $\tensor^\gamma$ stand for the projective tensor product of Banach spaces. Even though $L^p(G) \tensor^\gamma L^q(G)$ does not consist of functions on $G \times G$, but rather of equivalence classes of functions, it still makes sense to speak of multipliers of $L^p(G) \tensor^\gamma L^q(G)$: by a multiplier of $L^p(G) \tensor^\gamma L^q(G)$, we mean a continuous function $f$ on $G \times G$ such that the corresponding multiplication operator $M_f$ induces a bounded linear operator on $L^p(G) \tensor^\gamma L^q(G)$. For $p \in (1,\infty)$, we write ${\cal V}_p(G)$ to denote the pointwise multipliers of $L^p(G) \tensor^\gamma L^{p'}(G)$. For any function $f \!: G \to \comps$, we write $K(f)$ for the function $$G \times G \to \comps, \quad (x,y) \mapsto f(xy^{-1}).$$ We define the *Herz–Schur multipliers* of $A_p(G)$ as $${\mathcal{M}_\mathrm{HS}}(A_p(G)) := \{ f \!: G \to \comps : K(f) \in {\cal V}_p(G) \}.$$ As ${\cal V}_p(G)$ is a closed subspace of ${\cal B}(L^p(G) \tensor^\gamma L^{p'}(G))$, and since the map ${\mathcal{M}_\mathrm{HS}}(A_p(G)) \ni f \mapsto K(f)$ is injective, we can equip ${\mathcal{M}_\mathrm{HS}}(A_p(G))$ with a natural norm turning it into a Banach space. In [@BF], M. Bożejko and G. Fendler showed that ${\mathcal{M}_\mathrm{HS}}(A(G))$ and ${\mathcal{M}_\mathrm{cb}}(A(G))$ are isometrically isomorphic (see also [@Jol]), and in [@Fen], Fendler showed that, for general $p \in (1,\infty)$, the Herz–Schur multipliers of $A_p(G)$ are precisely the $p$-completely bounded ones. In this section, we investigate how ${\mathcal{M}_\mathrm{HS}}(A_p(G))$ and ${\mathcal{M}_\mathrm{cb}}(A_p(G))$ relate to each other for general $p \in (1,\infty)$, but with $A_p(G)$ having the operator space structure introduced in [@LNR]. We start with a lemma: \[HSlem1\] Let $p \in (1,\infty)$, let $X$ and $Y$ be measure spaces, and let $E$ be a ${\mathit{QSL}}_p$-space. Then the map $$\begin{gathered} (L^p(X) \tensor E) \tensor (L^{p'}(Y) \tensor E^\ast) \to L^p(X) \tensor L^{p'}(Y), \\ (f \tensor \xi) \tensor (g \tensor \phi) \mapsto \langle \xi, \phi \rangle (f \tensor g)\end{gathered}$$ extends to a complete quotient map $$\tr_E \!: {\operatorname{ROW}}(L^p(X,E)) \Tensor {\operatorname{COL}}(L^{p'}(Y,E^\ast)) \to {\operatorname{ROW}}(L^p(X)) \Tensor {\operatorname{COL}}(L^{p'}(Y)).$$ Since $E^\ast$ is a ${\mathit{QSL}}_{p'}$-space, this follows from [@LNR Theorem 4.6] through taking adjoints. \[HSprop\] Let $p \in (1,\infty)$, and let $G$ be a locally compact group.
Then, for any $f \in {\mathcal{M}_\mathrm{HS}}(A_p(G))$, the multiplication operator $M_{K(f)} \!: L^p(G) \tensor^\gamma L^{p'}(G) \to L^p(G) \tensor^\gamma L^{p'}(G)$ is completely bounded on ${\operatorname{ROW}}(L^p(G)) \Tensor {\operatorname{COL}}(L^{p'}(G))$ such that $$\| M_{K(f)} \|_\cb = \| f \|_{{\mathcal{M}_\mathrm{HS}}(A_p(G))}.$$ Let $f \in {\mathcal{M}_\mathrm{HS}}(A_p(G))$, and let $\epsilon > 0$. Then, by [@Gil] (see also [@Fen Theorem 4.4]), there is a ${\mathit{QSL}}_p$-space $E$ along with bounded continuous functions $L \!: G \to E$ and $R \!: G \to E^\ast$ such that $$K(f)(x,y) = \langle L(x), R(y)\rangle \qquad (x,y \in G)$$ and $$\label{estim1} \| L \|_\infty \| R \|_\infty < \| f \|_{{\mathcal{M}_\mathrm{HS}}(A_p(G))} + \epsilon,$$ where $$\| L \|_\infty := \sup_{x \in G} \| L(x) \| \qquad\text{and}\qquad \| R \|_\infty := \sup_{x \in G} \| R(x) \|.$$ Define $\tilde{L} \!: L^p(G) \to L^p(G,E)$ by letting $$(\tilde{L}\xi)(x) := \xi(x)L(x) \qquad (\xi \in L^p(G), \, x \in G).$$ Then $\tilde{L}$ is linear and bounded with $\| \tilde{L} \| = \| L \|_\infty$. Similarly, one defines a bounded linear map $\tilde{R} \!: L^{p'}(G) \to L^{p'}(G,E^\ast)$ with $\| \tilde{R} \| = \| R \|_\infty$ by letting $$(\tilde{R}\eta)(x) := \eta(x)R(x) \qquad (\eta \in L^{p'}(G), \, x \in G).$$ Since the row and the column spaces over any Banach space are homogeneous, it is clear that $\tilde{L} \!: {\operatorname{ROW}}(L^p(G)) \to {\operatorname{ROW}}(L^p(G,E))$ and $\tilde{R} \!: {\operatorname{COL}}(L^{p'}(G)) \to {\operatorname{COL}}(L^{p'}(G,E^\ast))$ are completely bounded with $\| \tilde{L} \|_\cb = \| L \|_\infty$ and $\| \tilde{R} \|_\cb = \| R \|_\infty$. From [@ER Corollary 7.1.3], it thus follows that $$\tilde{L} \tensor \tilde{R} \!: {\operatorname{ROW}}(L^p(G)) \Tensor {\operatorname{COL}}(L^{p'}(G)) \to {\operatorname{ROW}}(L^p(G,E)) \Tensor {\operatorname{COL}}(L^{p'}(G,E^\ast))$$ is completely bounded as well with $\| \tilde{L} \tensor \tilde{R} \|_\cb \leq \| L \|_\infty \| R \|_\infty$. Since $$\tr_E \!: {\operatorname{ROW}}(L^p(G,E)) \Tensor {\operatorname{COL}}(L^{p'}(G,E^\ast)) \to {\operatorname{ROW}}(L^p(G)) \Tensor {\operatorname{COL}}(L^{p'}(G))$$ as in Lemma \[HSlem1\] is a complete contraction, we conclude that $\tr_E \circ (\tilde{L} \tensor \tilde{R})$ is completely bounded with $\cb$-norm at most $\| L \|_\infty \| R \|_\infty$. From the definitions of $\tr_E$, $\tilde{L}$, and $\tilde{R}$, it is straightforward to verify that $\tr_E \circ (\tilde{L} \tensor \tilde{R}) = M_{K(f)}$. In view of (\[estim1\]), we thus obtain that $$\| M_{K(f)} \|_\cb \leq \| L \|_\infty \| R \|_\infty < \| f \|_{{\mathcal{M}_\mathrm{HS}}(A_p(G))} + \epsilon.$$ Since $\epsilon > 0$ is arbitrary, this yields $\| M_{K(f)} \|_\cb \leq \| f \|_{{\mathcal{M}_\mathrm{HS}}(A_p(G))}$. By definition, $$\| f \|_{{\mathcal{M}_\mathrm{HS}}(A_p(G))} = \| M_{K(f)} \| \leq \| M_{K(f)} \|_\cb$$ holds, so that we have equality as claimed. Passing to quotients we thus obtain: \[HSthm\] Let $p \in (1,\infty)$, and let $G$ be a locally compact group. Then ${\mathcal{M}_\mathrm{HS}}(A_p(G))$ is contained in ${\mathcal{M}_\mathrm{cb}}(A_p(G))$ such that the inclusion is a contraction.
By Proposition \[HSprop\], the linear map ${\mathcal{M}_\mathrm{HS}}(A_p(G)) \ni f \mapsto M_{K(f)}$ is an isometric embedding into the operator space $\CB({\operatorname{ROW}}(L^p(G)) \Tensor {\operatorname{COL}}(L^{p'}(G)))$ and can be used to equip ${\mathcal{M}_\mathrm{HS}}(A_p(G))$ with a canonical operator space structure. We do not know whether Theorem \[HSthm\] can be improved to yield a completely contractive—or at least completely bounded—inclusion map. For amenable $G$, the algebras ${\mathrm{PF}}_{p'}(G)^\ast$, $B_p(G)$, ${\mathcal{M}_\mathrm{cb}}(A_p(G))$, and ${\mathcal{M}_\mathrm{HS}}(A_p(G))$ are easily seen to be isometrically isomorphic (see [@Cow], [@Her3], and [@RunBp]). We do not know whether these isometric isomorphisms are, in fact, completely isometric; for some of them, this seems to be open even in the case where $p = 2$. For $p \in (1,\infty)$ and a locally compact group $G$, any $f \in A_p(G)$ is a $\cb$-multiplier of $A_p(G)$, simply because $A_p(G)$ is a completely bounded Banach algebra. However, as $A_p(G)$ is not known to be completely contractive, this does not allow us to conclude that $\|f \|_{{\mathcal{M}_\mathrm{cb}}(A_p(G))} \leq \| f \|_{A_p(G)}$, but only that $\|f \|_{{\mathcal{M}_\mathrm{cb}}(A_p(G))} \leq {K_{\mathbb G}}^2 \| f \|_{A_p(G)}$. Theorem \[HSthm\], nevertheless, allows us to obtain a better norm estimate: Let $p \in (1,\infty)$, and let $G$ be a locally compact group. Then we have $$\| f \|_{{\mathcal{M}_\mathrm{cb}}(A_p(G))} \leq \| f \|_{A_p(G)} \qquad (f \in A_p(G)).$$ Let $f \in A_p(G)$. By [@Pie Proposition 10.2], we have $\| f \|_{{\mathcal{M}_\mathrm{HS}}(A_p(G))} \leq \| f \|_{A_p(G)}$ and thus $$\| f \|_{{\mathcal{M}_\mathrm{cb}}(A_p(G))} \leq \| f \|_{{\mathcal{M}_\mathrm{HS}}(A_p(G))} \leq \| f \|_{A_p(G)}$$ by Theorem \[HSthm\]. $B_p(G)$, ${\mathrm{PF}}_{p'}(G)^\ast$, and the amenability of $G$ ================================================================== A classical amenability criterion due to R. Godement asserts that a locally compact group $G$ is amenable if and only if its trivial representation is weakly contained in $(\lambda_2,L_2(G))$ (see [@Pie Theorem 8.9], for instance). In terms of Fourier–Stieltjes algebras this means that $G$ is amenable if and only if $B_r(G) = B(G)$ (the equality is automatically a complete isometry). The following theorem generalizes this to a general $L_p$-context: \[amthm\] The following are equivalent for a locally compact group $G$: (i) the canonical map from ${\mathrm{PF}}_{p'}(G)^\ast$ into $B_p(G)$ is surjective for each $p \in (1,\infty)$; (ii) there is $p \in (1,\infty)$ such that $1 \in {\mathrm{PF}}_{p'}(G)^\ast$; (iii) $G$ is amenable. \(i) $\Longrightarrow$ (ii) is trivial. \(ii) $\Longrightarrow$ (iii): An inspection of the proof of [@Cow Theorem 5] shows that $1 \in {\mathrm{PF}}_{p'}(G)^\ast$ for just one $p \in (1,\infty)$ is possible only if $G$ is amenable. \(iii) $\Longrightarrow$ (i): This follows from [@RunBp Theorem 6.7]. Unless $p=2$, we cannot say for amenable $G$ whether or not ${\mathrm{PF}}_{p'}(G)^\ast = B_p(G)$ holds completely isometrically. By [@RunBp Theorem 6.7], we do have an isometric isomorphism, but this is all we can say. Due to the lack of an inverse mapping theorem for completely bounded maps, we do not even know for general $p \in (1,\infty)$ whether the completely bounded bijective map from ${\mathrm{PF}}_{p'}(G)^\ast$ onto $B_p(G)$ has a completely bounded inverse. In [@Rua], Z.-J. Ruan adapted the notion of an amenable Banach algebra due to B. E.
Johnson ([@Joh]) to an operator space context. Given a completely bounded Banach algebra $\A$ and a completely bounded Banach $\A$-bimodule $E$, i.e., a Banach $\A$-bimodule which is also an operator space such that the module actions are completely bounded, the dual operator space $E^\ast$ becomes a completely bounded Banach $\A$-bimodule in its own right via $$\langle \xi, a \cdot \phi \rangle := \langle \xi \cdot a, \phi \rangle \quad\text{and}\quad \langle \xi, \phi \cdot a \rangle := \langle a \cdot \xi, \phi \rangle \qquad (\xi \in E, \, \phi \in E^\ast, \, a \in \A),$$ and $\A$ is said to be *operator amenable* if and only if, for each completely bounded Banach $\A$-bimodule $E$, every completely bounded derivation $D \!: \A \to E^\ast$ is inner. Ruan showed that a locally compact group $G$ is amenable if and only if $A(G)$ is operator amenable, and in [@LNR], Lambert and the authors extended this result to $A_p(G)$ for arbitrary $p \in (1,\infty)$. Suppose that $\A$ is a *dual*, completely bounded Banach algebra. If $E$ is a completely bounded Banach $\A$-bimodule, we call $E^\ast$ *normal* if the bilinear maps $$\A \times E^\ast \to E^\ast, \quad \quad (a,\phi) \mapsto \left\{ \begin{array}{c} a \cdot \phi, \\ \phi \cdot a \end{array} \right.$$ are separately weak$^\ast$ continuous. Following [@RS], we say that $\A$ is *operator Connes-amenable* if, for every completely bounded Banach $\A$-bimodule $E$ such that $E^\ast$ is normal, every weak$^\ast$-weak$^\ast$-continuous, completely bounded derivation $D \!: \A \to E^\ast$ is inner. Extending [@RS Theorem 4.4] in analogy with [@LNR Theorem 7.3], we obtain: The following are equivalent for a locally compact group $G$: (i) $G$ is amenable; (ii) ${\mathrm{PF}}_{p'}(G)^\ast$ is operator Connes-amenable for every $p \in (1,\infty)$; (iii) $B_r(G)$ is operator Connes-amenable; (iv) there is $p \in (1,\infty)$ such that ${\mathrm{PF}}_{p'}(G)^\ast$ is operator Connes-amenable. \(i) $\Longrightarrow$ (ii): Let $p \in (1,\infty)$ be arbitrary. Then [@LNR Theorem 7.3] yields the operator amenability of $A_p(G)$. Since the inclusion of $A_p(G)$ into ${\mathrm{PF}}_{p'}(G)^\ast$ is a completely contractive algebra homomorphism with weak$^\ast$ dense range, the operator space analog of [@RunD Proposition 4.2(i)] yields the operator Connes-amenability of ${\mathrm{PF}}_{p'}(G)^\ast$. \(ii) $\Longrightarrow$ (iii) $\Longrightarrow$ (iv) are trivial. \(iv) $\Longrightarrow$ (i): Let $p \in (1,\infty)$ be such that ${\mathrm{PF}}_{p'}(G)^\ast$ is operator Connes-amenable. The operator space analog of [@RunD Proposition 4.1] then yields that ${\mathrm{PF}}_{p'}(G)^\ast$ has an identity, so that Theorem \[amthm\](ii) is satisfied. By Theorem \[amthm\], this means that $G$ is amenable. <span style="font-variant:small-caps;">D. P. Blecher</span>, A completely bounded characterization of operator algebras. *Math. Ann.* **303** (1995), 227–239. <span style="font-variant:small-caps;">D. Blecher</span> and <span style="font-variant:small-caps;">C. Le Merdy</span>, On quotients of function algebras and operator algebra structures on $l_p$. *J. Operator Theory* **34** (1995), 315–346. <span style="font-variant:small-caps;">M. Bożejko</span> and <span style="font-variant:small-caps;">G. Fendler</span>, Herz–Schur multipliers and completely bounded multipliers of the Fourier algebra of a locally compact group. *Boll. Un. Mat. Ital.* A (6) **3** (1984), 297–302. <span style="font-variant:small-caps;">M. Cowling</span>, An application of Littlewood–Paley theory in harmonic analysis. *Math. Ann.* **241** (1979), 83–96.
<span style="font-variant:small-caps;">H. G. Dales</span>, *Banach Algebras and Automatic Continuity*. London Mathematical Society Monographs (New Series) **24**, Clarendon Press, Oxford, 2000. <span style="font-variant:small-caps;">M. Daws</span>, $p$-Operator Spaces and Figà-Talamanca–Herz Algebras. *J. Operator Theory* (to appear). <span style="font-variant:small-caps;">N. Dunford</span> and <span style="font-variant:small-caps;">J. T. Schwartz</span>, *Linear Operators. Part I: General Theory*. Wiley Classics Library, John Wiley & Sons, Inc., New York, 1988. <span style="font-variant:small-caps;">E. G. Effros</span> and <span style="font-variant:small-caps;">Z.-J. Ruan</span>, *Operator Spaces*. London Mathematical Society Monographs (New Series) **23**, Clarendon Press, Oxford, 2000. <span style="font-variant:small-caps;">P. Eymard</span>, L’algèbre de Fourier d’un groupe localement compact. *Bull. Soc. Math. France* **92** (1964), 181–236. <span style="font-variant:small-caps;">P. Eymard</span>, Algèbres $A_p$ et convoluteurs de $L^p$. In: *Séminaire Bourbaki, vol. 1969/70, Exposés 364–381*, Lecture Notes in Mathematics **180**, Springer Verlag, Berlin–Heidelberg–New York, 1971. <span style="font-variant:small-caps;">G. Fendler</span>, *Herz–Schur Multipliers and Coefficients of Bounded Representations*. PhD thesis, Ruprecht Karls Universität Heidelberg, 1987. <span style="font-variant:small-caps;">B. E. Forrest</span>, <span style="font-variant:small-caps;">E. Kaniuth</span>, <span style="font-variant:small-caps;">A. T.-M. Lau</span>, and <span style="font-variant:small-caps;">N. Spronk</span>, Ideals with bounded approximate identities in Fourier algebras. *J. Funct. Anal.* **203** (2003), 286–304. <span style="font-variant:small-caps;">B. E. Forrest</span> and <span style="font-variant:small-caps;">V. Runde</span>, Amenability and weak amenability of the Fourier algebra. *Math. Z.* **250** (2005), 731–744. <span style="font-variant:small-caps;">J. E. Gilbert</span>, $L^p$-convolution opeators and tensor products of Banach spaces, I, II, and III. Unpublished manuscripts. <span style="font-variant:small-caps;">M. Grosser</span>, *Bidualräume und Vervollständigungen von Banachmoduln*. Lecture Notes in Mathematics **717**, Springer Verlag, Berlin–Heidelberg–New York, 1979. <span style="font-variant:small-caps;">C. Herz</span>, The theory of $p$-spaces with an application to convolution operators. ttextit[Trans. Amer. Math. Soc.]{} **154** (1971), 69–82. <span style="font-variant:small-caps;">C. Herz</span>, Harmonic synthesis for subgroups. *Ann. Inst. Fourier* (Grenoble) **23** (1973), 91–123. <span style="font-variant:small-caps;">C. Herz</span>, Une généralisation de la notion de transformée de Fourier–Stieltjes. *Ann. Inst. Fourier* (Grenoble) **24** (1974), 145–157. <span style="font-variant:small-caps;">E. Hewitt</span> and <span style="font-variant:small-caps;">K. A. Ross</span>, *Abstract Harmonic Analysis*, I. Die Grundlehren der mathematischen Wissenschaften **115**, Springer Verlag, Berlin–Heidelberg–New York, 1963. <span style="font-variant:small-caps;">M. Ilie</span> and <span style="font-variant:small-caps;">N. Spronk</span>, Completely bounded homomorphisms of the Fourier algebras. *J. Funct. Anal.* **225** (2005), 480–499. <span style="font-variant:small-caps;">B. E. Johnson</span>, Cohomology in Banach algebras. *Mem. Amer. Math. Soc.* **127** (1972). <span style="font-variant:small-caps;">P. 
Jolissaint</span>, A characterization of completely bounded multipliers of Fourier algebras. *Colloq. Math.* **LXIII** (1992), 311–313. <span style="font-variant:small-caps;">S. Kwapień</span>, On operators factoring through $L_p$-space. *Bull. Soc. Math. France, Mém.* **31–32** (1972), 215–225. <span style="font-variant:small-caps;">R. Larsen</span>, *An Introduction to the Theory of Multipliers*. Die Grundlehren der mathematischen Wissenschaften **175**, Springer Verlag, Berlin–Heidelberg–New York, 1971 <span style="font-variant:small-caps;">A. Lambert</span>, *Operatorfolgenräume. Eine Kategorie auf dem Weg von den Banach-Räumen zu den Operatorräumen*. Dissertation zur Erlangung des Grades Doktor der Naturwissenschaften, Universität des Saarlandes, 2002. <span style="font-variant:small-caps;">A. Lambert</span>, <span style="font-variant:small-caps;">M. Neufang</span>, and <span style="font-variant:small-caps;">V. Runde</span>, Operator space structure and amenability for Figà-Talamanca–Herz algebras. *J. Funct. Anal.* **211** (2004), 245–269. <span style="font-variant:small-caps;">C. LeMerdy</span>, Factorization of $p$-completely bounded multilinear maps. *Pacific J. Math.* **172** (1996), 187–213. <span style="font-variant:small-caps;">B. Mathes</span>, Characterizations of row and column Hilbert space. *J. London Math. Soc.* (2) **50** (1994), 199–208. <span style="font-variant:small-caps;">D. Montgomery</span> and <span style="font-variant:small-caps;">L. Zippin</span>, *Topological Transformation Groups*. Interscience Publishers, New York–London, 1955. <span style="font-variant:small-caps;">M. Neufang</span>, <span style="font-variant:small-caps;">Z.-J. Ruan</span>, and <span style="font-variant:small-caps;">N. Spronk</span>, Completely isometric representations of $M_{\cb}A(G)$ and $\mathit{UCB}(\hat{G})^*$. *Trans. Amer. Math. Soc.* (to appear). <span style="font-variant:small-caps;">V. Paulsen</span>, *Completely Bounded Maps and Operator Algebras*. Cambridge Studies in Advanced Mathematics **78**, Cambridge University Press, Cambridge, 2002. <span style="font-variant:small-caps;">J. P. Pier</span>, *Amenable Locally Compact Groups*. John Wiley & Sons, Inc., New York, 1984. <span style="font-variant:small-caps;">G. Pisier</span>, The operator Hilbert space $OH$, complex interpolation and tensor norms. *Mem. Amer. Math. Soc.* **585** (1996). <span style="font-variant:small-caps;">G. Pisier</span>, *Introduction to the Theory of Operator Spaces*. London Mathematical Society Lecture Note Series **294**, Cambridge University Press, Cambridge, 2003. <span style="font-variant:small-caps;">Z.-J. Ruan</span>, The operator amenability of $A(G)$. *Amer. J. Math.* **117** (1995), 1449–1474. <span style="font-variant:small-caps;">V. Runde</span>, Amenability for dual Banach algebras. *Studia Math.* **148** (2001), 47–66. <span style="font-variant:small-caps;">V. Runde</span>, Representations of locally compact groups on ${\mathit{QSL}}_p$-spaces and a $p$-analog of the Fourier–Stieltjes algebra. *Pacific J. Math.* **221** (2005), 379–397. <span style="font-variant:small-caps;">V. Runde</span> and <span style="font-variant:small-caps;">N. Spronk</span>, Operator amenability of Fourier-Stieltjes algebras. *Math. Proc. Cambridge Philos. Soc.* **136** (2004), 675–686. <span style="font-variant:small-caps;">M. Takesaki</span> and <span style="font-variant:small-caps;">N. Tatsuuma</span>, Duality and subgroups, II. *J. Funct. Anal.* **11** (1972), 184–190. 
<span style="font-variant:small-caps;">E. I. Zel’manov</span>, On periodic compact groups. *Israel J. Math.* **77** (1992), 83–95. *Second author’s address*: = Department of Mathematical and Statistical Sciences *First author’s address*: School of Mathematics and Statistics\ 4364 Herzberg Laboratories\ Carleton University\ Ottawa, Ontario\ Canada K1S 5B6\ *E-mail*: `mneufang@math.carleton.ca`\ *Second author’s address*: Department of Mathematical and Statistical Sciences\ University of Alberta\ Edmonton, Alberta\ Canada T6G 2G1\ *E-mail*: `vrunde@ualberta.ca`
--- author: - 'K. Schöbel and M. Ansorg' date: 'Received / Accepted' title: Maximal mass of uniformly rotating homogeneous stars in Einsteinian gravity --- Introduction ============ Various numerical methods have been developed to investigate relativistic rotating models for extraordinarily compact astrophysical objects like neutron stars ( Wilson [@Wilson], Bonazzola & Schneider [@BonSchnei], Friedman et al. [@FriedIpsPar86; @FriedIpsPar89], Komatsu et al. [@KomEriHach89a; @KomEriHach89b], Lattimer et al. [@Latt], Neugebauer & Herold [@NeuHer], Herold & Neugebauer [@HerNeu], Bonazzola et al. [@BGSM; @BonGouMarck], Eriguchi et al. [@EriHachNom94], Stergioulas & Friedman [@SterFried], Nozawa et al. [@Nozawa]). For reviews see Friedman [@Friedman] and Stergioulas [@Stergioulas]. The homogeneous and uniformly rotating star models were first studied by Butterworth & Ipser ([@ButIps75; @ButIps76]) who found that, in addition to the analytically known Schwarzschild and Maclaurin solutions, they are bounded by a sequence of configurations rotating at the mass shedding limit[^1]. The investigation of such limit configurations is instructive since certain physical quantities reach maximal values there. Butterworth & Ipser for example estimated a 30% increase in mass for fixed central pressure owing to rotation. In the present Letter we completed their studies, finding two further limiting curves, in particular a sequence of stars with infinite central pressure and a sequence of Newtonian lens-shaped configurations that bifurcates from the Maclaurin spheroids before ending in a mass shedding limit (Bardeen [@Bardeen], Ansorg, Kleinwächter & Meinel [@AnsKleiMei03b]). All five limiting curves were found to circumscribe entirely the general relativistic solution for homogeneous star models that are continuously joined to the static Schwarzschild solution—hereafter called the ‘generalized Schwarzschild solution’. This was done using the recently developed AKM method (Ansorg, Kleinwächter & Meinel [@AnsKleiMei02], [@AnsKleiMei03c]), which allows one to solve the Einstein equations to high accuracy even for critical configurations. In particular we are able to determine to high precision the extreme configuration possessing both a mass shed and infinite central pressure. At this point several physical quantities reach their global maxima and we provide explicit values for these. In what follows we use units in which the speed of light and Newton’s constant of gravitation assume the value $1$. Metric tensor and field equations ================================= The line element of a stationary, axisymmetric and asymptotically flat spacetime describing a uniformly rotating perfect fluid body can be cast into the form $$ds^2= {\rm e}^{-2U}\left[{\rm e}^{2k}(d\rho^2+d\zeta^2)+W^2d\varphi^2\right] -{\rm e}^{2U}(ad\varphi+dt)^2 {\ensuremath{\,.}}$$ The corresponding Lewis-Papapetrou coordinates $(\rho,\zeta,\varphi,t)$ are uniquely defined if we require continuity of the metric coefficients and their first derivatives at the body’s surface. In a comoving frame, for which the metric assumes the same form with metric functions $U'$, $k'$, $W'$ and $a'$, the relativistic Euler equation can easily be integrated to determine the pressure $p$. For constant mass-energy density $\mu$ this results in $$\label{pressure} p=\mu\left({\rm e}^{V_0-U'}-1\right) {\ensuremath{\,,}}$$ where $V_0$ is the constant surface value of $U'$. 
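A small worked consequence of (\[pressure\]), spelled out here because it underlies the parameter $\tilde p_{\rm c}$ used below (the relation itself is not written out in the Letter): with $U'_{\rm c}$ denoting the central value of $U'$, the central pressure is $p_{\rm c}=\mu\left({\rm e}^{V_0-U'_{\rm c}}-1\right)$, so that $$\tilde p_{\rm c} \equiv \frac{p_{\rm c}}{\mu+p_{\rm c}} = 1-{\rm e}^{U'_{\rm c}-V_0} \in [0,1) {\ensuremath{\,.}}$$ Hence $\tilde p_{\rm c}\to 0$ corresponds to the Newtonian limit $U'_{\rm c}\to V_0$, while $\tilde p_{\rm c}\to 1$ corresponds to a diverging central pressure, $U'_{\rm c}-V_0\to-\infty$.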
Taking into account asymptotic flatness, boundary and transition conditions at the surface and regularity along the rotational axis ($\rho=0$), the interior and exterior field equations form a complete set of equations to be solved, which is done by applying the AKM method. For a comprehensive discussion of this multi domain spectral method see Ansorg et al. ([@AnsKleiMei03c]). Known static and Newtonian limits ================================= Schwarzschild solution ---------------------- Solving Einstein’s equation for a static homogeneous star yields the famous Schwarzschild metric which reads in the above coordinates $${\rm e}^U=\frac{1-M/(2r)}{1+M/(2r)} \quad {\rm e}^k=1-\left(\frac M{2r}\right)^2 \quad W={\rm e}^k\rho \quad a=0$$ for the exterior ($r\equiv\sqrt{\rho^2+\zeta^2}\ge R$) and $$\begin{gathered} {\rm e}^{U'} =\frac12\left[3\, \frac{1-M/(2R)}{1+M/(2R)} -\frac{1-Mr^2/(2R^3)}{1+Mr^2/(2R^3)} \right] \\ {\rm e}^{k'} =\frac{\left[1+M/(2R)\right]^3}{1+Mr^2/(2R^3)}\,{\rm e}^{U'} \quad W'={\rm e}^{k'}\rho \quad a'=0\end{gathered}$$ for the interior region ($0\le r\le R$). Here $M$ is the gravitational mass and $R$ denotes the coordinate radius given implicitly through $$M=\mu\frac{4\pi}3R^3\left(1+\frac M{2R}\right)^6 {\ensuremath{\,.}}$$ For any physically relevant solution the pressure has to remain finite, which is fulfilled for $R>M$. This imposes an upper bound on the mass, namely $$M<\frac4{9\sqrt{3\pi\mu}}=0.14477\ldots\frac1{\sqrt\mu} {\ensuremath{\,.}}$$ Maclaurin spheroids ------------------- In Newtonian gravity the problem of self gravitating rotating ideal fluids requires solving the Poisson equation for the body’s gravitational field while satisfying the Euler-Lagrange equation governing its motion as an ideal fluid. This leads to a free boundary value problem. A particular solution for uniform rotation and constant mass density $\mu$ can be found by assuming the surface to be a spheroid. One obtains the so called Maclaurin spheroids, parametrized here by focal distance $\rho_0$ and the ratio $r_{\rm p}/r_{\rm e}$ between polar radius $r_{\rm p}$ and equatorial radius $r_{\rm e}$. Having computed the gravitational field, the Euler-Lagrange equation is seen to be satisfied for a squared angular velocity $$\Omega^2=2\pi\mu\xi\left[(3\xi^2+1)\operatorname{arccot}\xi-3\xi\right] {\ensuremath{\,,}}\quad \xi\equiv\left[\frac{r_{\rm e}^2}{r_{\rm p}^2}-1\right]^{-\frac12}$$ (bottom solid curve in Fig. \[Omega2\]). This relation holds independent of $\rho_0$. Note that Maclaurin spheroids exist for every $r_{\rm p}/r_{\rm e}\in[0,1]$ and $\rho_0\in[0,\infty[\,$, thus comprising a two parameter solution with arbitrary mass for fixed $\mu$. First Newtonian lens sequence ----------------------------- On the Maclaurin curve, an infinite series of points corresponding to axisymmetric secular instabilities occurs, beginning at $r_{\rm p}/r_{\rm e}=0.17126$, and accumulating in the Maclaurin disk limit $r_{\rm p}/r_{\rm e}\to0$ (Chandrasekhar [@Chandrasekhar], Bardeen [@Bardeen]). They are bifurcation points of further Newtonian sequences and correspond to singular post-Newtonian corrections (see Petroff [@Petroff]). The first one of these sequences is comprised of two segments that depart from the first bifurcation point. 
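As an aside, the two closed-form Newtonian and static results quoted in this section, the mass bound $M<4/(9\sqrt{3\pi\mu})$ and the Maclaurin relation for $\Omega^2$, are easy to check numerically. The short sketch below is illustrative only (it is not part of the original Letter and assumes nothing beyond the two formulas); it evaluates the bound and locates the maximum of $\Omega^2$ along the Maclaurin sequence, which comes out at the classical value of about $0.449\,\pi\mu$ near $r_{\rm p}/r_{\rm e}\approx 0.37$.

```python
# Illustrative check of the two closed-form results above (not from the Letter).
# Units: G = c = 1; the constant mass-energy density mu sets the scale (mu = 1 here).
import numpy as np

mu = 1.0

# Static (Schwarzschild) mass bound: M < 4 / (9 * sqrt(3*pi*mu)) ~ 0.14477 / sqrt(mu).
M_bound = 4.0 / (9.0 * np.sqrt(3.0 * np.pi * mu))
print(f"static mass bound: {M_bound:.5f} / sqrt(mu)")

def omega2_maclaurin(axis_ratio):
    """Squared angular velocity of a Maclaurin spheroid: Omega^2 = 2*pi*mu*xi*
    ((3*xi^2 + 1)*arccot(xi) - 3*xi), with xi = ((r_e/r_p)^2 - 1)^(-1/2)."""
    xi = 1.0 / np.sqrt(1.0 / axis_ratio**2 - 1.0)
    arccot_xi = np.arctan(1.0 / xi)          # arccot(xi) for xi > 0
    return 2.0 * np.pi * mu * xi * ((3.0 * xi**2 + 1.0) * arccot_xi - 3.0 * xi)

ratios = np.linspace(1e-4, 1.0 - 1e-4, 200_000)   # r_p / r_e
omega2 = omega2_maclaurin(ratios)
i_max = np.argmax(omega2)
print(f"max Omega^2 on the Maclaurin sequence: {omega2[i_max] / (np.pi * mu):.4f} * pi*mu "
      f"at r_p/r_e = {ratios[i_max]:.4f}")        # ~0.4493 * pi*mu at ~0.3677
```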
One segment proceeds towards the Dyson rings ( Dyson [@Dyson1892; @Dyson1893], Wong [@Wong], Kowalewsky [@Kowalewsky], Poincaré [@Poincare1885a; @Poincare1885b; @Poincare1885c], Eriguchi & Sugimoto [@EriSug]) whereas the other one ends in a mass shedding limit (Bardeen [@Bardeen], Ansorg et al. [@AnsKleiMei03b]). The latter we will refer to as the ‘first Newtonian lens sequence’ motivated by the shape of the corresponding bodies. Generalized Schwarzschild solution ================================== For a systematic investigation of the general case the choice of parameters is not restricted by the numerical approach. Nevertheless it is convenient to take non-ambiguous parameters that are restricted to a compact interval (here $[0,1]$) in such a way that the endpoints represent limiting configurations. Corresponding to the limits found, we selected the following magnitudes: - Mass shed parameter (as defined in Ansorg et al. [@AnsKleiMei03c]) $$\beta \equiv-\frac{r_{\rm e}^2}{r_{\rm p}^2}\left. \frac{d(\zeta_{\rm s}^2)}{d(\rho^2)} \right|_{\rho=r_{\rm e}} =-\frac{r_{\rm e}}{r_{\rm p}^2}\lim_{\rho\to r_{\rm e}} \zeta_{\rm s}\frac{d\zeta_{\rm s}}{d\rho} {\ensuremath{\,,}}$$ where $\zeta_{\rm s}(\rho)$ is the function describing the surface shape. Maclaurin and Schwarzschild bodies are characterized by $\beta=1$ and the mass shed limit is fixed by $\beta=0$. - $\tilde p_{\rm c}\equiv p_{\rm c}/(\mu+p_{\rm c})$, where $p_{\rm c}$ is the central pressure. Thus $\tilde p_{\rm c}=0$ stands for the Newtonian limit where mass and hence pressure vanish and $\tilde p_{\rm c}=1$ for the limiting configurations with infinite central pressure. Additionally we will use the ratio $r_{\rm p}/r_{\rm e}$ of polar to equatorial coordinate radius that is $1$ only for Schwarzschild solutions. Our numerical analysis revealed that the generalized Schwarzschild solution is entirely bounded by the following five limiting curves (joined in the order listed): - the sequence of Schwarzschild solutions\ ($r_{\rm p}/r_{\rm e}=1$, $\beta=1$, $\tilde p_{\rm c}\in[0,1]$) - the Maclaurin sequence from the sphere to the first axisymmetric bifurcation point\ ($r_{\rm p}/r_{\rm e}\in[0.171,1]$, $\beta=1$, $\tilde p_{\rm c}=0$) - the first Newtonian lens sequence\ ($r_{\rm p}/r_{\rm e}\in[0.171,0.192]$, $\beta\in[0,1]$, $\tilde p_{\rm c}=0$) - a sequence of configurations rotating at the mass shedding limit ($r_{\rm p}/r_{\rm e}\in[0.192,0.573]$, $\beta=0$, $\tilde p_{\rm c}\in[0,1]$) - a sequence of configurations with infinite central pressure ($r_{\rm p}/r_{\rm e}\in[0.573,1]$, $\beta\in[0,1]$, $\tilde p_{\rm c}=1$) This makes it possible to determine maximal values of all interesting physical quantities (see section \[LastSection\]). Moreover, we can now state that on the generalized Schwarzschild solution no quasistationary transition to a Kerr black hole is possible (in contrast to what was found for the relativistic Dyson rings by Ansorg et al. ([@AnsKleiMei03a])) and that the surface remains convex in $\rho$-$\zeta$-coordinates. Fig. \[Omega2\] depicts the (squared) angular velocity versus the radius ratio $r_{\rm p}/r_{\rm e}$. This is the completion of results of Butterworth & Ipser ([@ButIps75; @ButIps76]).[^2] It shows that the configuration with maximal angular velocity rotates at the mass shedding limit and possesses infinite central pressure. The magnification in Fig. 
\[Zoom\] demonstrates that the mass shedding curve does not terminate at the first axisymmetric bifurcation point (C), as was conjectured by Butterworth and Ipser. Instead it is linked to the Maclaurin spheroids at this point via the first Newtonian lens sequence. The evolution of the gravitational mass for fixed central pressure is seen in Fig. \[Mass\] and reveals a 34.25% increase in the maximal mass with respect to the static case. The global maximum is again found for the same configuration as for angular velocity. In both figures we included the line above which ergo-regions appear. Interestingly, this line corresponds roughly to one of constant mass. Other physical quantities as baryonic mass, angular momentum, polar red shift and circumferential radius show a similar behaviour: For fixed mass shed parameter or radius ratio they increase with increasing central pressure. Likewise they increase with decreasing mass shed parameter or decreasing radius ratio if the central pressure is kept constant. So a global maximum for each of them is obtained on the common edge of mass shed and infinite central pressure curves, corresponding thus to a very special limit star. Maximal mass configuration {#LastSection} ========================== [lXr@l@lXc]{} Physical quantity && & value & &&\ Gravitational mass && $M$ & $=0.19435 $ & $\mu^{-1/2}$ && $\ast$\ Baryonic mass && $M_0$ & $=0.27316 $ & $\mu^{-1/2}$ && $\ast$\ Angular velocity && $\Omega$ & $=1.8822 $ & $\mu^{ 1/2}$ && $\ast$\ Angular momentum && $J$ & $=0.03637 $ & $\mu$ && $\ast$\ Polar radius && $r_{\rm p}$ & $=0.04856 $ & $\mu^{ 1/2}$ &&\ Equatorial radius && $r_{\rm e}$ & $=0.08475 $ & $\mu^{ 1/2}$ &&\ Radius ratio && $r_{\rm p}/r_{\rm e}$ & $=0.5730 $ & &&\ Circumferential radius && $R_\text{circ}$ & $=0.41538 $ & $\mu^{ 1/2}$ && $\ast$\ Polar red shift && $Z_{\rm p}$ & $=7.378 $ & && $\ast$\ In Table \[MaximalConfiguration\] masses and other quantities are listed for this extreme configuration. Because it resides on two critical curves we notice a loss in accuracy that was somewhat recovered by an extrapolation to an infinite approximation order $m$ of the AKM method (cf. Ansorg et al. [@AnsKleiMei03c]). Observe also the unexpectedly high value for the red shift $Z_{\rm p}$ of a photon emitted at one of the poles. Fig. \[CrossSection\] shows a meridional cross section of this configuration including the border of the ergo-region. As in general for diverging central pressure, the ergo-toroid degenerates by pinching together in the center. A detailed discussion, including the relation to relativistic ring solutions, more realistic equations of state and further going aspects like stability will be published elsewhere. We would like to thank Prof. R. Meinel and D. Petroff for valuable discussions and helpful advice. This work was supported by the Deutsche Forschungsgemeinschaft (DFG projects ME 1820/1–3 and SFB/TR 7 – B1). Ansorg, M., Kleinwächter, A. & Meinel, R. 2002, , 381, L49 Ansorg, M., Kleinwächter, A. & Meinel, R. 2003a, , 582, L87 Ansorg, M., Kleinwächter, A. & Meinel, R. 2003b, , in press (astro-ph/0208267) Ansorg, M., Kleinwächter, A. & Meinel, R. 2003c, , submitted (astro-ph/0301173) Bardeen, J. M. 1971, , 167, 425 Bonazzola, S., Gourgoulhon, E. & Marck, J. A. 1998, , 58, 104020 Bonazzola, S., Gourgoulhon, E., Salgado, M. & Marck, J. A. 1993, , 278, 421 Bonazzola, S. & Schneider, S. 1974, , 191, 273 Butterworth, E. M. & Ipser, J. R. 1975, , 200, L103 Butterworth, E. M. & Ipser, J. R. 1976, , 204, 200 Chandrasekhar, S. 
1967, , 147, 334 Dyson, F. W. 1892, Phil. Trans. Roy. Soc., 184, 43 Dyson, F. W. 1893, Phil. Trans. Roy. Soc., 184A, 1041 Eriguchi, Y., Hachisu, I. & Nomoto, K. 1994, , 266, 179 Eriguchi, Y. & Sugimoto, D. 1981, Prog. Theor. Phys., 65, 1870 Friedman, J. L. 1998, in Black Holes and Relativistic Stars, ed. R. M. Wald (The University of Chicago Press, Chicago and London) 23 Friedman, J. L., Ipser, J. R. & Parker, L. 1986, , 304, 115; Erratum 1990, , 351, 705 Friedman, J. L., Ipser, J. R. & Parker, L. 1989, , 62, 3015 Herold, H. & Neugebauer, G. 1992, in Relativistic Gravity Research, Lecture Notes in Physics 410, ed. J. Ehlers & G. Schäfer (Springer, Berlin) 319 Komatsu, H., Eriguchi, Y. & Hachisu, I. 1989a, , 237, 355 Komatsu, H., Eriguchi, Y. & Hachisu, I. 1989b, , 239, 153 Kowalewsky, S. 1885, Astron. Nachr., 111, 37 Lattimer, J. M., Prakash, M., Masak, D. & Yahil, A. 1990, , 355, 241 Neugebauer, G. & Herold, H. 1992, in Relativistic Gravity Research, Lecture Notes in Physics 410, ed. J. Ehlers & G. Schäfer (Springer, Berlin) 305 Nozawa, T., Stergioulas, N., Gourgoulhon, E. & Eriguchi, Y. 1998, , 132, 431 Petroff, D. 2003, in preparation Poincaré, H. 1885a, C. R. Acad. Sci., 100, 346 Poincaré, H. 1885b, Bull. Astr., 2, 109 Poincaré, H. 1885c, Bull. Astr., 2, 405 Stergioulas, N. & Friedman, J. L. 1995, , 444, 306 Stergioulas, N. 1998, Living Reviews in Relativity, 1998-8, `http://www.livingreviews.org` Wilson, J. R. 1972, , 176, 195 Wong, C. Y. 1974, , 190, 675 [^1]: Due to uniform rotation, a shedding of matter sets in when centrifugal forces balance gravity at the equator. Then a cusp at the star’s equatorial rim appears. [^2]: Note that in contrast to our work they used proper radial distances and kept the baryonic mass constant.
--- abstract: '[We present unique radio observations of SS433, using MERLIN, the VLBA, and the VLA, which allow us to, for the first time, properly image and derive a meaningful spectral index for the ‘ruff’ of equatorial emission which surrounds SS433’s jet. We interpret this smooth ruff as a wind-like outflow from the binary.]{}' author: - 'Katherine M. Blundell$^1$, Michael P. Rupen$^2$, Amy J. Mioduszewski$^2$,   ' - 'Tom W. B. Muxlow $^3$ & Philipp Podsiadlowski$^1$' title: 'The ruff of equatorial emission around the SS433 jets: its spectral index and origin' --- 2[cm$^2$ ]{} 1[s$^{-1}$ ]{} SS433’s ruff of equatorial emission =================================== The central quarter-arcsecond of SS433’s appearance at 5GHz is rich in structure: both compact and smooth features may be found. To image this at radio wavelengths requires an interferometer with sufficiently long baselines to give adequate resolution. Those long baselines will act as a spatial frequency filter which only detects compact emission; they are insensitive to larger-scale structures. At a frequency such as 5GHz, short baselines are also needed to faithfully detect smoother extended emission. We illustrate this in Figure \[fig:ruff\]: the left figure shows the central region of SS433 imaged using only the VLBA; the right figure shows the same region, with the same contour levels, on the same epoch, at the same frequency, at the same resolution, using the same VLBA data, but adding in also the shorter baselines of MERLIN. In the left figure the only believable brightness structure is that associated with SS433’s familiar jet, although hints of surrounding emission are also seen. On the right figure, a wide smooth structure surrounding the jet appears, which we [@Blu01] have termed SS433’s [*ruff*]{}. Since the spatial filtering depends on the baseline length as measured in [*wavelengths*]{}, it is most severe at the highest frequencies, and even the VLBA alone can detect the ruff at 1.4GHz [@Blu01]. The spectral index of any extended emission may only be measured if that emission has been properly sampled at both frequencies. With VLBA data alone one can simply not detect the ruff at 5GHz, while at 1.4GHz it is obvious; the undersampling at high frequencies would lead to the derivation of a spuriously [*steep*]{} spectral index. Time variability is a further complication, making it essential to observe the two frequencies simultaneously. The observations we presented in [@Blu01] at 1.4GHz and at 5GHz were taken on the same day (1998Mar7), and included the VLBA, MERLIN, and the VLA. This is thus a [*unique*]{} dataset: there are sufficient short baselines at high frequency to adequately sample the emission, and both frequencies were observed quasi-simultaneously. We find a flat spectral index for the anomalous emission (see below). Paragi et al. [@Par02a] claim a steep spectral index; but their high-frequency data are undersampled (as they pointed out in [@Par99]), and their observations at the different frequencies are not simultaneous. Those data do not therefore usefully constrain the spectral index. Our measurements of the distribution of the spectral index across SS433’s ruff are shown in Figure\[fig:alpha\]. The spectral indices of the ruff were measured after convolving our images to a common beam of $10 \times 10$mas$^2$ HPBW. 
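With matched-beam flux densities in hand, the two-point spectral index (with the convention $S_\nu\propto\nu^\alpha$ used below) is a one-line computation; a minimal sketch in Python, where the flux densities are placeholders rather than the measured values:

```python
import numpy as np

def spectral_index(s_low, nu_low, s_high, nu_high):
    """Two-point spectral index alpha, defined through S_nu ~ nu**alpha."""
    return np.log(s_high / s_low) / np.log(nu_high / nu_low)

# Placeholder box-integrated flux densities (mJy) at the two observing frequencies
alpha = spectral_index(60.0, 1.4e9, 52.0, 5.0e9)
print(f"alpha = {alpha:+.2f}")     # values near -0.1 suggest free-free emission
```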
The resulting total flux densities, measured in identical boxes in these images, which were chosen to avoid the jet but include the full ‘ruff’ emission, are shown in Figure \[fig:alpha\]$a$. The spectral index for the combined (northern+southern) emission is $\alpha=-0.12\pm0.02$ ($S_{\nu} \propto \nu^\alpha$, where $S_{\nu}$ is the flux density at frequency $\nu$). Most resolved synchrotron sources are characterized by $\alpha < -0.4$; indeed, $\alpha=-0.1$ is normally considered the signature of thermal bremsstrahlung emission as is often observed in outflows from symbiotic binaries [@Sea84; @Mik01]. The complication here is that the peak surface brightness corresponds to a brightness temperature of $(2-4)\times10^7\,\rm K$ at 1.4GHz, implying a similar [*lower limit*]{} to the physical temperature of a thermally-emitting plasma.

[Figure \[fig:alpha\] panel labels: measurements from VLBA only; measurements from VLBA & MERLIN; northern ruff; southern ruff; short baselines.]

The distribution of the flux density perpendicular to the jet is shown in Figure \[fig:alpha\]$b$, which suggests that the spectral index is indeed almost flat throughout the ruff, and shows that the emission extends to $40\,\rm mas$ at our sensitivity, or $\sim120\left(d/3\,\rm kpc\right)\,\rm AU$. Note also that the ruff is roughly symmetric about the jet.

The origin of the smooth emission
=================================

The most straightforward interpretation of the radio emission is that it arises from mass outflow from the binary system that is enhanced towards the orbital plane. Such mass loss could either (i) come from the companion (most likely an O or B star), (ii) be a disc wind from the outer parts of the accretion disc or (iii) arise from mass loss from a proto-common envelope surrounding the binary components. The detection of this mass loss will have important implications for our understanding of the evolutionary state of this unique system. It has been a long-standing puzzle how SS433 can survive so long in a phase of extreme mass transfer ($\dot{M} \gtrsim 10^{-5}\Ms\yr^{-1}$) without entering into a common envelope phase where the compact object spirals completely into the massive companion (for a recent discussion see [@Kin00]). Since the theoretically predicted mass-transfer rate exceeds even the estimated mass-loss rate in the jets ($\dot{M}\sim 10^{-6}\Ms\yr^{-1}$; [@Beg80]), [@Kin00] proposed that most of this transferred mass is lost from the system in a radiation-pressure driven wind from the outer parts of the accretion disc [@King99]. A related problem exists in some intermediate-mass X-ray binaries (IMXBs). Models of the IMXB Cyg X-2 [@King99a; @Pod00; @Kolb00; @Taur00] show that the system must have passed through a phase where the mass-transfer rate was $\sim 10^{-5}\Ms\yr^{-1}$, exceeding the Eddington limit of the accreting star by many orders of magnitude, without entering into a common-envelope phase, and where almost all the transferred mass must have been lost from the system. The observed emission in SS433 presented here may provide direct evidence of how such mass loss takes place.
The existence of a disc-like outflow was first postulated by [@Zwi91] to explain the variation with precession phase of the secondary minimum in the photometric light curve. [@Fab93] proposed a disc-like expanding envelope caused by mass-loss from the outer Lagrangian point L2 to explain the blue-shifted absorption lines of HI, HeI and FeII (see also [@Mam80], whose spectrum shows that all the emission lines seen in SS433 have P-Cygni profiles indicating the presence of outflowing gas). [@Fil88] observe remarkable double peaked Paschen lines, with speeds close to 300${\rm km\,s^{-1}}$. A rough estimate for the mass-outflow rate, described in our paper [@Blu01] is: $$\begin{aligned} \dot{M}&\simeq& 1.6\times 10^{-4}\,M_{\odot}\,\mbox{yr}^{-1}\,\, \\ &&\hspace{1cm}\times S_{50}^{3/4}\,d_{3}^{3/2}\,v_{300}\,\nu_{1.4}^{-1/2}\, \bar{g}_{10}^{-1/2}\, (\sin\alpha)_{30}^{1/4},\nonumber \end{aligned}$$ where $S_{50} = S_\nu/50$mJy, $d_{3}=d/3\,$kpc, $v_{300}= v_\infty/300\,$kms$^{-1}$, $\nu_{1.4}=\nu/1.4\,$GHz, $\bar{g}_{10}= \bar{g}/10$ ($\bar{g}$ is the Gaunt factor for free-free emission, $(\sin\alpha)_{30}=\sin\alpha/\sin 30^\circ$). One of the major uncertainties in this estimate is the velocity of the outflow, though a velocity of $\sim 300\,$kms$^{-1}$ is similar to that of the lines seen by [@Fil88] and is close to the characteristic orbital velocity of SS433, as one might expect for an outflow from the binary system rather than either binary component. Furthermore, if this outflow started soon after the supernova explosion which formed the compact object $\sim 10^4\,$yr ago and whose impressively circular remnant is seen clearly in the images of [@Dub98], a velocity of $\sim 300\,$kms$^{-1}$ implies an extent of the outflow of $\sim 3\,$arcmin (for $d=3\,$kpc). Indeed, this is exactly the size of the extended smooth emission seen by [@Dub98] and suggests that this may be the outer extent of the same outflow. The inferred mass-loss rate, $\dot{M}\sim 10^{-4}\,M_{\odot}\,$yr$^{-1}$, is much higher than any reasonable mass-loss rate from an O-star primary and suggests that it is connected with the unusual short-lived phase SS433 is experiencing. It could be mass loss from a common envelope that has already started to form around the binary, or a hot coronal wind from the outer parts of the accretion disc driven, e.g., by the X-ray irradiation from the central compact source. Acknowledgments {#acknowledgments .unnumbered} =============== K.M.B. thanks the Royal Society for a University Research Fellowship. We warmly thank the conference organisers for a very stimulating meeting. Begelman, M.C., Hatchett, S.P., McKee, C.F., Sarazin, C.L., Arons, J., 1980, ApJ, 238, 722 Blundell, K.M., Mioduszewski, A.J., Muxlow, T.W.B., Podsiadlowski, Ph., Rupen, M.P., 2001, ApJ, 562, L79 Dubner, G.M., Holdaway, M., Goss, W.M., & Mirabel, I.F., 1998, AJ, 116, 1842 Fabrika, S.N., 1993, MNRAS, 261, 241 Filippenko, A.V., Romani, R.W., Sargent, W.L.W., & Blandford, R.D., 1988, AJ, 96, 242 King, A.R. & Begelman, M.C. 1999, ApJ, 519, L169 King, A.R. & Ritter, H. 1999, MNRAS, 309, 253 King, A. R., Taam, R. E., & Begelman, M. C. 2000, ApJ, 530, [l]{}25 Kolb, U., Davies, M, King, A., & Ritter, H. 2000, MNRAS, 317, 438 Mammano, A., Ciatti, F., & Vittone, A. 1980, A&A, 85, 14 Miko[ł]{}ajewska, J. 
& Ivison R.J., 2001, MNRAS, 324, 1023 Paragi Z., Vermeulen R.C., Fejes I., Schilizzi R.T., Spencer R.E., & Stirling A.M., 1999, A&A, 348, 910 Paragi Z., Fejes I., Vermeulen R.C., Schilizzi R.T., Spencer R.E., & Stirling A.M., 2002a, in Proc. 6th European VLBI Network Symposium, eds Ros E., Porcas R.W., Lobanov A.P. and Zensus J.A, astro-ph/0207061 Paragi Z., Fejes I., Vermeulen R.C., Schilizzi R.T., Spencer R.E., & Stirling A.M., 2002n, these proceedings; astro-ph/0208125 Podsiadlowski, Ph. & Rappaport, S. 2000, ApJ, 529, 946 Seaquist, E.R., Taylor, A.R., & Button, S. 1984, ApJ, 284, 202 Tauris, T.M., van den Heuvel, E.P.J., & Savonije, G.J. 2000, ApJ, 530, L93 Zwitter, T., Calvani, M., & D’Odorico, S. 1991, A&A, 251 92
--- abstract: 'Jamming phenomena on a square lattice are investigated for two different models of anisotropic random sequential adsorption (RSA) of linear $k$-mers (particles occupying $k$ adjacent adsorption sites along a line). The length of a $k$-mer varies from 2 to 128. Effect of $k$-mer alignment on the jamming threshold is examined. For completely ordered systems where all the $k$-mers are aligned along one direction (*e.g.*, vertical), the obtained simulation data are very close to the known analytical results for 1d systems. In particular, the jamming threshold tends to the R[é]{}nyi’s Parking Constant for large $k$. In the other extreme case, when $k$-mers are fully disordered, our results correspond to the published results for short $k$-mers. It was observed that for partially oriented systems the jamming configurations consist of the blocks of vertically and horizontally oriented $k$-mers ($v$- and $h$-blocks, respectively) and large voids between them. The relative areas of different blocks and voids depend on the order parameter $s$, $k$-mer length and type of the model. For small $k$-mers ($k\leqslant 4$), denser configurations are observed in disordered systems as compared to those of completely ordered systems. However, longer $k$-mers exhibit the opposite behavior.' author: - 'Nikolai I. Lebovka' - 'Natalia N. Karmazina' - 'Yuri Yu. Tarasevich' - 'Valeri V. Laptev' title: 'Random sequential adsorption of partially oriented linear $k$-mers on square lattice' --- \[sec:introduction\]Introduction ================================ In the Random Sequential Adsorption (RSA), objects randomly deposit on a substrate; this process is irreversible, and the newly placed objects cannot overlap or pass through the previously deposited ones [@Evans]. The final state generated by the irreversible adsorption is a disordered one (known as the jamming state); no more objects can deposit in this state due to the absence of any free space of appropriate size and shape [@Lois2008; @Biroli2008; @Evans]. The fraction of the total surface occupied by the adsorbed objects, which is called the jamming concentration, is of central interest for understanding of the RSA processes. The RSA model studies of the objects with different shape (*e.g.*, squares, ellipses, or stiff rods (needles)) have shown that jamming concentration depends strongly on the object shape and size (see, *e.g.*, [@Evans]). The RSA model has attracted a great interest as a tool for explanation of the wide class of irreversible phenomena observed in adsorption of chainlike and polymer molecules on homogeneous and heterogeneous surfaces [@Loscar2003PRE]. Jamming of short flexible linear chains on a square lattice has been studied by Becklehimer [@Becklehimer1994] and Wang [@Wang1996]. Adsorption of semi-flexible chains has been recently investigated using RSA model by Kondrat [@Kondrat2002]. RSA of the binary mixtures of extended objects (linear and bent chains) has been described by Lončarević *et al.* [@Budinski24]. RSA on a two-dimensional (2d) square lattice has been examined for the stiff and flexible polymer chains simulated by a sequence of lattice points forming needles, T-shaped objects and crosses, as well as flexible linear chains and star-branched chains consisting of three and four arms [@Adamczyk2008JCP]. RSA model is widely used for simulation of thin film formation in the process of nanoparticle deposition [@Brosilow1991; @Talbot1989; @Budinski76; @Budinski24; @Rampf2002]. 
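To make the notions of irreversible deposition and jamming concrete, here is a minimal Monte Carlo sketch of RSA of linear $k$-mers on a 1d lattice (much simpler than the 2d models studied below); a long run of consecutive rejected attempts is used as a practical, approximate stopping criterion for the jamming state:

```python
import random

def rsa_1d_coverage(L=20_000, k=2, stop_after=300_000, seed=7):
    """Approximate 1d jamming coverage of linear k-mers (open boundaries)."""
    random.seed(seed)
    occupied = [False] * L
    filled, rejected_in_a_row = 0, 0
    while rejected_in_a_row < stop_after:
        x = random.randrange(L - k + 1)            # random deposition attempt
        if any(occupied[x:x + k]):
            rejected_in_a_row += 1                 # overlap: attempt is rejected
            continue
        occupied[x:x + k] = [True] * k             # irreversible placement
        filled += k
        rejected_in_a_row = 0
    return filled / L

print(rsa_1d_coverage(k=2))    # ~0.865, cf. 1 - exp(-2)
print(rsa_1d_coverage(k=8))    # ~0.78, decreasing towards Renyi's constant ~0.748
```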
The interplay between the jamming and percolation phenomena is discussed in several works (*e.g.*, [@Vandewalle; @Rampf2002; @Adamczyk2008JCP; @KondratPre63]). Important exact result for jamming concentration $p_j$ of $k$-mers in one dimension (1d) random sequential adsorption has been reported by Krapivsky *et al.* [@Krapivsky2010] $$\label{eq:ue1Djamm} p_{j} = k \int\limits_{0}^{\infty}\exp\left(-u-2\sum\limits_{j=1}^{k-1}\frac{1-\exp(-ju)}{j}\right)\,du.$$ In particular, the jamming concentration for dimers ($k=2$) placed along one line is $p_{j}=1-\mathrm{e}^{-2} \approx 0.86466$ and for trimers ($k=3$) is $p_{j} = 3 D (2) - 3 \mathrm{e}^{-3} D (1) \approx 0.82365$, where $D(x) = \mathrm{e}^{-x^2}\int_{0}^{x} \mathrm{e}^{t^2} dt$ is Dawson’s integral. For $k \to \infty$, the jamming threshold tends to R[é]{}nyi’s Parking Constant ${p_{j}}\to {c_{R}} \approx 0.7475979202$ [@Renyi1958]. Many efforts have been concentrated on the study of deposition of $k$-mers on a discrete 2d substrate. Most of previous works have been devoted to the deposition of randomly oriented linear $k$-mers on square lattices. In square lattice models, the number of possible orientations of $k$-mers is restricted to 2. When orientation of $k$-mer is fixed, the situation is equivalent to the above mentioned case of deposition on a 1d lattice. For lattice model, Manna and Svrakic observed that deposited $k$-mers ($1 \leqslant k \leqslant 512$) tend to align parallel to each other [@Manna1991JPhysA]. They form large domains (stacks) with voids ranging from a single site up to the length of $k$-mer. The similar stacking and void formation is observed also for continuous RSA model of deposition of infinitely-thin line segments [@Ziff1990JPhysA; @Viot1992PhysicaA]. It is worth noting that principally different jamming behavior is observed for lattice and continuous models. For example, for deposition of extremely elongated objects $k \to \infty$ (*i.e.*, in the limit of infinite aspect ratio) the asymptotic jamming concentration $p_j(\infty)$ is 0 in the off-lattice case [@Viot1992PhysicaA]. The value of $p_j$ approaches zero with increase of the aspect ratio $k$ and follows the power law $$\label{eq:power} p_{j}(k) = p_{j}(\infty)+a/k^\alpha$$ with $p_{j}(\infty)=0$ and $a=1/(1+2\sqrt{2})\approx 0.26$ [@Viot1992PhysicaA]. In the discrete case, the asymptotic jamming concentration $p_j(\infty)$ remains finite [@Manna1991JPhysA; @Bonnier1994PRE; @Bonnier1996PRE; @KondratPre63]. In the latter case, the presence of finite coverage by the infinite $k$-mers has been interpreted as a consequence of the alignment constraint [@Bonnier1994PRE; @Bonnier1996PRE]. Note that for deposition on a discrete lattice [@Manna1991JPhysA; @Bonnier1994PRE; @Bonnier1996PRE; @KondratPre63], the limiting jamming concentration $p_{j}(\infty)$ depends upon the deposition mechanism. For the conventional RSA model (the vacant lattice site is randomly selected and any unsuccessful attempt of deposition of $k$-mer is rejected) and completely orientationally disordered deposition of linear $k$-mers on square lattices, different Monte Carlo studies have given the estimation $p_{j}(\infty)\approx0.66$ [@Bonnier1994PRE; @KondratPre63]. 
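The exact 1d result quoted above (Eq. \[eq:ue1Djamm\]) is straightforward to evaluate numerically; a short scipy sketch, reproducing the dimer and trimer values and the approach to R[é]{}nyi's constant:

```python
import numpy as np
from scipy.integrate import quad

def pj_1d(k):
    """Exact 1d jamming coverage of linear k-mers, Eq. (ue1Djamm)."""
    j = np.arange(1, k)
    def integrand(u):
        return np.exp(-u - 2.0 * np.sum((1.0 - np.exp(-j * u)) / j))
    return k * quad(integrand, 0.0, np.inf)[0]

print(pj_1d(2), 1.0 - np.exp(-2.0))   # 0.86466... from both expressions
print(pj_1d(3))                       # 0.82365...
print(pj_1d(512))                     # ~0.7480, approaching c_R = 0.74760...
```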
For, so called, “end-on” RSA model [@Bonnier1994PRE] (in this model, once a vacant site has been found, the deposition is (randomly) attempted in all the directions until the segment is adsorbed or rejected), the Monte Carlo calculation has given the noticeably smaller value $p_{j}(\infty)=0.583\pm 0.010$ [@Manna1991JPhysA]. Different functions have been tested to fit the $p_{j}(k)$ dependence for the deposition of randomly oriented $k$-mers on a two-dimensional square lattice (conventional RSA model)  [@Bonnier1994PRE; @Bonnier1996PRE; @Vandewalle; @KondratPre63]. MC data on jamming concentration $p_{j}(k)$ obtained for line segments of length $2\leqslant k\leqslant 512$ on square lattices of linear size $L\leqslant 4096$ (preserving in all cases the ratio $L/k>8$) have been fitted using the series expansion $$\label{eq:Bonnier} p_{j}(k) = p_{j}(\infty)+a/k+b/k^2,$$ and have given $p_{j}(\infty)=0.660\pm 0.002$, $a \approx 0.83$ and $b \approx -0.70$ [@Bonnier1994PRE; @Bonnier1996PRE]. The similar data for line segments of length $2 \leqslant k \leqslant 24$ on lattices of linear size $L=2000$ have been approximated using an empirical law [@Vandewalle] $$\label{eq:Vandewalle} p_{j}(k) = p^*(1-\gamma(1-1/k)^2).$$ However, this equation is in fact a particular case of Eq. \[eq:Bonnier\] under constrain of $b=-a/2$. It can be easily checked by using the substitutions of $p^*=p_{j}(\infty)+a/2$ and $\gamma=a/(2p^*)$. Fitting of the numerical data presented in Table \[tab:pjvsSk\] of [@Vandewalle]) with Eq. \[eq:Bonnier\] gives the following estimations: $p_{j}(\infty)=0.684 \pm 0.003$, $a=0.59 \pm 0.01$, $\rho=0.9998$ (coefficient of determination). Kondrat and Pȩkalski have reported the results for $k$-mers with length within $1\leqslant k \leqslant 2000$, lattice size $L=30,100,300,1000,2500$ and more than 100 independent runs [@KondratPre63]. Application of the power law (Eq. \[eq:power\]) have given $p_{j}(\infty)=0.66\pm 0.01$, $a \approx 0.44$ and $b \approx 0.77$. To our best knowledge, the very limited number of works have been devoted to jamming of non-randomly oriented linear $k$-mers and only particular case ($k=2$) has been taken into consideration  [@deOliveiraPRA1992; @Cherkasova]. For isotropic problem, the jamming concentration at $k=2$ is $p_j\approx0.9068$  [@NordJPC1985; @deOliveiraPRA1992; @Cherkasova]. In anisotropic problem, the vertical and horizontal orientations occur with different probabilities and degree of anisotropy can be characterized by the order parameter $s$ defined as $$\label{eq:S} s = \left|\frac{N_| - N_-}{N_| + N_-}\right|,$$ where $N_|$ and $N_-$ are the numbers of line segments oriented in the vertical and horizontal directions, respectively. Data of Monte Carlo simulations evidence that the value of $p_j$ decreases with order parameter $s$ increase [@deOliveiraPRA1992; @Cherkasova]. In particular case of complete ordering, *i.e.*, at $s=1$, the problem becomes one-dimensional. An interesting finding is that in the limit of $s\to 1$ the asymptotic fraction of dimers with horizontal direction does not vanish but equals to $ \mathrm{e}^{-2}[1-\exp(-2\mathrm{e}^{-2})]/2 \approx 0.016046$ [@deOliveiraPRA1992]. The main goal of the present study is to investigate the effects of $k$-mer length, alignment, and the deposition rules on the jamming threshold. This investigation is the natural development of the recent work devoted to the dimers, $k=2$ [@Cherkasova]. 
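The stated equivalence of Eq. \[eq:Vandewalle\] with Eq. \[eq:Bonnier\] under the constraint $b=-a/2$ is easy to verify symbolically; a small sympy check of the substitutions quoted above:

```python
import sympy as sp

k, p_inf, a = sp.symbols('k p_inf a', positive=True)

bonnier = p_inf + a / k - (a / 2) / k**2          # Eq. (Bonnier) with b = -a/2

p_star = p_inf + a / 2                            # p* = p_j(inf) + a/2
gamma = a / (2 * p_star)                          # gamma = a / (2 p*)
vandewalle = p_star * (1 - gamma * (1 - 1 / k)**2)

print(sp.simplify(bonnier - vandewalle))          # prints 0: the two forms coincide
```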
This work discusses the jamming phenomena [@Evans] for two different kinds of anisotropic sequential deposition of linear $k$-mers on a square lattice (particles occupying $k$ adjacent adsorption sites along a line). The paper is organized as follows. Section \[sec:model\] contains the basis of the two models of deposition of $k$-mers on a square lattice. The results obtained using finite size scaling theory and dependencies of the jamming threshold, $p$, and mean radius of pores, $r$, *vs.* order parameter, $s$, are examined and discussed in details in Section \[sec:results\]. We discuss the dependence of the jamming threshold on the model parameters of interest in this Section, too. ![Actual order parameter $s$, after each successful attempt to place a new $k$-mer ($k=3$) for RSA and RRSA models. Lattice size is $L=729$ and predetermined parameter is $s = 0.8$. \[fig:s\_for\_iter\]](f01_s_for_iter.eps){width="\linewidth"} \ \ \[sec:model\]Description of models and details of simulations ============================================================== One can imagine suspension of linear line segments of length $k$ ($k$-mers) in the bulk volume under substrate. In our simulations, a square lattice of linear size $L$ has been used as a substrate, and periodical boundary conditions have been applied both in horizontal and vertical directions (toroidal boundary conditions). The $k$-mers repulse each other and deposit one by one onto a substrate. The anisotropy of $k$-mer orientation in the suspension is predetermined and can be characterized by the input parameter $s$ defined as in Eq. \[eq:S\]. The number of species in the suspension is supposed to be infinitely large, thus deposition doesn’t change the anisotropy of the suspension. If different orientations of the deposited objects are not of equal probability (*i.e.*, $N_| \neq N_-$ ), the definition of the jamming state is to be refined [@Cherkasova]. Let us assume $N_| > N_-$. We define jamming for the fixed parameter $s$ as a situation when there exists no possibility of deposition for any additional vertically oriented objects. Nevertheless, there may be places for horizontally oriented objects. Two different deposition models have been studied. The first is the conventional Random Sequential Adsorption (RSA) model. In this model, the lattice site is randomly selected and an attempt of deposition of a $k$-mer with orientation defined by order parameter $s$ is done. Any unsuccessful attempt is rejected and $k$-mer with a new orientation is selected. The second model, called by us Relaxation Random Sequential Adsorption (RRSA), is very similar to the RSA, however, the unsuccessful attempt is not rejected and a new lattice site is randomly selected until the object will be deposited. Note that in contrast to the known RSA model with diffusion (see, *e.g.*, [@Gan1997PRE]), the RRSA model does not restrict the movement of species by the nearest sites only. The species may move all over the substrate searching for a sufficiently large empty space. In both models, the deposition terminates when a jamming state is reached along one of direction. Physically, the difference between RSA and RRSA models can reflect different binding of $k$-mers near the adsorbing substrate. In RSA model, binding of a $k$-mer by the substrate is weak and the $k$-mer returns to the bulk suspension after an unsuccessful attempt to precipitate. 
In RRSA model, binding of a $k$-mer to the adsorbing plane is strong, and $k$-mer has an additional possibility of joining the surface after an unsuccessful attempt. The differences between RSA and RRSA models can be evidently demonstrated by analysis of anisotropy of the deposited layer actually obtained in the course of adsorption (Fig. \[fig:s\_for\_iter\]). The preliminary study has shown that RSA model does not allow preservation of the order parameter $s$. In this model, the substrate “selects” the $k$-mer with appropriate orientation, and it results in deviation of predetermined order parameter $s$ from the actually obtained one, $s_0$. The MC simulation evidences that the value of $s_0$ noticeably exceeds the value of $s$. On the contrary, RRSA model better preserves the predetermined anisotropy, and $s_0 \approx s$ (Fig. \[fig:s\_for\_iter\]). ![Percolation probability $f$ *vs.* predetermined order parameter $s$ for RSA and RRSA models at different values of $k$. Lattice size is $L=1024$ and results are averaged over 100 independent runs. \[fig:f vs s\]](f04_f_vs_s){width="\linewidth"} ![Threshold order parameter $s_c=s_c(L \to \infty)$ *vs.* length of $k$-mer for RSA and RRSA models. Inset shows examples of scaling dependencies in the form of $s_c$ *vs.* $L^{-1/\nu}$ for $k=2$ and $k=8$. Here, $\nu = 4/3$ is the critical exponent of the correlation length for 2d random percolation problem [@Stauffer]. \[fig:sc\_vs k\]](f05_sc_vs_k.eps){width="\linewidth"} \ In our study, the length of $k$-mers varies between 2 and 128. To examine the finite size effect for RSA and RRSA models, different lattice sizes up to $L = 100k$ have been used. The number of runs varies from 10 to 1000 depending of the lattice size $L$. Two different random number generators have been applied for filling in the lattice with objects ($k$-mers) at given concentration and orientation. They are Mersenne Twister random number generator [@Matsumoto] with a period of $2^{19937} - 1$ and the generator of Marsaglia *et al.* [@Marsaglia1990]. The results obtained using different generators are almost undistinguishable. The connectivity of $k$-mers oriented in the vertical direction has been analyzed for jamming configurations, and the percolation threshold has been determined using the Hoshen-Kopelman algorithm [@Hoshen1976]. \[sec:results\]Results and discussion ===================================== \[subsec:JP\]Jamming configurations and its connectivity -------------------------------------------------------- Jamming configurations obtained in the simulations are strongly dependent upon the order parameter and length of $k$-mers. However, visually, the difference in the structure of jamming patterns is not noticeable for the studied RSA and RRSA models. Examples of the typical jamming configurations for RRSA model at different values of order parameter $s$ are presented in Fig. \[fig:djm\_m2\]. For randomly oriented linear $k$-mers (*i.e.*, at $s=0$) the typical domain structure in form of blocks of parallelly oriented $k$-mers has been observed. These blocks can be represented as the squares of size $k \times k$ [@Vandewalle]. One can present a jamming configuration as a combination of: - blocks of vertically oriented $k$-mers ($v$-blocks); - blocks of horizontally oriented $k$-mers ($h$-blocks); - empty sites (voids). The relative area occupied by $v$- and $h$-blocks is approximately the same at $s=0$ (Fig. \[fig:djm\_m2\](a)). 
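The two deposition rules described above can be sketched in a few dozen lines; this is an illustrative toy version (small lattice, vertical orientation drawn with probability $(1+s)/2$, a fixed attempt budget and a bounded number of RRSA retries instead of a strict jamming criterion), not the production code behind the results below:

```python
import numpy as np

rng = np.random.default_rng(0)

def try_place(lattice, x, y, k, vertical):
    """Deposit a k-mer starting at site (x, y) if all k sites are empty."""
    L = lattice.shape[0]
    if vertical:
        rows, cols = (x + np.arange(k)) % L, np.full(k, y)
    else:
        rows, cols = np.full(k, x), (y + np.arange(k)) % L
    if lattice[rows, cols].any():
        return False
    lattice[rows, cols] = 2 if vertical else 1     # 2: v-oriented, 1: h-oriented
    return True

def deposit(L, k, s, n_draws, relaxation, max_retries=2_000):
    lattice = np.zeros((L, L), dtype=np.int8)
    for _ in range(n_draws):
        vertical = rng.random() < (1.0 + s) / 2.0  # orientation from the suspension
        placed = try_place(lattice, rng.integers(L), rng.integers(L), k, vertical)
        retries = 0
        # RRSA: keep the orientation and retry new sites (bounded here for safety)
        while relaxation and not placed and retries < max_retries:
            placed = try_place(lattice, rng.integers(L), rng.integers(L), k, vertical)
            retries += 1
    return lattice

L, k, s = 64, 4, 0.8
# RRSA needs far fewer orientation draws, since each draw retries until placed
for relaxation, n_draws in ((False, 200_000), (True, 1_500)):
    lat = deposit(L, k, s, n_draws, relaxation)
    coverage = np.count_nonzero(lat) / lat.size
    n_v, n_h = np.sum(lat == 2) / k, np.sum(lat == 1) / k
    s0 = abs(n_v - n_h) / (n_v + n_h)
    print("RRSA" if relaxation else "RSA ", f"coverage ~ {coverage:.3f}  s0 ~ {s0:.3f}")
```

With these settings the RSA run should end with an actual anisotropy $s_0$ noticeably above the input $s$, while the RRSA run stays close to it, mirroring the trend in Fig. \[fig:s\_for\_iter\].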
Increase of $s$ results in decrease of the relative area occupied by $h$-blocks and $v$-blocks become dominating in jamming patterns. Moreover, the relative area occupied by $h$-blocks is visually larger for RRSA model than for RSA model (compare Fig. \[fig:djm\_m2\](b),(c)) with Fig. \[fig:djm\_m2\](e),(f)). Finally, at $s=1$, the jamming configuration transfers into the 1d-like domains (independent parallel 1d jamming lines of $k$-mers)(Fig. \[fig:djm\_m2\](d)). It is interesting that the infinite connectivity between the similar $v$- or $h$-blocks (*i.e.*, percolation) is not observed for randomly oriented linear $k$-mers (at $s=0$). The visual observations of jamming patterns show that connectivity between $v$-blocks increases and between $h$-blocks decreases with increase of $s$. It can be easily demonstrated by analyzing the structure of the largest cluster of vertically oriented $k$-mers (Fig. \[fig:djm\_p2\]). At small $s$ (*e.g.*, at $s=0.05$ in Fig. \[fig:djm\_p2\](a), (d)), the connectivity of $v$-blocks is limited and the largest cluster occupies only the finite part of the lattice. However, the size of the largest cluster is higher for RSA model than for RRSA model. Increase of $s$ results in growth of the size of the largest cluster, and this cluster spans through the lattice at some threshold value of $s_c$ (see, *e.g.*, Fig. \[fig:djm\_p2\](b) for RSA model and Fig. \[fig:djm\_p2\](f) for RRSA model). Examples of the percolation probability $f$ *vs.* the order parameter $s$ at $k=2,8$ and $L=1024$ are presented in Fig. \[fig:f vs s\]. The threshold value of $s_c$ that corresponds to the percolation of vertically oriented $k$-mers is determined from the condition of $f=0.5$. The usual finite size scaling analysis of the percolation behavior is done, and it is obtained that $s(L)$ follows scaling law governed by the universal scaling exponent $\nu$: $$\label{Eq:e2p} \left| {{s_{c}}(L) - {s_{c}}(\infty )} \right|\propto {L^{ - \frac{1}{\nu }}},$$ where $\nu = 4/3$ is the critical exponent of correlation length for 2d random percolation problem [@Stauffer]. For the thermodynamic limit ($L\to\infty$), dependencies of $s_c$ () *vs.* length of $k$-mer for RSA and RRSA models are presented in Fig. \[fig:sc\_vs k\]; the inset of this figure shows examples of finite size scaling dependencies for $k=2$ and $k=8$. The observed percolation behavior of the vertically oriented $k$-mers is rather different for RSA and RRSA model, which evidently reflects the difference in the structure of jamming patterns of these models. For the same $k$, the RSA model gives lower value $s_c$ than RRSA model, *e.g.* $k=16$, $s_c\approx 0.126$ (RSA model) and $s_c\approx 0.240$ (RRSA model). Note that in both models for the dimers ($k=2$), the threshold values of $s_c$ are rather close to $\approx 0.21$. However, the value of $s_c$ decreases as the length of $k$-mer increases for RSA model, and opposite behavior is observed for RRSA model. Finally, in continuous limit $k\to\infty$, the difference between the threshold order parameters of RSA and RRSA models becomes rather large, $\triangle s_c\approx 0.12$. The difference in behavior of $s_c(k)$ observed for RSA and RRSA models evidences higher connectivity between $v$-blocks for RSA model than for RRSA model, and this tendency increases as the length of $k$-mer increases. 
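The extrapolation $s_c(L)\to s_c(\infty)$ behind Fig. \[fig:sc\_vs k\] amounts to a linear fit against $L^{-1/\nu}$ with $\nu=4/3$; a minimal sketch with made-up finite-size thresholds standing in for the simulated ones:

```python
import numpy as np

nu = 4.0 / 3.0                                   # 2d random-percolation exponent

# Hypothetical finite-size thresholds s_c(L), for illustration only
L   = np.array([128.0, 256.0, 512.0, 1024.0, 2048.0])
s_c = np.array([0.262, 0.254, 0.248, 0.244, 0.241])

slope, intercept = np.polyfit(L ** (-1.0 / nu), s_c, 1)
print(f"s_c(L -> infinity) ~ {intercept:.3f}")   # intercept = extrapolated threshold
```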
----- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- -- -- $s$ RSA RRSA RSA RRSA RSA RRSA RSA RRSA 0.0 0.906(8) 0.905(9) 0.846(6) 0.845(5) 0.810(4) 0.809(1) 0.747(6) 0.746(8) 0.1 0.906(7) 0.899(4) 0.846(4) 0.838(8) 0.810(2) 0.802(8) 0.747(6) 0.740(8) 0.2 0.906(3) 0.892(9) 0.845(8) 0.832(7) 0.809(5) 0.796(9) 0.747(9) 0.736(0) 0.3 0.905(8) 0.887(1) 0.844(8) 0.822(7) 0.808(7) 0.792(2) 0.748(0) 0.732(3) 0.4 0.905(3) 0.882(2) 0.843(5) 0.821(7) 0.807(6) 0.787(6) 0.748(8) 0.729(5) 0.5 0.904(2) 0.877(1) 0.841(9) 0.818(7) 0.806(3) 0.785(3) 0.750(0) 0.729(1) 0.6 0.902(6) 0.873(0) 0.839(9) 0.816(4) 0.805(0) 0.784(2) 0.752(3) 0.730(8) 0.7 0.898(5) 0.869(7) 0.837(1) 0.815(0) 0.803(6) 0.784(6) 0.755(2) 0.734(3) 0.8 0.892(5) 0.867(2) 0.833(8) 0.815(5) 0.802(3) 0.787(3) 0.759(7) 0.741(0) 0.9 0.882(3) 0.865(4) 0.828(7) 0.818(0) 0.801(7) 0.792(9) 0.766(0) 0.752(5) 1.0 1d $s$ RSA RRSA RSA RRSA RSA RRSA RSA RRSA 0 0.710(9) 0.709(3) 0.689(4) 0.687(9) 0.678(2) 0.674(3) 0.668(9) 0.666(3) 0.1 0.711(2) 0.704(3) 0.690(4) 0.683(8) 0.680(0) 0.670(9) 0.668(0) 0.663(2) 0.2 0.712(4) 0.700(2) 0.692(4) 0.680(5) 0.682(4) 0.667(6) 0.672(8) 0.660(2) 0.3 0.714(6) 0.697(4) 0.695(6) 0.677(4) 0.686(0) 0.665(4) 0.677(2) 0.659(4) 0.4 0.717(1) 0.696(2) 0.699(9) 0.677(6) 0.691(2) 0.666(1) 0.684(2) 0.659(9) 0.5 0.720(8) 0.697(0) 0.705(6) 0.678(6) 0.697(6) 0.668(6) 0.691(6) 0.662(3) 0.6 0.726(0) 0.700(3) 0.712(4) 0.683(2) 0.705(6) 0.673(3) 0.700(2) 0.667(5) 0.7 0.732(3) 0.705(7) 0.720(7) 0.690(2) 0.713(8) 0.681(4) 0.711(0) 0.676(6) 0.8 0.740(3) 0.715(6) 0.730(6) 0.701(5) 0.724(8) 0.693(5) 0.723(0) 0.688(8) 0.9 0.749(8) 0.730(7) 0.741(9) 0.719(1) 0.737(4) 0.712(8) 0.736(5) 0.709(4) 1.0 1d ----- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- -- -- ![Difference of jamming concentrations $p_j(L)-p_j(\infty)$ *vs.* $L/k$ ratio for completely ordered $k$-mers, ($s=1$) at different values of $k$(RSA and RRSA models).\[fig:1dscal\]](f08_pdj_vs_Lk.eps){width="\linewidth"} ![Relative effective pore radius $r/r_0$ *vs.* order parameter $s$ at different values of $k$ for RSA and RRSA models). Here, $r_o=r(s=0)$.[]{data-label="fig:pores"}](f11_r_vs_s.eps){width="\linewidth"} ![Jamming concentration $p_j$ *vs.* $k$ at different values of order parameter $s$. Inset shows $dp=p_j-p_j(k\to\infty)$ *vs.* $k$, the different slopes correspond to the different scaling exponents $\alpha$. (RSA and RRSA models).\[fig:Pvsk\_pow\]](f12_p_k_RSA.eps){width="\linewidth"} \[subsec:FSE\]Finite-size effects --------------------------------- Commonly, it is assumed that finite-size effects on the jamming coverage of a periodic lattice decrease very rapidly with the lattice size $L$ increase [@BarteltJMP1991; @BrosilowPRA1991]. For isotropic problem of adsorption of $k$-mers on a square lattice ($s=0$), it was observed that standard deviation of the coverage is a linear function of $k/L$ and it become negligible in the limit of $k/L\to 0$ [@Bonnier1994PRE]. Our investigations show that in case of anisotropic problem the finite-size effects, as well as $k$ and $s$, are rather sensitive to the type of adsorption model (RSA or RRSA). Fig. \[fig:scalingRRSAk38\] demonstrates examples of jamming concentration $p_j$ *vs.* $s$ for both models and different lattice sizes $L=32$–$2048$ in two particular cases when $k=3$ and $k=8$. 
For a clearer demonstration of the finite-size effects on the jamming concentration, they are also represented in the form of $dp=p_j(L\to\infty)-p_j(L)$ *vs.* the inverse lattice size $1/L$ (see insets to Fig. \[fig:scalingRRSAk38\]). In the RSA model (Fig. \[fig:scalingRRSAk38\](a),(b)), the finite-size effects are rather small in the limits of $s\to 0$ and $s\to 1$. However, for intermediate anisotropy $s$ they are large, and $dp=p_j(L\to\infty)-p_j$ *vs.* $1/L$ remains nonlinear even for large lattice sizes $L$. On the other hand, in the RRSA model (Fig. \[fig:scalingRRSAk38\](c),(d)) the finite-size effects are very small for any $k$ and $s$, with the exception of the limit of the isotropic problem, $s\to 0$. The differences between the finite-size effects of the RSA and RRSA models at intermediate anisotropy ($s=0.8$) and different values of $k$ are presented in Fig. \[fig:JCforfixeds\] in the form of $p_j$ *vs.* $1/L$ dependencies. For the RSA model, these dependencies remain nonlinear even for large lattice sizes, and this obstructs application of the scaling relation in the thermodynamic limit ($L\to\infty$) [@Stauffer] $$\label{Eq:e2} \left| {{p_{j}}(L) - {p_{j}}(\infty )} \right|\propto {L^{ - \frac{1}{\nu }}},$$ where $\nu = 1.0 \pm 0.1$ [@Vandewalle].

Table \[tab:pjvsSkoo\]. Parameters of the power-law fit, Eq. \[eq:power\], *vs.* order parameter $s$ for the RSA and RRSA models.

   $s$    $p_j(\infty)$, RSA   $p_j(\infty)$, RRSA   $a$, RSA    $a$, RRSA   $\alpha$, RSA   $\alpha$, RRSA
  ------ -------------------- --------------------- ----------- ----------- --------------- ----------------
   0.0         0.655(9)              0.652(8)        0.416(0)    0.417(5)      0.720(7)         0.713(7)
   0.1         0.656(5)              0.650(8)        0.414(5)    0.414(9)      0.720(7)         0.730(1)
   0.2         0.661(9)              0.648(5)        0.412(6)    0.411(1)      0.747(2)         0.741(3)
   0.3         0.667(9)              0.648(0)        0.408(9)    0.405(4)      0.774(1)         0.753(2)
   0.4         0.676(5)              0.649(6)        0.406(2)    0.397(0)      0.820(7)         0.766(6)
   0.5         0.685(8)              0.652(9)        0.401(3)    0.385(7)      0.870(9)         0.776(1)
   0.6         0.696(3)              0.659(1)        0.393(8)    0.370(9)      0.928(6)         0.788(2)
   0.7         0.707(8)              0.669(1)        0.380(1)    0.352(8)      0.991(9)         0.809(8)
   0.8         0.721(0)              0.683(0)        0.359(1)    0.328(2)      1.063(9)         0.830(8)
   0.9         0.735(1)              0.705(0)        0.321(7)    0.295(0)      1.128(3)         0.876(5)
   1.0                                           1d, exact
  ------ -------------------- --------------------- ----------- ----------- --------------- ----------------

As a result, we have to utilize rather large lattices ($L>512$–$2048$) and nonlinear relations for $p(1/L)$ in the form of polynomials of degree 3 for extrapolation of the results to the thermodynamic limit ($L\to\infty$) (see, *e.g.*, Fig. \[fig:JCforfixeds\]). For the RSA model, this technique allows us to obtain rather reliable estimates of the jamming concentrations at $L\to\infty$ with an error bar not exceeding $\pm0.002$. For the RRSA model, the finite-size effects are very small at any $L/k \gg 1$ and $s$ (with the exception of the limit of the isotropic problem, $s\to 0$); therefore, the final investigations were performed for different $k$ with $L = 100k$ and $100$ independent runs. Our estimate of the error bar for $p(L\to\infty)$ is about $\pm 0.0001$. For the particular case of the isotropic limit, $s\to 0$, the procedure used for estimation of $p(L \to \infty)$ is the same as for the RSA model. Note that at moderate ratios $L/k<5$ the finite-size effects may be rather complex. Examples of $p_j(L)-p_j(\infty)$ *vs.* $L/k$ dependencies for completely ordered $k$-mers ($s=1$) at different values of $k$ are presented in Fig. \[fig:1dscal\]. The observed oscillating scaling behavior evidently reflects the commensurability of the $k$ and $L$ values. The precision of the estimation of $p(L \to \infty)$ is tested for the limiting cases of the completely anisotropic ($s=1$) and isotropic ($s=0$) problems.
The particular case of complete alignment ($s=1$) corresponds to the simplified 1d problem when RSA and RRSA models are indistinguishable. The data presented in Table \[tab:pjvsSk\] show also that our results for both RSA and RRSA models at $s=1$ are very close to the analytical results calculated 1d problem from Eq. . The numerically obtained data are precise within 4–5 significant digits. For isotropic problem ($s=0$), both RSA and RRSA models give very close estimations of $p_j(k)$ value. In this case, our results for $p_j$ for small $k$-mers ($k = 2$–$8$) (see Table \[tab:pjvsSk\]) are in a reasonable correspondence with the previously published data, *e.g.*, 0.9068 [@NordJPC1985], 0.9067(7) [@deOliveiraPRA1992], 0.906 [@Vandewalle] ($k=2$), 0.8465 [@Evans], 0.847 [@Vandewalle] ($k=3$), 0.811 [@Vandewalle] ($k=4$) and 0.757 [@Vandewalle] ($k=8$). The similar jamming behavior of the RSA and RRSA models in the isotropic case $s=0$ is expected. For RSA model in the isotropic case, rejection of the unsuccessful attempt is followed by the next choice of $k$-mers with random orientation. In this case, the RSA model also allows preservation of the predetermined order parameter. That is a reason why the RSA and RRSA models give the similar estimations for the values of $p_j(L \to \infty)$. However, the amplitude of order parameter fluctuations during the deposition of $k$-mers may be rather different for RSA and RRSA models, and in fact it results in a noticeable difference of the finite-size scaling effects at $s=0$ observed for RSA and RRSA models (see Fig. \[fig:scalingRRSAk38\]). Finally, the data on $p(L \to \infty)$ *vs.* $s$ dependencies for RSA and RSSA models at different $k$ obtained in a result of the described scaling analysis are presented in Table \[tab:pjvsSk\]. Jamming concentration --------------------- ### Jamming concentration vs. order parameter Fig. \[fig:PvsS\] demonstrates behavior of jamming concentrations, $p_j=p_j(L \to \infty)$ , as a function of order parameter, $s$, for RSA and RRSA models, respectively. The $p_j(s)$ dependencies are rather different for short and long $k$-mers. For instance, if $k$-mers are short ($k\leqslant 4$), more dense configurations are observed for disordered systems ($s=0$) as compared to those of completely ordered case ($s=1$). However, an opposite behavior is observed for longer $k$-mers. For RSA model, the value of $p_j(k)$ monotonically decreases ($k\leqslant 4$) or increases ($k>4$) when the order parameter $s$ increases. For RRSA model, the value of $p_j(k)$ goes through the minimum at $k\geqslant 3$ (RRSA-model) when $s$ increases. At each value of $k$, the jamming concentrations $p_j$ are practically the same at $s=0$ and $s=1$ for the both RSA and RRSA models. However, in the intermediate region of $s$ we observe $p_j(s)$(RSA)$>p_j(s)$(RRSA). The maximal difference between $p_j(s)$(RSA) and $p_j(s)$(RRSA) is observed at $s\approx 0.5$–$0.7$. Such behavior may be explained as follows. In RSA model, the substrate “selects” the $k$-mer with appropriate orientation. It results in amplification of the actual order parameter $s_0>s$ (Fig. \[fig:s\_for\_iter\]). Such amplification reflects intensive nucleation of $v$-domains at initial stages of deposition. At $s>0$, the $v$-domains merge into percolating clusters when $s>s_c$. For well oriented systems (at $s\approx 0.5-0.7$), the jamming structures in RSA model are large 1d-like $v$-domains with small inclusions of $h$-domains and voids. 
In RRSA model, the order parameter keeps the level of approximately $s_0\approx s$ (Fig. \[fig:s\_for\_iter\]). The packing of $k$-mers in the form of $h$-and $v$-domains arising at initial stages of RRSA deposition is sparser than for RSA model. As a result, the $v$-domains merge into percolating clusters at higher level of $s_c$ in RRSA model than in RSA model. Finally, for the well oriented systems (at $s\approx 0.5$–$0.7$), the jamming structures of RRSA model include higher number of $h$-domains with large voids between them than those of RSA model. The effective pore radius $r$ in the jammed structures is calculated as $$\label{eq:rpores} r=(1-p_j)/a,$$ where $1-p_j$ is the specific area of pores, $a=A/L^2$ is the specific total perimeter of pores, and $A$ is the total perimeter of pores. The same as for jamming concentration $p_j$, the values of $r$ are practically equivalent at $s=0$ and $s=1$ for both RSA and RRSA models. Fig. \[fig:porek\] compares the effective pore radius $r$ *vs.* value of $k$ (a) and jamming concentration $p_j=p_j(L \to \infty)$ (b) at $s=0$ and $s=1$ (RSA and RRSA models). The value of $r$ increases with $k$ and $s$ values growth (Fig. \[fig:porek\](a)). Moreover, direct correlations between the pore radius $r$ and the jamming concentration $p_j$ are observed. Increase of the jamming concentration evidently reflects decrease of the effective pore radius $r$ (Fig. \[fig:porek\](b)). Interesting correlations are also observed between the effective pore radius $r$ and the order parameter $s$ (Fig. \[fig:pores\]). In close analogy with behavior of the jamming concentration $p_j(s)$, dependencies $r$ were rather different for short ($k<4$) and longer $k$-mers. If $k$-mers are short, increase of $s$ results in increase of the effective pore radius $r$. The opposite behavior is observed for longer $k$-mers ($k>8$). At same values of $s$ and $k$, the RSA model demonstrates better jamming packing and smaller effective pore radius. ### Jamming concentration vs. length of $k$-mer For 1d problem, the jamming concentration *vs.* length of $k$-mer can be evaluated exactly as expressed by Eq. \[eq:ue1Djamm\]. Surprisingly enough, this very complex $p_{j}(k)$ dependence may be fitted rather well by a simple power law (Eq. \[eq:power\]) with $p_{j}(\infty)=c_{R}$ [@Renyi1958], $a= 0.2277\pm 0.0019$, $\alpha =1.0111 \pm 0.0023$, $\rho=0.99997$ (coefficient of determination). ![Fractal dimension of the RSA and RRSA jamming networks $d_f$ *vs.* order parameter $s$.[]{data-label="fig:df"}](f13_df.eps){width="\linewidth"} Fig. \[fig:Pvsk\_pow\] shows jamming concentration $p_j$ *vs.* length of linear segment $k$ at different anisotropy $s$ for RSA and RRSA models). It has been found that numerical results may be rather well fitted by a power law function Eq. \[eq:power\]. However, for problem of nonrandomly oriented linear $k$-mers, $p_{j}(\infty)$, $a$ and, $\alpha$ depend on the order parameter $s$ (Table \[tab:pjvsSkoo\]). For instance, for isotropic problem ($s=0$) the obtained data (Table \[tab:pjvsSkoo\]) are in satisfactory agreement with the previously reported data: $p_{j}(\infty)=0.66\pm 0.01$, $a \approx 0.44$ and $\alpha \approx 0.77$ [@KondratPre63]. The fitting parameters obtained for completely ordered $k$-mers, ($s=1$) (Table \[tab:pjvsSkoo\]) are also in satisfactory correspondence with the theoretical predictions for 1d problem. 
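The power-law fit of the exact 1d dependence quoted above is easy to reproduce; the sketch below fixes $p_j(\infty)=c_R$ and fits $a$ and $\alpha$ over $2\le k\le 128$ (the fitted values depend somewhat on the chosen range of $k$, so only rough agreement with the numbers quoted in the text should be expected):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

def pj_1d(k):
    """Exact 1d jamming coverage of linear k-mers, Eq. (ue1Djamm)."""
    j = np.arange(1, k)
    def integrand(u):
        return np.exp(-u - 2.0 * np.sum((1.0 - np.exp(-j * u)) / j))
    return k * quad(integrand, 0.0, np.inf)[0]

c_R = 0.7475979202                                  # Renyi's parking constant

def power_law(k, a, alpha):
    return c_R + a / k**alpha

ks = np.array([2, 3, 4, 8, 16, 32, 64, 128], dtype=float)
pj = np.array([pj_1d(int(k)) for k in ks])
(a, alpha), _ = curve_fit(power_law, ks, pj, p0=(0.23, 1.0))
print(f"a ~ {a:.3f}, alpha ~ {alpha:.3f}")          # cf. a ~ 0.228, alpha ~ 1.011 in the text
```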
We should emphasize that for the studied problem of nonrandomly oriented $k$-mers, the jamming concentration approaches its large-$k$ limit as an inverse power of the linear segment length, *i.e.*, $p_j - p_j(\infty) \propto k^{-\alpha}$, and the exponent $\alpha$ is not universal. Note that power-law behavior with exponent $\alpha\approx 0.2$ was observed for off-lattice RSA of rectangles [@Ziff1990JPhysA], and it was suggested that this indicates a fractal structure of the jamming networks with dimension $d_f=2-\alpha\approx 1.8$. For completely ordered $k$-mers ($s=1$), the value of $d_f$ is close to $1$, as expected for the 1d problem. For disordered systems at $s<1$, the fractal dimension $d_f$ increases for the RRSA model and passes through a minimum for the RSA model when the order parameter $s$ decreases (Fig. \[fig:df\]).

\[sec:concl\]Conclusion
=======================

Two different models describing jamming of partially ordered linear $k$-mers ($k=2$–$128$) with predetermined order parameter $s$ on a square lattice are discussed. In the usual RSA model, the substrate “selects” the $k$-mer with the appropriate orientation, which results in amplification of the actual order parameter, $s_0>s$. In the relaxation RSA model (RRSA), the order parameter remains at the predetermined level, $s_0\approx s$. Similar jamming behavior is observed at different values of $k$ for both RSA and RRSA models in the problems of completely disordered (at $s=0$) and ordered (at $s=1$) systems. Our new simulation results for jamming concentrations $p_j$ are in excellent agreement with simulation data ($s=0$, $k=2,3,4,8$) [@Evans; @Vandewalle; @KondratPre63; @KondratPre64; @Cherkasova] and analytical results [@Krapivsky2010] previously published for the problem of a completely disordered system. For short $k$-mers ($k\leqslant 4$), denser configurations are observed in disordered systems ($s=0$) as compared with completely ordered ones ($s=1$). However, the opposite behavior is observed for longer $k$-mers. In partially oriented systems ($s<1$), the jamming configurations consist of blocks of vertically and horizontally oriented $k$-mers ($v$- and $h$-blocks, respectively) and large voids between them. The $v$-blocks merge into a percolation cluster when the order parameter $s$ exceeds some critical value $s_c$, which depends upon $k$ and the model of deposition. The critical exponent of random percolation, $\nu=4/3$, is observed in the scaling relations. It is demonstrated that in the intermediate region $0<s<1$ the RSA process yields denser jamming packing and a smaller effective pore radius than the RRSA process at the same values of $s$ and $k$. For the RSA model, the value of $p_j(k)$ monotonically decreases ($k\leqslant 4$) or increases ($k>4$) when the order parameter $s$ increases. For the RRSA model, the value of $p_j(k)$ passes through a minimum at $k\geqslant 3$ when $s$ increases. Finally, it is found that the numerical results for the $p_j$ *vs.* $k$ dependence may be fitted rather well by a simple power-law function $p_j - p_j(\infty) \propto k^{-\alpha}$ with parameters dependent on the order parameter $s$. The power exponent $\alpha$ grows when the order parameter $s$ increases, from $\alpha\approx0.72$ at $s=0$ to $\alpha\approx 1$ at $s=1$. The jamming networks display fractal properties, with fractal dimension $d_f\approx 1$ for completely ordered $k$-mers ($s=1$); the value of $d_f$ increases for the RRSA model and passes through a minimum for the RSA model when the order parameter $s$ decreases (Fig. \[fig:df\]).
This work was partially supported by the Russian Foundation for Basic Research, grants no. 09-02-90440-Ukr\_f\_a, 09-08-00822-a, and 09-01-97007-r\_povolzhje\_a.
--- abstract: 'We report on the experimental realization of a conservative optical lattice for cold atoms with sub-wavelength spatial structure. The potential is based on the nonlinear optical response of three-level atoms in laser-dressed dark states, which is not constrained by the diffraction limit of the light generating the potential. The lattice consists of a 1D array of ultra-narrow barriers with widths less than 10 nm, well below the wavelength of the lattice light, physically realizing a Kronig-Penney potential. We study the band structure and dissipation of this lattice, and find good agreement with theoretical predictions. The observed lifetimes of atoms trapped in the lattice are as long as 60 ms, nearly $10^5$ times the excited state lifetime, and could be further improved with more laser intensity. The potential is readily generalizable to higher dimension and different geometries, allowing, for example, nearly perfect box traps, narrow tunnel junctions for atomtronics applications, and dynamically generated lattices with sub-wavelength spacings.' author: - 'Y. Wang' - 'S. Subhankar' - 'P. Bienias' - 'M. Łącki' - 'T-C. Tsui' - 'M. A. Baranov' - 'A. V. Gorshkov' - 'P. Zoller' - 'J. V. Porto' - 'S. L. Rolston' title: 'Dark state optical lattice with sub-wavelength spatial structure' --- [^1] [^2] Coherent control of position and motion of atoms with laser light has been a primary enabling technology in the physics of ultracold atoms. The paradigmatic examples of conservative optical potentials are the optical dipole trap and optical lattices, generated by far off-resonant laser fields, with the ac-Stark shift of atomic levels as the underlying mechanism. The scale and spatial resolution for such optical potential landscapes are determined by the diffraction limit, which is of order the wavelength of the light $\lambda$. This fundamentally limits optical manipulation of atoms. For example, in quantum simulation with cold atoms in optical lattices, the minimum lattice constant is $\lambda/2$, setting the energy scale for Hubbard models for both hopping (kinetic energy) and interaction of atoms, with challenging temperature requirements to observe quantum phases of interest (see [@Gross2017] and references therein). Developing tools to overcome the diffraction limit, allowing coherent optical manipulation of atoms on the sub-wavelength scale, is thus an outstanding challenge. Following recent theoretical proposals [@Dum1996; @Lacki2016; @Jendrzejewski2016] we report below the first experiments demonstrating coherent optical potentials with sub-wavelength spatial structure by realizing a Kronig-Penney type optical lattice with barrier widths below $\lambda/50$. In the quest to beat the diffraction limit, several ideas have been proposed to create coherent optical potentials with sub-wavelength structure. These include Fourier-synthesis of lattices using multiphoton Raman transitions [@Ritt2006; @Salger2007], optical or radio-frequency dressing of optical potentials [@yi08a; @lundblad08a], and trapping in near-field guided modes with nano-photonic systems [@GonzalezTudela2015; @Gullans2012] (although they suffer from decoherence induced by nearby surfaces). An alternative approach uses the spatial dependence of the nonlinear atomic response associated with the dark state of a three-level system [@Gorshkov2008; @Kiffner2008; @Miles2014; @Sahrai2005; @Cho2007; @Kapale2006], as a means to realize sub-wavelength atomic addressing and excitation.
The sub-wavelength resolution arises when optical fields are arranged so that the internal dark state composition varies rapidly (“twists”) over a short length scale. As proposed in [@Lacki2016; @Jendrzejewski2016], such a sub-wavelength twist can also be used to create a conservative potential with narrow spatial extent, due to the energy cost of the kinetic energy term of the Hamiltonian [@Dum1996; @Dutta1999; @Cheneau2008]. Unlike ac-Stark shift potentials, this twist-induced potential is a quantum effect, with magnitude proportional to $\hbar$. Using this effect, we create 1D lattices with barrier widths less than $\lambda/50$, where $\lambda$ is the wavelength of lattice light. This potential realizes the Kronig-Penney (KP) lattice model [@Kronig499]—a lattice of nearly $\delta$-function potentials. We study the band structure and dissipation, and find that the dark state nature of this potential results in suppressed scattering, in good agreement with theoretical models. ![(Color online) Level structures and experimental geometry. **(a)** The three levels in $^{171}$Yb used to realize the dark state are isolated from the fourth $^3$P$_1$, $m_F=+1/2$ state by a large magnetic field. They are coupled by a strong $\sigma^-$ polarized control field $\Omega_c$ (green) and a weak $\pi$ polarized probe field $\Omega_p$ (orange). The resulting dark state is a superposition of the ground states $\ket{g_1}$ and $\ket{g_2}$, with relative amplitudes determined by $ \Omega_c(x)/\Omega_p$. **(b)** Spatial dependence of the dark state composition is created using a standing wave control field $\Omega_c(x)$, and a traveling wave probe field $\Omega_p$. The geometric potential $V(x)$ (black) arises as the dark state rapidly changes its composition near the nodes of the standing wave. **(c)** The two counter-propagating $\sigma^-$ beams creating the standing wave are aligned with a strong magnetic field along $x$, while the $\pi$ beam travels along $y$.[]{data-label="fig:schematic"}](Fig1){width="8.cm"} Our approach is illustrated in Fig. \[fig:schematic\] (a). A three-level system is coupled in a $\Lambda$-configuration by two optical fields: a spatially varying strong control field $\Omega_c(x)= \Omega_c \sin{(k x)}$ and a constant weak probe field $\Omega_p$. The excited state $|e\rangle$ can decay to either ground state $|g_i\rangle$. Within the Born-Oppenheimer (BO) approximation, slowly-moving atoms in the dark state $|E_0(x)\rangle$ are decoupled from $|e\rangle$, where $|E_0(x)\rangle = \sin(\alpha) |g_1\rangle-\cos(\alpha) |g_2\rangle$, and $\alpha(x) = \arctan[\Omega_c(x)/\Omega_p]$ [@Lacki2016]. The two bright states $E_{\pm}(x)$ have excited state component $|e\rangle$, leading to light scattering. As shown in Fig. \[fig:schematic\] (b), the fields are arranged in such a way that the dark state changes composition over a narrow region in space, depending on the ratio $\epsilon= \Omega_p / \Omega_ c$. The kinetic energy associated with this large gradient in the spin wavefunction gives rise to a conservative optical potential $V(x)$ [@Lacki2016; @Jendrzejewski2016] for atoms in $|E_0(x)\rangle$, $$V(x)=\frac{\hbar^2}{2m} \left(\frac{d\alpha}{dx}\right)^2 =E_R\frac{\epsilon^2 \textrm{cos}^2(kx)}{[\epsilon^2+\textrm{sin}^2(kx)]^2}$$ where $k=2\pi/\lambda$, $E_R=\hbar^2 k^2/2m$ is the recoil energy, $m$ is the mass of the atom. 
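In the regime $\epsilon \ll 1$, a short consistency check on this expression makes the barrier scalings explicit. Near a node of the standing wave (taking $x=0$), $\sin(kx)\simeq kx$ and $\cos(kx)\simeq 1$, so that $$V(x) \simeq E_R\,\frac{\epsilon^2}{[\epsilon^2+(kx)^2]^2}, \qquad V(0)=\frac{E_R}{\epsilon^2}, \qquad \textrm{FWHM} = \frac{\sqrt{\sqrt{2}-1}}{\pi}\,\lambda\epsilon \approx 0.2\,\lambda\epsilon,$$ where the half-maximum points satisfy $kx_{1/2}=\pm\,\epsilon\sqrt{\sqrt{2}-1}$. This reproduces the $1/\epsilon^2$ barrier-height and $0.2\lambda\epsilon$ barrier-width scalings quoted below.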
The potential $V(x)$ can be viewed as arising from non-adiabatic corrections to the BO potential [@Lacki2016; @Jendrzejewski2016] or artificial scalar gauge potential [@Cheneau2008; @Dalibard2011; @Goldman2014]. When $\epsilon\ll 1$ this creates a lattice of narrow barriers spaced by $\lambda/2$, with the barrier height scaling as $1/\epsilon^2$, and the full width half maximum scaling as $0.2 \lambda \epsilon$ (Fig. \[fig:schematic\](b)). The potential $V(x)$ exhibits several properties that distinguish it from typical optical potentials based on ac-Stark shifts: (1) the explicit dependence on $\hbar$, via the recoil energy $E_R$, reveals the quantum nature of $V(x)$ arising from the gradient in the atomic wavefunction, whereas a typical optical potential can be described entirely classically as an induced dipole interacting with the electric field of the laser; (2) since gradients in wavefunctions always cost energy, $V(x)$ is always repulsive; (3) the geometric nature of the potential results in it being only dependent on $\epsilon$. By deriving both fields from the same laser it is relatively insensitive to technical noise; and (4) unlike near-field guided modes [@GonzalezTudela2015; @Gullans2012], our scheme works in the far field, thus avoiding the decoherence associated with the proximity of surfaces. We realize the $\Lambda$-configuration using three states selected from the $^1$S$_0$, $F=1/2$ and $^3$P$_1$, $F=1/2$ hyperfine manifolds in $^{171}$Yb. The two $^1$S$_0$ ground states $m_F=\pm1/2$ comprise the lower two states $|g_1\rangle$ and $|g_2\rangle$ (see Fig. \[fig:schematic\] (a)). The $^3$P$_1$, $m_F=-1/2$ state, with inverse lifetime $\Gamma=2\pi\times182$ kHz, makes up the third state $|e\rangle$ in the $\Lambda$-configuration. The $|g_i\rangle \rightarrow |e\rangle$ transitions are isolated from transitions to the other $^3$P$_1$, $m_F=+1/2$ states by applying a 12 mT magnetic field $\vec{\bm{B}}$ to Zeeman split the two $^3$P$_1$ states by $\Delta_B = 1.8\times 10^3 ~\Gamma$. The same field only slightly splits the $^1$S$_0$ ground states by -0.5 $\Gamma$ due to the small nuclear magnetic moment. The standing-wave control field $\Omega_c(x)$, traveling along $\vec{\bm{B}}$, is produced by two counter-propagating $\sigma^-$ polarized laser beams that couple the $|g_2\rangle$ and $|e\rangle$ states with independently controlled amplitudes $\Omega_{c1} e^{i k x}$ and $\Omega_{c2} e^{-i k x}$. A third beam, $\pi$ polarized and traveling normal to $\vec{\bm{B}}$, couples the $|g_1\rangle$ and $|e\rangle$ states with amplitude $\Omega_p e^{i k y}$. The laser frequency of the control and probe beams can be chosen to set the single-photon and two-photon detunings, $\Delta$ and $\delta$. We define $\delta=0$ as the dark state condition for the isolated three-level system, accounting for the Zeeman splitting. Off-resonant couplings to other states can introduce light shifts which require nonzero $\delta$ to maintain the dark state condition. We create an ultracold $^{171}$Yb gas by sympathetically cooling it with Rb atoms in a bichromatic crossed dipole trap [@Vaidya2015; @Herold2012]. After Yb atoms are collected in the trap with a temperature of $\simeq300$ nK ($T/T_F= 1.10$, where $T_F$ is the Fermi temperature), the magnetic field in the $x$ direction is ramped up in 100 ms to 12 mT, removing Rb atoms from the trap. 
The Yb atoms are then optically pumped into $|g_1\rangle$ using a 50 ms pulse from one of the control beams, resulting in $\simeq 1.5\times10^5$ Yb atoms polarized. The small $^{171}$Yb scattering length (-3$a_0$, with $a_0$ the Bohr radius), plus the lack of $s$-wave scattering in polarized fermions allow us to safely neglect interactions. The Rabi frequencies of each of the three beams are calibrated by measuring the two-photon Rabi frequencies from $|g_1\rangle \rightarrow |g_2\rangle$ at large $\Delta$ with different pairs of beams. The laser polarization purity and alignment to $\vec{\bm{B}}$ are carefully optimized, such that the residual fraction of wrong polarization measured in Rabi frequency is less than 0.5%. To load Yb atoms into the ground band of the dark state lattice, we first populate the spatially homogeneous dark state by ramping on $\Omega_{c1}$ followed by $\Omega_p$, and then adiabatically ramp on $\Omega_{c2}$ in 1 ms, creating the lattice. We measure the momentum distribution using a band mapping sequence [@Kastberg1995], by first ramping off $\Omega_{c2}$ in 0.5 ms, and then suddenly turning off all the other light fields. We then take absorption images after time-of-flight (TOF) along $y$ to measure the momentum along $x$ and $z$. See [@supplementary] for further details. ![(Color online) **(a)** Band mapping results for atoms loaded into the dark state lattice with three beams (upper), and with only $\Omega_c$ beams (lower). The white traces show the integrated momentum distribution in each direction ($x$ is the lattice direction). **(b,c)** Band spectroscopy: in (c) we plot the TOF column density integrated over $z$ after shaking the lattice vs. the shaking frequency; in (b) we plot the fraction of the population excited to the $p$-band (dark green) and $d$-band (magenta) Brillouin zones (see (c)) vs. shaking frequency. Gaussian fits (colored lines in (b)) are used to determine the center frequency and the width of the transition. **(d)** Band spacing scaling: $E_{n+1}-E_n$ is plotted vs. the band index $n$ of a dark state lattice with $\Omega_c=70\Gamma$, $\Omega_p=10\Gamma$, $\Delta=22\Gamma$ and $\delta=0$. The grey vertical bars indicate the transition width inferred from the measurements, while the green rectangles are predictions of the expected band spacings and widths [@supplementary]. []{data-label="fig:shake"}](Fig2){width="8cm"} The existence of lattice structure of $V(x)$ leads to Brillouin zones (BZ), visible in TOF images taken after band mapping. Since $k_{\textrm{B}} T$ is less than the band gap, the population is predominantly in the first BZ and distinct band edges are visible (upper panel in Fig. \[fig:shake\] (a)). The lower panel shows the result with no probe beam, where we find a nearly Gaussian distribution in the lattice direction. We also see nearly Gaussian distributions for atoms loaded in the other two-beam configurations: $\Omega_{c1}\ \&\ \Omega_p$ and $\Omega_{c2}\ \&\ \Omega_p$. For small $\epsilon$, this lattice maps to a 1D KP model. One characteristic feature of the KP lattice is that the energy of the $n$th-band scales as $n^2E_R$, such that the band spacing [*[increases]{}*]{} with $n$. In contrast, in a deep sinusoidal lattice the band spacing [*[decreases]{}*]{} with $n$. To map out the band structure, we excite atoms from the ground ($s$-) band into the higher bands by shaking the lattice using phase modulation of one of the $\sigma^-$ beams. 
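The expected $n^2 E_R$ scaling can also be illustrated numerically. The sketch below is an illustration only, not the calculation behind Fig. \[fig:shake\]: it treats the dark-state potential $V(x)$ alone in a plane-wave basis, neglecting the bright-state couplings and light shifts of [@supplementary], and prints the lowest band energies at the Brillouin-zone centre for $\epsilon\approx 0.14$.

```python
# Minimal sketch: band structure of the dark-state lattice potential
#   V(x)/E_R = eps^2 cos^2(kx) / [eps^2 + sin^2(kx)]^2,   period lambda/2,
# by plane-wave diagonalisation.  Energies are measured in E_R and wavevectors
# in k, so the plane wave e^{i(q + 2n)kx} has kinetic energy (q + 2n)^2.
# Illustration only: bright-state couplings and light shifts are neglected.
import numpy as np

eps = 0.14                      # Omega_p / Omega_c, comparable to Fig. 2
nmax = 60                       # plane waves with |n| <= nmax
ngrid = 4096                    # real-space samples over one period

xi = np.pi * np.arange(ngrid) / ngrid                     # k*x over one period
V = eps**2 * np.cos(xi)**2 / (eps**2 + np.sin(xi)**2)**2
Vc = np.fft.fft(V) / ngrid                                # Fourier components V_m

ns = np.arange(-nmax, nmax + 1)
q = 0.0                                                   # Brillouin-zone centre
H = np.diag((q + 2.0 * ns) ** 2).astype(complex)
for i, na in enumerate(ns):
    for j, nb in enumerate(ns):
        H[i, j] += Vc[(na - nb) % ngrid]

E = np.sort(np.linalg.eigvalsh(H))
print("lowest band energies (E_R):", np.round(E[:6], 2))
print("adjacent spacings (E_R):   ", np.round(np.diff(E[:6]), 2))
# The spacings grow with band index, the Kronig-Penney signature, in contrast
# to a deep sinusoidal lattice where they shrink.
```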
After band mapping we measure the band populations, which become spatially separated after TOF (see Fig. \[fig:shake\] (c)). Fig. \[fig:shake\] (b) plots the frequency-dependent excitation into the first ($p$-) and second ($d$-) excited bands for $\epsilon=0.14$, extracted from the data in Fig. \[fig:shake\] (c). The $s\rightarrow d$ excitation arises from a two-step process involving the intermediate $p$-band. We map out the band structure up to the $g$-band and plot the energy differences for adjacent bands (see Fig. \[fig:shake\] (d)), which increases monotonically with $n$. The green rectangles show the theoretical band spacings and widths, calculated from a model that includes both the light shifts from states outside the three-level system [@supplementary], and mixing with the bright states. ![(Color online) Band structure scalings. Energies of the $p$- and the $d$-bands with respect to the $s$-band are plotted. **(a)** Vary $\epsilon$: $\Omega_c=100\Gamma$, $\Omega_p=5 \Gamma-20\Gamma$, $\Delta=22\Gamma$, and $\delta=0$. Dashed lines indicate the allowed transition energies predicted from modeling $V(x)$ alone, while the shaded regions are from a model including couplings to the bright states. Upper panels show representative potentials for the dark state (green) and bright state (blue). At $\epsilon= 0.075$, the bright/dark states are no longer good basis states because of the strong coupling between them. **(b)** Vary $\delta$: $\Omega_c=70\Gamma$, $\Omega_p=10\Gamma$, $\Delta=22\Gamma$. Upper panels show calculated dark state potentials for positive and negative $\delta$. []{data-label="fig:band"}](Fig3){width="8cm"} Another property of a KP lattice is that in the deep lattice limit, its band structure is almost independent of the barrier strength (defined as the area under the potential for a single barrier), which scales with $1/\epsilon$. The band spacings for different $\epsilon$ are plotted in Fig. \[fig:band\] (a) for fixed $\Omega_c=100\Gamma$ and $\Omega_p$ varied from $5 -20\Gamma$. As expected, the band spacings are almost independent of $\epsilon$, even though the probe power varies by an order of magnitude. The upper panels of Fig. \[fig:band\] (a) show the potentials of the upper bright state (blue) and dark state (green) for three $\epsilon$. For $\epsilon \leq 0.1$, mixing between $E_0(x)$ and $E_{\pm}(x)$ states introduces an avoided crossing and modifies the band structure, reducing the band spacing. For $\epsilon\sim0.1$ we realize a barrier width of 10 nm with minimal coupling to the bright state. The shaded regions are predictions based on a model that takes such bright state couplings into account, which are in better agreement with the measured band spacings, compared to the model that has no such couplings (dashed line). We attribute the discrepancy between theory and experiment to the residual polarization imperfections, calibration errors in the optical intensity, and limitations of band spectroscopy. We note that the theory predicts a vanishing band width near $\epsilon\simeq0.125$ and the growth of the bandwidth at even smaller $\epsilon$, due to the interference of the dark state and bright state mediated tunneling [@supplementary]. The data we present so far are taken under the dark state condition ($\delta=0$). Away from $\delta=0$, the state is no longer completely dark and it experiences an additional periodic potential with amplitude $\delta$ [@supplementary; @Bienias2017] (Fig. \[fig:band\] (b)). 
This additional potential perturbs the KP lattice and modifies the band structure. We verify this effect by measuring the band spacings as a function of $\delta$ (Fig. \[fig:band\] (b)), and find it agrees with the prediction (shaded area), with the systematic deviation likely coming from the same factors as in Fig. \[fig:band\] (a). ![(Color online) **(a)** Lifetime of dark state lattice, $\tau$, scaled by the excited state lifetime $\Gamma^{-1}$ vs. $\Delta$, with $\Omega_c=70\Gamma$, $\Omega_p=10\Gamma$, and $\delta=0$. Inset: lifetime of the dark state in spatially homogeneous control fields, with $\Omega_{c1}=35\Gamma$, $\Omega_{c2}=0$, $\Omega_p=10\Gamma$, and $\delta=0$. Upper three panels: the two bright state potentials $E_-(x)$ (red) and $E_+(x)$ (blue), and the dark state potential (green) at different $\Delta$. **(b)** Lifetime vs. $\Omega_p$ in a dark state lattice where $\epsilon=0.2$ and $\Delta=0$. The solid black lines are predictions scaled with a factor 2.2 (except for (a) inset, where no scaling is applied). The error bars represent one standard deviation uncertainty from fitting the population decay data. []{data-label="fig:lifetime"}](Fig4){width="8cm"} Finally, we study dissipation in this lattice. The non-adiabatic corrections to the BO potential that give rise to $V(x)$ also weakly couple the dark state with the bright states, which leads to light scattering, heating the atoms out of the trap. We measure the lifetime, $\tau$, in a dark state lattice (Fig. \[fig:lifetime\] (a)) for different $\Delta$, and find it significantly longer for $\Delta>0$ than for $\Delta<0$. This is in contrast to an optical lattice based on ac-Stark shifts, where the heating rate is independent of the sign of $\Delta$ [@Gordon1980; @Gerbier2010]. To intuitively understand this asymmetry, we use the model described in [@Jendrzejewski2016] and note that the coupling to the bright states mostly takes place inside the barrier. An atom can scatter light by admixing with the bright states $E_{\pm}(x)$ (approximately $\Delta$ independent) or exiting into the energy-allowed $E_-(x)$ state via non-adiabatic couplings (strongly $\Delta$ dependent). The $E_-(x)$ state (red, Fig. \[fig:lifetime\] (a), upper panels) contributes more to the loss, explaining the $\Delta$ asymmetry. The result of the model [@supplementary] is depicted as the black solid line, with an empirical scale factor of 2.2 applied to the theory to account for the unknown relationship between the scattering rate and loss rate ($1/\tau$). The lifetime in a homogeneous control field measured when one of the $\Omega_c$ beams is blocked, is shown in Fig. \[fig:lifetime\] (a) inset. The $\tau\sim4\times10^5/\Gamma$ lifetime is almost independent of $\Delta$ as theory would predict, and is 70% of the expected lifetime due to non-adiabatic coupling to the bright states and off-resonant scattering from states outside the three-level system. The non-adiabatic bright state coupling also leads to a counter-intuitive dependence of the dissipation on the laser power. Fig. \[fig:lifetime\] (b) shows the lifetime at constant barrier height (fixed $\epsilon$) as a function of Rabi frequencies. Remarkably, the lifetime [*[increases]{}*]{} with Rabi frequency. In contrast, for a regular optical lattice at a fixed detuning the lifetime due to scattering does not improve with more laser power. For the dark state lattice, larger $\Omega_{c,p}$ increases the energy separations between BO potentials, resulting in decreased scattering. 
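These Born-Oppenheimer energies are straightforward to visualise. The sketch below is an illustration only, not the loss model of [@supplementary]: it diagonalises the standard rotating-wave $\Lambda$-system Hamiltonian at two-photon resonance, with decay neglected, to obtain the bright-state potentials $E_\pm(x)$ alongside the geometric dark-state barrier, for one representative parameter set.

```python
# Minimal sketch: Born-Oppenheimer potentials of the driven three-level system
# with a standing-wave control field, at two-photon resonance (delta = 0).
# Assumes the standard rotating-wave Lambda Hamiltonian
#   H/hbar = [[0, 0, Omega_p/2], [0, 0, Omega_c(x)/2],
#             [Omega_p/2, Omega_c(x)/2, -Delta]]
# with decay neglected.  Energies in units of Gamma, position in units of 1/k.
import numpy as np

Omega_c, Omega_p, Delta = 70.0, 10.0, 22.0    # one representative parameter set
eps = Omega_p / Omega_c

x = np.linspace(-np.pi / 2, np.pi / 2, 2001)  # one lattice period, k*x
Oc = Omega_c * np.sin(x)                      # standing-wave control field

E_minus, E_plus = [], []
for ocx in Oc:
    H = np.array([[0.0, 0.0, Omega_p / 2],
                  [0.0, 0.0, ocx / 2],
                  [Omega_p / 2, ocx / 2, -Delta]])
    ev = np.linalg.eigvalsh(H)                # ascending; dark state sits at 0
    E_minus.append(ev[0])
    E_plus.append(ev[-1])

# Geometric dark-state barrier V(x) = (hbar^2/2m)(d alpha/dx)^2, in units of
# E_R, with alpha(x) = arctan(Omega_c(x)/Omega_p).
alpha = np.arctan2(Oc, Omega_p)
V_geo = np.gradient(alpha, x) ** 2

i0 = len(x) // 2                              # index of the node, x = 0
print("V(0)/E_R =", round(float(V_geo[i0]), 1),
      "(compare 1/eps^2 =", round(1 / eps**2, 1), ")")
print("bright-state energies at the node (Gamma):",
      round(float(E_minus[i0]), 2), round(float(E_plus[i0]), 2))
```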
In general the lifetime improves with more laser power and at blue detuning. However, couplings to $E_+(x)$ adversely affect the barrier height (similar to the case with $\epsilon\ll1$ in Fig. \[fig:band\] (a)). With a realistic increase in laser intensity, we can potentially improve the lifetime by an order of magnitude while maintaining the ultra-narrow barriers. The conservative nanoscale optical potential demonstrated here adds to the toolbox of optical control of atoms, enabling experiments requiring sub-wavelength motional control. Such sharp potential barriers could be useful for the creation of narrow tunnel junctions for quantum gases [@Eckel2014] or for building sharp-wall box-like traps [@Gaunt2013]. In addition, spin and motional localization on small length scales can enhance the energy scale of weak, long range interactions [@Lacki2016]. The dark state lattice is readily generalizable to 2D and, for example, can be used to study Anderson localization with random barrier strengths [@Morong2015]. By stroboscopically shifting the lattice [@Nascimbene2015], the narrow barriers should enable the creation of optical lattices with spacings much smaller than the $\lambda/2$ spacing set by the diffraction limit, which would significantly increase the characteristic energy scales relevant for interacting many-body atomic systems.

Acknowledgments {#acknowledgments .unnumbered}
===============

We thank Victor M. Galitski for stimulating discussions and Luis A. Orozco for a close reading of the manuscript. Y.W., S.S., T-C. T., J.V.P., and S.L.R. acknowledge support by NSF PFC at JQI and ONR. P.B. and A.V.G.
acknowledge support by NSF PFC at JQI, AFOSR, ARL CDQI, ARO, ARO MURI, and NSF QIS. M.Ł. acknowledges support of the National Science Centre, Poland via project 2016/23/D/ST2/00721. M.Ł., M.A.B., and P.Z. acknowledge support from the ERC Synergy Grant UQUAM, the Austrian Science Fund through SFB FOQUS (FWF Project No. F4016-N23), and EU FET Proactive Initiative SIQS.

[^1]: These two authors contributed equally

[^2]: These two authors contributed equally
--- abstract: | We define a one-parameter family of entropies, each assigning a real number to any probability measure on a compact metric space (or, more generally, a compact Hausdorff space with a notion of similarity between points). These entropies generalise the Shannon and Rényi entropies of information theory. We prove that on any space $X$, there is a single probability measure maximising all these entropies simultaneously. Moreover, all the entropies have the same maximum value: the *maximum entropy* of $X$. As $X$ is scaled up, the maximum entropy grows; its asymptotics determine geometric information about $X$, including the volume and dimension. We also study the large-scale limit of the maximising measure itself, arguing that it should be regarded as the canonical or uniform measure on $X$. Primarily we work not with entropy itself but its exponential, called diversity and (in its finite form) used as a measure of biodiversity. Our main theorem was first proved in the finite case by Leinster and Meckes [@LeinsterMaximizing2016]. author: - 'Tom Leinster[^1] Emily Roff[^2]' bibliography: - 'maxent2.bib' nocite: '\nocite{}' title: The maximum entropy of a metric space --- Introduction {#S_intro} ============ This paper introduces and explores a largely new invariant of compact metric spaces, the maximum entropy. Intuitively, it measures how much room a probability distribution on the space has available to spread out. Maximum entropy has several claims to importance. First, it is the maximal value of not just *one* measure of entropy, but an *uncountable infinity* of them. It is a theorem, proved here, that they all have the same maximum. Second, these entropies have already been found useful and meaningful in the life sciences, where they (or rather their exponentials) are interpreted as measures of biological diversity [@LeinsterMeasuring2012; @VeresoglouFungal2014]. Third, the exponential of maximum entropy—called maximum diversity—plays a similar role for metric spaces as cardinality does for sets. Like cardinality, it increases when the space is made bigger (either by adding new points or increasing distances), and in the special case of a finite space where all distances are $\infty$, it is literally the cardinality. Maximum diversity at large scales is also closely related to volume and dimension (themselves analogues of cardinality), as explained below. #### Measuring diversity The backdrop for the theory is a compact Hausdorff topological space $X$, equipped with a way to measure the similarity between each pair of points. This data is encoded as a *similarity kernel*: a continuous function $K: X \times X \to [0, \infty)$ taking strictly positive values on the diagonal. We call the pair $(X,K)$ a *space with similarities*. In a metric space, we naturally view points as similar if they are close together, and we define a similarity kernel $K$ by $K(x,y) = e^{-d(x,y)}$. Of course, there are other possible choices of kernel, but this particular choice proves to be a wise one for reasons explained in Example \[metric\_1\]. For simplicity, in this introduction we focus on the case of metric spaces rather than fully general spaces with similarity. We would like to quantify the extent to which a probability distribution on a metric space is spread out across the space, in a way that is sensitive to distance. A thinly spread distribution will be said to have ‘high diversity’, or equivalently ‘high entropy’. Definitions are given later; here we just describe the intuitive idea. 
![Three probability measures on a subset of the plane, shown in panels (a), (b) and (c). Dark regions indicate high concentration of measure. For example, the measure in [(a)]{} gives high probability to points near the middle of the space.[]{data-label="fig:distros"}](newpotato_1 "fig:"){width="36mm"} ![](newpotato_3 "fig:"){width="36mm"} ![](newpotato_2 "fig:"){width="36mm"}

Figure \[fig:distros\] depicts three distributions on the same space. Distribution [(a)]{} is the least diverse, with most of its mass concentrated in a small region. Distribution [(b)]{} is uniform, and might therefore seem to be the most diverse or spread out distribution possible. However, there is an argument that distribution [(c)]{} is more diverse. In moving from [(b)]{} to [(c)]{}, some of the mass has been pushed out to the ends, so a pair of points chosen at random according to distribution [(c)]{} may be more likely to be far apart than when chosen according to [(b)]{}. One can indeed define diversity in terms of the expected proximity between a random pair of points. But that is just one of an infinite family of ways to quantify diversity, each of which captures something slightly different about how a distribution is spread across the space. To define that family of diversity measures, we first introduce the notion of the *typicality* of a point with respect to a distribution. Given a compact metric space $X$, a probability measure $\mu$ on $X$, and a point $x \in X$, we regard $x$ as ‘typical’ of $\mu$ if a point chosen at random according to $\mu$ is usually near to $x$, and ‘atypical’ if it is likely to be far away. Formally, define a function $K\mu$ on $X$ by $$(K\mu)(x) = \int e^{-d(x,\cdot)} {\,\mathrm{d}}\mu.$$ We call $(K\mu)(x)$ the typicality of $x$, and $1/(K\mu)(x)$ its atypicality. A distribution is widely spread across $X$ if most points are distant from most of the mass—that is, if the atypicality function $1/K\mu$ takes large values on most of $X$. A reasonable way to quantify the diversity of a probability measure $\mu$, then, is as the average atypicality of points in $X$. Here the ‘average’ need not be the arithmetic mean, but could be a power mean of any order. Thus, we obtain an infinite family $(D_q^K)_{q \in [-\infty, \infty]}$ of diversities. Explicitly, for $q \neq 1, \pm \infty$, we define the diversity of order $q$ of $\mu$ to be $$D_q^K(\mu) = \left( \int \left( 1/K\mu \right)^{1-q} {\,\mathrm{d}}\mu \right)^{1/(1-q)},$$ while at $q=1$ and $q = \pm \infty$ this expression takes its limiting values. The entropy $H_q^K(\mu)$ of order $q$ is $\log D_q^K(\mu)$: entropy is the logarithm of diversity.

#### Diversity and entropy

Any finite set can be given the structure of a compact metric space by taking all distances between distinct points to be $\infty$. The similarity kernel $K = e^{-d(\cdot, \cdot)}$ is then the Kronecker delta $\delta$.
In this trivial case, the entropy $H_q^\delta$ is precisely the Rényi entropy of order $q$, well-known in information theory. In particular, $H_1^\delta$ is Shannon entropy. Entropy is an important quantitative and conceptual tool in many fields, including in mathematical ecology, where the exponentials $D_q^\delta$ of the Rényi entropies are known as the Hill numbers and used as measures of biological diversity [@HillDiversity1973]. However, the Hill numbers have a serious deficiency. They fail to reflect a fundamental intuition about diversity, namely that, all else being equal, a biological community is regarded as more diverse when the species are very different than when they are very similar. To repair this deficiency, one can equip the set of species in an ecological community with a kernel (or matrix) $K$ recording their pairwise similarities. The choice $K = \delta$ represents the crude assumption that each species in the community is completely dissimilar to each other species. Using this data, one can define generalised Hill numbers, sensitive to species similarity. These are the similarity-sensitive diversity measures of [@LeinsterMeasuring2012], of which our measures are the continuous generalisation. #### A maximisation theorem Crucially, when comparing the diversity of distributions, different values of the parameter $q$ lead to different results. That is, given a collection $M$ of probability distributions on a metric space and given distinct $q, q' \in [0, \infty]$, the diversities $D_q^K$ and $D_{q'}^K$ generally give different orderings to the elements of $M$. Examples in the biological setting can be found in Section 5 of [@LeinsterMeasuring2012]. The surprise of our main theorem () is that when it comes to *maximising* diversity on a compact metric space $X$, there is consensus: there is guaranteed to exist some probability measure $\mu$ on $X$ that maximises $D_q^K(\mu)$ for every nonnegative $q$ at once. Moreover, the diversity of order $q$ of a maximising distribution is the same for all $q \in [0, \infty]$. Thus, one can speak unambiguously about the maximum diversity of a compact metric space $X$—defined to be $${D_{\max}(X)} = \sup_{\mu} D_q^K(\mu)$$ for any $q \in [0,\infty]$—knowing that there exists a probability distribution attaining this supremum for all orders $q$. extends to compact spaces a result that was established for finite spaces in [@LeinsterMaximizing2016]. (Note that the maximising measure on a finite metric space is not usually uniform.) While the proof of the result for compact spaces follows broadly the same strategy as in the finite case, substantial analytic issues arise. #### Geometric connections The maximum diversity theorem has geometric significance, linking diversity measures to the intrinsic volumes of classical convex geometry and to geometric measure theory. Roughly speaking, maximum diversity provides a measure of the *size* of a metric space. More specifically, Corollary \[cor:comp\] of our main theorem connects maximum diversity with another, more extensively studied invariant of a metric space: its magnitude. First introduced as a generalised Euler characteristic for enriched categories ([@LeinsterEuler2008; @Leinstermagnitude2013]), magnitude specialises to metric spaces by way of Lawvere’s observation that metric spaces are enriched categories [@LawvereMetric1973]. 
The magnitude $|X| \in {{\mathbb}{R}}$ of a metric space $X$ captures a rich variety of classical geometric data, including some intrinsic volumes of Riemannian manifolds and of compact sets in $\ell_1^n$ and ${\mathbb}{R}^n$. The definition of magnitude and a few of its basic properties are given in Sections \[S\_magnitude\] and \[S\_metric\] below; a detailed survey can be found in [@Leinstermagnitude2017]. In Sections \[S\_prep\_lemmas\] and \[S\_main\_theorem\] we show that the maximum diversity of a compact space is the magnitude of a certain subset: the support of any maximising measure. In we use this fact, and known facts about magnitude, to establish a handful of examples of maximum diversity for metric spaces. Many results on magnitude are asymptotic, in the following sense. Given a space $X$ with metric $d$, and a positive real number $t$, define the scaled metric space $tX$ to be the set $X$ equipped with the metric $t \cdot d$. It has proved fruitful to consider, for a fixed metric space $X$, the entire family of spaces $(tX)_{t > 0}$ and the (partially-defined) magnitude function $t \mapsto |tX|$. For instance, in [@Barcelomagnitudes2018], Barceló and Carbery showed that the volume of a compact subset of ${{\mathbb}{R}}^n$ can be recovered as the leading term in an asymptotic expansion of its magnitude function, while in [@Gimperleinmagnitude2017], Gimperlein and Goffeng showed (subject to technical conditions) that lower order terms capture surface area and the integral of mean curvature. Given this, and given the relationship between magnitude and maximum diversity, it is natural to consider the function $t \mapsto {D_{\max}(tX)}$. Indeed, the asymptotic properties of maximum diversity have already been shown to be of geometric interest. In [@MeckesMagnitude2015], Meckes defined the maximum diversity of a compact metric space to be the maximum value of its diversity of order 2, and used this definition—now vindicated by our main theorem—to prove the following relationship between maximum diversity and Minkowski dimension: \[minkowski\] Let $X$ be a compact metric space, and let $\dim_{\mathrm{Mink}}(X)$ denote the Minkowski dimension of $X$. Then $$\lim_{t \to \infty} \frac{\log {D_{\max}(tX)}}{\log t} = \dim_{\mathrm{Mink}}(X),$$ and the left-hand side is defined if and only if the right-hand side is defined. That is, the Minkowski dimension of $X$ is the growth rate of ${D_{\max}(tX)}$ for large $t$. Proposition \[prop:preufm-euc\] below is a companion result for the volume of sets $X \subseteq {{\mathbb}{R}}^n$: $$\lim_{t \to \infty} \frac{{D_{\max}(tX)}}{t^n} \propto \textrm{Vol}(X).$$ In short, maximum diversity determines dimension and volume. #### Entropy and uniform measure Taking logarithms throughout, the maximum diversity theorem tells us that every compact metric space admits a probability measure maximising the entropies $H_q^K$ of all orders $q$ simultaneously. Statisticians have long recognised that maximum entropy distributions are special. However, it is not necessarily helpful to view the maximum entropy measure on a metric space $X$ as being the ‘canonical measure’, since it is not scale-invariant: if we multiply all distances in $X$ by a constant factor $t$, the maximising measure changes. In Section \[S\_uniform\] we propose a canonical, scale-invariant, choice of probability measure on each of a wide class of metric spaces, and call it the *uniform measure*. It is the limit as $t \to \infty$ of the maximum entropy measure on $tX$. 
We use the examples of maximum diversity established in to show that, in several familiar cases, this definition succeeds in capturing the classical notion of uniform distribution. #### Conventions {#conventions .unnumbered} Throughout, a **measure** on a topological space means a Radon measure. All measures are positive, unless stated otherwise. A function $f: {\mathbb}{R} \to {\mathbb}{R}$ is **increasing** if $f(y) \leq f(x)$ for all $y \leq x$, and **strictly increasing** if $f(y) < f(x)$ for all $y < x$; **decreasing** and **strictly decreasing** are used similarly. #### Acknowledgements {#acknowledgements .unnumbered} We thank Mark Meckes for many useful conversations, and especially for allowing us to include Proposition \[euc-unique\], which is due to him. Thanks also to Christina Cobbold for helpful suggestions. Topological and analytic preliminaries {#S_background} ====================================== In this paper we are concerned with properties of probability measures on a topological space. We begin by collecting the key topological and measure-theoretic facts that will be needed, also taking the opportunity to fix notation. #### Topologising spaces of functions {#topologising-spaces-of-functions .unnumbered} Let $X$ and $Y$ be topological spaces. We write ${\mathbf}{Top}(X,Y)$ for the set of continuous functions from $X$ to $Y$, which can be topologised as follows. For subsets $K \subseteq X$ and $U \subseteq Y$, write $$F(K,U) = \{f \in {\mathbf}{Top}(X,Y) {\, : \,}fK \subseteq U\}.$$ The **compact-open topology** on ${\mathbf}{Top}(X,Y)$ is the topology generated by $F(K,U)$ for all compact $K \subseteq X$ and open $U \subseteq Y$; its properties are described in Section 46 of [@MunkresTopology2000]. The most important property of the compact-open topology involves **locally compact** spaces, that is, those in which every neighbourhood of a point contains a compact neighbourhood. Every compact Hausdorff space is locally compact. \[l.c.exponentiable\] Let $Y$ be a locally compact space, and let $X$ and $Z$ be any topological spaces. A map $f: X \times Y \to Z$ is continuous if and only if the map $\overline{f}: X \to {\mathbf}{Top}(Y,Z)$ given by $\overline{f}(x)(y) = f(x,y)$ is continuous with respect to the compact-open topology. This is Proposition 7.1.5 in [@BorceuxHandbook1994a]. Categorically, this states that locally compact spaces $Y$ are exponentiable: the functor $- \times Y: {\mathbf}{Top} \to {\mathbf}{Top}$ has a right adjoint, given by ${\mathbf}{Top}(Y, -)$ with the compact-open topology. Now let $X$ be a topological space and $Y$ a *metric* space. The set ${\mathbf}{Top}(X, Y)$ carries the metric $d_\infty$ given by $$d_\infty(f,g) = \sup_{x \in X} d(f(x), g(x)).$$ The **uniform topology** on ${\mathbf}{Top}(X, Y)$ is the topology induced by this metric. \[c.o.uniform\] Let $X$ be a compact topological space and $Y$ a metric space. Then the compact-open and uniform topologies on ${\mathbf}{Top}(X, Y)$ are equal. This follows from Theorems 46.7 and 46.8 in [@MunkresTopology2000]. We will only use function spaces ${\mathbf}{Top}(X, Y)$ in which $X$ is compact Hausdorff and $Y$ is metric, and we always understand ${\mathbf}{Top}(X, Y)$ to come equipped with the unambiguous topology of . The case $Y = {{\mathbb}{R}}$ is especially important, and we write $C(X) = {\mathbf}{Top}(X, {{\mathbb}{R}})$. The metric $d_\infty$ on $C(X)$ is induced by the **uniform norm** $\|\cdot\|_\infty$ on $C(X)$, defined by $\|f\|_\infty = \sup_{x \in X} |f(x)|$. 
The uniform topology on function spaces is functorial in the following sense. \[uniform\_functorial\] Let $X$ be a compact topological space, $Y$ and $Y'$ metric spaces, and $\phi: Y \to Y'$ a continuous function. Then the induced map $$\phi \circ - : {\mathbf}{Top}(X, Y) \to {\mathbf}{Top}(X, Y')$$ is continuous with respect to the uniform topology on the domain and codomain. It is elementary consequence of the definitions that $\phi \circ -$ is continuous with respect to the compact-open topology on the domain and codomain. The result then follows from . The preceding results imply: \[uniform\_expbl\] Let $X$ be any topological space, $Y$ a compact Hausdorff space, and $Z$ a metric space. A map $f: X \times Y \to Z$ is continuous if and only if the map $\overline{f}: X \to {\mathbf}{Top}(Y,Z)$ is continuous with respect to the uniform topology. #### Topologising spaces of measures {#topologising-spaces-of-measures .unnumbered} [***Until further notice, let $X$ denote a compact Hausdorff space.***]{} Equip the vector space $C(X)$ with the uniform norm $\|\cdot\|_\infty$. By the Riesz representation theorem, its topological dual $C(X)^*$ is isomorphic to the space $M(X)$ of signed measures on $X$. The dual norm on $M(X)$ is the total variation norm, $\|\mu\| = |\mu|(X)$, and the pairing corresponding to the isomorphism is $\langle f, \mu \rangle = \int_X f {\,\mathrm{d}}\mu$ (where $f \in C(X), \mu \in M(X)$). Being a dual vector space, $M(X)$ carries the **weak\* topology**, which is the coarsest topology such that the map $\langle f, - \rangle: M(X) \to {{\mathbb}{R}}$ is continuous for each $f \in C(X)$. Whenever we invoke a topology on $M(X)$ or one of its subsets, we will always mean this one. It is Hausdorff, and the Banach–Alaoglu theorem states that closed bounded subsets of $M(X)$ are compact. Denote by $P(X)$ the set of probability measures on $X$, and by $P_\leq(X)$ the set of measures $\mu$ such that $\mu(X) \leq 1$. By the Banach–Alaoglu theorem, both $P(X)$ and $P_{\leq}(X)$ are compact and Hausdorff. #### The Riesz pairing {#the-riesz-pairing .unnumbered} The pairing map $$\langle-,-\rangle: C(X) \times M(X) \to {{\mathbb}{R}}$$ is not in general continuous. However, it *is* continuous on $C(X) \times P_\leq(X)$: \[pairing\_cts\] Let $Q$ be a closed bounded subset of $M(X)$. Then: 1. \[part:pairing\_cts\_1\] there is a continuous map $C(X) \to C(Q)$ defined by $f \mapsto \langle f, - \rangle$; 2. \[part:pairing\_cts\_2\] the restriction of the pairing map $C(X) \times M(X) \to {{\mathbb}{R}}$ to $C(X) \times Q$ is continuous. For (\[part:pairing\_cts\_1\]), first note that for each $f \in C(X)$, the map $\langle f, - \rangle : Q \to {{\mathbb}{R}}$ is continuous, by definition of the weak\* topology. To show that the resulting map $C(X) \to C(Q)$ is continuous, let $f,g \in C(X)$. Then $$\|\langle f, - \rangle - \langle g, - \rangle \|_\infty = \sup_{\mu \in Q} |\langle f - g, \mu \rangle| \leq \|f-g\|_\infty \sup_{\mu \in Q} \|\mu\|,$$ and $\sup_{\mu \in Q} \|\mu\|$ is finite as $Q$ is bounded. Part (\[part:pairing\_cts\_2\]) follows from , since $Q$ is compact (by the Banach–Alaoglu theorem) and Hausdorff. #### Supports of functions and measures {#supports-of-functions-and-measures .unnumbered} The **support** of a function $f: X \to [0, \infty)$ is $${\mathrm{supp} \, }f = \{x \in X {\, : \,}f(x) > 0\}.$$ Note that we use this set, rather than its closure. Thus, ${\mathrm{supp} \, }f$ is open when $f$ is continuous. 
Every measure $\mu$ on $X$ has a **support**; that is, there is a smallest closed set ${\mathrm{supp} \, }\mu$ such that $\mu(X \setminus {\mathrm{supp} \, }\mu) = 0$. (Recall our convention that ‘measure’ means ‘positive Radon measure’, and see, for instance, Chapter III, §2, No. 2 of [@BourbakiIntegration1965].) The support is characterised by $${\mathrm{supp} \, }\mu = \{x \in X {\, : \,}\mu(U) > 0 \text{ for all open neighbourhoods \(U\) of \(x\)}\},$$ and has the property that $\int_X f {\,\mathrm{d}}\mu = \int_{{\mathrm{supp} \, }\mu} f {\,\mathrm{d}}\mu $ for all $f \in L^1(X, \mu)$. One of the connections between the two concepts of support is the following. \[disjoint\_supps\] Let $\mu$ be a measure on $X$, and let $f: X \to [0,\infty)$ be a continuous function. Then $${\mathrm{supp} \, }f \cap {\mathrm{supp} \, }\mu \neq \emptyset \iff \int_X f {\,\mathrm{d}}\mu > 0.$$ The forwards implication is Proposition 9 in Chapter III, §2, No. 3 of [@BourbakiIntegration1965], and the backwards implication is trivial. #### Approximations to Dirac measures {#approximations-to-dirac-measures .unnumbered} Suppose that we have fixed a measure $\mu$ on our space $X$. The Dirac measure $\delta_x$ at a point $x$ is not in general absolutely continuous with respect to $\mu$, but it can be approximated by absolutely continuous measures, in the following sense: \[approximate\_delta\] Let $\mu$ be a measure on $X$ and $x \in {\mathrm{supp} \, }\mu$. For each equicontinuous set of functions $E \subseteq C({\mathrm{supp} \, }\mu)$ and each ${\varepsilon}> 0$, there exists a nonnegative function $u \in C(X)$ such that $u\mu$ is a probability measure and for all $f \in E$, $$\left| \int_X f {\,\mathrm{d}}(u\mu) - f(x) \right| \leq {\varepsilon}.$$ By equicontinuity, we can choose a subset $U \subseteq {\mathrm{supp} \, }\mu$, containing $x$ and open in ${\mathrm{supp} \, }\mu$, such that $|f(y) - f(x)| \leq {\varepsilon}$ for all $y \in U$ and $f \in E$. By Urysohn’s lemma, we can choose a nonnegative function $u \in C({\mathrm{supp} \, }\mu)$ such that ${\mathrm{supp} \, }u \subseteq U$ and $u(x) > 0$. Then $\int_{{\mathrm{supp} \, }\mu} u {\,\mathrm{d}}\mu > 0$, so by rescaling we can arrange that $\int_{{\mathrm{supp} \, }\mu} u {\,\mathrm{d}}\mu = 1$. By Tietze’s extension theorem, $u$ can be extended to a nonnegative function continuous on $X$, and then $u\mu$ is a probability measure on $X$. Moreover, for all $f \in E$, $$\begin{aligned} \left| \int_X f {\,\mathrm{d}}(u \mu) - f(x) \right| & = \left| \int_U (f(y) - f(x)) u(y) {\,\mathrm{d}}\mu(y) \right| \\ & \leq {\varepsilon}\int_U u(y) {\,\mathrm{d}}\mu(y) = {\varepsilon},\end{aligned}$$ as required. Any function $G$ on ${{\mathbb}{R}}^n$ gives rise to a family of functions $(G_t)_{t > 0}$ on ${{\mathbb}{R}}^n$, defined by $G_t(x) = t^n G(tx)$. Assuming that $G \in L^1({{\mathbb}{R}}^n)$ and $\int G = 1$, we also have $G_t \in L^1({{\mathbb}{R}}^n)$ and $\int G_t = 1$ for every $t > 0$. The convolution $G_t * \mu$ of $G_t$ with any finite signed measure $\mu$ on ${{\mathbb}{R}}^n$ also belongs to $L^1({{\mathbb}{R}}^n)$ (Proposition 8.49 of [@FollandReal1999]). Writing $\lambda$ for Lebesgue measure on ${{\mathbb}{R}}^n$, the next lemma states that $(G_t * \mu)\lambda$ approximates $\mu$. \[lemma:approx-conv\] Let $G \in L^1({{\mathbb}{R}}^n)$ with $\int_{{{\mathbb}{R}}^n} G {\,\mathrm{d}}\lambda = 1$, and let $f \in C({{\mathbb}{R}}^n)$ be a function of bounded support. 
Then for all probability measures $\mu$ on ${{\mathbb}{R}}^n$, $$\int_{{{\mathbb}{R}}^n} f \cdot (G_t * \mu) {\,\mathrm{d}}\lambda \to \int_{{{\mathbb}{R}}^n} f {\,\mathrm{d}}\mu \quad \text{ as } t \to \infty,$$ uniformly in $\mu$. Define $\tilde{G} \in L^1({{\mathbb}{R}}^n)$ by $\tilde{G}(x) = G(-x)$. It is elementary that $$\int_{{{\mathbb}{R}}^n} f \cdot (G_t * \mu) {\,\mathrm{d}}\lambda - \int_{{{\mathbb}{R}}^n} f {\,\mathrm{d}}\mu = \int_{{{\mathbb}{R}}^n} \bigl( f * \tilde{G}_t - f \bigr) {\,\mathrm{d}}\mu$$ for all finite signed measures $\mu$ on ${{\mathbb}{R}}^n$. Hence when $\mu$ is a probability measure, $$\biggl| \int_{{{\mathbb}{R}}^n} f \cdot (G_t * \mu) {\,\mathrm{d}}\lambda - \int_{{{\mathbb}{R}}^n} f {\,\mathrm{d}}\mu \biggr| \leq \bigl\| f * \tilde{G}_t - f \bigr\|_\infty \to 0$$ as $t \to \infty$, by Theorem 8.14(b) of [@FollandReal1999]. #### Integral power means {#integral-power-means .unnumbered} Here we review the theory of the power means of a real-valued function on an arbitrary probability space $(X, \mu)$ (now temporarily abandoning the convention that $X$ denotes a compact Hausdorff topological space). This is classical material; for example, Chapter VI of Hardy, Littlewood and Pólya [@HardyInequalities1952] covers the case where $X$ is a real interval. The **essential supremum** of a function $f: X \to {{\mathbb}{R}}$ with respect to $\mu$ is $${\mathrm{ess \, sup}}_\mu(f) = \inf \left\{a \in {{\mathbb}{R}}{\, : \,}\mu\left(\{x {\, : \,}f(x) >a\}\right) = 0\right\},$$ and its **essential infimum**, ${\mathrm{ess \, inf}}_\mu(f)$, is defined similarly. A function $f: X \to [0,\infty)$ is **essentially bounded** if ${\mathrm{ess \, sup}}_\mu(f)$ is finite. \[def\_powermean\] Let $(X, \mu)$ be a probability space and let $f: X \to [0, \infty)$ be a measurable function such that both $f$ and $1/f$ are essentially bounded. We define for each $t \in [-\infty, \infty]$ a real number $$M_t(\mu, f) \in (0, \infty),$$ the **power mean of $f$ of order $t$, weighted by $\mu$**, as follows. - For $t \in (-\infty, 0) \cup (0, +\infty)$, $$\begin{aligned} \label{powermean_def_1} M_t(\mu,f) = \left(\int_X f^t {\,\mathrm{d}}\mu \right)^{1/t}. \end{aligned}$$ - For $t = 0$, $$M_0(\mu,f) = \exp\left(\int_X \log f{\,\mathrm{d}}\mu\right).$$ - For $t = \pm\infty$, $$\begin{aligned} M_{+\infty} (\mu, f) &= {\mathrm{ess \, sup}}_\mu f, \\ M_{-\infty} (\mu,f) &= {\mathrm{ess \, inf}}_\mu f. \end{aligned}$$ As we shall see, the definitions in the three exceptional cases are determined by the requirement that the power mean is continuous in its order. In the case where $X$ is a finite set $\{1, \ldots, n\}$, the definition reduces to that of the finite power means (as in Chapter III of [@HardyInequalities1952]). In particular, the mean of order $0$ is the geometric mean: $$M_0(\mu, f) = \prod_{i = 1}^n f(i)^{\mu\{i\}}.$$ \[powermean\_dual\] We have assumed here that $f$ and $1/f$ are essentially bounded, or equivalently that $${\mathrm{ess \, inf}}_\mu(f) > 0, \qquad {\mathrm{ess \, sup}}_\mu(f) < \infty.$$ This assumption guarantees that $f^t \in L^1(X, \mu)$ for all real $t$ and that $M_t(\mu, f) \in (0, \infty)$ for all $t \in [-\infty, \infty]$. 
If $f$ satisfies this assumption then so does $1/f$, and a duality relationship holds: $$M_{-t}(\mu,f) = \frac{1}{M_t(\mu, 1/f)}.$$ Power means are increasing and continuous in their order: \[powermean\_mono\] Let $(X, \mu)$ be a probability space and let $f: X \to [0, \infty)$ be a measurable function such that both $f$ and $1/f$ are essentially bounded. 1. \[part:pm-const\] If there is some constant $c$ such that $f(x) = c$ for almost all $x \in X$, then $M_t(\mu, f) = c$ for all $t \in [-\infty, \infty]$. 2. \[part:pm-strict\] Otherwise, $M_t(\mu, f)$ is strictly increasing in $t \in [-\infty, \infty]$. Part (\[part:pm-const\]) is trivial. Part (\[part:pm-strict\]) is proved in Section 6.11 of [@HardyInequalities1952] in the case where $X$ is a real interval and $\mu$ is determined by a density function, and the proof extends without substantial change to an arbitrary probability space. \[powermean\_cts\] Let $(X, \mu)$ be a probability space and let $f: X \to [0, \infty)$ be a measurable function such that both $f$ and $1/f$ are essentially bounded. Then $M_t(\mu, f)$ is continuous in $t \in [-\infty, \infty]$. Again, this is proved in the case of a real interval in Section 6.11 of [@HardyInequalities1952]. The generalisation to an arbitrary probability space is sketched as Exercise 1.8.1 of [@NiculescuConvex2006], although the hypotheses on $f$ there are weaker than ours, and at $t = 0$ only continuity from the right is proved: $$\lim_{t \to 0+} M_t(\mu, f) = M_0(\mu, f).$$ Under our hypotheses on $f$, continuity from the left then follows from the duality stated in . #### Differentiation under the integral sign {#differentiation-under-the-integral-sign .unnumbered} We will require the following standard result in calculus on measure spaces, whose proof can be found in, for example, [@KlenkeProbability2013] (Theorem 6.28). \[diff\_int\] Let $(X,\mu)$ be a measure space and $J \subseteq {{\mathbb}{R}}$ an open interval. Let $f: X \times J \to {{\mathbb}{R}}$ be a map with the following properties: 1. \[part:di-int\] for all $t \in J$, the map $f(-, t): X \to {{\mathbb}{R}}$ is integrable; 2. \[part:di-diff\] for almost all $x \in X$, the map $f(x, -): J \to {{\mathbb}{R}}$ is differentiable; 3. \[part:di-bound\] there is an integrable function $h: X \to {{\mathbb}{R}}$ such that for all $t \in J$, for almost all $x \in X$, we have $\bigl|\tfrac{\partial f}{\partial t} (x, t)\bigr| \leq h(x)$. Then $\tfrac{\partial f}{\partial t} (-, t): X \to {{\mathbb}{R}}$ is integrable for each $t \in J$, and the function $t \mapsto \int_X f(-, t) {\,\mathrm{d}}\mu$ is differentiable with derivative $t \mapsto \int_X \tfrac{\partial f}{\partial t} (-, t) {\,\mathrm{d}}\mu$. Typicality {#S_typicality} ========== The setting for the rest of this paper is a space $X$ equipped with a notion of similarity or proximity between points in $X$ (which may or may not be derived from a metric). Later, we will study the entropy and diversity of any probability measure on such a space. But first, we show how any probability measure on $X$ gives rise to a function on $X$, called its ‘typicality function’, whose value at a point $x$ indicates how concentrated the measure is near $x$. \[def\_similarity\] Let $X$ be a compact Hausdorff space. A **similarity kernel** on $X$ is a continuous function $K: X \times X \to [0, \infty)$ satisfying $K(x,x)>0$ for all $x \in X$. The pair $(X,K)$ is a **(compact Hausdorff) space with similarities**. 
Since we will only be interested in compact Hausdorff spaces, we drop the ‘compact Hausdorff’ and simply refer to spaces with similarities. The terminology of similarity originates with the following family of examples. \[ecosystem\_1\] There has been vigorous discussion in ecology of how best to quantify the diversity of a biological community. This is a conceptual and mathematical challenge, quite separate from the practical and statistical difficulties, and a plethora of different diversity measures have been proposed over 70 years of debate in the ecological literature [@MagurranMeasuring2004]. Any realistic diversity measure should reflect the degree of variation between the species present. All else being equal, a lake containing four species of carp should be counted as less diverse than a lake containing four very different species of fish. The similarity between species may be measured genetically, phylogenetically, functionally, or in some other way (as discussed in [@LeinsterMeasuring2012]); how it is done will not concern us here. Mathematically, we take a finite set $X = \{1, \ldots, n\}$ (whose elements represent the species) and a real number $Z_{ij} \geq 0$ for each pair $(i, j)$ (representing the degree of similarity between species $i$ and $j$). A similarity coefficient $Z_{ij} = 0$ means that species $i$ and $j$ are completely dissimilar, and we therefore assume that $Z_{ii} > 0$ for all $i$. Thus, $Z = (Z_{ij})$ is an $n \times n$ nonnegative real matrix with strictly positive entries on the diagonal. Many ways of assigning inter-species similarities are calibrated on a scale of $0$ to $1$, with $Z_{ii} = 1$ for all $i$ (each species is identical to itself). For example, percentage genetic similarity gives similarity coefficients in $[0, 1]$, as does the similarity measure $e^{-d(i, j)}$ derived from a metric $d$ and discussed below. The simplest possible choice of $Z$ is the identity matrix, embodying the crude assumption that different species have nothing whatsoever in common. In the language of Definition \[def\_similarity\], we are considering here the case of finite spaces with similarities: $X = \{1, \ldots, n\}$ (with the discrete topology) and the similarity kernel $K$ is given by $K(i, j) = Z_{ij}$. When $Z$ is the identity matrix, $K$ is the Kronecker delta. \[metric\_1\] Any compact metric space $(X, d)$ can be regarded as a space with similarities $(X, K)$ by putting $$K(x, y) = e^{-d(x, y)}$$ ($x, y \in X$). The extreme case where $d(x, y) = \infty$ for all $x \neq y$ produces the Kronecker delta. Although the negative exponential is not the only reasonable function transforming distances into similarities, it turns out to be a particularly fruitful choice. It is associated with the very fertile theory of the magnitude of metric spaces (surveyed in [@Leinstermagnitude2017]), which has deep connections with convex geometry, geometric measure theory and potential theory. Moreover, the general categorical framework of magnitude all but forces this choice of transformation, as explained in Example 2.4(3) of [@Leinstermagnitude2017]. In the examples above, the similarity kernel is **symmetric**: $K(x,y) = K(y,x)$ for all $x,y \in X$. We do not include symmetry in the definition of similarity kernel, partly because asymmetric similarity matrices occasionally arise in ecology (e.g. [@LeinsterMeasuring2012], Appendix, Proposition A7), and also because of the argument of Gromov ([@GromovMetric2001], p. xv) and Lawvere ([@LawvereMetric1973], p. 
138–9) that the symmetry condition in the definition of metric can be too restrictive. To obtain our main result, however, it will be necessary to add symmetry as a hypothesis on $K$. Most measures of biological diversity depend (at least in part) on the relative abundance distribution ${\mathbf}{p} = (p_1, \ldots, p_n)$ of the species, where ‘relative’ means that the $p_i$ are normalised to sum to $1$. Multiplying the similarity matrix $Z$ by the column vector ${\mathbf}{p}$ gives another vector $Z{\mathbf}{p}$, with $i$th entry $$(Z{\mathbf}{p})_i = \sum_j Z_{ij} p_j.$$ This is the expected similarity between an individual of species $i$ and an individual chosen at random. Thus, $(Z{\mathbf}{p})_i$ measures how typical individuals of species $i$ are within the community. The generalisation to an arbitrary space with similarities is as follows. \[def\_typicality\] Let $(X, K)$ be a space with similarities. For each $\mu \in M(X)$ and $x \in X$, define $$(K\mu)(x) = \int_X K(x,-) {\,\mathrm{d}}\mu \in {{\mathbb}{R}}.$$ This defines a function $K\mu: X \to {{\mathbb}{R}}$, the **typicality function** of $(X,K,\mu)$. When $\mu$ is a probability measure (the case of principal interest), $(K\mu)(x)$ is the expected similarity between $x$ and a random point. It therefore detects the extent to which $x$ is similar, or near, to sets of large measure. In the next section, we will define entropy and diversity in terms of the typicality function $K\mu$. For that, we will need $K\mu$ to satisfy some analytic conditions, which we establish now. [***For the rest of this section, let $(X, K)$ be a space with similarities.***]{} \[Kbar\_cts\] The function $\overline{K}: X \to C(X)$ defined by $x \mapsto K(x, -)$ is continuous. Since $X$ is compact Hausdorff and $K$ is continuous, this follows from . \[Kmu\_cts\] For each $\mu \in M(X)$, the function $K\mu: X \to {{\mathbb}{R}}$ is continuous. Note that $K\mu$ is the composite $$X {\xrightarrow}{\overline{K}} C(X) {\xrightarrow}{\langle-,\mu\rangle} {{\mathbb}{R}}.$$ We have just proved that $\overline{K}$ is continuous, and $\langle-,\mu\rangle = \int_X - {\,\mathrm{d}}\mu$ is a continuous linear functional. Hence $K\mu$ is continuous. \[K\_\*\_cts\] The map $$\begin{array}{cccc} K_*: &P(X) &\to &C(X) \\ &\mu &\mapsto &K\mu \end{array}$$ is continuous. Both $X$ and $P(X)$ are compact Hausdorff so, applying twice, an equivalent statement is that the map $$\begin{array}{ccc} X &\to &C(P(X)) \\ x &\mapsto &(K-)(x) = (\mu \mapsto (K\mu)(x)) \end{array}$$ is continuous. This map is the composite $$X {\xrightarrow}{\overline{K}} C(X) \to C(P(X)),$$ where the second map is $f \mapsto \langle f,-\rangle$ and is continuous by (\[part:pairing\_cts\_1\]). Hence, $K_*: P(X) \to C(X)$ is continuous. \[Kmu\_props\] For every measure $\mu$ on $X$, the typicality function $K\mu$ has the following properties: 1. \[part:Kp-supp\] ${\mathrm{supp} \, }K\mu \supseteq {\mathrm{supp} \, }\mu$. 2. \[part:Kp-eb\] Both $K\mu$ and $1/K\mu$ are essentially bounded with respect to $\mu$. For (\[part:Kp-supp\]), let $x \in {\mathrm{supp} \, }\mu$. Since $K$ is positive on the diagonal, $x \in {\mathrm{supp} \, }K(x,-)$, so ${\mathrm{supp} \, }\mu \cap {\mathrm{supp} \, }K(x,-) \neq \emptyset$. Hence by , $$(K\mu)(x) = \int_X K(x,-) {\,\mathrm{d}}\mu > 0.$$ For (\[part:Kp-eb\]), ${\mathrm{supp} \, }\mu$ is compact, and $K\mu$ is continuous with $K\mu\big|_{{\mathrm{supp} \, }\mu} > 0$, so both $K\mu$ and $1/K\mu$ are bounded on ${\mathrm{supp} \, }\mu$. 
Hence both are essentially bounded on $X$. Diversity and entropy {#S_diversity} ===================== Here we introduce the main subject of the paper: a one-parameter family of functions that quantify the degree of spread of a probability measure on a compact Hausdorff space $X$, with respect to a chosen notion of similarity between points of $X$. Specifically, take a probability measure $\mu$ on a space with similarities $(X, K)$. The measure can be regarded as widely spread across $X$ if most points are dissimilar to most of the rest of $X$, or in other words, if the typicality function $K\mu: X \to {{\mathbb}{R}}$ takes small values on most of $X$. An equivalent way to say this is that the ‘atypicality’ function $1/K\mu$ takes large values on most of $X$. In ecological terms, a community is diverse if it is predominantly made up of species that are unusual or atypical within that community (for example, many rare and highly dissimilar species). The diversity of $\mu$ is, therefore, defined as the mean atypicality of a point. It is useful to consider not just the arithmetic mean, but the power means of all orders: \[def\_diversity\] Let $(X, K)$ be a space with similarities and let $q \in [-\infty, \infty]$. The **diversity of order $q$** of a probability measure $\mu$ on $X$ is $$D_q^K(\mu) = M_{1-q}(\mu, 1/K\mu) \in (0, \infty).$$ The **entropy of order $q$** of $\mu$ is $H_q^K(\mu) = \log D_q^K(\mu)$. By the duality of , an equivalent definition is $$D_q^K(\mu) = 1/M_{q - 1}(\mu, K\mu).$$ On the right-hand side, the denominator is the mean typicality of a point in $X$, which is a measure of *lack* of diversity; its reciprocal is then a measure of diversity. The power means in this formula and Definition \[def\_diversity\] are well-defined because $K\mu$ and $1/K\mu$ are essentially bounded with respect to $\mu$ (). Explicitly, $$\begin{aligned} \label{4} D_q^K(\mu)= \begin{cases} \left( \int_{X} \left( K\mu \right)^{q-1} {\,\mathrm{d}}\mu \right)^{1/(1-q)} & \text{ if } q \in (-\infty, 1) \cup (1, \infty),\\ \exp \left( -\int_X \log (K\mu) {\,\mathrm{d}}\mu\right) & \text{ if } q = 1, \\ 1 / {\mathrm{ess \, sup}}_\mu K\mu & \text{ if } q = \infty, \\ 1 / {\mathrm{ess \, inf}}_\mu K\mu & \text{ if } q = - \infty. \end{cases}\end{aligned}$$ We usually work with the diversities $D_q^K$ rather than the entropies $H_q^K$, but evidently it is trivial to translate results on diversity into results on entropy. Let $X$ be the set $\{1, \ldots, n\}$ with the discrete topology, let $K$ be the Kronecker delta on $X$ (the ‘simplest possible choice’ of Example \[ecosystem\_1\]), and let $\mu$ be the uniform measure on $X$. Then $K\mu \equiv 1/n$, so $D_q^K(\mu) = n$ and $H_q^K(\mu) = \log n$ for all $q$. This conforms to the intuition that the larger we take $n$ to be, the more thinly spread the uniform measure on $\{1, \ldots, n\}$ becomes. The next two examples also concern the finite case. They are described in terms of the ecological scenario of Example \[ecosystem\_1\]. Thus, $X = \{1, \ldots, n\}$ is a set of species, $Z_{ij} = K(i, j)$ is the similarity between species $i$ and $j$, and $\mu = {\mathbf}{p} = (p_1, \ldots, p_n)$ gives the proportions in which the species are present. \[eg:fin-div-1\] Put $Z = I$ (distinct species have nothing in common). Then the diversity of order $0$ is $$D_0^I({\mathbf}{p}) = \sum_{i \in {\mathrm{supp} \, }{\mathbf}{p}} p_i \cdot \frac{1}{p_i} = |{\mathrm{supp} \, }{\mathbf}{p}|.$$ This is just the number of species present. 
It is the simplest diversity measure of all. But it takes no account of species abundances beyond presence and absence, whereas ordinarily, for instance, a community of two species is considered more diverse if they are equally abundant than if their proportions are $(0.99, 0.01)$. The diversities of nonzero orders do, however, reflect the balance between species. For example, the diversity of order $1$ is $$D_1^I({\mathbf}{p}) = \exp\Biggl( - \sum_{i \in {\mathrm{supp} \, }{\mathbf}{p}} p_i \log p_i\Biggr) = \prod_{i \in {\mathrm{supp} \, }{\mathbf}{p}} p_i^{-p_i}$$ and the entropy $H_1^I({\mathbf}{p}) = \log D_1^I({\mathbf}{p})$ of order $1$ is the Shannon entropy $-\sum p_i \log p_i$, which can be understood as measuring the uniformity of the distribution ${\mathbf}{p}$. The diversity of order $2$ is $$D_2^I({\mathbf}{p}) = 1\biggl/\sum_{i = 1}^n p_i^2.$$ The denominator is the probability that two individuals chosen at random are of the same species, and $D_2^I({\mathbf}{p})$ itself is the expected number of such trials needed in order to obtain a matching pair. The diversity of order $\infty$ is $$D_\infty^I({\mathbf}{p}) = 1\bigl/\max_i p_i,$$ which measures the extent to which the community is dominated by a single species. All four of these diversity measures (or simple transformations of them) are used by ecologists [@MagurranMeasuring2004]. For a general parameter value $q \neq 1, \pm\infty$, the diversity of order $q$ is $$D_q^I({\mathbf}{p}) = \Biggl( \sum_{i \in {\mathrm{supp} \, }{\mathbf}{p}} p_i^q \Biggr)^{1/(1 - q)}.$$ In ecology, $D_q^I$ is known as the **Hill number** of order $q$ [@HillDiversity1973], and in information theory, $H_q^I = \log D_q^I$ is called the **Rényi entropy** of order $q$ [@Renyimeasures1961]. For reasons explained in Remark \[rmk:q-range\], we usually restrict to $q \geq 0$. The parameter $q$ controls the emphasis placed on rare or common species. Low values of $q$ emphasise rare species; high values emphasise common species. At one extreme, $D_0^I({\mathbf}{p})$ is simply the number of species present, regardless of abundance; thus, diversity of order $0$ attaches as much importance to rare species as common ones. At the other, diversity of order $\infty$ depends only on the abundance of the most common species, completely ignoring rarer ones. If a community loses one or more rare species, while at the same time the species that remain become more evenly balanced, its low-order diversity will fall but its high-order diversity will rise. For example, $D_q^I$ measures the relative abundance distribution $(0.5, 0.5, 0)$ as less diverse than $(0.8, 0.1, 0.1)$ when $q < 0.852\ldots$, but more diverse for all higher values of $q$ (a numerical check of this crossover is sketched below). The moral is that when judging which of two communities is the more diverse, the answer depends critically on the parameter $q$. Different values of $q$ may produce opposite judgements. \[eg:fin-div-2\] Still in the ecological setting, consider now a general similarity matrix $Z$, thus taking into account the varying similarities between species (as in Example \[ecosystem\_1\]). The diversity measures $D_q^Z$ and the role of the parameter $q$ can be understood much as in the case $Z = I$, but now incorporating inter-species similarity.
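To make these finite diversity profiles concrete, here is a minimal computational sketch (Python with NumPy; the function name and the data layout are ours, purely for illustration) of the formula for $D_q^Z({\mathbf}{p})$, including the naive case $Z = I$ and the crossover near $q \approx 0.852$ noted above.

```python
import numpy as np

def diversity(Z, p, q):
    """Diversity of order q of the distribution p under similarity matrix Z
    (the finite case of the explicit formula for D_q^K in the text)."""
    p, Z = np.asarray(p, float), np.asarray(Z, float)
    supp = p > 0
    Zp = Z @ p                       # typicality (Zp)_i of each species
    if q == 1:
        return float(np.exp(-np.sum(p[supp] * np.log(Zp[supp]))))
    if np.isinf(q):
        return float(1.0 / np.max(Zp[supp]))
    return float(np.sum(p[supp] * Zp[supp] ** (q - 1)) ** (1.0 / (1.0 - q)))

# Naive case Z = I: the profiles of (0.5, 0.5, 0) and (0.8, 0.1, 0.1)
# cross near q = 0.852, as claimed above.
I = np.eye(3)
for q in (0.0, 0.5, 0.852, 1.0, 2.0, np.inf):
    print(q, diversity(I, [0.5, 0.5, 0.0], q), diversity(I, [0.8, 0.1, 0.1], q))
```

Running this shows that the profile of $(0.5, 0.5, 0)$ is constant at $2$, while that of $(0.8, 0.1, 0.1)$ falls from $3$ at $q = 0$ through $2$ near $q \approx 0.852$ to $1.25$ at $q = \infty$.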
For example, $$D_2^Z({\mathbf}{p}) = 1\Bigl/\sum_{i, j} p_i Z_{ij} p_j$$ is the reciprocal expected similarity between a random pair of individuals (rather than the reciprocal probability that they are of the same species), and $$D_\infty^Z({\mathbf}{p}) = 1\Bigl/ \max_{i \in {\mathrm{supp} \, }{\mathbf}{p}} (Z{\mathbf}{p})_i$$ reflects the dominance of the largest cluster of species (rather than merely the largest single species). \[eg:div2\] Let $(X, K)$ be an arbitrary space with similarities. Among all the diversity measures $(D_q^K)_{q \in [0, \infty]}$, one with especially convenient mathematical properties is the diversity of order $2$: $$D_2^K(\mu) = \frac{1}{\int_X \int_X K(x, y) {\,\mathrm{d}}\mu(x) {\,\mathrm{d}}\mu(y)}.$$ Meckes used $D_2^K$, and more particularly the maximum diversity $\sup_{\mu \in P(X)} D_2^K(\mu)$ of order $2$, to prove results on the Minkowski dimension of metric spaces ([@MeckesMagnitude2015], Section 7). These include not only Theorem \[minkowski\], but also results that do not mention maximum diversity in their statements. We now establish the basic analytic properties of diversity. First, we show that for a fixed probability measure $\mu$, the diversity $D_q^K(\mu)$ is a continuous and decreasing function of its order $q$. Second, we show that for fixed $q \in (0,\infty)$, the diversity $D_q^K(\mu)$ is continuous in the measure $\mu$. The first fact is immediate, but proving the second is considerably more involved. \[diversity\_cts\_q\] Let $(X, K)$ be a space with similarities and let $\mu \in P(X)$. 1. \[part:dcq-cts\] $D_q^K(\mu)$ is continuous in its order $q \in [-\infty, \infty]$. 2. \[part:dcq-dec\] If $K\mu$ is constant on the support of $\mu$, then the function $q \mapsto D_q^K(\mu)$ is constant on $[-\infty, \infty]$; otherwise, it is strictly decreasing in $q \in [-\infty, \infty]$. Part (\[part:dcq-cts\]) follows from the continuity of power means in their order, and part (\[part:dcq-dec\]) from their monotonicity. \[typicality\_constant\] A critical role will be played by measures $\mu$ satisfying the first case of Proposition \[diversity\_cts\_q\](\[part:dcq-dec\]). We call $\mu$ **balanced** if the function $K\mu$ is constant on ${\mathrm{supp} \, }\mu$. (In [@LeinsterMaximizing2016], where $X$ is taken to be finite, such measures were called ‘invariant’.) Equivalently, $\mu$ is balanced if $D_q^K(\mu)$ is constant over $q \in [-\infty, \infty]$. If $(K\mu)|_{{\mathrm{supp} \, }\mu}$ has constant value $c$ then $D_q^K(\mu)$ has constant value $1/c$. \[diversity\_cts\_mu\] Let $(X, K)$ be a space with similarities. For every $q \in (0, \infty)$, the diversity function $D_q^K: P(X) \to {{\mathbb}{R}}$ is continuous. (Recall that we always use the weak$^*$ topology on $P(X)$.) The proof of this proposition takes the form of three lemmas, addressing the three cases $q \in (1,\infty)$, $q \in (0,1)$ and $q = 1$. In the first case, matters are straightforward. For every $q \in (1,\infty)$, the diversity function $D_q^K: P(X) \to {{\mathbb}{R}}$ is continuous. The map $D_q^K$ is the composite $$P(X) {\xrightarrow}{\triangle} P(X) \times P(X) {\xrightarrow}{K_* \times \text{Id}} C(X) \times P(X) {\xrightarrow}{(-)^{q - 1} \times \text{Id}} C(X) \times P(X) {\xrightarrow}{\langle -, - \rangle} {{\mathbb}{R}}{\xrightarrow}{(-)^{1/(1 - q)}} {{\mathbb}{R}}.$$ Here $\triangle$ is the diagonal, which is certainly continuous. The map $K_*$ was defined and proved to be continuous in the previous section, and $(-)^{q - 1}: C(X) \to C(X)$ is continuous.
The restricted pairing $\langle -, - \rangle$ on $C(X) \times P(X)$ is continuous, as established earlier. Finally, $(-)^{1/(1-q)}$ is evidently continuous. Hence $D_q^K$ is continuous. The case $q \in (0, 1)$ is more difficult. The main work goes into handling the possibility that $(K\mu)(x) = 0$ for some $x$, in which case the function $(K\mu)^{q-1}$ is not defined everywhere. Our strategy is as follows. Were we to assume that $K(x,y) > 0$ for all $x,y \in X$, then $K\mu$ would be positive everywhere, by compactness. We would then be free of the subtlety just described. We approximate this convenient situation by covering $X$ with subsets $U_1, \ldots, U_n$ such that $K$ is strictly positive on $U_i \times U_i$, for each $i$. (That is, points within each $U_i$ are reasonably similar.) We then decompose the function $(D_q^K)^{1 - q}$ as a sum of functions $d_i$. Roughly speaking (and we will make an accurate statement shortly), $d_i(\mu)$ is the contribution $$\label{eq:roughly-di} \int_{U_i} (K\mu)^{q - 1} {\,\mathrm{d}}\mu$$ to $(D_q^K)^{1 - q}$ made on $U_i$. It is enough to show that each of these functions $d_i: P(X) \to {{\mathbb}{R}}$ is continuous. The most delicate part of the argument is to show that $d_i$ is continuous at measures $\mu \in P(X)$ whose support does not meet $U_i$. This is because although the integral above vanishes at $\mu$, there may be measures $\nu \in P(X)$ arbitrarily close to $\mu$ whose support *does* meet $U_i$. In that case, the corresponding integral for $\nu$ does not vanish, and the function $K\nu$ may take small values on $U_i$, in which case the integrand $(K\nu)^{q - 1}$ is large. But in order for $d_i$ to be continuous, we need the integral of this large function to be small. The details of the argument involve an estimate that depends on the particular form of the diversity function. The proof below implements the argument just sketched, with one further refinement. In keeping with the viewpoint of $M(X)$ as the dual of $C(X)$, we primarily work not with the cover $(U_i)$ itself, but with a continuous partition of unity $(p_i)$ subordinate to it. The formula above for $d_i(\mu)$ is adapted accordingly, effectively replacing the characteristic function of $U_i$ by $p_i$. \[q\_in\_01\] For every $q \in (0,1)$, the diversity function $D_q^K: P(X) \to {{\mathbb}{R}}$ is continuous. This proof proceeds in four steps. #### Step 1: partitioning the space Put $$b = \frac{1}{2} \inf_{x \in X} K(x,x).$$ By the topological hypotheses, $b > 0$ and we can find a finite open cover $U_1,\ldots, U_n$ of $X$ such that $K(x,y) \geq b$ whenever $x,y \in \overline{U_i}$ for some $i$. We can also find a continuous partition of unity $p_1, \ldots, p_n$ such that ${\mathrm{supp} \, }p_i \subseteq U_i$ for each $i$. For all $\mu \in P(X)$, $$D_q^K(\mu)^{1-q} = \int_X (K\mu)^{q-1} {\,\mathrm{d}}\mu = \sum_{i=1}^n \int_X (K\mu)^{q-1} p_i {\,\mathrm{d}}\mu.$$ To see that $D_q^K$ is continuous it will suffice to show that, for each $i$, the map $$\begin{aligned} d_i: P(X) &\to {{\mathbb}{R}}\\ \mu &\mapsto \int_X (K\mu)^{q-1} p_i {\,\mathrm{d}}\mu\end{aligned}$$ is continuous. For the rest of the proof, fix $i \in \{1,\ldots,n\}$. #### Step 2: bounding $K\mu$ below Let $\mu \in P(X)$. Then for all $x \in \overline{U_i}$, $$\begin{aligned} (K\mu)(x) & = \int_X K(x,y) {\,\mathrm{d}}\mu(y) \\ & \geq \int_{U_i} K(x,y) p_i(y) {\,\mathrm{d}}\mu(y) \\ & \geq b \int_X p_i {\,\mathrm{d}}\mu.\end{aligned}$$ Thus, $(K\mu)(x) \geq b \int p_i {\,\mathrm{d}}\mu$ for all $x \in \overline{U_i}$.
This lower bound on $(K\mu)|_{\overline{U_i}}$ is strictly positive if ${\mathrm{supp} \, }p_i \cap {\mathrm{supp} \, }\mu \neq \emptyset$. #### Step 3: continuity at nontrivial measures Here we show that $d_i$ is continuous at each element of the set $$P_i(X) = \{\mu \in P(X) {\, : \,}{\mathrm{supp} \, }p_i \cap {\mathrm{supp} \, }\mu \neq \emptyset\}.$$ As in step 2, $\mu \in P_i(X)$ if and only if $\int p_i {\,\mathrm{d}}\mu > 0$, so $P_i(X)$ is an open subset of $P(X)$. Thus, it is equivalent to show that the restriction of $d_i$ to $P_i(X)$ is continuous. We begin by showing that there is a well-defined, continuous map $G_i: P_i(X) \to C(\overline{U_i})$ given by $$G_i(\mu) = (K\mu)^{q - 1}|_{\overline{U_i}}.$$ It is well-defined because, for each $\mu$, the map $K\mu$ is continuous and strictly positive on $\overline{U_i}$ (by step 2). To show that $G_i$ is continuous, consider the following spaces and maps, defined below: $$P_i(X) {\xrightarrow}{K_*} C_i^+(X) {\xrightarrow}{\text{res}} C^+(\overline{U_i}) {\xrightarrow}{(-)^{q - 1}} C^+(\overline{U_i}) \hookrightarrow C(\overline{U_i}).$$ Here $$\begin{aligned} C_i^+(X) & = \{f \in C(X) {\, : \,}f(x) > 0 \text{ for all } x \in \overline{U_i}\}, \\ C^+(\overline{U_i}) & = \{g \in C(\overline{U_i}) {\, : \,}g(x) > 0 \text{ for all } x \in \overline{U_i}\} = {\mathbf}{Top}(\overline{U_i}, (0, \infty)).\end{aligned}$$ The first map $K_*$ is the restriction of $K_*: P(X) \to C(X)$; the restricted $K_*$ is well-defined by step 2 and continuous because $K_*$ itself is. The second map is restriction, which is certainly continuous; the third map $(-)^{q - 1}$ is continuous by compactness of $\overline{U_i}$; and the last map is inclusion. The composite of these maps is $G_i$, which is therefore also continuous, as claimed. To show that $d_i$ is continuous on $P_i(X)$, consider the chain of maps $$P_i(X) {\xrightarrow}{\triangle} P_i(X) \times P(X) {\xrightarrow}{G_i \times (p_i \cdot -)} C(\overline{U_i}) \times P_{\leq}(\overline{U_i}) {\xrightarrow}{\langle -,-\rangle} {{\mathbb}{R}}.$$ The first map is the diagonal followed by an inclusion; it is continuous. In the second, $p_i \cdot -$ is a restriction of the linear map $M(X) \to M(\overline{U_i})$ defined by $\mu \mapsto p_i \mu$, which is continuous. Since $G_i$ is continuous too, so is $G_i \times (p_i \cdot -)$. The third map is continuous by (\[part:pairing\_cts\_2\]). And the composite of the chain is $d_i|_{P_i(X)}$, which is, therefore, also continuous. #### Step 4: continuity at trivial measures Finally, we show that the function $d_i$ is continuous at all points $\mu \in P(X)$ such that ${\mathrm{supp} \, }p_i \cap {\mathrm{supp} \, }\mu = \emptyset$. Fix such a $\mu$. Given $\nu\in P(X)$, if ${\mathrm{supp} \, }p_i \cap {\mathrm{supp} \, }\nu = \emptyset$ then $d_i(\nu) = 0$, and otherwise $$d_i(\nu) = \int_{\overline{U_i}} (K\nu)^{q-1} p_i {\,\mathrm{d}}\nu \leq \int_{\overline{U_i}} \left( b \int_X p_i {\,\mathrm{d}}\nu \right)^{q-1} p_i {\,\mathrm{d}}\nu = b^{q-1} \left( \int_X p_i {\,\mathrm{d}}\nu \right)^q$$ (using step 2 and that $q < 1$). So in either case, $$\label{eq:cts-triv} 0 \leq d_i(\nu) \leq b^{q-1} \left( \int_X p_i {\,\mathrm{d}}\nu\right)^q.$$ Now as $\nu \to \mu$ in $P(X)$, we have $$\int_X p_i {\,\mathrm{d}}\nu \to \int_X p_i {\,\mathrm{d}}\mu = 0,$$ so $$b^{q-1} \left( \int_X p_i {\,\mathrm{d}}\nu\right)^q \to 0$$ (using that $q > 0$). Hence these bounds give $d_i(\nu) \to 0 = d_i(\mu)$, as required.
The remaining case, $q = 1$, will be deduced from the cases $q \in (0, 1)$ and $q \in (1, \infty)$, exploiting the fact that $D_q^K(\mu)$ is decreasing in $q$. The diversity function $D_1^K: P(X) \to {{\mathbb}{R}}$ is continuous. Let $\mu \in P(X)$ and ${\varepsilon}> 0$. Since $D_q^K(\mu)$ is continuous and decreasing in $q$ (Proposition \[diversity\_cts\_q\]), we can choose $q^+ \in (1,\infty)$ such that $$0 \leq D_1^K(\mu) - D_{q^+}^K(\mu) < {\varepsilon}/2.$$ Since $D_{q^+}^K: P(X) \to {{\mathbb}{R}}$ is continuous, we can find a neighbourhood $U^+$ of $\mu$ such that for all $\nu \in U^+$, $$\bigl|D_{q^+}^K(\mu) - D_{q^+}^K(\nu)\bigr| < {\varepsilon}/2.$$ Then for all $\nu \in U^+$, $$D_1^K(\nu) \geq D_{q^+}^K(\nu) \geq D_1^K(\mu) - {\varepsilon}.$$ Similarly, we can find a neighbourhood $U^-$ of $\mu$ such that for all $\nu \in U^-$, $$D_1^K(\nu) \leq D_1^K(\mu) + {\varepsilon}.$$ Hence $|D_1^K(\nu) - D_1^K(\mu)| \leq {\varepsilon}$ for all $\nu \in U^+ \cap U^-$. This completes the proof of Proposition \[diversity\_cts\_mu\]: the diversity function of each finite positive order is continuous. Proposition \[diversity\_cts\_mu\] excludes the cases $q = 0$ and $q = \infty$. Diversity of order $0$ is not continuous even in the simplest case of a finite set and the identity similarity matrix; for as we saw in Example \[eg:fin-div-1\], $D_0^I({\mathbf}{p})$ is the cardinality of ${\mathrm{supp} \, }{\mathbf}{p}$, which is not continuous in ${\mathbf}{p}$. Diversity of order $\infty$ need not be continuous either. For example, take $X = \{1, 2, 3\}$ and the similarity matrix $$Z = \begin{pmatrix} 1 &1 &0 \\ 1 &1 &1 \\ 0 &1 &1 \end{pmatrix},$$ and put ${\mathbf}{p} = (1/2 - t, 2t, 1/2 - t)$. Then $Z{\mathbf}{p} = (1/2 + t, 1, 1/2 + t)$, so $D_\infty^Z({\mathbf}{p})$ is $1$ if $t \in (0, 1/2)$, but $2$ if $t = 0$: at $t = 0$ the middle coordinate drops out of the maximum over ${\mathrm{supp} \, }{\mathbf}{p}$. Magnitude {#S_magnitude} ========= In order to show that maximum diversity and maximum entropy are well-defined, we first have to define a closely related invariant, magnitude. Magnitude has been defined and studied at various levels of generality, including finite enriched categories and compact metric spaces, for which it has strong geometric content. (See [@Leinstermagnitude2017] for a survey.) We will define the magnitude of a space with similarities. First we consider signed measures whose typicality function takes constant value $1$. \[def\_weighting\] Let $X = (X,K)$ be a space with similarities. A **weight measure** on $X$ is a signed measure $\mu \in M(X)$ such that $K\mu \equiv 1$ on $X$. This generalises the definition of weight measure on a compact metric space, introduced in [@Willertonmagnitude2014]. Note that despite our convention that ‘measure’ means positive measure, a weight measure is a *signed* measure. \[eg:mag-fin\] Let $X = \{1, \ldots, n\}$ be a finite set, writing, as usual, $K(i, j) = Z_{ij}$. Then a weight measure on $X$ is a vector ${\mathbf}{w} \in {{\mathbb}{R}}^n$ such that $(Z{\mathbf}{w})_i = 1$ for $i = 1, \ldots, n$. If $Z$ is invertible then there is exactly one weight measure, but in general there may be none or many. Even if $Z$ has many weight measures, the total weight $\sum_i w_i$ turns out to be independent of the weighting ${\mathbf}{w}$ chosen, just as long as $Z$ is symmetric (or, more generally, the transpose of $Z$ admits a weighting too). This common quantity $\sum_i w_i$ is called the magnitude of $(X, K)$, and its independence of the weighting chosen is a special case of the following result. A space with similarities $(X, K)$ is **symmetric** if $K$ is symmetric. \[lemma:wtg-ind\] Let $(X, K)$ be a symmetric space with similarities.
Then $\mu(X) = \nu(X)$ for any weight measures $\mu$ and $\nu$ on $X$. Since $\nu$ is a weight measure, $$\mu(X) = \int_X {\,\mathrm{d}}\mu(x) = \int_X \left( \int_X K(x, y) {\,\mathrm{d}}\nu(y) \right) {\,\mathrm{d}}\mu(x).$$ Since $\mu$ is a weight measure, $$\nu(X) = \int_X {\,\mathrm{d}}\nu(y) = \int_X \left( \int_X K(y, x) {\,\mathrm{d}}\mu(x) \right) {\,\mathrm{d}}\nu(y).$$ So by symmetry of $K$ and Tonelli’s theorem, $\mu(X) = \nu(X)$. This lemma makes the following definition valid. \[def\_magnitude\] Let $(X, K)$ be a symmetric space with similarities admitting at least one weight measure. The **magnitude** of $(X,K)$ is $$|(X,K)|= \mu(X),$$ for any weight measure $\mu$ on $(X, K)$. We often write $|(X, K)|$ as just $|X|$. We will mostly be concerned with *positive* weight measures. (Note that in an unfortunate clash of terminology, a weight measure on a finite set is positive if and only if the corresponding vector is nonnegative.) \[zero\_magnitude\] Let $(X, K)$ be a symmetric space with similarities admitting a positive weight measure. Then $|X| \geq 0$, with equality if and only if $X = \emptyset$. The inequality is immediate from the definition of magnitude, as is the fact that the magnitude of the empty set is $0$. Now suppose that $X$ is nonempty. Choose $x \in X$ and a positive weight measure $\mu$ on $(X, K)$. Since $\int_X K(x,-) {\,\mathrm{d}}\mu= 1$, the measure $\mu$ is nonzero. Hence, $|X| = \mu(X) > 0$. Let $(X, K)$ be a space with similarities. Given a closed subset $Y$ of $X$, we regard $Y$ as a space with similarities by restriction of the similarity kernel $K$. Any measure $\nu \neq 0$ on $Y$ can be normalised and extended by zero to give a probability measure ${\widehat{\nu}}$ on $X$, defined by $${\widehat{\nu}}(U) = \frac{\nu(U \cap Y)}{\nu(Y)}$$ for all Borel sets $U \subseteq X$. In particular, whenever $Y \neq \emptyset$ and $\nu$ is a positive weight measure on $Y$, the probability measure ${\widehat{\nu}}$ on $X$ is well-defined (by ), with $${\widehat{\nu}}(U) = \frac{\nu(U \cap Y)}{|Y|}$$ for all Borel sets $U \subseteq X$. The construction $\nu \mapsto {\widehat{\nu}}$ relates the notion of weight measure to that of balanced measure (Remark \[typicality\_constant\]) as follows. \[balanced\_tfae\] Let $(X, K)$ be a symmetric space with similarities. The following are equivalent for a probability measure $\mu$ on $X$: 1. \[part:it-const\] $\mu$ is balanced (that is, $K\mu$ is constant on ${\mathrm{supp} \, }\mu$); 2. \[part:it-flat\] the function $q \mapsto D_q^K(\mu)$ is constant on $[-\infty, \infty]$; 3. \[part:it-supp\] $\mu = {\widehat{\nu}}$ for some positive weight measure $\nu$ on ${\mathrm{supp} \, }\mu$; 4. \[part:it-wext\] $\mu = {\widehat{\nu}}$ for some positive weight measure $\nu$ on some nonempty closed subset $Y \subseteq X$. When these conditions hold, $D_q^K(\mu) = |Y|$ for all nonempty $Y \subseteq X$ admitting a positive weight measure $\nu$ such that ${\widehat{\nu}} = \mu$, and all $q \in [-\infty, \infty]$. The equivalence of (\[part:it-const\]) and (\[part:it-flat\]) is (\[part:dcq-dec\]). Now assuming (\[part:it-const\]), we prove (\[part:it-supp\]). Write $c$ for the constant value of $K\mu$ on ${\mathrm{supp} \, }\mu$. Then $c > 0$ by (\[part:Kp-supp\]), so we can define a measure $\nu$ on ${\mathrm{supp} \, }\mu$ by $\nu(W) = \mu(W)/c$ for all Borel sets $W \subseteq {\mathrm{supp} \, }\mu$. 
Then $\nu$ is a weight measure on ${\mathrm{supp} \, }\mu$, since for all $y \in {\mathrm{supp} \, }\mu$, $$(K\nu)(y) = \int_{{\mathrm{supp} \, }\mu} K(y,-){\,\mathrm{d}}\nu = \frac{1}{c} \int_X K(y,-) {\,\mathrm{d}}\mu = \frac{1}{c} (K\mu)(y) = 1.$$ Finally, ${\widehat{\nu}} = \mu$: for given a Borel set $U \subseteq X$, $${\widehat{\nu}}(U) = \frac{\nu(U \cap {\mathrm{supp} \, }\mu)}{\nu({\mathrm{supp} \, }\mu)} = \frac{\mu(U \cap {\mathrm{supp} \, }\mu)}{\mu({\mathrm{supp} \, }\mu)} = \mu(U),$$ since $\mu$ is a probability measure. This completes the proof that (\[part:it-const\]) implies (\[part:it-supp\]). Trivially, (\[part:it-supp\]) implies (\[part:it-wext\]). Finally, we assume (\[part:it-wext\]) and prove (\[part:it-const\]). Take $Y$ and $\nu$ as in (\[part:it-wext\]). For all $x \in {\mathrm{supp} \, }\mu$, $$(K\mu)(x) = \int_X K(x,-) {\,\mathrm{d}}{\widehat{\nu}} = \frac{1}{\nu(Y)} \int_Y K(x,-) {\,\mathrm{d}}\nu = \frac{1}{\nu(Y)},$$ since $\nu$ is a weight measure on $Y$. This proves (\[part:it-const\]). It also proves the final statement: for by , $$D_q^K(\mu) = \nu(Y) = |Y|$$ for all $q \in [-\infty, \infty]$. Balanced and maximising measures {#S_prep_lemmas} ================================ In the case of the Kronecker delta on a finite discrete space, maximising diversity is very simple. Indeed, it is a classical and elementary result that for each $q \in [0, \infty]$, the Rényi entropy $H_q^I$ of order $q$ (Example \[eg:fin-div-1\]) is maximised by the uniform distribution, and that unless $q = 0$, the uniform distribution is unique with this property. The same is therefore true of the diversity measures $D_q^I$. For a finite space with an arbitrary similarity kernel, maximising measures are no longer uniform [@LeinsterMaximizing2016]. We cannot, therefore, expect that on a general space with similarities, diversity is maximised by the ‘uniform’ measure (whatever that might mean). Nevertheless, maximising measures have a different uniformity property: they are balanced. That is the main result of this section. \[rmk:q-range\] We usually restrict the parameter $q$ to lie in the range $[0, \infty]$. Even in the simplest case of the Kronecker delta on a finite set, $D_q^K$ and $H_q^K$ behave quite differently for negative $q$ than for positive $q$. When $q < 0$, the uniform measure no longer maximises $D_q^K$ or $H_q^K$, and in fact *minimises* them among all measures of full support (as can be shown using Proposition \[diversity\_cts\_q\](\[part:dcq-dec\])). [***For the rest of this section, let $(X, K)$ be a symmetric space with similarities.***]{} \[def\_maximising\] Let $q \in [0, \infty]$. A probability measure on $X$ is **$q$-maximising** if it maximises $D_q^K$. A probability measure on $X$ is **maximising** if it is $q$-maximising for all $q \in [0,\infty]$. We will show in Section \[S\_main\_theorem\] that any measure that is $q$-maximising for some $q > 0$ is, in fact, maximising. That proof will depend on the next result: that any measure that is $q$-maximising for some $q \in (0, 1)$ is balanced. We prove this using a variational argument. The strategy is similar to that used in the finite case ([@LeinsterMaximizing2016], Section 5), which can be interpreted as follows. Let $X$ be a set of species with symmetric similarity matrix $Z$, and let ${\mathbf}{p}$ be a probability distribution on $X$. Take $i^-, i^+ \in {\mathrm{supp} \, }{\mathbf}{p}$ with the property that $(Z{\mathbf}{p})_{i^-}$ is minimal and $(Z{\mathbf}{p})_{i^+}$ is maximal. 
Ecologically, then, $i^-$ is the least typical species present in the community, and $i^+$ is the most typical. Suppose that ${\mathbf}{p}$ is not balanced, that is, some species are more typical than others. Then $(Z{\mathbf}{p})_{i^-} < (Z{\mathbf}{p})_{i^+}$. For $t \in {{\mathbb}{R}}$, write $${\mathbf}{p}_t = {\mathbf}{p} + t(\delta_{i^-} - \delta_{i^+}),$$ where $\delta_i$ is the vector with $i^{{\textrm{th}}}$ entry equal to 1 and all other entries 0. When $t$ is sufficiently small, ${\mathbf}{p}_t$ is a probability distribution on $X$, and describes the relative abundance distribution after $t$ units of the most typical species $i^+$ have been replaced by $t$ units of the least typical species $i^-$. It is intuitively plausible that this substitution should increase the diversity of the community, and indeed it can be shown that, at least for $q \in (0,1)$, the derivative of $D_q^K({\mathbf}{p}_t)$ at $t=0$ is strictly positive. Now fix $q \in (0,1)$, and suppose that ${\mathbf}{p}$ is $q$-maximising. Then the derivative of $D_q^K({\mathbf}{p}_t)$ at $t=0$ is $0$. Hence, by the previous paragraph, the distribution ${\mathbf}{p}$ is balanced. The moral is that although a $q$-maximising distribution does not generally have all species equally *abundant*, they are all equally *typical*. Extending the argument from finite spaces to compact spaces introduces complications of two kinds. First, there are routine matters arising from replacing sums by integrals. But more significantly, and unlike in the finite case, the Dirac measure $\delta_x$ need not be absolutely continuous with respect to $\mu$ for points $x$ in the support of a probability measure $\mu$. For this reason, when $x^\pm$ are the least and most typical points of $X$, the signed measure $\mu + t(\delta_{x^-} - \delta_{x^+})$ need not be positive (hence, is not a probability measure), even for small $t$. We therefore use approximations to Dirac measures, as provided by . \[qmax\_balanced\] For $q \in (0,1)$, every $q$-maximising measure on $(X, K)$ is balanced. Let $q \in (0,1)$ and let $\mu$ be a $q$-maximising measure on $(X, K)$. Since $K\mu$ is continuous, it attains its infimum and supremum on the compact set ${\mathrm{supp} \, }\mu$. Take $x^-, x^+ \in {\mathrm{supp} \, }\mu$ such that $$(K\mu)(x^-) = \inf_{{\mathrm{supp} \, }\mu} K\mu, \qquad (K\mu)(x^+) = \sup_{{\mathrm{supp} \, }\mu} K\mu.$$ To prove that $\mu$ is balanced, it will suffice to show that $(K\mu)(x^-) = (K\mu)(x^+)$. Let ${\varepsilon}> 0$. We first construct functions $u^\pm$ such that the measures $u^\pm \mu$ approximate the Dirac measures at $x^\pm$, using . Write $$E = \{ (K\mu)^{q - 1}|_{{\mathrm{supp} \, }\mu} \} \cup \{ K(x, -)|_{{\mathrm{supp} \, }\mu} {\, : \,}x \in X \} \subseteq C({\mathrm{supp} \, }\mu)$$ (which is well-defined by Lemma \[Kmu\_cts\] and Proposition \[Kmu\_props\](\[part:Kp-supp\])). Then $E$ is compact, since it is the union of a singleton with the image of the compact space $X$ under the composite of continuous maps $$X {\xrightarrow}{\overline{K}} C(X) {\xrightarrow}{\text{restriction}} C({\mathrm{supp} \, }\mu)$$ (using Lemma \[Kbar\_cts\]). Hence $E$ is equicontinuous (e.g. by Theorem IV.6.7 of [@DunfordSchwartz1958]). 
So by , we can choose a nonnegative function $u^- \in C(X)$ such that $\int_X u^- {\,\mathrm{d}}\mu = 1$ and $$\begin{aligned} \left| \int_X (K\mu)^{q-1} {\,\mathrm{d}}(u^-\mu) - (K\mu)(x^-)^{q-1}\right| & \leq {\varepsilon}, \\ \left| \int_X K(x,-) {\,\mathrm{d}}(u^-\mu) - K(x, x^-) \right| & \leq {\varepsilon},\end{aligned}$$ the latter for all $x \in X$. Choose $u^+$ similarly for $x^+$. Since $u^- - u^+$ is bounded, we can choose an open interval $I \subseteq {{\mathbb}{R}}$, containing $0$, such that the function $1+ t\left(u^- - u^+\right) \in C(X)$ is strictly positive for each $t \in I$. Then for each $t \in I$, we have a probability measure $$\mu_t = (1 + t(u^- - u^+)) \mu$$ on $X$, with ${\mathrm{supp} \, }\mu_t = {\mathrm{supp} \, }\mu$. Note that $\mu_0 = \mu$. We will exploit the fact that $D_q^K(\mu_t)$ has a local maximum at $t = 0$, showing that the function $t \mapsto D_q^K(\mu_t)^{1 - q}$ is differentiable at $0$ and, therefore, has derivative $0$ there. For each $t \in I$, $$\begin{aligned} D_q^K(\mu_t)^{1 - q} & = \int (K\mu_t)^{q - 1} {\,\mathrm{d}}\mu + t \int (K\mu_t)^{q - 1} d\bigl((u^- - u^+)\mu\bigr) \nonumber \\ & = a(t) + b(t), \label{max_bal_0} \end{aligned}$$ say. (Since ${\mathrm{supp} \, }(K\mu_t) \supseteq {\mathrm{supp} \, }(\mu_t) = {\mathrm{supp} \, }\mu$, the integrand $(K\mu_t)^{q - 1}$ is well-defined and continuous on ${\mathrm{supp} \, }\mu$, and both integrals are finite.) We now show that $a(t)$ and $b(t)$ are differentiable at $t = 0$, compute their derivatives there, and bound the derivatives below. To differentiate the integral $a(t)$, we use . Choose a bounded open subinterval $J$ of $I$, also containing $0$, whose closure $\overline{J}$ is a subset of $I$. We verify that the function $f: X \times J \to {{\mathbb}{R}}$ defined by $$f(x, t) = (K\mu_t)(x)^{q - 1} = \Bigl[ (K\mu)(x) + t K\bigl((u^- - u^+) \mu\bigr)(x) \Bigr]^{q - 1}$$ satisfies the conditions of . For condition \[diff\_int\](\[part:di-int\]), we have already seen that $f(-, t) = (K\mu_t)^{q - 1}$ is $\mu$-integrable for each $t \in I$. For condition \[diff\_int\](\[part:di-diff\]): for all $x \in {\mathrm{supp} \, }\mu$, the function $f(x, -)$ is differentiable on $I$ (and in particular on $J$), with derivative $$t \mapsto \frac{\partial f}{\partial t} (x,t) = (q-1) \Bigl[ (K\mu)(x) + t K\bigl((u^- - u^+) \mu\bigr) (x) \Bigr]^{q - 2} \cdot K\bigl((u^- - u^+)\mu\bigr)(x).$$ For condition \[diff\_int\](\[part:di-bound\]), this formula shows that $\partial f/\partial t$ is continuous on $({\mathrm{supp} \, }\mu) \times I$. Hence $|\partial f/\partial t|$ is continuous on the compact space $({\mathrm{supp} \, }\mu) \times \overline{J}$, and therefore bounded on $({\mathrm{supp} \, }\mu) \times J$, with supremum $H$, say. Let $h: X \to {{\mathbb}{R}}$ be the function with constant value $H$. Then $h$ is $\mu$-integrable and $\bigl|\tfrac{\partial f}{\partial t}(x, t)\bigr| \leq h(x)$ for all $x \in {\mathrm{supp} \, }\mu$ and $t \in J$, as required. 
We can therefore apply , which implies that $a(t)$ is differentiable at $t = 0$ and $$\begin{aligned} a'(0) & = (q - 1) \int (K\mu)(x)^{q - 2} K\bigl((u^- - u^+)\mu\bigr)(x) {\,\mathrm{d}}\mu(x) \nonumber \\ & = (q-1) \int (K\mu)(x)^{q-2} \biggl( \int K(x, y) {\,\mathrm{d}}((u^- - u^+) \mu)(y) \biggr) {\,\mathrm{d}}\mu(x) \nonumber \\ & \geq (q-1) \int (K\mu)^{q-2} \left( K(-, x^-) - K(-, x^+) + 2 {\varepsilon}\right) {\,\mathrm{d}}\mu, \label{eq:a-bound}\end{aligned}$$ where the inequality follows from the defining properties of $u^-$ and $u^+$ and the fact that $q < 1$. Now consider $b(t)$. By definition of derivative, $b$ is differentiable at $0$ if and only if the limit $$\lim_{t \to 0} \int (K\mu_t)^{q-1} {\,\mathrm{d}}((u^- - u^+)\mu)$$ exists, and in that case $b'(0)$ is that limit. As $t \to 0$, $$K\mu_t = K\mu + t K\bigl( (u^- - u^+) \mu\bigr) \to K\mu$$ in $C({\mathrm{supp} \, }\mu)$, so $(K\mu_t)^{q - 1} \to (K\mu)^{q - 1}$ in $C({\mathrm{supp} \, }\mu)$ (by ), so $$\int_{{\mathrm{supp} \, }\mu} (K\mu_t)^{q - 1} {\,\mathrm{d}}\bigl( (u^- - u^+) \mu\bigr) \to \int_{{\mathrm{supp} \, }\mu} (K\mu)^{q - 1} {\,\mathrm{d}}\bigl( (u^- - u^+) \mu\bigr).$$ Hence $b'(0)$ exists and is given by $$b'(0) = \int_X (K\mu)^{q - 1} {\,\mathrm{d}}\bigl( (u^- - u^+) \mu\bigr).$$ By the defining properties of $u^-$ and $u^+$, it follows that $$\label{eq:b-bound} b'(0) \geq (K\mu)(x^-)^{q-1} - (K\mu)(x^+)^{q-1} - 2{\varepsilon}.$$ Returning to equation , we have now shown that both $a(t)$ and $b(t)$ are differentiable at $t = 0$. So too, therefore, is $D_q^K(\mu_t)^{1 - q}$. But by the maximality of $\mu$, its derivative there is 0. Hence the bounds  and  give $$\begin{aligned} 0 & \geq (q-1) \int (K\mu)^{q-2} \left( K(-, x^-) - K(-, x^+) + 2{\varepsilon}\right) {\,\mathrm{d}}\mu + (K\mu)(x^-)^{q-1} - (K\mu)(x^+)^{q-1} - 2{\varepsilon}\nonumber \\ & = (q-1) \left( \int (K\mu)^{q-2} K(x^-, -) {\,\mathrm{d}}\mu - \int (K\mu)^{q-2} K(x^+, -) {\,\mathrm{d}}\mu + 2{\varepsilon}\int (K\mu)^{q - 2} {\,\mathrm{d}}\mu \right) \nonumber \\ & \phantom{= \mbox{}} \mbox{} + (K\mu)(x^-)^{q-1} - (K\mu)(x^+)^{q-1} - 2{\varepsilon}, \label{ie_calc_1}\end{aligned}$$ using the symmetry of $K$. Consider the first integral in . We have $(K\mu)(x) \geq (K\mu)(x^-)$ for all $x \in {\mathrm{supp} \, }\mu$, by definition of $x^-$. Since $q - 2 < 0$, this implies that $$\int (K\mu)^{q - 2} K(x^-, -) {\,\mathrm{d}}\mu \leq (K\mu)(x^-)^{q - 2} \int K(x^-, -) {\,\mathrm{d}}\mu = (K\mu)(x^-)^{q - 1}.$$ A similar statement holds for $x^+$. Since $q - 1 < 0$, it follows from  that $$\label{ie_calc_2} 0 \geq q \left( (K\mu)(x^-)^{q-1} - (K\mu)(x^+)^{q-1} \right) - 2{\varepsilon}\left( (1-q) \int (K\mu)^{q-2} {\,\mathrm{d}}\mu + 1 \right).$$ Put $c = (1-q) \int (K\mu)^{q-2} {\,\mathrm{d}}\mu + 1 \in {{\mathbb}{R}}$. Then by , the defining properties of $x^-$ and $x^+$, and the fact that $0 < q < 1$, $$2{\varepsilon}c \geq q \left( (K\mu)(x^-)^{q-1} - (K\mu)(x^+)^{q-1} \right) \geq 0.$$ Taking ${\varepsilon}\to 0$, we see that $(K\mu)(x^-) = (K\mu)(x^+)$, which proves the result. \[qmax\_balanced\_cor\] Assume that $X$ is nonempty. For each $q \in (0,1)$, there exists a balanced $q$-maximising probability measure on $X$. Fix $q \in (0,1)$. The function $D_q^K$ is continuous on the nonempty compact space $P(X)$ (), so attains a maximum at some $\mu \in P(X)$. By , $\mu$ is balanced. Thus, balanced $q$-maximising measures exist for arbitrarily small $q > 0$. 
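In the finite case, the substitution argument sketched before Proposition \[qmax\_balanced\] can also be watched numerically. The following minimal sketch (Python with NumPy; the similarity matrix and abundances are hypothetical, chosen only for illustration) transfers mass from the most typical to the least typical species and records the resulting rise in diversity for an order $q \in (0,1)$.

```python
import numpy as np

# Hypothetical community: species 1 and 2 are similar, species 3 is distinctive.
Z = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
q = 0.5

def D(p):
    """Diversity of order q (finite formula, q in (0, 1))."""
    Zp = Z @ p
    return np.sum(p * Zp ** (q - 1)) ** (1.0 / (1.0 - q))

p = np.array([1.0, 1.0, 1.0]) / 3
i_minus, i_plus = np.argmin(Z @ p), np.argmax(Z @ p)   # least / most typical
for t in (0.0, 0.05, 0.10):
    pt = p.copy()
    pt[i_plus] -= t                  # remove t units of the most typical species
    pt[i_minus] += t                 # replace them by the least typical one
    print(t, D(pt))                  # diversity rises along this substitution
```

For these made-up numbers the printed diversities are roughly $1.81$, $1.84$ and $1.85$: each transfer towards the less typical species increases $D_{1/2}$, which is the mechanism exploited in the proof above.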
In the next section, we will use a limiting argument to find a balanced $0$-maximising measure. The following lemma shows that any such measure maximises diversity of all orders simultaneously. \[balanced\_maximising\] For $0 \leq q' \leq q \leq \infty$, any balanced probability measure that is $q'$-maximising is also $q$-maximising. In particular, any balanced measure that is $0$-maximising is maximising. Let $0 \leq q' \leq q \leq \infty$ and let $\mu$ be a balanced $q'$-maximising measure. Then for all $\nu \in P(X)$, $$D_q^K(\nu) \leq D_{q'}^K(\nu) \leq D_{q'}^K(\mu) = D_q^K(\mu)$$ where the first inequality follows from (\[part:dcq-dec\]), the second inequality from maximality of $D_{q'}^K(\mu)$, and the equality from Lemma \[balanced\_tfae\] and $\mu$ being balanced. For the limiting argument, we will use: \[lemma:closed\] 1. \[part:closed-bal\] The set of balanced probability measures is closed in $P(X)$. 2. \[part:closed-max\] For each $q \in (0, \infty)$, the set of $q$-maximising probability measures is closed in $P(X)$. For (\[part:closed-bal\]), by Lemma \[balanced\_tfae\] and (\[part:dcq-dec\]), the set of balanced measures is $$\{ \mu \in P(X) {\, : \,}D_1^K(\mu) = D_2^K(\mu) \}.$$ But $D_1^K, D_2^K: P(X) \to {{\mathbb}{R}}$ are continuous (by ), so by a standard topological argument, this set is closed. Part (\[part:closed-max\]) is immediate from the continuity of $D_q^K$. The maximisation theorem {#S_main_theorem} ======================== We now come to our main theorem: \[main\_theorem\] Let $(X, K)$ be a nonempty symmetric space with similarities. 1. \[part:main-meas\] There exists a probability measure $\mu$ on $X$ that maximises $D_q^K(\mu)$ for all $q \in [0, \infty]$ simultaneously. 2. \[part:main-div\] The maximum diversity $\sup_{\mu \in P(X)} D_q^K(\mu)$ is independent of $q \in [0, \infty]$. We have already shown that for each $q \in (0,1)$ there exists a balanced $q$-maximising probability measure on $X$ (). Since $P(X)$ is compact, we can choose some $\mu \in P(X)$ such that for every $q > 0$ and neighbourhood $U$ of $\mu$, there exist $q' \in (0, q)$ and a balanced $q'$-maximising measure in $U$. By , for every $q > 0$, every neighbourhood of $\mu$ contains a balanced $q$-maximising measure. To prove both parts of the theorem, it suffices to show that $\mu$ is balanced and maximising. Since the set of balanced measures is closed (Lemma \[lemma:closed\](\[part:closed-bal\])), $\mu$ is balanced. Since the set of $q$-maximising measures is closed for each $q > 0$ (Lemma \[lemma:closed\](\[part:closed-max\])), $\mu$ is $q$-maximising for each $q > 0$. Now given any $\nu \in P(X)$, we have $D_q^K(\mu) \geq D_q^K(\nu)$ for all $q > 0$; then passing to the limit as $q \to 0+$ and using the continuity of diversity in its order ((\[part:dcq-cts\])) gives $D_0^K(\mu) \geq D_0^K(\nu)$. Hence $\mu$ is $0$-maximising. But $\mu$ is also balanced, so by Lemma \[balanced\_maximising\], $\mu$ is maximising. The symmetry hypothesis in the theorem cannot be dropped, even in the finite case ([@LeinsterMaximizing2016], Section 8). Part (\[part:main-div\]) of the theorem shows that maximum diversity is an unambiguous real invariant of a space, not depending on a choice of parameter $q$: \[def\_maxdiv\] Let $(X, K)$ be a nonempty symmetric space with similarities. The **maximum diversity** of $(X, K)$ is $${D_{\max}(X, K)} = \sup_{\mu \in P(X)} D_q^K(\mu) \in (0, \infty),$$ for any $q \in [0,\infty]$. 
Similarly, the **maximum entropy** of $(X, K)$ is $${H_{\max}(X, K)} = \log{D_{\max}(X, K)} = \sup_{\mu \in P(X)} H_q^K(\mu),$$ for any $q \in [0, \infty]$. We often abbreviate ${D_{\max}(X, K)}$ to ${D_{\max}(X)}$. The well-definedness of maximum diversity can be understood as follows. As explained and proved in Section \[S\_prep\_lemmas\], for a maximising measure $\mu$, all points in ${\mathrm{supp} \, }\mu$ are equally typical. Diversity is mean atypicality, and although the notion of mean varies with the order $q$, all means have the property that the mean of a constant function is that constant. Thus, our maximising measure $\mu$ has the same diversity of all orders. That diversity is ${D_{\max}(X)}$. A corollary of Theorem \[main\_theorem\] is that to find a measure that maximises diversity of *all* positive orders, it suffices to find one that maximises diversity of just *one* positive order. \[one\_is\_enough\] Let $(X, K)$ be a symmetric space with similarities. Suppose that $\mu \in P(X)$ is $q$-maximising for some $q \in (0,\infty]$. Then $\mu$ is maximising. Fix $q \in (0,\infty]$ and let $\mu$ be a $q$-maximising measure. Then $$D_q^K(\mu) \leq D_0^K(\mu) \leq {D_{\max}(X)} = D_q^K(\mu),$$ so equality holds throughout. As $D_q^K(\mu) = D_0^K(\mu)$ with $q \neq 0$, Proposition \[diversity\_cts\_q\](\[part:dcq-dec\]) implies that $\mu$ is balanced. But also $D_0^K(\mu) = {D_{\max}(X)}$, so $\mu$ is $0$-maximising. Lemma \[balanced\_maximising\] then implies that $\mu$ is maximising. The exclusion of the case $q = 0$ here is necessary: not every $0$-maximising measure is maximising, even in the finite case ([@LeinsterMaximizing2016], end of Section 6). Theorem \[main\_theorem\] asserts the mere existence of a maximising measure and the well-definedness of maximum diversity. But we can describe the maximum diversity and maximising measures somewhat explicitly, in terms of magnitude and weight measures: \[cor:comp\] Let $(X, K)$ be a nonempty symmetric space with similarities. 1. \[part:comp-max\] We have $$\label{eq:max-mag} {D_{\max}(X)} = \sup_Y |Y|,$$ where the supremum is over the nonempty closed subsets $Y$ of $X$ admitting a positive weight measure. 2. \[part:comp-meas\] A probability measure $\mu$ on $X$ is maximising if and only if it is equal to ${\widehat{\nu}}$ for some positive weight measure $\nu$ on some subset $Y$ attaining the supremum in (\[part:comp-max\]). In that case, ${D_{\max}(X)} = |{\mathrm{supp} \, }\mu|$. For any $q \in [0, \infty]$, $$\begin{aligned} {D_{\max}(X)} & = \sup\{D_q^K(\mu) {\, : \,}\mu \in P(X), \ \mu \text{ is balanced}\} \label{eq:comp1} \\ & = \sup\{|Y| {\, : \,}\text{nonempty closed } Y \subseteq X \text{ admitting a positive weight measure}\}, \label{eq:comp2} \end{aligned}$$ where the first equality follows from the existence of a balanced maximising measure and the second from Lemma \[balanced\_tfae\]. This proves (\[part:comp-max\]). Every maximising measure is balanced, so (\[part:comp-meas\]) also follows, again using Lemma \[balanced\_tfae\]. When $X$ is finite, this result provides an algorithm for computing maximum diversity and maximising measures ([@LeinsterMaximizing2016], Section 7); a short computational sketch appears in the next section. As an immediate consequence of (\[part:comp-max\]), maximum diversity is monotone with respect to inclusion: \[md\_monotone\_1\] Let $X$ be a symmetric space with similarities, and let $Y \subseteq X$ be a nonempty closed subset. Then ${D_{\max}(Y)} \leq {D_{\max}(X)}$. Maximum diversity is also monotone in another sense: reducing the similarity between points increases the maximum diversity. For metric spaces, this means that as distances increase, so does maximum diversity.
\[md\_monotone\_2\] Let $X$ be a nonempty compact Hausdorff space. Let $K, K'$ be symmetric similarity kernels on $X$ such that $K(x, y) \geq K'(x, y)$ for all $x, y \in X$. Then $${D_{\max}(X, K)} \leq {D_{\max}(X, K')}.$$ Fix $q \in [0, \infty]$. By definition of maximum diversity, it is equivalent to show that $$\sup_{\mu \in P(X)} D_q^{K}(\mu) \leq \sup_{\mu \in P(X)} D_q^{K'}(\mu).$$ Recall that $D_q^{K}(\mu) = M_{1-q}(\mu, 1/K\mu)$ for all $\mu \in P(X)$. The hypotheses imply that $K\mu \geq K'\mu$ pointwise, and the power mean is increasing in its second variable, so the result follows. Maximising measures need not have full support. Ecologically, that may seem counterintuitive: can maximising diversity really entail eliminating some species? This phenomenon is discussed in depth in Section 11 of [@LeinsterMaximizing2016], but in short: if a species is so ordinary that all of its features are displayed more vividly by some other species, then maximising diversity may indeed mean omitting it in favour of species that are more distinctive. With this in mind, it is to be expected that any species absent from a maximising distribution is at least as ordinary or typical as those present: \[lemma:supertypical\] Let $\mu$ be a maximising measure on a nonempty symmetric space with similarities $(X, K)$. Then $(K\mu)(x) \geq 1/{D_{\max}(X)}$ for all $x \in X$. Recall that $K\mu$ has constant value $1/{D_{\max}(X)}$ on ${\mathrm{supp} \, }\mu$, by Proposition \[qmax\_balanced\]. The proof uses an observation that will also be used later: on $M(X)$, there is a symmetric bilinear form ${\langle -, - \rangle}_X$ defined by $$\label{eq:form} {\langle \nu, \pi \rangle}_X = \int_X \int_X K(x, y) {\,\mathrm{d}}\nu(x) {\,\mathrm{d}}\pi(y)$$ ($\nu, \pi \in M(X)$). Thus, $D_2^K(\nu) = 1/{\langle \nu, \nu \rangle}_X$. Let $x \in X$. For $s \in [0, 1]$, put $$\nu_s = (1 - s)\mu + s \delta_x \in P(X).$$ Then for all $s \in [0, 1]$, $$\begin{aligned} 1/D_2^K(\nu_s) & = {\bigl\langle (1 - s)\mu + s\delta_x, (1 - s)\mu + s\delta_x \bigr\rangle}_X \\ & = (1 - s)^2 / {D_{\max}(X)} + 2s(1 - s)\cdot (K\mu)(x) + s^2 K(x, x).\end{aligned}$$ Rearranging gives $$\frac{1}{D_2^K(\nu_s)} - \frac{1}{{D_{\max}(X)}} = \biggl\{ \biggl( \frac{1}{{D_{\max}(X)}} - 2(K\mu)(x) + K(x, x) \biggr) s + 2 \biggl( (K\mu)(x) - \frac{1}{{D_{\max}(X)}} \biggr) \biggr\} s.$$ But the left-hand side is nonnegative for all $s \in (0, 1]$, so the affine function $\{\cdots\}$ in $s$ is nonnegative too, from which it follows that $(K\mu)(x) - 1/{D_{\max}(X)} \geq 0$. It follows that although some species may be absent from a maximising distribution, none can be too different from those present: \[cor:support\] Let $\mu$ be a maximising measure on a nonempty symmetric space with similarities $(X, K)$. Then for all $x \in X$, there exists $y \in {\mathrm{supp} \, }\mu$ such that $K(x, y) \geq 1/{D_{\max}(X)}$. Let $x \in X$. Then by Lemma \[lemma:supertypical\], $$\frac{1}{{D_{\max}(X)}} \leq (K\mu)(x) = \int_{{\mathrm{supp} \, }\mu} K(x, y) {\,\mathrm{d}}\mu(y) \leq \sup_{y \in {\mathrm{supp} \, }\mu} K(x, y),$$ and since ${\mathrm{supp} \, }\mu$ is compact, the supremum is attained. Metric spaces {#S_metric} ============= The rest of this paper has a more geometric focus. We specialise to the case of a compact metric space $X = (X, d)$, using the similarity kernel $K(x, y) = e^{-d(x, y)}$ and writing $D_q^K$ as $D_q^X$. We have seen that maximum diversity is closely related to magnitude.
Here, we review some of the geometric properties of magnitude (surveyed at greater length in [@Leinstermagnitude2017]) and their consequences for maximum diversity. We then compute the maximum diversity of several classes of metric space. Most of the theory of the magnitude of metric spaces assumes that the space is **positive definite**, meaning that for every finite subset $\{x_1, \ldots, x_n\}$, the matrix $(e^{-d(x_i, x_j)})$ is positive definite. Many of the most familiar metric spaces have this property, including all subsets of ${{\mathbb}{R}}^n$ with the Euclidean or $\ell^1$ (taxicab) metric, all subsets of hyperbolic space, and all ultrametric spaces ([@MeckesPositive2013], Theorem 3.6). There are several equivalent definitions of the magnitude of a positive definite compact metric space (as shown by Meckes in [@MeckesMagnitude2015] and [@Leinstermagnitude2017]). The simplest is this: $$|X| = \sup \{ |Y| {\, : \,}\text{finite } Y \subseteq X \}.$$ When $X$ admits a weight measure (and in particular, when $X$ is finite), this is equivalent to Definition \[def\_magnitude\]. Indeed, Meckes proved ([@MeckesPositive2013], Theorems 2.3 and 2.4): \[pdms\_variational\] Let $X$ be a positive definite compact metric space. Then $$|X| = \sup_\mu \frac{\mu(X)^2}{\int_X \int_X e^{-d(x, y)} {\,\mathrm{d}}\mu(x) {\,\mathrm{d}}\mu(y)},$$ where the supremum is over all $\mu \in M(X)$ such that the denominator is nonzero. The supremum is attained by $\mu$ if and only if $\mu$ is a scalar multiple of a weight measure, and if $\mu$ is a weight measure then $|X| = \mu(X)$. Note that the supremum is over *signed* measures, unlike the similar expression for maximum diversity in Example \[eg:div2\]. Work such as [@Barcelomagnitudes2018] has established that even for some of the most straightforward spaces (including Euclidean balls), no weight measure exists; in that case, the supremum is not attained. An important property of magnitude for positive definite spaces, immediate from the definition, is that if $Y \subseteq X$ then $|Y| \leq |X|$. From Corollary \[cor:comp\](\[part:comp-max\]), it follows that $$\label{eq:max-leq-mag} {D_{\max}(X)} \leq |X|$$ for all positive definite compact metric spaces $X \neq \emptyset$. Any one-point subset of $X$ has a positive weight measure and magnitude $1$, so also $${D_{\max}(X)} \geq 1.$$ If $X$ does not admit a weight measure then it follows from Corollary \[cor:comp\](\[part:comp-meas\]) that no maximising measure on $X$ has full support. Indeed, the apparent rarity of spaces admitting a weight measure suggests that the supremum in Corollary \[cor:comp\] runs over a rather small class of subsets $Y$. Corollary \[cor:comp\] implies that the problem of computing maximum diversity is closely related to the problem of computing magnitude. There are a few spaces of geometric interest whose magnitude is known exactly, including spheres with the geodesic metric (Theorem 7 of [@Willertonmagnitude2014]), Euclidean balls of odd dimension (whose magnitude is a rational function of the radius [@Barcelomagnitudes2018; @Willertonmagnitude2017; @Willertonmagnitude2018]), and convex bodies in ${{\mathbb}{R}}^n$ with the $\ell^1$ metric (Theorem 5.4.6 of [@Leinstermagnitude2017]; the magnitude is closely related to the intrinsic volumes). But for many very simple spaces, including even the $2$-dimensional Euclidean disc, the magnitude remains unknown. On the other hand, maximum diversity is sometimes more tractable than magnitude. 
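In the finite case this tractability is concrete: part (\[part:comp-max\]) of Corollary \[cor:comp\] can be implemented by brute force, enumerating the subsets that admit a nonnegative weighting (cf. Section 7 of [@LeinsterMaximizing2016]). The following sketch (Python with NumPy; the helper name and the three-point example are ours, for illustration only, and the method is exponential in the number of points) does exactly this for a small metric space with $K = e^{-d}$.

```python
import itertools
import numpy as np

def max_diversity(Z):
    """Maximum diversity of a finite symmetric similarity matrix Z, by the
    subset enumeration of Corollary [cor:comp]: the largest magnitude |Y|
    over subsets Y admitting a nonnegative weighting w with Z_Y w = 1."""
    n = len(Z)
    best, best_measure = 0.0, None
    for r in range(1, n + 1):
        for B in itertools.combinations(range(n), r):
            ZB = Z[np.ix_(B, B)]
            try:
                w = np.linalg.solve(ZB, np.ones(r))    # weighting of the subset
            except np.linalg.LinAlgError:
                continue                               # no (unique) weighting
            if np.all(w >= 0) and w.sum() > best:
                best = w.sum()                         # |Y| = total weight
                mu = np.zeros(n)
                mu[list(B)] = w / w.sum()              # normalised weighting
                best_measure = mu
    return best, best_measure

# Three points 0, 1, 2 on the real line, with similarity K(x, y) = exp(-d(x, y)).
x = np.array([0.0, 1.0, 2.0])
Z = np.exp(-np.abs(np.subtract.outer(x, x)))
print(max_diversity(Z))
```

Here the whole three-point space carries a positive weighting, so a maximising measure has full support and the maximum diversity coincides with the magnitude of the whole space, roughly $1.92$ (cf. Lemma \[lemma:pos-pos-dmax\] below).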
Meckes showed that for compact $X \subseteq {{\mathbb}{R}}^n$, maximum diversity is a quantity that is already known, if little explored, in potential theory: up to a known constant factor, ${D_{\max}(X)}$ is the Bessel capacity of order $(n + 1)/2$ of $X$ ([@MeckesMagnitude2015], Section 6). In the rest of this section, we analyse the few classes of metric space for which we are able to calculate the maximum diversity exactly. In principle this includes all finite spaces, since Corollary \[cor:comp\] then provides an algorithm for calculating the maximum diversity (described in Section 7 of [@LeinsterMaximizing2016]). This class aside, all our examples are instances of the following result. \[lemma:pos-pos-dmax\] Let $X$ be a nonempty positive definite compact metric space admitting a positive weight measure $\mu$. Then: 1. \[part:ppd-meas\] the normalisation ${\widehat{\mu}}$ of $\mu$ is the unique maximising measure on $X$; 2. \[part:ppd-dmax\] ${D_{\max}(X)} = |X|$. For (\[part:ppd-dmax\]), since $X$ admits a positive weight measure, Corollary \[cor:comp\](\[part:comp-max\]) gives ${D_{\max}(X)} \geq |X|$. But the opposite inequality  also holds, giving ${D_{\max}(X)} = |X|$. Now by Corollary \[cor:comp\](\[part:comp-meas\]), it follows that ${\widehat{\mu}}$ is a maximising measure. For uniqueness, let $\nu$ be any maximising measure on $X$. Then $$\frac{\nu(X)}{\int_X \int_X e^{-d(x, y)} {\,\mathrm{d}}\nu(x) {\,\mathrm{d}}\nu(y)} = D_2^X(\nu) = {D_{\max}(X)} = |X|,$$ so Theorem \[pdms\_variational\] implies that $\nu$ is a scalar multiple of ${\widehat{\mu}}$. But both are probability measures, so $\nu = {\widehat{\mu}}$. \[eg:scattered\] Let $X$ be a finite metric space with $n$ points, satisfying $d(x, y) > \log(n - 1)$ whenever $x \neq y$. Then $X$ is positive definite and its unique weight measure is positive (Proposition 2.4.17 of [@Leinstermagnitude2013]), so Lemma \[lemma:pos-pos-dmax\] applies. \[eg:ms-interval\] As shown in Theorem 2 of [@Willertonmagnitude2014], a line segment $[0, \ell] \subseteq {{\mathbb}{R}}$ has weight measure $$\tfrac{1}{2} (\delta_0 + \delta_\ell + \lambda_{[0, \ell]}),$$ where $\delta_x$ denotes the Dirac measure at a point $x$ and $\lambda_{[0, \ell]}$ is Lebesgue measure on ${{\mathbb}{R}}$ restricted to $[0, \ell]$. Hence $$|[0, \ell]| = 1 + \tfrac{1}{2}\ell.$$ By Lemma \[lemma:pos-pos-dmax\], the maximum diversity of $[0, \ell]$ is equal to its magnitude, and its unique maximising measure is $$\frac{\delta_0 + \delta_\ell + \lambda_{[0, \ell]}}{2 + \ell}.$$ In fact, every compact subset of ${{\mathbb}{R}}$ has a positive weight measure (by Lemma 2.8 and Corollary 2.10 of [@MeckesPositive2013]), as well as being positive definite, so again, Lemma \[lemma:pos-pos-dmax\] applies. \[eg:ms-hgs\] Let $X$ be a nonempty compact metric space that is **homogeneous**, that is, its isometry group acts transitively on points. There is a unique isometry-invariant probability measure on $X$, the **Haar probability measure** $\mu$ (Theorems 4.11 and 5.3 of [@SteinlageHaar1975]). As observed in [@Willertonmagnitude2014] (Theorem 1), the measure $$\frac{\mu}{\int_X e^{-d(x, y)} {\,\mathrm{d}}\mu(x)}$$ is independent of $y \in X$ and is a positive weight measure on $X$. Hence $$|X| = \frac{1}{\int_X e^{-d(x, y)} {\,\mathrm{d}}\mu(x)}$$ for all $y \in X$. This is the reciprocal of the expected similarity between a random pair of points. 
Assuming also that $X$ is positive definite, Lemma \[lemma:pos-pos-dmax\] implies that ${D_{\max}(X)}$ is equal to $|X|$ and the Haar probability measure $\mu$ is the unique maximising measure. We have shown that every symmetric space with similarities has at least one maximising measure. Although some spaces have multiple maximising measures ([@LeinsterMaximizing2016], Section 9), we now show that for many metric spaces, the maximising measure is unique. \[ip-unique\] Let $X$ be a nonempty compact metric space such that the bilinear form ${\langle -, - \rangle}_X$ on $M(X)$ (defined in ) is positive definite. Then $X$ admits exactly one maximising measure. Since ${\langle -, - \rangle}_X$ is an inner product, the function $\mu \mapsto {\langle \mu, \mu \rangle}_X$ on $M(X)$ is strictly convex. Its restriction to the convex subset $P(X) \subseteq M(X)$ therefore attains a minimum at most once. But $D_2^X(\mu) = 1/{\langle \mu, \mu \rangle}_X$, so $\mu$ minimises ${\langle -, - \rangle}_X$ on $P(X)$ if and only if it is 2-maximising, or equivalently maximising (by Corollary \[one\_is\_enough\]). The result follows. We deduce that for two important classes of metric spaces, maximising measures are unique. Every nonempty positive definite finite metric space has exactly one maximising measure. This is immediate from Lemma \[ip-unique\]. The following more substantial result is due to Mark Meckes (personal communication, 2019). \[euc-unique\] Every nonempty compact subset of Euclidean space has exactly one maximising measure. Let $X$ be a nonempty compact subset of ${{\mathbb}{R}}^n$. Then $X$ is positive definite, so by Lemma 2.2 of [@MeckesPositive2013], ${\langle \mu, \mu \rangle}_X \geq 0$ for all $\mu \in M(X)$. By Lemma \[ip-unique\], it now suffices to prove that if ${\langle \mu, \mu \rangle}_X = 0$ then $\mu = 0$. Let $F$ be the function on ${{\mathbb}{R}}^n$ defined by $F(x) = e^{-\|x\|}$. Then $${\langle \mu, \nu \rangle}_X = \int_{{{\mathbb}{R}}^n} (F * \mu) {\,\mathrm{d}}\nu$$ ($\mu, \nu \in M(X)$), where $*$ denotes convolution. By the standard properties of the Fourier transform $\hat{\ }$, it follows that $${\langle \mu, \mu \rangle}_X = \int_{{{\mathbb}{R}}^n} \hat{F} |\hat{\mu}|^2.$$ But $\hat{F}$ is everywhere strictly positive (Theorem 1.14 of [@SteinIntroduction1971]), so if ${\langle \mu, \mu \rangle}_X = 0$ then $\hat{\mu} = 0$ almost everywhere, which in turn implies that $\mu = 0$ (paragraph 1.7.3(b) of [@RudinFourier1962]). The uniform measure {#S_uniform} =================== For many of the spaces $X$ that arise most often in mathematics, there is a choice of probability measure on $X$ that seems obvious or natural. For finite sets, it is the uniform measure. For homogeneous spaces, it is Haar measure. For subsets of ${{\mathbb}{R}}^n$ with finite nonzero volume, it is normalised Lebesgue measure. In this section, we propose a method for assigning a canonical probability measure to any compact metric space $X$. We will call it the *uniform measure*. The idea behind this method has two parts. The first is very standard in statistics: take the probability distribution that maximises entropy. For example, in the context of differential entropy of probability distributions on ${{\mathbb}{R}}$, the maximum entropy distribution supported on a prescribed bounded interval is the uniform distribution on it, and the maximum entropy distribution with a prescribed mean and variance is the normal distribution. 
However, given a compact metric space $X$, simply taking the maximising measure on $X$ does not give a suitable notion of uniform measure in the sense above (even putting aside the question of uniqueness). The problem is the failure of scale-invariance. For many uses of metric spaces, the choice of scale factor is somewhat arbitrary: if we multiplied all the distances by a constant $t > 0$, we would regard the space as essentially unchanged. (Formally, scaling by $t$ defines an automorphism of the category of metric spaces, for any of the standard notions of map between metric spaces.) But the maximising measure depends critically on the scale factor, as almost every example in the previous section shows. There now enters the second part of the idea: pass to the large-scale limit. Thus, we define the uniform measure on a space to be the limit of the maximising measures as the scale factor increases to $\infty$. Let us make this precise. Let $X = (X, d)$ be a metric space and $t \in (0, \infty)$. We write $td$ for the metric on $X$ defined by $(td)(x, y) = t\cdot d(x, y)$, and $K^t$ for the similarity kernel on $X$ defined by $K^t(x, y) = e^{-td(x, y)}$. We denote by $tX$ the set $X$ equipped with the metric $td$. If $X$ is a subspace of ${{\mathbb}{R}}^n$ then $tX = (X, td)$ is isometric to $(\{tx {\, : \,}x \in X\}, d)$, where $d$ is Euclidean distance. But for our purposes, it is better to regard the set $X$ as fixed and the metric as varying with $t$. An immediate consequence of is that the maximum diversity of a compact metric space increases monotonically with the scale factor: ${D_{\max}(tX)}$ is increasing in $t \in (0, \infty)$. \[defn:ufm\] Let $X$ be a compact metric space. Suppose that $tX$ has a unique maximising measure $\mu_t$ for all $t \gg 0$, and that $\lim_{t \to \infty} \mu_t$ exists in $P(X)$. Then the **uniform measure** on $X$ is $\mu_X = \lim_{t \to \infty} \mu_t$. This definition has the desired property of scale-invariance: Let $X$ be a compact metric space and $t > 0$. Then the uniform measures on $X$ and $tX$ are equal: $\mu_X = \mu_{tX}$, with one side of the equality defined if and only if the other is. This is immediate from the definition. The next few results show that the definition of uniform measure succeeds in capturing the ‘canonical’ measure on several significant classes of space. \[prop:ufm-fin\] On a nonempty finite metric space, the uniform measure exists and is equal to the uniform probability measure in the standard sense. Let $X = \{x_1, \ldots, x_n\}$ be a finite metric space. For $t > 0$, write $Z^t$ for the $n \times n$ matrix with entries $e^{-td(x_i, x_j)}$. For $t \gg 0$, the space $tX$ is positive definite with positive weight measure, by Example \[eg:scattered\]. Expressed as a vector, the weight measure on $tX$ (for $t \gg 0$) is $$(Z^t)^{-1} \begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix}.$$ The normalisation of this weight measure is the unique maximising measure $\mu_t$ on $tX$, by Lemma \[lemma:pos-pos-dmax\]. As $t \to \infty$, we have $Z^t \to I$ in the topological group $\mathrm{GL}_n({{\mathbb}{R}})$, giving $(Z^t)^{-1} \to I$ and so $\mu_t \to (1/n, \ldots, 1/n)$. This result shows that the uniform measure is not in general uniformly distributed; that is, balls of the same radius may have different measures. Our concept of uniform measure also behaves well on homogeneous spaces. 
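Before moving on to homogeneous spaces, here is a small numerical illustration (again an informal sketch, not from the paper) of Proposition \[prop:ufm-fin\]. The points are taken on the real line, so that, as recalled after Example \[eg:ms-interval\], the weighting of $tX$ is positive for every $t > 0$ and its normalisation is the unique maximising measure $\mu_t$; as $t$ grows, $\mu_t$ visibly approaches the uniform distribution.

```python
# Illustrative check of Proposition [prop:ufm-fin]: for a finite metric space,
# the maximising measure mu_t on tX is the normalised weighting (Z^t)^{-1} 1,
# and it tends to the uniform distribution as t -> infinity.
import numpy as np

x = np.array([0.0, 0.3, 1.0, 4.0])            # an arbitrary 4-point subset of R
d = np.abs(x[:, None] - x[None, :])

for t in [1.0, 5.0, 20.0, 100.0]:
    Zt = np.exp(-t * d)
    w = np.linalg.solve(Zt, np.ones(len(x)))  # weighting of tX (positive here)
    mu_t = w / w.sum()                        # unique maximising measure on tX
    print(f"t = {t:6.1f}   mu_t = {np.round(mu_t, 4)}")
# The output approaches (0.25, 0.25, 0.25, 0.25).
```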
We restrict to those spaces $X$ such that $tX$ is positive definite for every $t > 0$, which is equivalent to the classical condition that $X$ is of **negative type**. (For our purposes, this can be taken as the definition of negative type. The proof of equivalence essentially goes back to Schoenberg; see Theorem 3.3 of [@MeckesPositive2013].) \[prop:ufm-hgs\] On a nonempty, homogeneous, compact metric space of negative type, the uniform measure exists and is equal to the Haar probability measure. Let $X$ be such a space. The Haar probability measure $\mu$ on $X$ is the unique isometry-invariant probability measure on $X$, so it is also the Haar probability measure on $tX$ for every $t > 0$. Hence by Example \[eg:ms-hgs\], $\mu_t = \mu$ for all $t$, and the result follows trivially. When applied to a real interval, our definition of uniform measure also produces the uniform measure in the standard sense. \[prop:ufm-interval\] On the line segment $[0, \ell]$ of length $\ell > 0$, the uniform measure exists and is equal to Lebesgue measure restricted to $[0, \ell]$, normalised to a probability measure. Write $X = [0, \ell]$ and $d$ for the metric on ${{\mathbb}{R}}$. For each $t > 0$, the metric space $tX = (X, td)$ is isometric to the interval $[0, t\ell]$ with metric $d$, which by Example \[eg:ms-interval\] has unique maximising measure $$\frac{\delta_0 + \delta_{t\ell} + \lambda_{[0, t\ell]}}{2 + t\ell}.$$ Transferring this measure across the isometry, $tX$ therefore has unique maximising measure $$\mu_t = \frac{\delta_0 + \delta_\ell + t\lambda_{[0, \ell]}}{2 + t\ell}.$$ Hence $\mu_t \to \lambda_{[0, \ell]}/\ell$ as $t \to \infty$, as required. We now embark on the proof that Proposition \[prop:ufm-interval\] extends to Euclidean spaces of arbitrary dimension. Precisely, let $X$ be a compact subspace of ${{\mathbb}{R}}^n$ with nonzero volume, write $\lambda_X$ for $n$-dimensional Lebesgue measure $\lambda$ restricted to $X$, and write ${\widehat{\lambda_X}} = \lambda_X/\lambda(X)$ for its normalisation to a probability measure. We will show that ${\widehat{\lambda_X}}$ is the uniform measure on $X$. In Propositions \[prop:ufm-fin\]–\[prop:ufm-interval\], we computed the uniform measures on the spaces $X$ concerned by constructing an explicit maximising measure on $tX$ for each $t > 0$, then taking the limit as $t \to \infty$. This strategy is not available to us for $X \subseteq {{\mathbb}{R}}^n$, since we have no explicit description of the maximising measures of Euclidean sets. The argument is, therefore, less direct. We begin by showing that at large scales, ${\widehat{\lambda_X}}$ comes arbitrarily close to maximising diversity, in the sense of the last part of the following proposition. \[prop:preufm-euc\] Let $X$ be a compact subspace of ${{\mathbb}{R}}^n$ with nonzero volume $\lambda(X)$. Then $$\lim_{t \to \infty} \frac{{D_{\max}(tX)}}{|tX|} = 1 \qquad \text{and} \qquad \lim_{t \to \infty} \frac{{D_{\max}(tX)}}{t^n} = \frac{\lambda(X)}{n!\omega_n},$$ where $\omega_n$ is the volume of the unit ball in ${{\mathbb}{R}}^n$. Moreover, for all $q \in [0, \infty]$, $$\lim_{t \to \infty} \frac{D_q^{tX}({\widehat{\lambda_X}})}{{D_{\max}(tX)}} = 1.$$ We first show that for all $t > 0$ and $q \in [0, \infty]$, $$\label{eq:ue1} |tX| \geq {D_{\max}(tX)} \geq D_q^{tX}({\widehat{\lambda_X}}) \geq \frac{\lambda(X)t^n}{n!\omega_n}.$$ The first inequality in  is an instance of , since ${{\mathbb}{R}}^n$ is positive definite. The second holds by definition of maximum diversity. 
For the third, diversity is decreasing in its order (Proposition \[diversity\_cts\_q\](\[part:dcq-dec\])), so it suffices to prove the case $q = \infty$. The inequality then states that $$\frac{1}{\sup_{x \in X} (K^t {\widehat{\lambda_X}})(x)} \geq \frac{\lambda(X)t^n}{n!\omega_n},$$ or equivalently, for all $x \in X$, $$\label{eq:ue2} (K^t {\widehat{\lambda_X}})(x) \leq \frac{n!\omega_n}{\lambda(X)t^n}.$$ Now for all $x \in X$, $$(K^t {\widehat{\lambda_X}})(x) = \frac{1}{\lambda(X)} \int_X e^{-t\|x - y\|} {\,\mathrm{d}}y \leq \frac{1}{\lambda(X)} \int_{{{\mathbb}{R}}^n} e^{-t\|x - y\|} {\,\mathrm{d}}y.$$ The last integral is $n!\omega_n/t^n$, by a standard calculation (as in Lemma 3.5.9 of [@Leinstermagnitude2013]). So we have now proved inequality  and, therefore, the last of the inequalities . Dividing  through by $|tX|$ gives $$1 \geq \frac{{D_{\max}(tX)}}{|tX|} \geq \frac{D_q^{tX}({\widehat{\lambda_X}})}{|tX|} \geq \frac{\lambda(X)t^n}{n! \omega_n |tX|}$$ for all $t > 0$ and $q \in [0, \infty]$. Theorem 1 of [@Barcelomagnitudes2018] states, in part, that the final term converges to $1$ as $t \to \infty$. Hence all terms do, and the result follows. 1. The fact that ${D_{\max}(tX)}/|tX| \to 1$ as $t \to \infty$ is one of a collection of results expressing the intimate relationship between maximum diversity and magnitude. Perhaps the deepest of these is a result of Meckes, which uses the description of maximum diversity as a Bessel capacity (mentioned in Section \[S\_metric\]) to establish that for each $n \geq 1$, there is a constant $\kappa_n$ such that $$|X| \leq \kappa_n {D_{\max}(X)}$$ for all nonempty compact $X \subseteq {{\mathbb}{R}}^n$ (Corollary 6.2 of [@MeckesMagnitude2015]). This is a companion to the elementary fact that ${D_{\max}(X)} \leq |X|$ (inequality (\[eq:max-leq-mag\])). 2. The second equation in Proposition \[prop:preufm-euc\] implies, in particular, that one can recover the volume of $X \subseteq {{\mathbb}{R}}^n$ from the asymptotic behaviour of the function $t \mapsto {D_{\max}(tX)}$. This result is in the same spirit as Theorem \[minkowski\], which states that one can also recover the Minkowski dimension of $X$. Thus, the asymptotics of ${D_{\max}(tX)}$ contain fundamental geometric information. \[thm:ufm-euc\] On a compact set $X \subseteq {{\mathbb}{R}}^n$ of nonzero Lebesgue measure, the uniform measure exists and is equal to Lebesgue measure restricted to $X$, normalised to a probability measure. By Proposition \[euc-unique\], $tX$ has a unique maximising measure $\mu_t$ for each $t > 0$. We have to show that $\int_X f {\,\mathrm{d}}\mu_t \to \int_X f{\,\mathrm{d}}{\widehat{\lambda_X}}$ for each $f \in C(X)$. For $t > 0$, define $F^t \in C({{\mathbb}{R}}^n)$ by $F^t(x) = e^{-t\|x\|}$. We will apply Lemma \[lemma:approx-conv\] to the function $G(x) = e^{-\|x\|}/n!\omega_n$; then $G_t = t^n F^t/n!\omega_n$. We have $\int_{{{\mathbb}{R}}^n} G {\,\mathrm{d}}\lambda = 1$, since $\int_{{{\mathbb}{R}}^n} F^1 {\,\mathrm{d}}\lambda = n!\omega_n$, as noted in the proof of Proposition \[prop:preufm-euc\]. First we prove the weaker statement that for all nonnegative $f \in C(X)$, $$\label{eq:liminf} \liminf_{t \to \infty} \int_X f {\,\mathrm{d}}\mu_t \geq \int_X f {\,\mathrm{d}}{\widehat{\lambda_X}}.$$ Fix $f$, and choose a nonnegative extension $\bar{f} \in C({{\mathbb}{R}}^n)$ of bounded support. Let ${\varepsilon}> 0$. 
By Lemma \[lemma:approx-conv\], we can choose $T_1 > 0$ such that for all $t \geq T_1$, $$\int_{{{\mathbb}{R}}^n} \bar{f} \cdot \biggl( \frac{t^n F^t}{n!\omega_n} * \mu_t \biggr) {\,\mathrm{d}}\lambda - \int_{{{\mathbb}{R}}^n} \bar{f} {\,\mathrm{d}}\mu_t \leq \frac{{\varepsilon}}{2}.$$ By Proposition \[prop:preufm-euc\], we can also choose $T_2 > 0$ such that for all $t \geq T_2$, $$\frac{t^n/n!\omega_n}{{D_{\max}(tX)}} \geq \frac{1}{\lambda(X)} - \frac{{\varepsilon}}{2\int_X f {\,\mathrm{d}}\lambda}.$$ Then for all $t \geq \max\{T_1, T_2\}$, $$\begin{aligned} \int_X f {\,\mathrm{d}}\mu_t & = \int_{{{\mathbb}{R}}^n} \bar{f} {\,\mathrm{d}}\mu_t \label{eq:euc1} \\ & \geq \int_{{{\mathbb}{R}}^n} \bar{f} \cdot \biggl( \frac{t^n F^t}{n!\omega_n} * \mu_t \biggr) {\,\mathrm{d}}\lambda - \frac{{\varepsilon}}{2} \label{eq:euc2} \\ & \geq \int_X f \cdot \biggl( \frac{t^n F^t}{n!\omega_n} * \mu_t \biggr) {\,\mathrm{d}}\lambda - \frac{{\varepsilon}}{2} \label{eq:euc3} \\ & = \int_X f \cdot \frac{t^n}{n!\omega_n} (K^t \mu_t) {\,\mathrm{d}}\lambda - \frac{{\varepsilon}}{2} \label{eq:euc4} \\ & \geq \int_X f \cdot \frac{t^n/n!\omega_n}{{D_{\max}(tX)}} {\,\mathrm{d}}\lambda - \frac{{\varepsilon}}{2} \label{eq:euc5} \\ & \geq \int_X f {\,\mathrm{d}}{\widehat{\lambda_X}} - {\varepsilon}, \label{eq:euc6}\end{aligned}$$ where  holds because $\mu_t$ is supported on $X$, because $t \geq T_1$,  because $\bar{f}$, $F^t$ and $\mu_t$ are nonnegative,  because $F^t * \mu_t = K^t\mu_t$,  by Lemma \[lemma:supertypical\], and  because $t \geq T_2$ and $f \geq 0$. The claimed inequality  follows. Now observe that if $f \in C(X)$ satisfies  then so does $f + c$ for all constants $c$. But every function in $C(X)$ can be written as the sum of a nonnegative function in $C(X)$ and a constant, so  holds for all $f \in C(X)$. Let $f \in C(X)$. Applying  to $-f$ in place of $f$ gives $$\limsup_{t \to \infty} \int_X f {\,\mathrm{d}}\mu_t \leq \int_X f {\,\mathrm{d}}{\widehat{\lambda_X}},$$ which together with  itself implies that $$\lim_{t \to \infty} \int_X f {\,\mathrm{d}}\mu_t = \int_X f{\,\mathrm{d}}{\widehat{\lambda_X}}.$$ This completes the proof. Let $X \subseteq {{\mathbb}{R}}^n$ be a compact set of nonzero volume. Then ${\mathrm{supp} \, }\mu_t \to X$ in the Hausdorff metric ${d_{\mathrm{H}}}$ as $t \to \infty$. Indeed, Corollary \[cor:support\] applied to the similarity kernel $K^t$ gives $t {d_{\mathrm{H}}}(X, {\mathrm{supp} \, }\mu_t) \leq {H_{\max}(tX)}$, so $${d_{\mathrm{H}}}(X, {\mathrm{supp} \, }\mu_t) \leq \frac{{H_{\max}(tX)}}{t} = \frac{{H_{\max}(tX)}}{\log t} \cdot \frac{\log t}{t} \to n \cdot 0 = 0$$ as $t \to \infty$, by Theorem \[minkowski\]. (The same argument applies to any compact metric space of finite Minkowski dimension.) So when $t$ is large, the support of $\mu_t$ is Hausdorff-close to $X$. On the other hand, the support of the uniform measure $\lim_{t \to \infty} \mu_t = {\widehat{\lambda_X}}$ need not be $X$: some nonempty open sets may have measure zero. Any nontrivial union of an $n$-dimensional set with a lower-dimensional set provides an example. Open questions {#S_conjectures} ============== (1) Maximum diversity is a numerical invariant of compact metric spaces (and more generally, symmetric spaces with similarity). What properties does this invariant have with respect to products, unions, etc., of spaces? Similarly, what are the maximising measures on a product or union of spaces, and what is the uniform measure? 
(2) Almost nothing is known about the maximising measures on specific non-finite metric spaces. For instance, what is the maximising measure on a Euclidean ball or cube? We do not even know the support of the maximising measure. We conjecture that in the case of a Euclidean ball, the support of the maximising measure is a finite union of concentric spheres. (3) The uniform measure, when defined, provides a canonical way of equipping a metric space with a probability measure. But so too does the Hausdorff measure. More exactly, if the Hausdorff dimension $d$ of $X$ is finite then we have the Hausdorff measure $\mathcal{H}^d$ on $X$, which if $0 < \mathcal{H}^d(X) < \infty$ can be normalised to a probability measure on $X$. What is the relationship between the Hausdorff probability measure and the uniform measure? It is probably not simple: for example, on $\{1, 1/2, 1/3, \ldots, 0\} \subseteq {{\mathbb}{R}}$, the uniform measure is well-defined (it is $\delta_0$), but the Hausdorff probability measure is not. The fact that the growth of ${D_{\max}(tX)}$ is governed by the Minkowski dimension (Theorem \[minkowski\]) also suggests a link between the uniform measure and the Minkowski content. (4) What is the relationship between our notion of uniform measure on a compact metric space and that proposed by Ostrovsky and Sirota [@OstrovskyUniform2014] (which is based on entropy of a different kind)? (5) For *finite* spaces with similarity, the diversity measures studied here were first introduced in ecology [@LeinsterMeasuring2012] and have been successfully applied there. What are the biological applications of our diversity measures on infinite compact spaces? It may seem implausible that there could be any, since the points of the space are usually interpreted as species. However, in microbial biology it is common to treat the space of possible organisms as a continuum. Sometimes groupings are created, such as serotypes (strains) of a virus or operational taxonomic units (genetically similar classes) of bacteria, but it is recognised that such divisions can be artificial. What information do our diversity measures, and maximum diversity, convey about continuous spaces of organisms? [^1]: School of Mathematics, University of Edinburgh; Tom.Leinster@ed.ac.uk [^2]: School of Mathematics, University of Edinburgh; Emily.Roff@ed.ac.uk
--- abstract: 'This paper is devoted to the mathematical analysis of some Diffuse Interface systems which model the motion of a two-phase incompressible fluid mixture in presence of capillarity effects in a bounded smooth domain. First, we consider a two-fluid parabolic-hyperbolic model that accounts for unmatched densities and viscosities without diffusive dynamics at the interface. We prove the existence and uniqueness of local solutions. Next, we introduce dissipative mixing effects by means of the mass-conserving Allen-Cahn approximation. In particular, we consider the resulting nonhomogeneous Navier-Stokes-Allen-Cahn and Euler-Allen-Cahn systems with the physically relevant Flory-Huggins potential. We study the existence and uniqueness of global weak and strong solutions and their separation property. In our analysis we combine energy and entropy estimates, a novel end-point estimate of the product of two functions, and a logarithmic type Gronwall argument.' address: - | $^\dagger$Department of Mathematics & Institute for Scientific Computing and Applied Mathematics\ Indiana University\ Bloomington, IN 47405, USA - | $^\ddagger$Dipartimento di Matematica\ Politecnico di Milano\ Milano 20133, Italy - | $^\ast$ School of Mathematical Sciences\ Key Laboratory of Mathematics for Nonlinear Sciences (Fudan University), Ministry of Education\ Shanghai Key Laboratory for Contemporary Applied Mathematics\ Fudan University\ Shanghai 200433, China author: - 'Andrea Giorgini$^\dagger$, Maurizio Grasselli$^\ddagger$ & Hao Wu$^\ast$' title: | Diffuse Interface Models for Incompressible Binary Fluids\ and the Mass-Conserving Allen-Cahn Approximation --- Introduction ============ The flow of a two-phase or multicomponent incompressible mixture is nowadays one of the most attractive theoretical and numerical problems in Fluid Mechanics (see, for instance, [@AMW1998; @GKL2018; @GZ2018; @LIN2012; @PS2016] and the references therein). This is mainly due to the interplay between the motion of the interface separating the two fluids (or phases) and the surrounding fluids. A natural description of this phenomenon is based on a free-boundary formulation. Let $\Omega$ be a bounded domain in $\mathbb{R}^d$ with $d=2,3$, and $T>0$. We assume that $\Omega$ is filled by two incompressible fluids (e.g. two liquids or a liquid and a gas), and we denote by $\Omega_1=\Omega_1(t)$ and $\Omega_2=\Omega_2(t)$ the subsets of $\Omega$ containing, respectively, the first and the second fluid portions for any time $t\geq 0$. The equations of motion are $$\label{FB} \begin{cases} \rho_1 \big( \partial_t \uu_1 + \uu_1 \cdot \nabla \uu_1 \big) - \nu_1 \div D \uu_1 +\nabla p_1 =0, \quad &\div \uu_1 =0, \quad \text{ in } \Omega_1\times (0,T),\\ \rho_2 \big( \partial_t \uu_2 + \uu_2 \cdot \nabla \uu_2 \big) - \nu_2 \div D \uu_2 +\nabla p_2 =0, \quad &\div \uu_2 =0, \quad \text{ in } \Omega_2 \times (0,T). \end{cases}$$ Here, $\uu_1$, $\uu_2$ and $p_1$, $p_2$ are, respectively, the velocities and pressures of the two fluids, while $\rho_1, \rho_2$ and $\nu_1,\nu_2$ are the (constant) densities and viscosities of the two fluids, respectively. The symmetric gradient is $D=\frac12 (\nabla +\nabla^t)$. The effects of gravity are neglected for simplicity. 
Denoting by $\Gamma=\Gamma(t)$ the (moving) interface between $\Omega_1$ and $\Omega_2$, system can be equipped with the classical free boundary conditions $$\label{Y-L} \uu_1=\uu_2, \quad \big( \nu_1 D \uu_1 -\nu_2 D \uu_2 \big) \cdot \n_\Gamma = (p_1-p_2+\sigma H)\n_\Gamma \quad \text{ on }\Gamma \times (0,T),$$ together with the no-slip boundary condition $$\label{FB-ns} \uu_1=\mathbf{0}, \quad \uu_2=\mathbf{0} \quad \text{ on }\partial \Omega \times (0,T).$$ The vector $\n_\Gamma$ in is the unit normal vector of the interface from $\partial \Omega_1(t)$, $H$ is the mean curvature of the interface ($H= - \div \n_\Gamma$). In this setting, $\Gamma(t)$ is assumed to move with the velocity given by $$\label{inter-vel} V_{\Gamma(t)}= \uu \cdot \n_{\Gamma(t)}.$$ The coefficient $\sigma>0$ is the surface tension, which introduces a discontinuity in the normal stress proportional to the mean curvature of the surface. Since $\ddt \mathcal{H}^{d-1}(\Gamma(t))= - \int_{\Gamma(t)} H V_\Gamma \, \d \mathcal{H}^{d-1}$, where $\mathcal{H}^{d-1}$ is the $d-1$-dimensional Hausdorff measure, the (formal) energy identity for system - is $$\label{FB-energy} \ddt \Big\lbrace \sum_{i=1,2} \int_{\Omega_i(t)} \frac{\rho_i}{2}|\uu_i|^2 \, \d x + \sigma \mathcal{H}^{d-1}(\Gamma(t)) \Big\rbrace + \sum_{i=1,2} \int_{\Omega_i(t)} \nu_i |D \uu_i|^2 \, \d x =0.$$ We refer the reader to [@A2007; @DS1995; @Plo1993; @PS2010-1; @PS2016; @T1995; @TT1995] for the analysis of classical and varifold solutions to the system -. The twofold Lagrangian and Eulerian nature of system - has led to the breakthrough idea (mainly from numerical analysts, see the review [@SS2003]) to reformulate the above system in the Eulerian description by interpreting the effect of the surface tension as a singular force term localized at the interface. Let us introduce the so-called level set function $\phi: \Omega\times(0,T) \rightarrow \mathbb{R}$ such that $$\phi>0 \quad \text{ in } \Omega_1\times (0,T), \quad \phi<0 \quad \text{ in }\Omega_2 \times (0,T), \quad \phi=0 \quad \text {on } \Gamma \times (0,T),$$ namely the interface is the zero level set of $\phi$. We consider the Heaviside type function $$\label{Heav} K(\phi)= \begin{cases} 1 \quad &\phi>0,\\ 0 \quad &\phi=0,\\ -1 \quad &\phi<0, \end{cases}$$ and we denote by $\uu$ the velocity such that $\uu=\uu_1$ in $\Omega_1\times (0,T)$ and $\uu=\uu_2$ in $\Omega_2\times (0,T)$. It was shown in [@CHMS1994 Section 2] that the system - is formally equivalent to $$\label{FB2} \begin{cases} \rho(\phi) \big( \partial_t \uu + \uu \cdot \nabla \uu\big) - \div (\nu(\phi)D\uu) + \nabla P=\sigma H(\phi) \nabla \phi \delta (\phi),\\ \div \uu=0,\\ \partial_t \phi +\uu\cdot \nabla \phi = 0, \end{cases} \quad \text{in }\Omega\times(0,T),$$ together with the boundary condition . Here $$\rho(\phi)= \rho_1 \frac{1+K(\phi)}{2}+ \rho_2 \frac{1-K(\phi)}{2}, \ \nu(\phi)=\nu_1 \frac{1+K(\phi)}{2}+ \nu_2 \frac{1-K(\phi)}{2}, \ H(\phi)= \div \left( \frac{\nabla \phi}{|\nabla \phi|} \right).$$ Here, $\delta$ is the Dirac distribution, and $\nabla \phi$ is oriented as $\n_\Gamma$. The equation $_3$ represents the motion of the interface $\Gamma$ that is simply transported by the flow. This follows from the immiscibility condition, which translates into $(\uu, 1) \in \text{Tan} \lbrace (x,t)\in \Omega \times (0,T): x\in \Gamma(t) \rbrace$. Although seems to be more amenable than -, the presence of the Dirac mass still makes the analysis challenging. 
In the literature, two different approaches have been used to overcome the singular nature of the right-hand side of $_1$, which both rely on the idea of continuous transition at the interface. The first approach is the Level Set method developed in the seminal works [@CHMS1994; @OK2005; @OS1988; @SSO1994] (see also the review [@SS2003]). This approach consists in approximating the Heaviside function $K(\phi)$ by a smooth regularization $K_\varepsilon(\phi)$. More precisely, for a given $\varepsilon>0$, we introduce the function $$\label{Heav-e} K_\varepsilon(\phi)= \begin{cases} 1 \quad &\phi>\varepsilon,\\ \frac{\phi}{\varepsilon} + \frac{1}{\pi} \sin \big( \frac{\pi \phi}{\varepsilon}\big) \quad &|\phi| \leq \varepsilon,\\ -1 \quad &\phi<-\varepsilon. \end{cases}$$ The resulting approximating system reads as follows $$\label{LS1} \begin{cases} \rho_\varepsilon(\phi) \big( \partial_t \uu + \uu \cdot \nabla \uu\big) - \div (\nu_\varepsilon(\phi)D\uu) + \nabla P=\sigma H(\phi) \nabla \phi \delta_\varepsilon (\phi),\\ \div \uu=0,\\ \partial_t \phi +\uu\cdot \nabla \phi = 0, \end{cases} \quad \text{in } \Omega \times (0,T),$$ where $$\rho_\varepsilon(\phi)= \rho_1 \frac{1+K_\varepsilon(\phi)}{2}+ \rho_2 \frac{1-K_\varepsilon(\phi)}{2}, \quad \nu_\varepsilon(\phi)=\nu_1 \frac{1+K_\varepsilon(\phi)}{2}+ \nu_2 \frac{1-K_\varepsilon(\phi)}{2}, \quad \delta_\varepsilon= \frac{\mathrm{d} K_\varepsilon(\phi)}{\mathrm{d} \phi}.$$ As a consequence of the approximation , the thickness of the interface is approximately $\frac{2\varepsilon}{|\nabla \phi|}$. This necessarily requires that $|\nabla \phi|=1$ when $|\phi|\leq \varepsilon$, namely $\phi$ is a signed-distance function near the interface. However, even if the initial condition is suitably chosen, the evolution under the transport equation $_3$ does not guarantee that this property remains true for all time. This fact has led to different numerical algorithms aiming to avoid the expansion of the interface (see [@SS2003] and the references therein). In addition, as pointed out in [@LT1998], another drawback of this approach is that the dynamics is sensitive to the particular choice of the approximation for the surface stress tensor. The second approach is the so-called Diffuse Interface method (see [@AMW1998; @FLSY2005; @GKL2018]). This is based on the postulate that the interface is a layer with positive volume, whose thickness is determined by the interactions of particles occurring at small scales. In this context, the auxiliary function $\phi$ represents the difference between the fluid concentrations (or rescaled density/volume fraction). This function may exhibit a smooth transition at the interface, which is identified with the intermediate level sets between the two values $1$ and $-1$. The evolution equations for the state variables (density, velocity, concentration) are derived by combining the theory of binary mixtures and the energy-based formalism from thermodynamics and statistical mechanics. In this framework, the surface stress tensor is replaced by a diffuse stress tensor whose action is essentially localized in the regions of high gradients, namely, $-\sigma \div(\nabla \phi \otimes \nabla \phi)$. This tensor is known as the (Korteweg) capillary tensor (cf., e.g., [@AMW1998]). 
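For concreteness, the regularised profiles entering the Level Set approximation above admit a very short implementation. The following sketch (illustrative code, not from the paper) evaluates $K_\varepsilon$, $\delta_\varepsilon = \mathrm{d}K_\varepsilon/\mathrm{d}\phi$ and the smoothed density $\rho_\varepsilon$, with the transition branch written so that $K_\varepsilon$ increases continuously from $-1$ to $1$ across the layer $|\phi|\leq\varepsilon$; the parameter values are arbitrary.

```python
# Illustrative implementation (not from the paper) of the regularised Heaviside
# K_eps, its derivative delta_eps and the smoothed density rho_eps used in the
# Level Set approximation.
import numpy as np


def K_eps(phi, eps):
    """Continuous transition from -1 to 1 across the layer |phi| <= eps."""
    phi = np.asarray(phi, dtype=float)
    core = phi / eps + np.sin(np.pi * phi / eps) / np.pi
    return np.where(phi > eps, 1.0, np.where(phi < -eps, -1.0, core))


def delta_eps(phi, eps):
    """delta_eps = dK_eps/dphi: supported in |phi| <= eps, with integral 2."""
    phi = np.asarray(phi, dtype=float)
    core = (1.0 + np.cos(np.pi * phi / eps)) / eps
    return np.where(np.abs(phi) <= eps, core, 0.0)


def rho_eps(phi, eps, rho1, rho2):
    """Smoothed density: rho1 where phi > 0, rho2 where phi < 0."""
    K = K_eps(phi, eps)
    return rho1 * (1.0 + K) / 2.0 + rho2 * (1.0 - K) / 2.0


phi = np.linspace(-0.3, 0.3, 7)
print(K_eps(phi, 0.1))
print(delta_eps(phi, 0.1))
print(rho_eps(phi, 0.1, rho1=1000.0, rho2=1.0))  # e.g. water/air-like densities
```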
The resulting Diffuse Interface system, also called “complex fluid” model (see, e.g., [@LIN2012 Sec.5]), is the following $$\label{Complex} \begin{cases} \rho(\phi) \big( \partial_t \uu + \uu \cdot \nabla \uu \big) - \div (\nu(\phi)D\uu) + \nabla P= -\sigma \div(\nabla \phi \otimes \nabla \phi),\\ \div \uu=0,\\ \partial_t \phi +\uu\cdot \nabla \phi = 0, \end{cases} \quad \text{in } \Omega \times(0,T),$$ equipped with the no-slip boundary condition $$\label{Complex-bc} \uu=\mathbf{0} \quad \text{on } \partial \Omega \times (0,T).$$ Here $$\label{rhonu} \rho(\phi)= \rho_1 \frac{1+\phi}{2}+ \rho_2 \frac{1-\phi}{2}, \quad \nu(\phi)=\nu_1 \frac{1+\phi}{2}+ \nu_2 \frac{1-\phi}{2}.$$ The energy associated to system is defined as $$E(\uu,\phi)= \int_{\Omega} \frac12 \rho(\phi) |\uu|^2+ \sigma \Big( \frac12 |\nabla \phi|^2 + \Psi(\phi) \Big) \, \d x,$$ where $\Psi \colon [-1,1] \rightarrow \mathbb{R}$ is a double-well potential, and the corresponding energy identity is $$\label{Complex-EE} \ddt E(\uu, \phi) +\int_{\Omega} \nu(\phi) |D \uu|^2 \, \d x=0.$$ This model dissipates energy due to viscosity, but there are no regularization effects for $\phi$. It is worth noting that is also related to the models for viscoelastic fluids (see, for instance, [@LLZ2005]) or to the two-dimensional incompressible MHD system without magnetic diffusion (cf., e.g., [@RWXZ2014] and the references therein). Notice that, after rescaling the capillary tensor and the free energy by a parameter $\varepsilon$, it is possible to recognize the connection between - and . Indeed, we formally have the convergence of the stress tensor (see, for instance, [@AL2018; @LIN2012; @PS2016] for further details on the sharp interface limit) $$-\int_{\Omega} \sigma \varepsilon \div(\nabla \phi \otimes \nabla \phi) \cdot \vv \, \d x \xrightarrow{\varepsilon\rightarrow 0} \int_{\Gamma} \sigma H \n_\Gamma \cdot \vv \, \d \mathcal{H}^{d-1},$$ where the limit integral corresponds to the weak formulation of , and of the (Helmholtz) free energy $\int_{\Omega} \left(\frac{\varepsilon}{2} |\nabla \phi|^2 + \frac{1}{\varepsilon}\Psi(\phi)\right) \, \d x$ to the area functional $\mathcal{H}^{d-1}(\Gamma)$ (see [@Modica]). Before proceeding with the introduction of diffusive relaxations of the transport equation and their physical motivations, it is important to point out two main properties of $_2$-$_3$: - Conservation of mass: $$\label{CM} \int_{\Omega} \phi(t)\, \d x=\int_{\Omega} \phi_0\, \d x, \quad \forall \, t \in [0,T].$$ - Conservation of $L^\infty(\Omega)$-norm: $$\label{CLinf} \| \phi(t)\|_{L^\infty(\Omega)}=\| \phi_0\|_{L^\infty(\Omega)}, \quad \forall \, t\in [0,T],$$ which implies that $$\label{Crange} -1\leq \phi_0(x)\leq 1 \quad \text{a.e. in} \ \Omega \quad \Rightarrow \quad -1\leq \phi(x,t)\leq 1 \quad \text{a.e. in} \ \Omega \times (0,T).$$ The theory of binary mixtures takes into account dissipative mechanisms occurring at the interfaces. The molecules of two fluids interact at a microscopic scale, and their disposition is the result of a competition between the diffusion of molecules and the attraction of molecules of the same fluid (mixing vs demixing or “philic” vs “phobic” effects). This liquid-liquid phase separation phenomenon, though already well-known in Materials Science, has recently become a sort of paradigm in Cell Biology (see, for instance, [@AD2019; @BTP2015; @HWF2014; @ShB17]). 
This competition is described by the Helmholtz free energy of the system $\E(\phi)$, defined by $$\E(\phi)= \int_{\Omega} \frac12 |\nabla \phi|^2 + \Psi(\phi) \, \d x.$$ The first term describes weakly non-local interactions (see [@CH1958], cf. also [@E1989]). The potential $\Psi$ is the Flory-Huggins free energy density[^1] $$\label{Log} \Psi(s)=\frac{\theta}{2}\left[ (1+s)\log(1+s)+(1-s)\log(1-s)\right]-\frac{\theta_0}{2} s^2, \quad s \in [-1,1].$$ We consider hereafter the case $0<\theta<\theta_0$, which implies, in particular, that $\Psi$ is a non-convex potential[^2]. It is worth mentioning that the Landau theory that leads to the well-known Ginzburg-Landau free energy is just an approximation of the above $\E(\phi)$ obtained through a Taylor expansion of the logarithmic potential $\Psi$. This choice is very common in the related literature (see, for instance, [@CM1995] and [@ES1986]). However, it has the main drawback that, in general, the solution does not take values in the physical interval $[-1,1]$ (cf. ). In order to include dissipative mechanisms in the dynamics of the concentration, we define the first variation of the Helmholtz free energy. This is called the chemical potential and it is given by $$\mu= \frac{\delta \E(\phi)}{\delta \phi}= -\Delta \phi+ \Psi'(\phi).$$ Two fundamental relaxation models proposed in the Diffuse Interface theory for binary mixtures are the following modifications of the transport equation $_3$: - **Mass-conserving Allen-Cahn dynamics**[^3] ([@RS1992; @YFLS2006]) $$\partial_t \phi +\uu\cdot \nabla \phi + \gamma \big(\mu- \overline{\mu}\big)=0 \quad \text{in} \ \Omega \times (0,T), \quad \partial_\n \phi=0 \quad \text{on} \ \partial \Omega \times (0,T);$$ - **Cahn-Hilliard dynamics** ([@CH1958; @CH1961]) $$\partial_t \phi +\uu\cdot \nabla \phi - \gamma \Delta \mu=0 \quad \text{in} \ \Omega \times (0,T), \quad \partial_\n \phi= \partial_\n \mu=0 \quad \text{on} \ \partial \Omega \times (0,T).$$ Here $\overline{\mu}$ is the spatial average defined by $$\overline{\mu}= \frac{1}{|\Omega|}\int_{\Omega} \mu \, \d x,$$ and $\gamma$ is the elastic relaxation time. We point out that from the thermodynamic viewpoint the relaxation terms describe dissipative diffusional flux at the interface (cf. [@HMR2012; @LT1998]). As for the transport equation, both the mass-conserving Allen-Cahn and Cahn-Hilliard equations satisfy the conservation properties and . In addition, their dynamics maintain the integrity of the interface: the mixing-demixing mechanism (which also translates into $\mu$) allows a balance which avoids uncontrolled expansion or shrinkage of the interface layer (cf. [@FLSY2005]). In this work, we study a Diffuse Interface model that has been recently derived in [@GKL2018 Part I, Chap.2, 4.2.1]. It accounts for unmatched densities and viscosities of the fluids, as well as dissipation due to interface mixing. The dynamics of $\phi$ is described through the following modification of the transport equation $$\partial_t \phi +\uu\cdot \nabla \phi + \gamma \Big( \mu + \rho'(\phi)\frac{|\uu|^2}{2} - \xi \Big)=0, \quad \text{ in} \ \Omega \times (0,T),$$ where $\uu$ denotes the volume averaged fluid velocity and $$\xi(t)= \frac{1}{|\Omega|}\int_\Omega \mu+ \rho'(\phi)\frac{|\uu|^2}{2} \, \d x, \quad \text{ in} \ (0,T).$$ Here the dissipation mechanism is similar to that of the mass-conserving Allen-Cahn dynamics, but it also includes an extra term due to the difference of densities. 
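To illustrate the mass-conserving Allen-Cahn relaxation introduced above, here is a rough one-dimensional sketch (no flow, matched densities, illustrative parameters; it is not the scheme analysed in this paper) with the Flory-Huggins potential defined above: an explicit finite-difference iteration of $\partial_t \phi = -\gamma(\mu - \overline{\mu})$ on a periodic grid. Subtracting the spatial average of $\mu$ at every step keeps the total mass $\int_\Omega \phi \, \d x$ constant up to round-off, in agreement with the conservation of mass property.

```python
# Rough 1D sketch (no flow, matched densities, illustrative parameters only) of
# the mass-conserving Allen-Cahn relaxation
#     d/dt phi = -gamma * (mu - mean(mu)),   mu = -Delta phi + Psi'(phi),
# with the Flory-Huggins potential Psi (theta = 1, theta_0 = 2 assumed here).
import numpy as np

theta, theta0, gamma = 1.0, 2.0, 1.0
N, L = 128, 2 * np.pi
h = L / N
x = np.arange(N) * h
dt, steps = 1e-4, 2000

phi = 0.1 + 0.3 * np.cos(x)                 # initial datum well inside (-1, 1)


def psi_prime(s):
    return 0.5 * theta * np.log((1 + s) / (1 - s)) - theta0 * s


def laplacian(u):
    # periodic second difference
    return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h**2


mass0 = phi.mean()
for n in range(steps):
    mu = -laplacian(phi) + psi_prime(phi)
    phi = phi - dt * gamma * (mu - mu.mean())

print("initial mass:", mass0)
print("final mass  :", phi.mean())            # conserved up to round-off
print("phi range   :", phi.min(), phi.max())  # stays inside (-1, 1) here
```

The Cahn-Hilliard relaxation conserves the mass through the divergence-form term $\gamma \Delta \mu$ instead; both mechanisms leave $\int_\Omega \phi \, \d x$ unchanged.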
We thus have the nonhomogeneous Navier-Stokes-Allen-Cahn system $$\label{Complex2} \begin{cases} \rho(\phi) \big( \partial_t \uu + \uu \cdot \nabla \uu \big) - \div (\nu(\phi)D\uu) + \nabla P= -\sigma \div(\nabla \phi \otimes \nabla \phi),\\ \div \uu=0,\\ \partial_t \phi +\uu\cdot \nabla \phi + \gamma \big(\mu + \rho'(\phi)\frac{|\uu|^2}{2} - \xi\big) =0, \end{cases} \quad \text{in } \Omega \times (0,T).$$ This system is usually subject to a no-slip boundary condition for $\uu$ and a homogeneous Neumann boundary condition for $\phi$, namely $$\label{bc-C2} \uu=\mathbf{0},\quad \partial_{\n} \phi =0 \quad \text{ on } \partial\Omega \times (0,T).$$ In the last part of this work, we will consider the mass-conserving Euler-Allen-Cahn system $$\label{Euler} \begin{cases} \partial_t \uu + \uu \cdot \nabla \uu + \nabla P= - \sigma \div(\nabla \phi \otimes \nabla \phi),\\ \div \uu=0,\\ \partial_t \phi +\uu\cdot \nabla \phi + \gamma \big(\mu- \overline{\mu}\big)=0, \end{cases} \quad \text{ in } \Omega \times (0,T),$$ endowed with the boundary conditions $$\label{bc-EAC} \uu\cdot \n =0,\quad \partial_{\n} \phi =0 \quad \text{ on } \partial\Omega \times (0,T).$$ The above model is obtained from in the case of inviscid flow and matched densities. We observe that other boundary conditions can be considered, for instance, periodic ones (cf. [@GKL2018 Part I, Chap.2, 4.2.3]); see also [@MCYZ2017] for moving contact lines. The mathematical literature concerning systems similar to the Navier-Stokes-Allen-Cahn system - has been widely developed in recent years, in terms of both physical modeling and well-posedness analysis. First, we recall that there are different ways of accounting for the unmatched densities for incompressible binary mixtures. Among the existing literature, we mention [@AGG2012; @B2001; @HMR2012; @HMR2012-2; @FK2017; @LT1998]. The model herein studied is derived via an energetic variational approach in [@JLL2017] (see also [@GKL2018] and, for the Navier-Stokes-Cahn-Hilliard system, [@LSY2015]). The system - has been investigated in [@JLL2017] in the case of constant viscosity and of the standard Allen-Cahn equation with the regular Landau potential $\Psi_0(s)=\frac14(s^2-1)^2$ and no mass conservation. The authors prove the existence of a global weak solution in three dimensions and the existence and uniqueness of the global strong solution in two dimensions. In the latter case, they also show the convergence of a weak solution to a single stationary state and they establish the existence of a global attractor. Thanks to their choice of potential and the absence of mass constraint, the authors can easily ensure that $\phi$ takes values in the physical range $[-1,1]$. This fact is crucial for their proofs. However, the mass constraint would not allow one to establish a comparison principle even if the double-well potential is smooth. We also mention the previous contributions [@GG2010; @GG2010-2; @HM2019; @W17; @WX13; @XZL2010] for the case with constant density, and [@AL2018] for the sharp interface limit in the Stokes case. Additionally, there are works devoted to Navier-Stokes-Allen-Cahn models in which the density is regarded as an independent variable (see, for instance, [@FL2019; @LDH2016; @LH2018]). In these works the potential is the classical Landau double-well and there is no mass conservation. The (non-conserved) compressible case (see [@Bl1999; @FK2017] for modeling issues) has been analyzed, for instance, in [@DLL2013; @FRPS2010; @K2012; @YZ2019] (see also [@W2011] for sharp interface limits). 
On the other hand, in comparison with the viscous case above-mentioned, only a few works have addressed the Euler-Allen-Cahn system -. In this respect, we mention [@ZGH2011] (see also [@Gal2016] for a nonlocal model), where the authors prove local existence of smooth solutions for the Euler-Allen-Cahn system in the case of no mass conservation and the Landau potential. The aim of this paper is to address the existence, uniqueness and (possibly) regularity of the solutions to the aforementioned Diffuse Interface systems[^4]: the complex fluid model -, the Navier-Stokes-Allen-Cahn system -, and the Euler-Allen-Cahn system -. On the one hand, the purpose of our analysis is to stay as close as possible to a thermodynamically grounded framework by keeping the densities and viscosities dependent on $\phi$ and by using the physically relevant Flory-Huggins potential . Although this choice requires some technical effort, it provides results which are physically more reasonable. On the other hand, by working in this general setting, we demonstrate that the dynamics originating from a general initial condition become global (in time) when the mass-conserving Allen-Cahn relaxation is taken into account. The latter is achieved in three dimensions for finite energy (weak) solutions, and in two dimensions even for more regular solutions, both in the case of non-constant density and viscosity and in the case of constant density and zero viscosity. Before concluding this introduction, we make some more precise comments on the analysis and on the main novelties of our techniques. First, we recall that the existence and uniqueness of local (in time) regular solutions to the complex fluid system - have been proven in [@LLZ2005; @LZ2008] for constant density and viscosity. Here we generalize this result by allowing $\rho$ and $\nu$ to depend on $\phi$ and taking a more general initial datum $(\uu_0,\phi_0) \in (\V_\sigma\cap \mathbf{H}^2(\Omega)) \times W^{2,p}(\Omega)$, with $p>2$ in two dimensions and $p>3$ in three dimensions (see Theorem \[CF-T\]). Next, we study the Navier-Stokes-Allen-Cahn system -. We prove the existence of a global weak solution with $(\uu_0,\phi_0)\in \H_\sigma \times H^1(\Omega)$ (see Theorems \[weak-D\] and \[W-S\]), and the existence of a global strong solution with $(\uu_0,\phi_0)\in \V_\sigma \times H^2(\Omega)$ such that $\Psi'(\phi_0)\in L^2(\Omega)$ (see Theorem \[strong-D\]). For the latter, we combine a classical energy approach, a new end-point estimate of the product of two functions in $L^2(\Omega)$ (see Lemma \[result1\] below), and a new estimate for the Stokes system with non-constant viscosity. The proof is concluded with a logarithmic Gronwall argument that leads to double-exponential control. However, in light of the singularity of the Flory-Huggins potential, the uniqueness of these strong solutions seems to be a hard task. To overcome this issue, we then establish global estimates on the derivatives of the entropy[^5] (entropy estimates) provided that $\Vert\rho^\prime\Vert_{L^\infty(-1,1)}$ is small enough and $F''(\phi_0)\in L^1(\Omega)$. These estimates allow us to prove that $F''(\phi)^2 \log (1+F''(\phi)) \in L^1(\Omega \times (0,T))$, and, in turn, the uniqueness of such strong solutions. As a consequence of these entropy estimates, we achieve the so-called uniform separation property. The latter says that $\phi$ stays uniformly away from the pure states in finite time[^6]. This fact, besides being physically relevant, entails further regularity properties of the solution. 
Note that in the case of a smooth potential and no mass conservation (cf. [@JLL2017]) this issue is trivial since the potential is smooth and a comparison principle holds. Finally, we consider the inviscid case, namely the Euler-Allen-Cahn system -. Although this system turns out to be similar to the MHD equations with magnetic diffusion and without viscosity, the classical argument in the literature (see, e.g., [@CW2011]) does not apply because of the logarithmic potential. In our proof, it is crucial to make use of the structure of the incompressible Euler equations $_1$-$_2$, and the end-point estimate of the product (Lemma \[result1\]). This gives the existence of global solutions with $(\uu_0,\phi_0) \in (\H_\sigma\cap \mathbf{H}^1(\Omega)) \times H^2(\Omega)$ in two dimensions. Next, in light of the entropy estimates, we also prove the existence of smoother global solutions originating from $(\uu_0,\phi_0) \in (\H_\sigma\cap \mathbf{W}^{1,p}(\Omega)) \times H^2(\Omega)$ provided that $p>2$ and $\nabla \mu_0:= \nabla (-\Delta \phi_0+\Psi'(\phi_0))\in \mathbf{L}^2(\Omega)$. **Plan of the paper.** In Section \[2\] we introduce the notation, some functional inequalities and then prove an estimate for the product of two functions. In Section \[S-Complex\] we show the local well-posedness of system -. Section \[S-WEAK\] is devoted to the existence of global weak solutions for the Navier-Stokes-Allen-Cahn system -. In Section \[S-STRONG\] we study the existence and uniqueness of strong solutions to the Navier-Stokes-Allen-Cahn system -. Section \[EAC-sec\] is devoted to the global existence of solutions to the Euler-Allen-Cahn system -. In Appendix \[App-0\] we prove a result on the Stokes problem with variable viscosity, and in Appendix \[App\] we recall the Osgood lemma and two logarithmic versions of the Gronwall lemma. Preliminaries {#2} ============= Notation -------- For a real Banach space $X$, its norm is denoted by $\|\cdot\|_{X}$. The symbol $\langle\cdot, \cdot\rangle_{X',X}$ stands for the duality pairing between $X$ and its dual space $X'$. The boldface letter $\bm{X}$ denotes the vectorial space endowed with the product structure. We assume that $\Omega\subset \mathbb{R}^d$, $d=2,3$, is a bounded domain with smooth boundary $\partial \Omega$, $\n$ is the unit outward normal vector on $\partial \Omega$, and $\partial_\n$ denotes the outer normal derivative on $\partial \Omega$. We denote the Lebesgue spaces by $L^p(\Omega)$ $(p\geq 1)$ with norms $\|\cdot\|_{L^p(\Omega)}$. When $p=2$, the inner product in the Hilbert space $L^2(\Omega)$ is denoted by $(\cdot, \cdot)$. For $s\in \mathbb{R}$, $p\geq 1$, $W^{s,p}(\Omega)$ is the Sobolev space with corresponding norm $\|\cdot\|_{W^{s,p}(\Omega)}$. If $p=2$, we use the notation $W^{s,p}(\Omega)=H^s(\Omega)$. For every $f\in (H^1(\Omega))'$, we denote by $\overline{f}$ the generalized mean value over $\Omega$ defined by $\overline{f}=|\Omega|^{-1}\langle f,1\rangle_{(H^1(\Omega))',H^1(\Omega)}$. If $f\in L^1(\Omega)$, then $\overline{f}=|\Omega|^{-1}\int_\Omega f \, \d x$. 
Thanks to the generalized Poincaré inequality, there exists a positive constant $C=C(\Omega)$ such that $$\label{normH1-2} \| f\|_{H^1(\Omega)}\leq C \Big(\| \nabla f\|_{L^2(\Omega)}^2+ \Big|\int_{\Omega} f \, \d x\Big|^2\Big)^\frac12, \quad \forall \, f \in H^1(\Omega).$$ We introduce the Hilbert space of solenoidal vector-valued functions $$\begin{aligned} &\H_\sigma=\{ \uu\in \mathbf{L}^2(\Omega): \mathrm{div}\, \uu=0,\ \uu\cdot \n =0\quad \text{on}\ \partial \Omega\} = \overline{C_{0,\sigma}^\infty(\Omega)}^{\mathbf{L}^2(\Omega)},\\ & \V_\sigma =\{ \uu\in \mathbf{H}^1(\Omega): \mathrm{div}\, \uu=0,\ \uu=\mathbf{0}\quad \text{on}\ \partial \Omega\}=\overline{C_{0,\sigma}^\infty(\Omega)}^{\mathbf{H}^1(\Omega)},\end{aligned}$$ where $C_{0,\sigma}^\infty(\Omega)$ is the space of divergence free vector fields in $C_{0}^\infty(\Omega)$. We also use $( \cdot ,\cdot )$ and $\| \cdot \|_{L^2(\Omega)}$ for the inner product and the norm in $\H_\sigma$. The space $\V_\sigma$ is endowed with the inner product and norm $( \uu,\vv )_{\V_\sigma}= ( \nabla \uu,\nabla \vv )$ and $\|\uu\|_{\V_\sigma}=\| \nabla \uu\|_{L^2(\Omega)}$, respectively. We denote by $\V_\sigma'$ its dual space. We recall the Korn inequality $$\label{KORN} \|\nabla\uu\|_{L^2(\Omega)} \leq \sqrt2\|D\uu\|_{L^2(\Omega)} \leq \sqrt2 \| \nabla \uu\|_{L^2(\Omega)}, \quad \forall \, \uu \in \V_\sigma,$$ where $D\uu = \frac12\big(\nabla \uu+ (\nabla \uu)^t\big)$. We define the Hilbert space $\W_\sigma= \mathbf{H}^2(\Omega)\cap \V_\sigma$ with inner product and norm $ ( \uu,\vv)_{\W_\sigma}=( \A\uu, \A \vv )$ and $\| \uu\|_{\W_\sigma}=\|\A \uu \|_{L^2(\Omega)}$, where $\A$ is the Stokes operator. We recall that there exists $C>0$ such that $$\label{H2equiv} \| \uu\|_{H^2(\Omega)}\leq C\| \uu\|_{\W_\sigma}, \quad \forall \, \uu\in \W_\sigma.$$ Analytic tools -------------- We recall the Ladyzhenskaya, Agmon, Gagliardo-Nirenberg, Brezis-Gallouet-Wainger and trace interpolation inequalities: $$\begin{aligned} \label{LADY} &\| f\|_{L^4(\Omega)}\leq C \|f\|_{L^2(\Omega)}^{\frac12}\|f\|_{H^1(\Omega)}^{\frac12}, &&\forall \, f \in H^1(\Omega), \ d=2,\\ \label{GN2} &\| f\|_{L^p(\Omega)}\leq C p^\frac12 \| f\|_{L^2(\Omega)}^{\frac{2}{p}} \| f\|_{H^1(\Omega)}^{1-\frac{2}{p}}, &&\forall \, f \in H^1(\Omega),\ \ 2\leq p<\infty, \ d=2,\\ \label{GN3} &\| f\|_{L^p(\Omega)}\leq C(p) \| f\|_{L^2(\Omega)}^{\frac{6-p}{2p}} \| f\|_{H^1(\Omega)}^{\frac{3(p-2)}{2p}}, &&\forall \, f \in H^1(\Omega),\ \ 2\leq p\leq 6, \ d=3,\\ \label{Agmon2d} &\| f\|_{L^\infty(\Omega)}\leq C \|f\|_{L^2(\Omega)}^{\frac12}\|f\|_{H^2(\Omega)}^{\frac12}, && \forall \, f \in H^2(\Omega),\ d=2,\\ \label{GN-L4} &\| \nabla f\|_{L^{4}(\Omega)}\leq C\| f \|_{H^2(\Omega)}^\frac12 \| f \|_{L^\infty(\Omega)}^\frac12, && \forall \, f \in H^2(\Omega),\ d=3,\\ \label{BGI} &\| f\|_{L^\infty(\Omega)}\leq C \| f\|_{H^1(\Omega)} \log^\frac12 \Big({e}\frac{\| f\|_{H^2(\Omega)}}{\| f\|_{H^1(\Omega)}} \Big), &&\forall \, f \in H^2(\Omega), \ d=2,\\ \label{BGW} &\| f\|_{L^\infty(\Omega)}\leq C(p) \| f\|_{H^1(\Omega)} \log^\frac12 \Big( C(p) \frac{\| f\|_{W^{1,p}(\Omega)}}{\| f\|_{H^1(\Omega)}} \Big) , &&\forall \, f \in W^{1,p}(\Omega), \ p>2, \ d=2,\\ \label{trace} &\| f\|_{L^2(\partial \Omega)} \leq C \| f\|_{L^2(\Omega)}^\frac12 \| f\|_{H^1(\Omega)}^\frac12, &&\forall \, f \in H^1(\Omega), \ d=2.\end{aligned}$$ Here, the constant $C$ depends only on $\Omega$, whereas the constant $C(p)$ depends on $\Omega$ and $p$. 
We now prove the following end-point estimate for the product of two functions, which will play an important role in the subsequent analysis. This is a generalization of [@GMT2019 Proposition C.1]. \[result1\] Let $\Omega$ be a bounded domain in $\mathbb{R}^2$ with smooth boundary. Assume that $f\in H^1(\Omega)$ and $g\in L^p(\Omega)$ for some $p>2$, $g$ is not identical to $0$. Then, we have $$\label{logprod} \| f g\|_{L^2(\Omega)}\leq C \Big(\frac{p}{p-2} \Big)^\frac12 \| f\|_{H^1(\Omega)} \| g\|_{L^2(\Omega)} \log^{\frac12} \Big( {e} |\Omega|^{\frac{p-2}{2p}} \frac{\| g\|_{L^p(\Omega)}}{\| g\|_{L^2(\Omega)}} \Big),$$ for some positive constant $C$ depending only on $\Omega$. Let us consider the Neumann operator $A=-\Delta + I$ on $L^2(\Omega)$ with domain $D(A)=\lbrace u\in H^2(\Omega): \partial_{\n}u=0$ on $\partial \Omega\rbrace$. By the classical spectral theory, there exists a sequence of positive eigenvalues $\lambda_k$ ($k\in \mathbb{N}$) associated with $A$ such that $\lambda_1=1$, $\lambda_{k}\leq \lambda_{k+1}$ and $\lambda_{k}\rightarrow +\infty$ as $k$ goes to $+\infty$. The sequence of eigenfunctions $w_k\in D(A)$ satisfying $A w_k=\lambda_k w_k$ forms an orthonormal basis in $L^2(\Omega)$ and an orthogonal basis in $H^1(\Omega)$. Let us fix $N \in \mathbb{N}_0$ whose value will be chosen later. We write $f$ as follows $$\label{decompsition} f=\sum_{n=0}^N f_n + f_N^{\bot},$$ where $$f_n=\sum_{k:\, {e}^n\leq \sqrt{\lambda_k}<{e}^{n+1}} (f,w_k) w_k, \quad f_{N}^{\bot}= \sum_{k:\,\sqrt{\lambda_k} \geq {e}^{N+1}} (f,w_k) w_k.$$ By using the above decomposition and Hölder’s inequality, we have $$\begin{aligned} \| f g\|_{L^2(\Omega)} \leq \sum_{n=0}^N \| f_n g \|_{L^2(\Omega)}+ \| f_{N}^{\bot} g \|_{L^2(\Omega)} \leq \sum_{n=0}^N \| f_n\|_{L^\infty(\Omega)} \| g\|_{L^2(\Omega)} + \| f_N^{\bot}\|_{L^{p'}(\Omega)} \| g\|_{L^p(\Omega)},\end{aligned}$$ where $p>2$ and $\frac{1}{p}+\frac{1}{p'}=\frac12$. By using and , we obtain $$\begin{aligned} \| fg\|_{L^2(\Omega)} &\leq C \sum_{n=0}^N \| f_n\|_{L^2(\Omega)}^\frac12 \| f_n\|_{H^2(\Omega)}^\frac12 \| g\|_{L^2(\Omega)}+ C \Big(\frac{2p}{p-2}\Big)^\frac12 \| f_N^{\bot}\|_{L^2(\Omega)}^\frac{2}{p'} \| f_N^{\bot}\|_{H^1(\Omega)}^{1-\frac{2}{p'}} \| g\|_{L^p(\Omega)},\end{aligned}$$ for some $C$ independent of $p$. We recall that $$\| f_n\|_{L^2(\Omega)}^2= \sum_{k:\, {e}^n\leq \sqrt{\lambda_k}<{e}^{n+1}} |(f,w_k)|^2 \leq \frac{1}{{e}^{2n}} \sum_{k:\, {e}^n\leq \sqrt{\lambda_k}<{e}^{n+1}} \lambda_k |(f,w_k)|^2 = \frac{1}{{e}^{2n}} \| f_n\|_{H^1(\Omega)}^2,$$ where we have used the fact $D(A^\frac12)=H^1(\Omega)$. Observing that $\partial_\n f_n=0$ on $\partial \Omega$ ($f_n$ is a finite sum of $w_k$’s), by the regularity theory of the Neumann problem, we have $$\begin{aligned} \| f_n\|_{H^2(\Omega)}^2 &\leq C \| A f_n\|_{L^2(\Omega)}^2= C \sum_{k:\, {e}^n\leq \sqrt{\lambda_k}<{e}^{n+1}} \lambda_k^2 |(f,w_k)|^2\\ &\leq C \sum_{k:\, {e}^n\leq \sqrt{\lambda_k}<{e}^{n+1}} {e}^{2(n+1)} \lambda_k |(f,w_k)|^2\\ &\leq C {e}^{2(n+1)} \| f_n\|_{H^1(\Omega)}^2.\end{aligned}$$ Thus, we infer that $$\| f_n\|_{L^2(\Omega)}^\frac12 \| f_n\|_{H^2(\Omega)}^\frac12 \leq C {e}^\frac12 \| f_n\|_{H^1(\Omega)},$$ where the constant is independent of $n$. 
On the other hand, reasoning as above, we deduce that $$\| f_{N}^{\bot}\|_{L^2(\Omega)}^2\leq \frac{1}{{e}^{2(N+1)}} \| f_{N}^{\bot}\|_{H^1(\Omega)}^2.$$ Combining the above inequalities and applying the Cauchy-Schwarz inequality, we get $$\begin{aligned} \| fg\|_{L^2(\Omega)} &\leq C \sum_{n=0}^N {e}^\frac12 \| f_n\|_{H^1(\Omega)} \| g\|_{L^2(\Omega)} + C \frac{\Big(\frac{2p}{p-2}\Big)^\frac12}{{e}^{\frac{2(N+1)}{p'}}} \| f_N^{\bot}\|_{H^1(\Omega)} \| g\|_{L^p(\Omega)} \notag\\ &\leq C \| g\|_{L^2(\Omega)} \Bigg( \sum_{n=0}^N {e}^\frac12 \| f_n\|_{H^1(\Omega)} + \frac{\Big(\frac{2p}{p-2}\Big)^\frac12}{{e}^{\frac{(p-2)(N+1)}{p}}} \frac{\| g\|_{L^p(\Omega)}}{\| g\|_{L^2(\Omega)}} \| f_N^{\bot}\|_{H^1(\Omega)}\Bigg) \notag\\ &\leq C \| g\|_{L^2(\Omega)} \Bigg( {e} (N+1) + \frac{\Big(\frac{2p}{p-2}\Big)}{{e}^{\frac{2(p-2)(N+1)}{p}}} \frac{\| g\|^2_{L^p(\Omega)}}{\| g\|^2_{L^2(\Omega)}} \Bigg)^\frac12 \Bigg( \sum_{n=0}^N \| f_n\|_{H^1(\Omega)}^2 +\| f_N^{\bot}\|^2_{H^1(\Omega)} \Bigg)^\frac12 \notag \\ &\leq C \| g\|_{L^2(\Omega)} \Bigg( {e} (N+1) + \frac{ \Big(\frac{2p}{p-2}\Big)}{{e}^{\frac{2(p-2)(N+1)}{p}}} \frac{\| g\|_{L^p(\Omega)}^2}{\| g\|_{L^2(\Omega)}^2} \Bigg)^\frac12 \| f\|_{H^1(\Omega)}, \label{est2}\end{aligned}$$ where we have used the fact $p'=\frac{2p}{p-2}$ and the constant $C$ is independent of $N$. Now, we choose the non-negative integer $N \in \mathbb{N}_0$ such that $$\frac{p}{2(p-2)} \log \Bigg( {e} |\Omega|^{\frac{p-2}{p}}\frac{\| g\|^2_{L^{p}(\Omega)}}{\| g\|^2_{L^2(\Omega)}}\Bigg) \leq N+1 < 1+ \frac{p}{2(p-2)}\log \Bigg({e} |\Omega|^{\frac{p-2}{p}}\frac{\| g\|^2_{L^p(\Omega)}}{\| g\|^2_{L^2(\Omega)}}\Bigg).$$ We observe that the logarithm term in the above relations is greater than $1$ for any function $g\in L^p(\Omega)$ with $p>2$, $g\neq 0$. Then by using the choice of $N$ in , we infer that $$\begin{aligned} \| f g\|_{L^2(\Omega)} &\leq C \| f\|_{H^1(\Omega)} \| g\|_{L^2(\Omega)} \Bigg( {e} \Bigg[ 1+\frac{p}{2(p-2)} \log \Big( {e} |\Omega|^{\frac{p-2}{p}} \frac{\| g\|^2_{L^p(\Omega)}}{\| g\|^2_{L^2(\Omega)}}\Big) \Bigg] +\frac{2p}{{e} (p-2) |\Omega|^{\frac{p-2}{p}}} \Bigg)^\frac12\\ &\leq C \| f\|_{H^1(\Omega)} \| g\|_{L^2(\Omega)} \Bigg( \frac{3{e}}{2} \frac{p}{(p-2)} \log \Big( {e}^2 |\Omega|^{\frac{p-2}{p}} \frac{\| g\|^2_{L^p(\Omega)}}{\| g\|^2_{L^2(\Omega)}}\Big) +\frac{2p}{{e} (p-2) |\Omega|^{\frac{p-2}{p}}} \Bigg)^\frac12,\end{aligned}$$ which implies the desired conclusion. The conclusion of Lemma \[result1\] holds as well in $\mathbb{T}^2$. It is well-known that $H^1(\Omega)$ is not an algebra in two dimensions. An interesting application of Lemma \[result1\] together with the Brezis-Gallouet-Wainger inequality is that $$\| f g\|_{H^1(\Omega)}\leq C_1 \| f\|_{H^1(\Omega)} \| g\|_{H^1(\Omega)} \log^{\frac12} \Big( C_2 \frac{\| g\|_{W^{1,p}(\Omega)}}{\| g\|_{H^1(\Omega)}}\Big),$$ for any $f\in H^1(\Omega)$, $g \in W^{1,p}(\Omega)$ for some $p>2$, where $C_1$ and $C_2$ are two positive constants depending only on $\Omega$ and $p$. Lemma \[result1\] can be regarded as a generalization of Hölder and Young inequalities. This inequality is sharp since the product between $f$ and $g$ is not defined in $\L2$ if $f\in H^1(\Omega)$ and $g\in L^2(\Omega)$. 
Indeed, we have the following counterexample in $\mathbb{R}^2$: $$g(x)= \frac{1}{|x|\log^{\frac34} \big(\frac{1}{|x|}\big)}, \quad f(x)=\log^{\frac12-\frac{1}{100}} \Big( \frac{1}{|x|}\Big), \quad 0< |x|\leq 1.$$ We notice that $g \in L^2(B_{\mathbb{R}^2}(0,1))$ since $$\int_{B_{\mathbb{R}^2}(0,1)} |g(x)|^2 \, \d x= 2 \pi \int_0^1 \frac{1}{r \log^{\frac32}\big(\frac{1}{r}\big) } \, \d r= 2\pi \int_1^{+\infty} \frac{1}{s\log^{\frac32}(s)} \, \d s <+\infty.$$ However, $g \notin L^p(B_{\mathbb{R}^2}(0,1))$ for any $p>2$ because $$\int_{B_{\mathbb{R}^2}(0,1)} |g(x)|^p \, \d x= 2 \pi \int_0^1 \frac{1}{r^{p-1} \log^{\frac{3p}{4}}\big(\frac{1}{r}\big) } \, \d r = 2 \pi \int_1^{+\infty} \frac{1}{s^{3-p}\log^{ \frac{3p}{4}}(s)} \, \d s =+\infty.$$ We easily observe that $f \in L^2(B_{\mathbb{R}^2}(0,1))$, but $f \notin L^\infty(B_{\mathbb{R}^2}(0,1))$ since $\lim_{|x|\rightarrow 0} f(x)=+\infty$. This, in turn, implies that $f \notin W^{1,p}(B_{\mathbb{R}^2}(0,1))$ for any $p>2$, due to the Sobolev embedding theorem. Nonetheless, we have $$\partial_{x_i}f(x)= \Big(\frac12 -\frac{1}{100}\Big) \frac{x_i}{|x|^2} \frac{1}{\log^{\frac{1}{2}+\frac{1}{100}} \big(\frac{1}{|x|} \big)}, \quad i=1,2,$$ such that $$\begin{aligned} \int_{B_{\mathbb{R}^2}(0,1)} |\partial_{x_i}f(x)|^2 \, \d x &\leq 2\pi \Big(\frac12 -\frac{1}{100}\Big)^2 \int_0^1 \frac{1}{r \log^{2(\frac12+\frac{1}{100})} \big( \frac{1}{r}\big)} \, \d r\\ &\leq C \int_0^1 \frac{1}{r \log^{1+\frac{1}{50}} \big( \frac{1}{r}\big)} \, \d r < + \infty.\end{aligned}$$ Thus, we have $f \in W^{1,2}(B_{\mathbb{R}^2}(0,1))$. Finally, we observe that $$\begin{aligned} \int_{B_{\mathbb{R}^2}(0,1)} |g(x)f(x)|^2 \, \d x & = \int_{B_{\mathbb{R}^2}(0,1)} \frac{\log^{1-\frac{1}{50}} \big( \frac{1}{|x|}\big)}{|x|^2\log^{\frac32} \big(\frac{1}{|x|}\big) } \, \d x\\ &= 2 \pi \int_0^1 \frac{1}{r \log^{\frac12 + \frac{1}{50}} \big( \frac{1}{r} \big) } \, \d r =+\infty,\end{aligned}$$ namely, the product $fg \notin L^2(B_{\mathbb{R}^2}(0,1))$. The above counterexample can be generalized to any pair of functions $$g(x)= \frac{1}{|x|\log^\alpha \big(\frac{1}{|x|}\big)}, \quad f(x)=\log^\beta \big( \frac{1}{|x|}\big), \quad x\in B_{\mathbb{R}^2}(0,1),$$ where $\frac12 <\alpha<1$ and $\beta< \frac12 $ are such that $\alpha-\beta<\frac{1}{2}$. Complex Fluids Model: Local Well-posedness {#S-Complex} ========================================== In this section we consider the complex fluids system $$\label{CF} \begin{cases} \rho(\phi)(\partial_t \uu + \uu \cdot \nabla \uu) - \div (\nu(\phi)D\uu) + \nabla P= - \div(\nabla \phi \otimes \nabla \phi),\\ \div \uu=0,\\ \partial_t \phi +\uu\cdot \nabla \phi = 0, \end{cases} \quad \text{ in } \Omega \times (0,T),$$ subject to the boundary condition $$\label{boundary-P} \uu=\mathbf{0} \quad \text{ on } \partial\Omega \times (0,T),$$ and to the initial conditions $$\label{IC-P} \uu(\cdot, 0)= \uu_0, \quad \phi(\cdot, 0)=\phi_0 \quad \text{ in } \Omega.$$ We recall that $\uu$ is the (volume) averaged velocity of the binary mixture, $P$ is the pressure, and $\phi$ denotes the difference of the concentrations (volume fraction) of the two fluids. The coefficients $\rho(\cdot)$ and $\nu(\cdot)$ represent the density and the viscosity of the mixture depending on $\phi$.
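We also recall the pointwise identity, valid for any sufficiently regular $\phi$ (for instance $\phi\in H^2(\Omega)$), $$\div(\nabla \phi \otimes \nabla \phi)= \Delta \phi \, \nabla \phi + \frac12 \nabla |\nabla \phi|^2,$$ which will be used several times in the sequel: the gradient term can be absorbed into the pressure, so that the capillary force acts on the velocity only through $-\Delta \phi \, \nabla \phi$.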
Throughout this work, motivated by the linear interpolation density and viscosity functions in , we assume that $$\label{Hp-rn} \rho, \nu \in C^2([-1,1]): \quad \rho(s)\in [\rho_1, \rho_2], \quad \nu(s)\in [\nu_1,\nu_2] \ \ \text{for}\ \ s\in[-1,1],$$ where $\rho_1$, $\rho_2$ and $\nu_1$, $\nu_2$ are, respectively, the (positive) densities and viscosities of two homogeneous (different) fluids. In addition, we will use the notation $$\rho_\ast=\min \lbrace \rho_1,\rho_2\rbrace>0,\qquad \nu_\ast =\min \lbrace \nu_1,\nu_2 \rbrace>0.$$ The aim of this section is to prove the local existence and uniqueness of regular solutions to problem - with general initial data. This generalizes [@LZ2008 Theorem 1.1] to the case with unmatched densities and viscosities depending on the concentration, and to initial data $\phi_0$ belonging to $W^{2,p}(\Omega)$, instead of $\phi_0 \in H^3(\Omega)$ (see also [@LLZ2005 Theorem 2.2] for the Cauchy problem in $\mathbb{R}^2$). \[CF-T\] Let $\Omega$ be a smooth bounded domain in $\mathbb{R}^2$. For any initial datum $(\uu_0, \phi_0)$ such that $\uu_0 \in \V_\sigma\cap \mathbf{H}^2(\Omega)$, $\phi_0 \in W^{2,p}(\Omega)$ for some $p>2$, with $|\phi_0(x)|\leq 1$, for all $x\in \Omega$, there exists a positive time $T_0$, which depends only on the norms of the initial data, and a unique solution $(\uu,\phi)$ to problem - on $[0,T_0]$ such that $$\begin{aligned} & \uu \in L^\infty(0,T_0; \mathbf{V}_\sigma\cap \mathbf{H}^2(\Omega)) \cap L^\frac{2p}{p-2}(0,T_0;\mathbf{W}^{2,p}(\Omega))\cap W^{1,2}(0,T_0; \V_\sigma) \cap W^{1,\infty}(0,T_0; \H_\sigma(\Omega)), \\ & \phi \in L^\infty(0,T_0;W^{2,p}(\Omega))\cap W^{1,\infty}(0,T_0; H^1(\Omega)\cap L^\infty(\Omega)),\ \ \ |\phi(x,t)|\leq 1\ \ \mathrm{in}\ \ \Omega\times[0,T_0].\end{aligned}$$ We perform some *a priori* estimates for the solutions to problem -, and then we prove the uniqueness. With these arguments, the existence of local solutions to - follows from the method of successive approximations (Picard’s method). This relies on the definition of a suitable sequence $(\uu_k,\phi_k)$ via an iteration scheme, *a priori* bounds on $(\uu_k,\phi_k)$ in terms of $(\uu_{k-1},\phi_{k-1})$, and uniform estimates of $(\uu_k-\uu_{k-1},\phi_k-\phi_{k-1})$ (by arguing as in the uniqueness proof reported below). We refer to [@LS1975] for the details of this type of argument in the case of the nonhomogeneous Navier-Stokes equations (see also, e.g., [@LH2018] for the Navier-Stokes-Allen-Cahn system). 
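For orientation only, one possible linearized scheme (a sketch; the precise construction follows [@LS1975] and is not needed in what follows) reads: given $(\uu_{k-1},\phi_{k-1})$, let $\phi_k$ solve the linear transport problem $$\partial_t \phi_k + \uu_{k-1}\cdot \nabla \phi_k=0, \qquad \phi_k(\cdot,0)=\phi_0,$$ and then let $(\uu_k,P_k)$ solve the linearized momentum equation $$\rho(\phi_k)\big( \partial_t \uu_k + \uu_{k-1}\cdot \nabla \uu_k\big) - \div \big(\nu(\phi_k) D\uu_k\big)+ \nabla P_k= -\div(\nabla \phi_k \otimes \nabla \phi_k), \qquad \div \uu_k=0,$$ with $\uu_k=\mathbf{0}$ on $\partial\Omega$ and $\uu_k(\cdot,0)=\uu_0$. Estimates analogous to the ones derived below provide uniform bounds on the iterates, while the argument used for uniqueness controls the differences $(\uu_k-\uu_{k-1},\phi_k-\phi_{k-1})$.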
**First estimate.** Multiplying $_1$ by $\uu$ and integrating over $\Omega$, we find $$\frac12\int_{\Omega} \rho(\phi) \partial_t |\uu|^2 \, \d x +\frac12 \int_{\Omega} \rho(\phi) \uu \cdot \nabla \big( |\uu|^2\big) \, \d x + \int_{\Omega} \nu(\phi) |D\uu|^2 \, \d x= -\int_{\Omega} \Delta \phi \nabla \phi \cdot \uu \, \d x.$$ Taking the gradient of $_3$, we have $$\nabla \partial_t \phi + \nabla (\uu\cdot \nabla \phi)=0.$$ Multiplying the above identity by $\nabla \phi$, integrating over $\Omega$ and using the no-slip boundary condition of $\uu$, we obtain $$\frac12\ddt \int_\Omega |\nabla \phi|^2 \, \d x - \int_{\Omega} (\uu \cdot \nabla \phi ) \Delta \phi \, \d x=0.$$ By adding the two obtained equations, and using the identity $$\partial_t \rho(\phi) + \div (\rho(\phi)\uu)= \rho'(\phi) \big(\partial_t \phi + \uu \cdot \nabla \phi \big)=0,$$ we find the basic energy law $$\frac12 \ddt \int_{\Omega} \Big( \rho(\phi) |\uu|^2 + |\nabla \phi|^2 \Big) \, \d x + \int_{\Omega} \nu(\phi) |D \uu|^2 \, \d x =0.$$ Integrating over $[0,t]$, we obtain $$E_0(\uu(t),\phi(t))+ \int_0^t \int_{\Omega} \nu(\phi) |D \uu|^2 \, \d x = E_0(\uu_0,\phi_0), \quad \forall \, t \geq 0.$$ where $$E_0(\uu,\phi)= \frac12 \int_{\Omega} \rho(\phi) |\uu|^2 + |\nabla \phi|^2 \, \d x.$$ In addition, the transport equation yields that, for all $p\in [2, \infty]$, it holds $$\label{cons-Lp} \| \phi(t)\|_{L^p(\Omega)}= \| \phi_0\|_{L^p(\Omega)}, \quad \forall \, t \geq 0.$$ Thus, we infer that $$\label{CF-1} \uu \in L^\infty(0,T;\H_\sigma)\cap L^2(0,T;\V_\sigma), \quad \phi \in L^\infty(0,T;H^1(\Omega)\cap L^\infty(\Omega)).$$ **Second estimate.** We multiply $_1$ by $\partial_t \uu$ and integrate over $\Omega$. After integrating by parts and using the fact that $\partial_t \uu =0$ on $\partial \Omega$, we obtain $$\begin{aligned} & \frac12 \ddt \int_{\Omega} \nu(\phi)|D \uu|^2 \, \d x+ \int_{\Omega} \rho(\phi) |\partial_t \uu|^2 \, \d x \\ &\quad = \frac12 \int_{\Omega} \nu'(\phi)\partial_t \phi |D \uu|^2 \, \d x - \int_{\Omega} \rho(\phi) (\uu \cdot \nabla) \uu \cdot \partial_t \uu \, \d x + \int_{\Omega} (\nabla \phi \otimes \nabla \phi) : \nabla \partial_t \uu \, \d x.\end{aligned}$$ Combining , and , together with the relation $\partial_t \phi= - \uu \cdot \nabla \phi$, we have $$\begin{aligned} &\frac12 \ddt \int_{\Omega} \nu(\phi)|D \uu|^2 \, \d x+ \int_{\Omega} \rho(\phi) |\partial_t \uu|^2 \, \d x \notag \\ &\quad \leq C \| \partial_t \phi\|_{\L2} \| D \uu\|_{L^4(\Omega)}^2 + C \| \uu\|_{L^\infty(\Omega)} \| \nabla \uu\|_{\L2} \| \partial_t \uu\|_{\L2}+ \| \nabla \phi\|_{L^4(\Omega)}^2 \| \nabla \partial_t \uu\|_{\L2} \notag \\ &\quad \leq C \| \uu \cdot \nabla \phi\|_{\L2} \| D \uu\|_{\L2} \| \uu\|_{H^2(\Omega)} + C \| \uu\|_{\L2}^\frac12 \| \uu\|_{H^2(\Omega)}^\frac12 \| \nabla \uu\|_{\L2} \| \partial_t \uu\|_{\L2} \notag \\ &\qquad + C \| \nabla \phi\|_{\L2} \| \phi\|_{H^2(\Omega)} \| \nabla \partial_t \uu\|_{\L2} \notag \\ &\quad \leq \frac{\rho_\ast}{4} \| \partial_t \uu\|_{\L2}^2+ \frac{\nu_\ast}{4} \| D \partial_t \uu\|_{\L2}^2 + C \| \uu\|_{L^\infty(\Omega)}\| \nabla \phi\|_{\L2} \| D \uu\|_{\L2} \| \uu\|_{H^2(\Omega)} \notag \\ &\qquad +C \| \nabla \uu\|_{\L2}^2 \| \uu\|_{H^2(\Omega)} + C \| \phi\|_{H^2(\Omega)}^2 \notag \\ &\quad \leq \frac{\rho_\ast}{4} \| \partial_t \uu\|_{\L2}^2+ \frac{\nu_\ast}{4} \| D \partial_t \uu\|_{\L2}^2+ C \| D \uu\|_{\L2} \| \uu\|_{H^2(\Omega)}^\frac32 \notag \\ &\qquad+C \| D \uu\|_{\L2}^2 \| \uu\|_{H^2(\Omega)} + C \| \phi\|_{H^2(\Omega)}^2. 
\label{CF-2}\end{aligned}$$ Here we have also used that $\| \nabla \uu\|_{\L2}$ is equivalent to $\| D \uu\|_{\L2}$ thanks to the Korn inequality. Next, we rewrite $_1$-$_2$ as a Stokes problem with non-constant viscosity $$\begin{cases} -\div (\nu(\phi)D \uu)+\nabla P= \f, & \text{ in } \Omega \times (0,T),\\ \div \uu=0,& \text{ in } \Omega \times (0,T),\\ \uu=\mathbf{0}, & \text{ on } \partial \Omega \times (0,T), \end{cases}$$ where $\f= -\rho(\phi)(\partial_t \uu + \uu \cdot \nabla \uu) - \mathrm{div}(\nabla \phi \otimes \nabla \phi)$. By exploiting Theorem \[Stokes-e\] with $p=2$, $s=2$, $r=\infty$, we infer that $$\begin{aligned} \| \uu\|_{H^2(\Omega)} &\leq C \| \rho(\phi) \partial_t \uu\|_{\L2}+ C\| \rho(\phi) (\uu \cdot \nabla \uu)\|_{\L2} \\ &\quad + C\| \mathrm{div}(\nabla \phi \otimes \nabla \phi)\|_{\L2} + C \|\nabla \phi \|_{L^\infty(\Omega)}\| D \uu\|_{L^2(\Omega)} \\ &\leq C \| \partial_t \uu\|_{\L2} + C \| \uu\|_{L^\infty(\Omega)} \|\nabla \uu \|_{\L2} + C \| \nabla \phi\|_{L^\infty(\Omega)} \big( \|\phi \|_{H^2(\Omega)} + \| D \uu\|_{L^2(\Omega)}\big)\\ &\leq C\| \partial_t \uu\|_{\L2} + C\| \uu\|_{H^2(\Omega)}^\frac12 \|D \uu \|_{\L2} + C \| \nabla \phi\|_{L^\infty(\Omega)} \big( \|\phi \|_{H^2(\Omega)} + \| D \uu\|_{L^2(\Omega)}\big).\end{aligned}$$ Here we have used the Agmon inequality, the Korn inequality and the energy bound obtained in the first estimate. Thus, by Young's inequality we find $$\begin{aligned} \| \uu\|_{H^2(\Omega)} &\leq C\| \partial_t \uu\|_{\L2} + C \|D \uu \|_{\L2}^2 + C \| \nabla \phi\|_{L^\infty(\Omega)} \big( \|\phi \|_{H^2(\Omega)} + \| D \uu\|_{L^2(\Omega)}\big). \label{CF-uH2}\end{aligned}$$ Inserting the above bound for $\| \uu\|_{H^2(\Omega)}$ into the previous differential inequality, and using again Young's inequality, we get $$\begin{aligned} & \frac12 \ddt \int_{\Omega} \nu(\phi)|D \uu|^2 \, \d x + \frac{\rho_\ast}{2}\int_{\Omega} |\partial_t \uu|^2 \, \d x \notag \\ & \leq \frac{\nu_\ast}{4} \| D \partial_t \uu\|_{\L2}^2 + C \|D \uu \|_{L^2(\Omega)}^4+ C \|\nabla \phi \|_{L^\infty(\Omega)}^2 (\| \phi\|_{H^2(\Omega)}^2+\| D \uu\|_{L^2(\Omega)}^2)+ C \|\phi\|_{H^2(\Omega)}^2.
\label{CF-2b}\end{aligned}$$ **Third estimate.** We differentiate with respect to the time to obtain $$\begin{aligned} \rho(\phi) \partial_{tt} \uu &+ \rho(\phi) \big( \partial_t \uu \cdot \nabla \uu + \uu \cdot \nabla \partial_t \uu\big) + \rho'(\phi)\partial_t \phi (\partial_t \uu + \uu \cdot \nabla \uu) - \div ( \nu(\phi) D \partial_t \uu) \\ & \quad - \div(\nu'(\phi)\partial_t \phi D \uu ) + \nabla \partial_t P= - \div(\nabla \phi \otimes \nabla \partial_t \phi+ \nabla \partial_t \phi \otimes \nabla \phi).\end{aligned}$$ Multiplying the above equation by $\partial_t \uu$ and integrating over $\Omega$, we are led to $$\begin{aligned} & \frac12 \int_{\Omega} \rho(\phi) \partial_t |\partial_t \uu|^2 \, \d x +\int_{\Omega} \rho(\phi) \big( \partial_t \uu \cdot \nabla \uu + \uu \cdot \nabla \partial_t \uu\big)\cdot \partial_t \uu \, \d x + \int_{\Omega} \rho'(\phi)\partial_t \phi (\partial_t \uu + \uu \cdot \nabla \uu) \cdot \partial_t \uu \, \d x\\ & + \int_{\Omega} \nu(\phi) |D \partial_t \uu|^2 \, \d x+ \int_{\Omega} \nu'(\phi) \partial_t \phi D \uu : D \partial_t \uu \, \d x= \int_{\Omega} \big( \nabla \phi \otimes \nabla \partial_t \phi + \nabla \partial_t \phi \otimes \nabla \phi \big) : \nabla \partial_t \uu \, \d x.\end{aligned}$$ Since $$\begin{aligned} &\frac12 \int_{\Omega} \rho(\phi) \partial_t |\partial_t \uu|^2 \, \d x + \frac12 \int_{\Omega} \rho(\phi) \uu \cdot \nabla|\partial_t \uu|^2 \,\d x\\ &\quad = \frac12 \ddt \int_{\Omega} \rho(\phi)|\partial_t \uu|^2 \,\d x - \frac12 \int_{\Omega} \underbrace{ \big( \partial_t \rho(\phi)+ \div( \rho(\phi) \uu ) \big)}_{=0} |\partial_t \uu|^2\, \d x = \frac12 \ddt \int_{\Omega} \rho(\phi)|\partial_t \uu|^2 \,\d x,\end{aligned}$$ we have $$\begin{aligned} &\frac12 \ddt \int_{\Omega} \rho(\phi)|\partial_t \uu|^2 \,\d x + \int_{\Omega} \nu(\phi) |D \partial_t \uu|^2 \, \d x\notag\\ &\quad = -\int_{\Omega} \rho(\phi) (\partial_t \uu \cdot \nabla \uu) \cdot \partial_t \uu \, \d x - \int_{\Omega} \rho'(\phi)\partial_t \phi (\partial_t \uu + \uu \cdot \nabla \uu) \cdot \partial_t \uu \, \d x\notag\\ &\qquad - \int_{\Omega} \nu'(\phi) \partial_t \phi D \uu : D \partial_t \uu \, \d x + \int_{\Omega} \big( \nabla \phi \otimes \nabla \partial_t \phi + \nabla \partial_t \phi \otimes \nabla \phi \big) : \nabla \partial_t \uu \, \d x. \label{CF-3}\end{aligned}$$ We now estimate the terms on the right-hand side of the above equality. 
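In doing so, we will repeatedly use the following consequence of the Ladyzhenskaya, Poincaré and Korn inequalities: for every $\vv \in \V_\sigma$, and in particular for $\vv=\partial_t \uu$, which vanishes on $\partial \Omega$ and is divergence free, $$\| \vv\|_{L^4(\Omega)}^2\leq C \| \vv\|_{\L2} \| \vv\|_{H^1(\Omega)} \leq C \| \vv\|_{\L2} \| D \vv\|_{\L2}.$$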
By using , , and the equation $_3$, we find $$\begin{aligned} -\int_{\Omega} \rho(\phi) (\partial_t \uu \cdot \nabla \uu) \cdot \partial_t \uu \, \d x &\leq C \| \partial_t \uu\|_{L^4(\Omega)}^2 \|\nabla \uu\|_{\L2} \notag \\ &\leq \frac{\nu_\ast}{16} \| D \partial_t \uu\|_{\L2}^2+ C\| \partial_t \uu\|_{\L2}^2 \| D \uu\|_{\L2}^2, \label{CF-4}\end{aligned}$$ and $$\begin{aligned} & -\int_{\Omega} \rho'(\phi)\partial_t \phi (\partial_t \uu + \uu \cdot \nabla \uu) \cdot \partial_t \uu \, \d x\notag\\ &\quad \leq C \| \partial_t \phi \|_{\L2} \| \partial_t \uu\|_{L^4(\Omega)}^2 + C \| \partial_t \phi\|_{\L2} \| \uu \cdot \nabla \uu\|_{L^4(\Omega)}\| \partial_t \uu\|_{L^4(\Omega)} \notag \\ &\quad \leq C \| \uu \cdot \nabla \phi\|_{\L2} \|\partial_t \uu \|_{\L2} \| \nabla \partial_t \uu\|_{\L2}\notag \\ & \qquad + C \| \uu \cdot \nabla \phi\|_{\L2} \| \uu\|_{L^\infty(\Omega)} \| \nabla \uu\|_{\L2}^\frac12 \|\uu \|_{H^2(\Omega)}^\frac12 \|\partial_t \uu \|_{\L2}^\frac12 \| \nabla \partial_t \uu\|_{\L2}^\frac12 \notag \\ &\quad \leq \frac{\nu_\ast}{16} \| D \partial_t \uu\|_{\L2}^2 + C \| \uu\|_{L^\infty(\Omega)}^2 \| \partial_t \uu\|_{\L2}^2 + C \| \uu\|_{L^\infty(\Omega)}^\frac83 \| D \uu\|_{\L2}^\frac23 \| \uu\|_{H^2(\Omega)}^\frac23 \| \partial_t \uu\|_{\L2}^\frac23 \notag \\ &\quad \leq \frac{\nu_\ast}{16} \| D \partial_t \uu\|_{\L2}^2 + C \| \uu\|_{H^2(\Omega)} \| \partial_t \uu\|_{\L2}^2 + C \| D \uu\|_{\L2}^\frac23 \| \uu\|_{H^2(\Omega)}^2 \| \partial_t \uu\|_{\L2}^\frac23 \notag \\ &\quad \leq \frac{\nu_\ast}{16} \| D \partial_t \uu\|_{\L2}^2 + C \| \partial_t \uu\|_{\L2}^3 + C \| D \uu\|_{\L2}^2 \| \partial_t \uu\|_{\L2}^2\notag \\ &\qquad + C \| \nabla \phi\|_{L^\infty(\Omega)} (\|\phi \|_{H^2(\Omega)}+\| D \uu\|_{\L2}) \| \partial_t \uu\|_{\L2}^2+C \| D \uu\|_{\L2}^\frac23 \| \partial_t \uu\|_{\L2}^\frac83 \notag \\ &\qquad + C \|D \uu \|_{\L2}^\frac{2}{3} \big( \| D \uu\|_{\L2}^4+ \| \nabla \phi \|_{L^\infty(\Omega)}^2( \|\phi \|_{H^2(\Omega)}^2+ \| D \uu\|_{\L2}^2) \big) \| \partial_t \uu\|_{\L2}^\frac23, \label{CF-5}\end{aligned}$$ where we have also used and . Moreover, we obtain $$\begin{aligned} &-\int_{\Omega} \nu'(\phi) \partial_t \phi D \uu : D \partial_t \uu \, \d x \notag \\ &\quad \leq C \| \partial_t \phi\|_{L^4(\Omega)} \| D \uu\|_{L^4(\Omega)} \| D \partial_t \uu \|_{\L2} \notag \\ &\quad \leq \frac{\nu_\ast}{16} \| D \partial_t \uu\|_{\L2}^2 +C \| \uu \cdot \nabla \phi\|_{\L2} \| \nabla \partial_t \phi\|_{\L2} \| D \uu\|_{\L2}\| \uu\|_{H^2(\Omega)} \notag \\ &\quad \leq \frac{\nu_\ast}{16} \| D \partial_t \uu\|_{\L2}^2 +C \| \nabla \partial_t \phi\|_{\L2} \| D \uu\|_{\L2} \| \uu\|_{H^2(\Omega)}^\frac32, \label{CF-6}\end{aligned}$$ and $$\begin{aligned} &\int_{\Omega} \big( \nabla \phi \otimes \nabla \partial_t \phi + \nabla \partial_t \phi \otimes \nabla \phi \big) : \nabla \partial_t \uu \, \d x\notag \\ &\quad \leq C \| \nabla \phi\|_{L^\infty(\Omega)} \|\nabla \partial_t \phi \|_{\L2} \| D \partial_t \uu\|_{\L2} \notag \\ &\quad \leq \frac{\nu_\ast}{16} \| D \partial_t \uu\|_{\L2}^2 +C\| \nabla \phi\|_{L^\infty(\Omega)}^2 \|\nabla \partial_t \phi \|_{\L2}^2. \label{CF-7}\end{aligned}$$ It is clear that an estimate of $\nabla \partial_t \phi$ is needed in order to control of the last two terms in and . 
For this purpose, we have $$\nabla \partial_t \phi= -(\nabla \uu)^t \nabla \phi - \nabla^2 \phi \, \uu.$$ Then, we easily deduce that $$\begin{aligned} \|\nabla \partial_t \phi \|_{\L2} &\leq \| \nabla \uu\|_{\L2}\| \nabla \phi\|_{L^\infty(\Omega)}+ \| \phi\|_{W^{2,p}(\Omega)} \| \uu\|_{L^{\frac{2p}{p-2}}(\Omega)} \notag \\ & \leq C \| D \uu\|_{\L2} \big( \| \nabla \phi\|_{L^\infty(\Omega)}+ \| \phi\|_{W^{2,p}(\Omega)}\big), \label{CF-nphit}\end{aligned}$$ for $p>2$. Combining the above estimates, we arrive at $$\begin{aligned} & \frac12 \ddt \int_{\Omega} \rho(\phi)|\partial_t \uu|^2 \,\d x + \frac{3\nu_\ast}{4}\int_{\Omega} |D \partial_t \uu|^2 \, \d x\notag\\ &\quad \leq C\| \partial_t \uu\|_{\L2}^2 \| D \uu\|_{\L2}^2+ \| \partial_t \uu\|_{\L2}^3 \notag \\ &\qquad + C \| \nabla \phi\|_{L^\infty(\Omega)} (\|\phi \|_{H^2(\Omega)}+\| D \uu\|_{\L2}) \| \partial_t \uu\|_{\L2}^2+C \| D \uu\|_{\L2}^\frac23 \| \partial_t \uu\|_{\L2}^\frac83 \notag \\ &\qquad + C \|D \uu \|_{\L2}^\frac{2}{3} \big( \| D \uu\|_{\L2}^4+ \| \nabla \phi \|_{L^\infty(\Omega)}^2( \|\phi \|_{H^2(\Omega)}^2+ \| D \uu\|_{\L2}^2) \big) \| \partial_t \uu\|_{\L2}^\frac23\notag \\ &\qquad + C \| D \uu\|_{\L2}^2 (\| \nabla \phi\|_{L^\infty(\Omega)}+\| \phi\|_{W^{2,p}(\Omega)}) (\| \partial_t \uu\|_{\L2}^\frac32 +\| D \uu\|_{\L2}^3 )\notag \\ &\qquad + C \| D \uu\|_{\L2}^2 (\| \nabla \phi\|_{L^\infty(\Omega)}+\| \phi\|_{W^{2,p}(\Omega)}) \| \nabla \phi\|_{L^\infty}^\frac32 \big( \|\phi \|_{H^2(\Omega)}^\frac32+ \| D \uu\|_{\L2}^\frac32 \big) \notag \\ & \qquad +C \| D \uu\|_{\L2}^2 \big( \| \nabla \phi\|_{L^\infty(\Omega)}^2+ \| \phi\|_{W^{2,p}(\Omega)}^2\big) \| \nabla \phi \|_{L^\infty(\Omega)}^2. \label{CF-8}\end{aligned}$$ **Fourth estimate.** In light of the previous two estimates, we are left to control the $W^{2,p}(\Omega)$-norm of $\phi$. To this aim, we make use of the following equivalent norm $$\| f\|_{W^{2,p}(\Omega)}=\Big( \| f\|_{L^p(\Omega)}^p + \sum_{|\alpha|=2} \| \partial^\alpha f\|_{L^p(\Omega)}^p \Big)^\frac{1}{p},$$ where $\alpha$ is a multi-index. Next, we apply $\partial^\alpha$ to the transport equation $_3$ $$\partial_t \partial^\alpha \phi + \partial^\alpha \big( \uu \cdot \nabla \phi \big)=0.$$ Multiplying the above equation by $|\partial^\alpha \phi|^{p-2}\partial^\alpha \phi$ and integrating over $\Omega$, we get $$\begin{aligned} \frac{1}{p} \ddt \int_{\Omega} |\partial^\alpha \phi|^p \, \d x&+ \int_{\Omega} \partial^\alpha \big( \uu \cdot \nabla \phi \big) |\partial^\alpha \phi |^{p-2} \partial^\alpha \phi \, \d x=0. \label{CF-9}\end{aligned}$$ Since $\uu$ is divergence free, the above can be rewritten as $$\begin{aligned} \frac{1}{p}\ddt \int_{\Omega} |\partial^\alpha \phi|^p \, \d x&+ \int_{\Omega} \Big( \partial^\alpha \big( \uu \cdot \nabla \phi \big)- \uu \cdot \nabla \partial^\alpha \phi\Big) |\partial^\alpha \phi |^{p-2} \partial^\alpha \phi \, \d x=0. \label{CF-10}\end{aligned}$$ By summing over all multi-indices of order $2$, and using the conservation of the $L^p$-norm of $\phi$, we find $$\begin{aligned} \frac{1}{p}\ddt \| \phi\|_{W^{2,p}(\Omega)}^p= -\sum_{|\alpha|=2} \int_{\Omega} \Big( \partial^\alpha \big( \uu \cdot \nabla \phi \big)- \uu \cdot \nabla \partial^\alpha \phi\Big) |\partial^\alpha \phi |^{p-2} \partial^\alpha \phi \, \d x.
\label{CF-11}\end{aligned}$$ It is easily seen that the right-hand side can be written as $$\begin{aligned} &\sum_{|\alpha|=2} \int_{\Omega} \Big( \partial^\alpha \big( \uu \cdot \nabla \phi \big)- \uu \cdot \nabla \partial^\alpha \phi\Big) |\partial^\alpha \phi |^{p-2} \partial^\alpha \phi \, \d x \notag \\ &= \sum_{|\alpha|=2} \int_{\Omega} \big( \partial^\alpha \uu \cdot \nabla \phi \big) |\partial^\alpha \phi |^{p-2} \partial^\alpha \phi \, \d x+ \sum_{|\beta|=1, |\gamma|=1, \beta+\gamma=\alpha} \int_{\Omega} \big( \partial^\beta \uu \cdot \nabla \partial^\gamma \phi) |\partial^\alpha \phi |^{p-2} \partial^\alpha \phi \, \d x.\label{CF-12}\end{aligned}$$ Observe that $$\begin{aligned} \sum_{|\alpha|=2} \int_{\Omega} \big( \partial^\alpha \uu \cdot \nabla \phi \big) |\partial^\alpha \phi |^{p-2} \partial^\alpha \phi \, \d x \leq C \| \uu\|_{W^{2,p}(\Omega)} \| \nabla \phi\|_{L^\infty(\Omega)} \| \phi\|_{W^{2,p}(\Omega)}^{p-1}, \label{CF-13}\end{aligned}$$ and $$\begin{aligned} \sum_{|\beta|=1, |\gamma|=1, \beta+\gamma=\alpha} \int_{\Omega} \big( \partial^\beta \uu \cdot \nabla \partial^\gamma \phi) |\partial^\alpha \phi |^{p-2} \partial^\alpha \phi \, \d x\leq C \| \uu \|_{W^{1,\infty}(\Omega)} \| \phi\|_{W^{2,p}(\Omega)}^{p}. \label{CF-14}\end{aligned}$$ Collecting - together, and using the Sobolev embedding $W^{2,p}(\Omega) \hookrightarrow W^{1,\infty}(\Omega)$ (with $p>2$), we obtain $$\frac{1}{p} \ddt \| \phi\|_{W^{2,p}(\Omega)}^p \leq C \| \uu\|_{W^{2,p}(\Omega)} \| \phi\|_{W^{2,p}(\Omega)}^p.$$ Notice that the above inequality is equivalent to $$\begin{aligned} \frac{1}{2} \ddt \| \phi\|_{W^{2,p}(\Omega)}^2 \leq C \| \uu\|_{W^{2,p}(\Omega)} \| \phi\|_{W^{2,p}(\Omega)}^2. \label{CF-15}\end{aligned}$$ Next, by exploiting Theorem \[Stokes-e\] with $s=p>2$ and $r=\infty$, we deduce that $$\begin{aligned} & \| \uu\|_{W^{2,p}(\Omega)} \notag\\ &\quad \leq C \|\rho(\phi) \partial_t \uu + \rho(\phi) (\uu \cdot \nabla \uu) + \mathrm{div}(\nabla \phi \otimes \nabla \phi)\|_{L^p(\Omega)}+ C \|\nabla \phi \|_{L^\infty(\Omega)} \| D \uu\|_{L^p(\Omega)} \notag \\ &\quad \leq C\big(\| \partial_t \uu\|_{L^p(\Omega)}+ \|\uu\|_{L^\infty(\Omega)} \| D \uu\|_{L^p(\Omega)} + \| \phi \|_{W^{2,p}(\Omega)} \| \nabla \phi\|_{L^\infty(\Omega)}+ \|\nabla \phi \|_{L^\infty(\Omega)} \| D \uu\|_{L^p(\Omega)}\big) \notag \\ &\quad \leq C \| \partial_t \uu\|_{\L2}^\frac{2}{p} \| D \partial_t \uu \|_{\L2}^\frac{p-2}{p} +C \| \uu\|_{H^2(\Omega)}^\frac12 \| \uu\|_{W^{2,p}(\Omega)}^\frac12 \| \uu\|_{L^p(\Omega)}^\frac12 \notag \\ &\qquad +C\| \nabla \phi\|_{L^\infty(\Omega)} \big( \| \phi\|_{W^{2,p}(\Omega)}+ \| \uu\|_{W^{2,p}(\Omega)}^\frac12 \| \uu\|_{L^p(\Omega)}^\frac12 \big). \notag\end{aligned}$$ Thus, by Young’s inequality we find $$\begin{aligned} \| \uu\|_{W^{2,p}(\Omega)} &\leq C \| \partial_t \uu\|_{\L2}^\frac{2}{p} \| D \partial_t \uu \|_{\L2}^\frac{p-2}{p}+C \| \phi\|_{W^{2,p}(\Omega)}^2\notag\\ &\quad +C (\| \uu\|_{H^2(\Omega)}+\| \nabla \phi\|_{L^\infty(\Omega)}^2) \| D \uu\|_{\L2}. 
\label{CF-uW2p}\end{aligned}$$ Inserting and into , we are led to $$\begin{aligned} \frac{1}{2} \ddt \| \phi\|_{W^{2,p}(\Omega)}^2 &\leq C \big( \| \partial_t \uu\|_{\L2}^\frac{2}{p} \| D \partial_t \uu \|_{\L2}^\frac{p-2}{p} + \| \phi\|_{W^{2,p}(\Omega)}^2 \big) \| \phi\|_{W^{2,p}(\Omega)}^2 \notag \\ & \quad + C ( \| \uu\|_{H^2(\Omega)}+ \| \nabla \phi\|_{L^\infty(\Omega)}^2) \|D \uu\|_{\L2} \| \phi\|_{W^{2,p}(\Omega)}^2 \notag\\ &\leq \frac{\nu_\ast}{4} \| D \partial_t \uu\|_{\L2}^{2} + C \| \partial_t \uu\|_{\L2}^{\frac{4}{p+2}} \|\phi\|_{W^{2,p}(\Omega)}^\frac{4p}{p+2} + C \| \phi\|_{W^{2,p}(\Omega)}^4 \notag \\ &\quad + C \big(\| \partial_t \uu\|_{\L2} + \| \phi\|_{H^2(\Omega)} \| \nabla \phi\|_{L^\infty(\Omega)} + \| \nabla \phi\|_{L^\infty(\Omega)}^2 \big) \| D \uu\|_{\L2} \| \phi\|_{W^{2,p}(\Omega)}^2\notag \\ &\quad + C (\| D \uu\|_{\L2}^3+ \| \nabla \phi\|_{L^\infty(\Omega)} \| D \uu\|_{\L2}^2 ) \| \phi\|_{W^{2,p}(\Omega)}^2. \label{CF-16}\end{aligned}$$ **Final estimate.** By adding , and together, and using the embeddings $W^{2,p}(\Omega)\hookrightarrow W^{1,\infty}(\Omega)$, $W^{2,p}(\Omega)\hookrightarrow H^2(\Omega)$ for $p>2$, we deduce that $$\begin{aligned} \ddt Y(t) + \rho_\ast\int_{\Omega} |\partial_t \uu|^2 \, \d x + \nu_\ast\int_{\Omega} |D \partial_t \uu|^2 \, \d x \leq C(1+ Y^3(t)), \label{Y}\end{aligned}$$ where $$Y(t)= \int_{\Omega} \nu(\phi(t))|D \uu(t)|^2 \, \d x + \int_{\Omega} \rho(\phi(t))|\partial_t \uu(t)|^2 \,\d x+ \| \phi(t)\|_{W^{2,p}(\Omega)}^2.$$ Concerning the initial data, we observe from that $$\begin{aligned} \int_{\Omega} \rho(\phi(0))|\partial_t \uu(0)|^2 \,\d x \leq C \big( \| \uu_0\|_{L^\infty(\Omega)}^2 + \|\phi_0 \|_{W^{2,p}(\Omega)} ^2\big) \| \nabla \uu_0\|_{\L2}^2+C \| \uu_0\|_{H^2(\Omega)}^2+ C \|\phi_0\|_{W^{2,p}(\Omega)}^4,\end{aligned}$$ which, in turn, implies $$Y(0)\leq Q (\| \uu_0\|_{H^2(\Omega)}, \| \phi_0\|_{W^{2,p}(\Omega)}),$$ where $Q$ is a positive continuous and increasing function of its arguments. Finally, we deduce from that there exists a positive time $T_0<\frac{1}{2C(1+Y(0))^2}$, which depends on the parameters of the system and on the norms of the initial data $\| \uu_0\|_{H^2(\Omega)}$ and $\| \phi_0\|_{W^{2,p}(\Omega)}$, such that $$\begin{aligned} & \int_{\Omega} |D \uu(t)|^2 \, \d x + \int_{\Omega} |\partial_t \uu(t)|^2 \,\d x+ \| \phi (t)\|_{W^{2,p}(\Omega)}^2 +\int_0^t \|\partial_t \uu(\tau)\|_{H^1(\Omega)}^2 \, \d \tau \leq C_0,\label{CF-est}\end{aligned}$$ for all $t \in [0,T_0]$, where $C_0$ is a positive constant depending on $T_0$, $\| \uu_0\|_{H^2(\Omega)}$, $\| \phi_0\|_{W^{2,p}(\Omega)}$. In addition, we learn from and that $$\int_0^t \| \uu(\tau)\|_{W^{2,p}(\Omega)}^{\frac{2p}{p-2}} \, \d \tau \leq C,\quad \forall\, t\in [0,T_0].$$ Similarly, we also deduce from , , and that $$\|\uu(t)\|_{H^2(\Omega)}+\|\partial_t\phi(t)\|_{H^1(\Omega)}+\|\partial_t\phi(t)\|_{L^\infty(\Omega)}\leq C, \quad \forall\, t\in [0,T_0].$$ We have obtained all the necessary *a priori* estimates. Then the existence result follows as outlined at the beginning of the proof. **Uniqueness.** Let $(\uu_1,\phi_1)$ and $(\uu_2,\phi_2)$ be two solutions to problem - originating from the same initial datum. 
The difference of solutions $(\uu,\phi, P):=(\uu_1-\uu_2, \phi_1-\phi_2, P_1-P_2)$ solves the system $$\begin{aligned} &\rho(\phi_1)\big( \partial_t \uu + \uu_1 \cdot \nabla \uu + \uu \cdot \nabla \uu_2 \big)- \div \big( \nu(\phi_1)D\uu\big)+ \nabla P \notag\\ &\quad = - \div(\nabla \phi_1 \otimes \nabla \phi) -\div(\nabla \phi \otimes \nabla \phi_2) - (\rho(\phi_1)-\rho(\phi_2)) (\partial_t \uu_2 + \uu_2 \cdot \nabla \uu_2) \notag\\ &\qquad + \div \big( (\nu(\phi_1)-\nu(\phi_2))D\uu_2\big), \label{CF-Diff1}\\ & \partial_t \phi +\uu_1\cdot \nabla \phi +\uu \cdot \nabla \phi_2=0,\label{CF-Diff2}\end{aligned}$$ for almost every $(x,t) \in \Omega \times (0,T)$, together with the incompressibility constraint $\div \uu=0$. Multiplying by $\uu$ and integrating over $\Omega$, we find $$\begin{aligned} &\frac12 \ddt \int_{\Omega} \rho(\phi_1) |\uu|^2 \, \d x + \int_{\Omega} \rho(\phi_1) (\uu_1 \cdot \nabla) \uu \cdot \uu \, \d x +\int_{\Omega} \rho(\phi_1) (\uu\cdot \nabla )\uu_2 \cdot \uu \, \d x +\int_{\Omega} \nu(\phi_1)|D \uu|^2 \, \d x \notag\\ &=\int_{\Omega} (\nabla \phi_1\otimes \nabla \phi+ \nabla \phi\otimes \nabla \phi_2) : \nabla \uu \, \d x - \int_{\Omega} (\rho(\phi_1)-\rho(\phi_2)) (\partial_t \uu_2 + \uu_2 \cdot \nabla \uu_2) \cdot \uu \, \d x \notag \\ &\quad - \int_{\Omega} (\nu(\phi_1)-\nu(\phi_2))D\uu_2 : D \uu \, \d x + \frac12 \int_{\Omega} |\uu|^2 \rho'(\phi_1) \partial_t \phi_1 \,\d x. \label{U1}\end{aligned}$$ Noticing the identity $$\int_{\Omega} \rho(\phi_1) ( \uu_1 \cdot \nabla)\uu \cdot \uu \, \d x= \int_{\Omega} \rho(\phi_1) \uu_1 \cdot \nabla \Big( \frac12 |\uu|^2 \Big) \, \d x= - \frac12\int_{\Omega} \rho'(\phi_1) (\nabla \phi_1 \cdot \uu_1) |\uu|^2 \, \d x,$$ since $\phi_1$ solves the transport equation $_3$, we can rewrite as follows $$\begin{aligned} &\frac12 \ddt \int_{\Omega} \rho(\phi_1) |\uu|^2 \, \d x +\int_{\Omega} \nu(\phi_1)|D \uu|^2 \, \d x \notag\\ &=\int_{\Omega} (\nabla \phi_1\otimes \nabla \phi+ \nabla \phi\otimes \nabla \phi_2) : \nabla \uu \, \d x - \int_{\Omega} (\rho(\phi_1)-\rho(\phi_2)) (\partial_t \uu_2 + \uu_2 \cdot \nabla \uu_2) \cdot \uu \, \d x \notag \\ &\quad - \int_{\Omega} (\nu(\phi_1)-\nu(\phi_2))D\uu_2 : D \uu \, \d x- \int_{\Omega} \rho(\phi_1) (\uu\cdot \nabla )\uu_2 \cdot \uu \, \d x. 
\label{U2}\end{aligned}$$ By using the embedding $W^{2,p}(\Omega)\hookrightarrow W^{1,\infty}(\Omega)$ for $p>2$, we find that $$\begin{aligned} \int_{\Omega} (\nabla \phi_1\otimes \nabla \phi+ \nabla \phi\otimes \nabla \phi_2) : \nabla \uu \, \d x &\leq \big( \| \nabla \phi_1\|_{L^\infty(\Omega)}+ \| \nabla \phi_2\|_{L^\infty(\Omega)}\big) \| \nabla \phi\|_{\L2} \| \nabla \uu\|_{\L2}\\ &\leq \frac{\nu_\ast}{8}\|D \uu \|_{\L2}^2 +C \| \nabla \phi\|_{\L2}^2.\end{aligned}$$ Next, by Hölder’s inequality, we have $$\begin{aligned} &- \int_{\Omega} (\rho(\phi_1)-\rho(\phi_2)) (\partial_t \uu_2 + \uu_2 \cdot \nabla \uu_2) \cdot \uu \, \d x\\ &\quad \leq C \| \phi\|_{L^6(\Omega)} \|\partial_t \uu_2 + \uu_2 \cdot \nabla \uu_2 \|_{L^3(\Omega)} \| \uu \|_{\L2} \\ &\quad \leq C \big( \| \partial_t \uu_2\|_{L^3(\Omega)}+ \| \uu_2\|_{L^\infty(\Omega)}\| \nabla \uu_2\|_{L^3(\Omega)} \big)\| \phi\|_{H^1(\Omega)} \| \uu\|_{\L2},\end{aligned}$$ $$\begin{aligned} - \int_{\Omega} (\nu(\phi_1)-\nu(\phi_2))D\uu_2 : D \uu \, \d x & \leq C \| \phi\|_{L^6(\Omega)} \| D \uu_2\|_{L^3(\Omega)} \| D \uu\|_{\L2}\\ &\leq \frac{\nu_\ast}{8}\|D \uu \|_{\L2}^2 +C \| D \uu_2\|_{L^3(\Omega)}^2 \| \phi\|_{H^1(\Omega)}^2,\end{aligned}$$ and $$\begin{aligned} - \int_{\Omega} \rho(\phi_1) (\uu\cdot \nabla )\uu_2 \cdot \uu \, \d x \leq C \| \nabla \uu_2\|_{L^\infty(\Omega)} \| \uu\|_{\L2}^2.\end{aligned}$$ Collecting the above estimates together, we deduce from that $$\begin{aligned} &\ddt \int_{\Omega} \rho(\phi_1) |\uu|^2 \, \d x +\frac{3\nu_\ast}{2}\int_{\Omega} |D \uu|^2 \, \d x \notag\\ &\quad \leq C (1+ \| \partial_t \uu_2\|_{L^3(\Omega)}+ \| \uu_2\|_{L^\infty(\Omega)}\| \nabla \uu_2\|_{L^3(\Omega)} + \|D \uu_2 \|_{L^3(\Omega)}^2+ \| \nabla \uu_2\|_{L^\infty(\Omega)}) \notag \\ &\qquad \times \big(\| \uu\|_{\L2}^2 + \| \phi\|_{H^1(\Omega)}^2 \big). 
\label{U3}\end{aligned}$$ Next, we multiply the transport equation for the difference $\phi$ by $\phi$ and get $$\frac12 \ddt \| \phi\|_{\L2}^2 + \int_{\Omega} (\uu \cdot \nabla \phi_2) \phi \, \d x=0.$$ Then, taking the gradient of the same equation and multiplying the resulting identity by $\nabla \phi$, we find $$\frac12 \ddt \| \nabla \phi\|_{\L2}^2 + \int_{\Omega} \nabla \big( \uu_1 \cdot \nabla \phi \big) \cdot \nabla \phi \, \d x+ \int_{\Omega} \nabla \big( \uu \cdot \nabla \phi_2 \big) \cdot \nabla \phi \, \d x=0.$$ By adding the last two equations, we obtain $$\begin{aligned} \frac12 \ddt \|\phi\|_{H^1(\Omega)}^2 &+ \int_{\Omega} (\nabla \uu_1 \nabla \phi) \cdot \nabla \phi \, \d x+ \int_{\Omega} (\nabla \uu \nabla \phi_2) \cdot \nabla \phi \, \d x\\ &+ \int_{\Omega} (\nabla^2 \phi_2 \uu) \cdot \nabla \phi \, \d x+ \int_{\Omega} (\uu \cdot \nabla \phi_2) \phi \, \d x=0.\end{aligned}$$ We have $$\begin{aligned} -\int_{\Omega} (\nabla \uu_1 \nabla \phi) \cdot \nabla \phi \, \d x \leq \| \nabla \uu_1 \|_{L^\infty(\Omega)} \| \nabla \phi\|_{\L2}^2,\end{aligned}$$ and by $W^{2,p}(\Omega)\hookrightarrow W^{1,\infty}(\Omega)$, $p>2$, we get $$\begin{aligned} -\int_{\Omega} (\nabla \uu \nabla \phi_2) \cdot \nabla \phi \, \d x & \leq \|\nabla \uu \|_{\L2} \| \nabla \phi_2\|_{L^\infty(\Omega)} \| \nabla \phi\|_{\L2}\\ & \leq \frac{\nu_\ast}{8} \|D \uu \|_{\L2}^2+ C \| \nabla \phi\|_{\L2}^2,\end{aligned}$$ $$\begin{aligned} -\int_{\Omega} (\uu \cdot \nabla \phi_2) \phi \, \d x \leq \| \uu\|_{\L2} \| \nabla \phi_2\|_{L^\infty(\Omega)} \| \phi\|_{\L2} \leq C \| \uu\|_{\L2}^2 +C\| \phi\|_{\L2}^2.\end{aligned}$$ Using the Gagliardo-Nirenberg inequality, we obtain $$\begin{aligned} -\int_{\Omega} (\nabla^2 \phi_2 \uu) \cdot \nabla \phi \, \d x &\leq \| \nabla^2 \phi_2\|_{L^p(\Omega)} \| \uu\|_{L^{\frac{2p}{p-2}}(\Omega)} \| \nabla \phi\|_{\L2}\\ &\leq C \| \phi_2\|_{W^{2,p}(\Omega)} \| \uu\|_{\L2}^{\frac{p-2}{p}}\| \nabla \uu\|_{\L2}^{\frac{2}{p}} \| \nabla \phi\|_{\L2}\\ &\leq C \| \uu\|_{\L2}^{\frac{p-2}{p}}\| \nabla \uu\|_{\L2}^{\frac{2}{p}} \| \nabla \phi\|_{\L2}\\ &\leq \frac{\nu_\ast}{8} \| D \uu\|_{\L2}^2+ C \| \uu \|_{\L2}^2 + C \| \nabla \phi\|_{\L2}^2.\end{aligned}$$ Collecting the above estimates, we are led to $$\begin{aligned} \ddt \|\phi\|_{H^1(\Omega)}^2 &\leq \frac{\nu_\ast}{2} \| D \uu\|_{\L2}^2+ C (1+ \| \nabla \uu_1\|_{L^\infty(\Omega)} ) \big( \| \uu\|_{\L2}^2+ \|\phi\|_{H^1(\Omega)}^2 \big). \label{U4}\end{aligned}$$ Adding this inequality to the one previously obtained for the velocity, we end up with the differential inequality $$\begin{aligned} \ddt \big( \| \uu\|_{\L2}^2+ \|\phi\|_{H^1(\Omega)}^2 \big) + \nu_\ast \| D \uu\|_{\L2}^2 \leq C R(t) \big( \| \uu\|_{\L2}^2+ \|\phi\|_{H^1(\Omega)}^2\big), \label{U4a}\end{aligned}$$ where $$\begin{aligned} R= 1+ \| \partial_t \uu_2\|_{L^3(\Omega)}+ \| \uu_2\|_{L^\infty(\Omega)}\| \nabla \uu_2\|_{L^3(\Omega)} + \|D \uu_2 \|_{L^3(\Omega)}^2+ \| \nabla \uu_2\|_{L^\infty(\Omega)}+ \|\nabla \uu_1 \|_{L^\infty(\Omega)}.\end{aligned}$$ Since $R\in L^1(0,T_0)$, the uniqueness of strong solutions follows from Gronwall's lemma. The local well-posedness result stated in Theorem \[CF-T\] is also valid in the three-dimensional case, provided that the initial datum satisfies $\phi_0 \in W^{2,p}(\Omega)$ for some $p>3$. The strategy used in the above proof can be adapted to this case by using the corresponding Sobolev inequalities in three dimensions.
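For instance, in place of the two-dimensional Ladyzhenskaya inequality one can use its three-dimensional counterpart (the case $p=4$ of the Gagliardo-Nirenberg inequality listed in the preliminaries) $$\| f\|_{L^4(\Omega)}\leq C \| f\|_{L^2(\Omega)}^\frac14 \| f\|_{H^1(\Omega)}^\frac34, \quad \forall \, f \in H^1(\Omega),\ d=3,$$ together with the embedding $W^{2,p}(\Omega)\hookrightarrow W^{1,\infty}(\Omega)$, which in three dimensions requires $p>3$; this is the reason for the stronger assumption on $\phi_0$.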
Mass-Conserving Navier-Stokes-Allen-Cahn System: Weak Solutions =============================================================== \[S-WEAK\] In this section, we consider the Navier-Stokes-Allen-Cahn system for a binary mixture of two incompressible fluids with different densities. This model was proposed in [@GKL2018 Section 4.2.2] and derived through an energetic variational approach (see also [@JLL2017] for the case with no mass constraint). The system reads as follows $$\label{NSAC-D} \begin{cases} \rho(\phi)\big( \partial_t \uu + \uu \cdot \nabla \uu \big)- \div \big( \nu(\phi)D\uu\big)+ \nabla P = - \div(\nabla \phi \otimes \nabla \phi),\\ \div \uu=0,\\ \partial_t \phi +\uu\cdot \nabla \phi + \mu + \displaystyle{\rho'(\phi)\frac{|\uu|^2}{2}} = \xi, \smallskip\\ \mu= -\Delta \phi + \Psi' (\phi), \quad \xi= \displaystyle{\overline{\mu+ \rho'(\phi)\frac{|\uu|^2}{2}}}, \end{cases} \quad \text{ in } \Omega \times (0,T),$$ subject to the boundary conditions $$\label{boundary-D} \uu=\mathbf{0},\quad \partial_{\n} \phi =0 \quad \text{ on } \partial\Omega \times (0,T),$$ and to the initial conditions $$\label{IC-D} \uu(\cdot, 0)= \uu_0, \quad \phi(\cdot, 0)=\phi_0 \quad \text{ in } \Omega.$$ Here, $\rho(\phi)$ and $\nu(\phi)$ are, respectively, the density and the viscosity of the mixture, which satisfy the assumptions stated in Section \[S-Complex\]. The nonlinear function $\Psi$ is the Flory-Huggins potential. The total energy of system - is given by $$E(\uu,\phi)= \int_{\Omega} \frac12 \rho(\phi) |\uu|^2+ \frac12 |\nabla \phi|^2 + \Psi(\phi) \, \d x. \label{NSAC-Denergy}$$ The main results of this section concern the existence of global weak solutions. \[weak-D\] Let $\Omega$ be a bounded domain in $\mathbb{R}^d$ with smooth boundary, $d=2,3$. Assume that the initial datum $(\uu_0,\phi_0)$ satisfies $\uu_0 \in \H_\sigma, \phi_0\in H^1(\Omega)\cap L^\infty(\Omega)$ with $ \| \phi_0\|_{L^\infty(\Omega)}\leq 1$ and $ |\overline{\phi}_0|<1$. Then, there exists a global weak solution $(\uu,\phi)$ to system - in the following sense: - For all $T>0$, the pair $(\uu,\phi)$ satisfies $$\begin{aligned} &\uu \in L^\infty(0,T;\H_\sigma)\cap L^2(0,T;\V_\sigma),\\ &\phi \in L^\infty(0,T; H^1(\Omega))\cap L^q(0,T;H^2(\Omega)),\quad \partial_t \phi \in L^q(0,T;L^2(\Omega)), \\ &\phi \in L^\infty(\Omega\times (0,T)) : |\phi(x,t)|<1 \ \text{a.e. in} \ \Omega\times(0,T),\\ &\mu \in L^q(0,T;L^2(\Omega)),\end{aligned}$$ with $q=2$ if $d=2$, $q=\frac{4}{3}$ if $d=3$. - For all $T>0$, the system is satisfied in the following weak sense $$\begin{aligned} &-\int_0^T\!\int_\Omega (\rho'(\phi) \partial_t \phi \eta(t)+ \rho(\phi)\eta'(t)) \uu\cdot \vv \, \d x \d t+ \int_0^T\!\int_\Omega \big(\rho(\phi)\uu\cdot\nabla\uu \big) \cdot \vv \eta(t) \, \d x\d t \\ &\quad + \int_0^T\!\int_\Omega \nu(\phi)(D\uu: D \vv)\eta(t) \, \d x\d t= \int_\Omega \rho(\phi_0)\uu_0\cdot\vv\,\eta(0) \, \d x+ \int_0^T\!\int_\Omega \big((\nabla \phi \otimes \nabla \phi): \nabla \vv \big)\eta(t) \, \d x\d t,\end{aligned}$$ for $\vv \in \V_\sigma$, $\eta\in C^1([0,T])$ with $\eta(T)=0$, and $$\begin{aligned} &\partial_t \phi+ \uu\cdot \nabla \phi -\Delta \phi+ \Psi'(\phi)+ \displaystyle{\rho'(\phi)\frac{|\uu|^2}{2}}=\overline{\Psi'(\phi)+ \rho'(\phi)\frac{|\uu|^2}{2}}, \quad \text{a.e. in} \ \Omega \times (0,T).\end{aligned}$$ - The pair $(\uu,\phi)$ fulfills the regularity $\uu \in C([0,T];(\H_\sigma)_w)$ and $\phi \in C([0,T];(H^1(\Omega))_w)$, for all $T>0$, and $\uu|_{t=0}=\uu_0$, $\phi|_{t=0}=\phi_0$ in $\Omega$. In addition, $ \partial_{\n}\phi=0$ on $\partial\Omega\times(0,T)$ for all $T>0$.
- The energy inequality $$\begin{aligned} E(\uu(t), \phi(t))+\int_0^t \int_{\Omega} \nu(\phi) |D \uu|^2 \, \d x \d\tau + \int_0^t \|\partial_t \phi + \uu \cdot \nabla \phi \|_{\L2}^2 \, \d \tau \leq E(\uu_0, \phi_0)\end{aligned}$$ holds for all $t \geq 0$. Next, we investigate the special case with matched densities (i.e. $\rho_1=\rho_2$, so that $\rho\equiv 1$). The resulting model is the homogeneous mass-conserving Navier-Stokes-Allen-Cahn system $$\label{NSAC} \begin{cases} \partial_t \uu + \uu \cdot \nabla \uu - \div (\nu(\phi)D\uu) + \nabla p= - \div(\nabla \phi \otimes \nabla \phi),\\ \div \uu=0,\\ \partial_t \phi +\uu\cdot \nabla \phi + \mu= \overline{\mu}, \\ \mu= -\Delta \phi + \Psi' (\phi), \end{cases} \quad \text{ in } \Omega \times (0,T).$$ This system is associated with the boundary and the initial conditions $$\label{bic} \uu=\mathbf{0},\quad \partial_{\n} \phi =0 \quad \text{ on } \partial\Omega \times (0,T), \quad \uu(\cdot, 0)= \uu_0, \quad \phi(\cdot, 0)=\phi_0 \quad \text{ in } \Omega.$$ We first state the existence of global weak solutions, whose proof follows from [*a priori*]{} estimates similar to the ones obtained for the nonhomogeneous case in the proof of Theorem \[weak-D\] below. \[W-S\] Let $\Omega$ be a bounded domain in $\mathbb{R}^d$, $d=2,3$, with smooth boundary. Assume that the initial datum $(\uu_0,\phi_0)$ satisfies $\uu_0 \in \H_\sigma, \phi_0\in H^1(\Omega)\cap L^\infty(\Omega)$ with $ \| \phi_0\|_{L^\infty(\Omega)}\leq 1$ and $ |\overline{\phi}_0|<1$. Then there exists a global weak solution $(\uu,\phi)$ to problem -. That is, the solution $(\uu,\phi)$ satisfies, for all $T>0$, $$\begin{aligned} &\uu \in L^\infty(0,T;\H_\sigma)\cap L^2(0,T;\V_\sigma),\\ &\partial_t \uu \in L^2(0,T;\V'_\sigma) \ \text{if} \ d=2, \quad \partial_t \uu \in L^\frac43(0,T;\V'_\sigma) \ \text{if} \ d=3,\\ &\phi \in L^\infty(0,T; H^1(\Omega))\cap L^2(0,T;H^2(\Omega)), \\ &\phi \in L^\infty(\Omega\times (0,T)) : |\phi(x,t)|<1 \ \text{a.e. in } \ \Omega\times(0,T),\\ &\partial_t \phi \in L^2(0,T;L^2(\Omega)) \ \text{if} \ d=2, \quad \partial_t \phi \in L^\frac43(0,T;L^2(\Omega)) \ \text{if} \ d=3,\end{aligned}$$ and $$\begin{aligned} &\l \partial_t \uu, \vv\r + (\uu \cdot \nabla \uu, \vv)+ (\nu(\phi)D\uu,\nabla \vv) = (\nabla \phi \otimes \nabla \phi, \nabla \vv), &&\forall \, \vv \in \V_\sigma, \ \text{a.e.} \ t \in (0,T),\\ &\partial_t \phi+ \uu\cdot \nabla \phi -\Delta \phi+ \Psi'(\phi)=\overline{\Psi'(\phi)}, && \text{a.e.} \ (x,t) \in \Omega \times (0,T).\end{aligned}$$ Moreover, the initial and boundary conditions and the energy inequality hold as in Theorem \[weak-D\]. Furthermore, due to the particular form of the density function, we are able to prove a uniqueness result in dimension two. \[uni2d\] Assume $d=2$. Let $(\uu_1,\phi_1)$ and $(\uu_2,\phi_2)$ be two weak solutions to problem - on $[0,T]$ subject to the same initial condition $(\uu_0, \phi_0)$ which satisfies the assumptions of Theorem \[W-S\]. Moreover, we assume that $\phi_1$ satisfies the additional regularity $\phi_1 \in L^\gamma(0,T;H^2(\Omega))$ with $\gamma>\frac{12}{5}$. Then $(\uu_1,\phi_1)=(\uu_2,\phi_2)$ on $[0,T]$. The existence of strong solutions obtained in Section \[S-STRONG\] (cf. Remark \[strong-hom\]), which yields the regularity $\phi\in L^\gamma(0,T;H^2(\Omega))$, where $\gamma>\frac{12}{5}$, entails that Theorem \[uni2d\] can be regarded as a weak-strong uniqueness result for problem - in two dimensions.
That is, the weak solution originating from an initial condition $(\uu_0,\phi_0)$ such that $\uu_0\in \V_\sigma$ and $\phi_0\in H^2(\Omega)$ with $\Psi'(\phi_0)\in \L2$ coincides with the (unique) strong solution departing from the same initial datum. Proof of Theorem \[weak-D\] --------------------------- First, we derive *a priori* estimates of problem - that will be crucial to prove the existence of global weak solutions. **Mass conservation and energy dissipation.** First, integrating the equation $_3$ over $\Omega$ and using the definition of $\xi$, we observe that $$\frac{1}{|\Omega|}\int_{\Omega} \phi (t) \, \d x=\frac{1}{|\Omega|} \int_{\Omega} \phi_0 \, \d x, \quad \forall \, t \geq 0.$$ Next, we derive the energy equation associated with . Multiplying $_1$ by $\uu$ and integrating over $\Omega$, we obtain $$\label{NSAC-D1} \int_{\Omega} \frac12 \rho(\phi) \partial_t |\uu|^2 \, \d x+ \int_{\Omega} \rho(\phi) (\uu \cdot \nabla) \uu \cdot \uu \, \d x+ \int_{\Omega} \nu(\phi) |D \uu|^2 \, \d x= -\int_{\Omega} \Delta \phi \nabla \phi \cdot \uu \, \d x.$$ Here we have used the relation $-\Delta \phi \nabla \phi= \frac12 \nabla |\nabla \phi|^2 -{\rm div}(\nabla \phi \otimes \nabla \phi)$ and the incompressibility condition $_2$. Thanks to the no-slip boundary condition for $\uu$, we observe that $$\begin{aligned} \int_{\Omega} \rho(\phi) (\uu \cdot \nabla) \uu \cdot \uu \, \d x &= \int_{\Omega} \rho(\phi) \uu \cdot \nabla \Big( \frac12 |\uu|^2 \Big) \, \d x \\ &= - \frac12 \int_{\Omega} \div ( \rho(\phi) \uu ) |\uu|^2 \, \d x = - \int_{\Omega} \rho'(\phi)\nabla \phi \cdot \uu \ \frac{|\uu|^2}{2} \, \d x.\end{aligned}$$ Next, we multiply $_3$ by $\partial_t \phi+ \uu \cdot \nabla \phi$ and integrate over $\Omega$. Noticing that $\overline{\partial_t \phi + \uu\cdot \nabla \phi}=0$, we get $$\label{NSAC-D2} \| \partial_t \phi+ \uu \cdot \nabla \phi \|_{\L2}^2+ \int_{\Omega} \mu \, \big( \partial_t \phi+ \uu \cdot \nabla \phi\big) \, \d x + \int_{\Omega} \rho'(\phi) \frac{|\uu|^2}{2}\big(\partial_t \phi+ \uu \cdot \nabla \phi\big)\, \d x=0.$$ On the other hand, the following equalities hold $$\begin{aligned} &\int_{\Omega} \mu \, \partial_t \phi \, \d x= \ddt \int_{\Omega} \frac12 |\nabla \phi|^2 + \Psi(\phi) \, \d x, \\ &\int_{\Omega} \mu \, \uu \cdot \nabla \phi \, \d x= \int_{\Omega} -\Delta \phi \nabla \phi \cdot \uu \, \d x + \int_{\Omega} \uu \cdot \nabla \Psi(\phi) \, \d x= \int_{\Omega} -\Delta \phi \nabla \phi \cdot \uu \, \d x,\\ & \int_{\Omega} \rho'(\phi) \frac{|\uu|^2}{2} \partial_t \phi \, \d x= \int_{\Omega} \partial_t (\rho(\phi)) \frac{|\uu|^2}{2} \, \d x.\end{aligned}$$ Thus, by adding and , and using the above identities, we obtain the energy equation $$\begin{aligned} \ddt E(\uu, \phi)+\int_{\Omega} \nu(\phi) |D \uu|^2 \, \d x + \|\partial_t \phi + \uu \cdot \nabla \phi \|_{\L2}^2=0.\label{BEL-D}\end{aligned}$$ **Lower-order estimates.** We assume that $\phi \in L^\infty(\Omega\times (0,T))$ such that $ |\phi(x,t)|<1$ almost everywhere in $\Omega\times(0,T)$ (cf. Existence of weak solutions below). 
Since $\rho$ is strictly positive, it is immediately seen from that $$\label{E-bound} E(\uu(t),\phi(t))+ \int_0^t \int_{\Omega} \nu(\phi) |D \uu|^2 \, \d x \d \tau + \int_0^t \|\partial_t \phi + \uu \cdot \nabla \phi \|_{\L2}^2 \, \d \tau\leq E(\uu_0, \phi_0), \quad \forall \, t \geq 0.$$ Therefore, we deduce $$\label{B1-D} \uu \in L^\infty(0,T; \H_\sigma)\cap L^2(0,T;\V_\sigma), \quad \phi\in L^\infty(0,T;H^1(\Omega))$$ and $$\label{B2-D} \partial_t \phi + \uu \cdot \nabla \phi \in L^2(0,T; L^2(\Omega)).$$ In light of and , when $d=2$, we have $$\Big\| - \rho'(\phi) |\uu|^2 \Big\|_{\L2}\leq C \| \uu\|_{L^4(\Omega)}^2 \leq C \| \nabla \uu\|_{\L2},$$ which entails that $\rho'(\phi) |\uu|^2\in L^2(0,T;\L2)$. Instead, when $d=3$, we have $$\Big\| - \rho'(\phi) |\uu|^2 \Big\|_{\L2}\leq C \| \uu\|_{L^4(\Omega)}^2 \leq C \| \nabla \uu\|_{\L2}^\frac32,$$ thus $\rho'(\phi) |\uu|^2\in L^\frac43(0,T;\L2)$. Since $\overline{\rho'(\phi) |\uu|^2} \in L^\infty(0,T)$, we also learn that $$\label{B3-D} \mu - \overline{\mu}\in L^q(0,T;L^2(\Omega)),$$ for $q=2$ if $d=2$, $q=\frac{4}{3}$ if $d=3$. Thanks to the boundary condition for $\phi$, we see that $\overline{\Delta \phi}=0$. Then, multiplying $_4$ by $-\Delta \phi$ and integrating by parts, we have $$\int_{\Omega} |\Delta \phi|^2 + F''(\phi) |\nabla \phi|^2 \, \d x= \theta_0 \|\nabla \phi\|_{\L2}^2 - \int_{\Omega} (\mu-\overline{\mu})\Delta \phi \, \d x,$$ where $F$ is the convex part of the potential $\Psi$, i.e. $F(s)=\frac{\theta}{2}\left[ (1+s)\log(1+s)+(1-s)\log(1-s)\right].$ By and , we obtain $$\label{H2-D} \| \Delta \phi \|_{\L2} \leq C(1+ \| \mu-\overline{\mu}\|_{\L2}).$$ Then, from the regularity theory of the Neumann problem, we infer that $$\label{B4-D} \phi \in L^q(0,T;H^2(\Omega)).$$ From , and the above bounds, we have $$\begin{aligned} \| \uu \cdot \nabla \phi\|_{\L2} &\leq C \| \uu \|_{L^4(\Omega)} \|\nabla \phi\|_{L^4(\Omega)}\notag\\ &\leq C \| \uu \|_{\L2}^\frac12\| \nabla \uu\|_{\L2}^\frac12 \|\nabla \phi\|_{\L2}^\frac12 \| \phi\|_{H^2(\Omega)}^\frac12\notag\\ & \leq C \| \nabla \uu\|_{\L2}^\frac12 \| \phi\|_{H^2(\Omega)}^\frac12,\quad \text{if}\ d=2,\end{aligned}$$ and $$\begin{aligned} \| \uu \cdot \nabla \phi\|_{\L2} &\leq C \| \uu \|_{L^4(\Omega)} \|\nabla \phi\|_{L^4(\Omega)}\notag\\ &\leq C \| \uu \|_{\L2}^\frac14\| \nabla \uu\|_{\L2}^\frac34 \|\phi\|_{L^\infty(\Omega)}^\frac12 \| \phi\|_{H^2(\Omega)}^\frac12\notag\\ & \leq C \| \nabla \uu\|_{\L2}^\frac34 \| \phi\|_{H^2(\Omega)}^\frac12,\quad \text{if}\ d=3,\end{aligned}$$ which implies $\uu \cdot \nabla \phi\in L^q(0,T;L^2(\Omega))$. 
Thus $$\label{B5-D} \partial_t \phi \in L^q(0,T;L^2(\Omega)).$$ Moreover, we observe that $$\begin{aligned} \| \mu-\overline{\mu}\|_{\L2}&\leq \| \partial_t \phi\|_{\L2}+ \| \uu \cdot \nabla \phi\|_{\L2} + \Big\|-\rho'(\phi)\frac{|\uu|^2}{2}\Big\|_{\L2} + |\Omega|^{-\frac12} \Big\|-\rho'(\phi)\frac{|\uu|^2}{2}\Big\|_{L^1(\Omega)} \notag\\ &\leq \| \partial_t \phi\|_{\L2}+ C \| \uu \|_{L^4(\Omega)} \|\nabla \phi\|_{L^4(\Omega)}+C \| \uu\|_{L^4(\Omega)}^2+ C \| \uu\|_{\L2}^2\notag \\ &\leq \| \partial_t \phi\|_{\L2}+ C \| \nabla \uu\|_{\L2}^\frac12 \| \phi\|_{H^2(\Omega)}^\frac12 +C \| \uu\|_{\L2}\| \nabla \uu\|_{\L2} + C \| \uu\|_{\L2}^2\notag \\ &\leq \| \partial_t \phi\|_{\L2}+ C \| \nabla \uu\|_{\L2}^\frac12 \| \phi\|_{H^2(\Omega)}^\frac12 +C \| \nabla \uu\|_{\L2} + C,\quad \text{if}\ d=2, \label{mu-L2-2}\end{aligned}$$ and $$\begin{aligned} \| \mu-\overline{\mu}\|_{\L2}&\leq \| \partial_t \phi\|_{\L2}+ C \| \nabla \uu\|_{\L2}^\frac34 \| \phi\|_{H^2(\Omega)}^\frac12 +C \| \nabla \uu\|_{\L2}^\frac32 + C,\quad \text{if}\ d=3. \label{mu-L2-3}\end{aligned}$$ Recalling and , and using Young’s inequality, we find that $$\label{estH2-D} \|\phi \|_{H^2(\Omega)}\leq C(1+ \| \partial_t \phi\|_{\L2}+ \| \nabla \uu\|_{\L2}), \quad \text{if}\ d=2,$$ and $$\label{estH3-D} \|\phi \|_{H^2(\Omega)}\leq C(1+ \| \partial_t \phi\|_{\L2}+ \| \nabla \uu\|_{\L2}^\frac32), \quad \text{if}\ d=3.$$ In order to recover the full $L^2$-norm of $\mu$, we observe that $$\overline{\mu}=\overline{F'(\phi)}- \theta_0 \overline{\phi}.$$ Since $|\overline{\phi}(t)|=|\overline{\phi_0}|<1$, it is well-known that $$\int_{\Omega} |F'(\phi)| \, \d x \leq C \int_{\Omega} F'(\phi) (\phi-\overline{\phi}) \, \d x+ C$$ for some positive constant $C$ depending on $F$ and $\overline{\phi}$. Multiplying $_4$ by $\phi - \overline{\phi}$ and using the boundary condition on $\phi$, we obtain $$\|\nabla \phi\|_{\L2}^2+ \int_{\Omega} F'(\phi) (\phi-\overline{\phi})= \int_{\Omega} \mu (\phi-\overline{\phi}) \, \d x+ \int_{\Omega} \theta_0 \phi (\phi-\overline{\phi}) \, \d x.$$ Combining the above two relations and exploiting the energy bounds , we reach $$\label{mubar} \| F'(\phi) \|_{L^1(\Omega)} \leq C (1+ \| \mu-\overline{\mu}\|_{\L2}).$$ This actually implies that $$\mu \in L^q(0,T;L^2(\Omega)) \label{muq2}$$ and, in view of , we also have $$F'(\phi)\in L^q(0,T;L^2(\Omega)),\label{fpq2}$$ where $q=2$ if $d=2$, $q=\frac{4}{3}$ if $d=3$. Besides, we have the following estimate for the time translations of $\uu$: \[est-tran\] For any $\delta\in(0,T)$, the following bound holds $$\begin{aligned} \int_0^{T-\delta}\|\uu(t+\delta)-\uu(t)\|_{\L2}^2 \, \d t\leq C\delta^\frac14.\label{est-tr}\end{aligned}$$ We only present the proof for the case $d=3$. The case $d=2$ follows along the same lines. It follows from and the interpolation with $p=3$ that $\uu\in L^4(0,T;L^3(\Omega))$. 
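Indeed, by the three-dimensional Gagliardo-Nirenberg inequality with $p=3$, $$\int_0^T \| \uu(t)\|_{L^3(\Omega)}^4 \, \d t \leq C \int_0^T \| \uu(t)\|_{L^2(\Omega)}^{2} \| \uu(t)\|_{H^1(\Omega)}^{2} \, \d t \leq C \| \uu\|_{L^\infty(0,T;\H_\sigma)}^{2} \int_0^T \| \uu(t)\|_{H^1(\Omega)}^{2} \, \d t<+\infty.$$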
Similar to [@Lions] (see also [@JLL2017 Lemma 3.5]), we have $$\begin{aligned} &\|\sqrt{\rho(\phi(t+\delta))}(\uu(t+\delta)-\uu(t))\|_{\L2}^2 \nonumber\\ &\quad \leq -\int_\Omega(\rho(\phi(t+\delta))-\rho(\phi(t)))\uu(t)\cdot (\uu(t+\delta)-\uu(t)) \, \d x\nonumber\\ &\qquad -\int_{t}^{t+\delta} \!\int_\Omega \rho(\phi(\tau))(\uu(\tau)\cdot\nabla)\uu(\tau)\cdot (\uu(t+\delta)-\uu(t))\, \d x\d \tau\nonumber\\ &\qquad -\int_t^{t+\delta}\!\int_\Omega \nu(\phi(\tau))D\uu(\tau):D(\uu(t+\delta)-\uu(t))\, \d x\d \tau\nonumber\\ &\qquad +\int_t^{t+\delta}\!\int_\Omega (\nabla \phi(\tau)\otimes\nabla \phi(\tau)): \nabla (\uu(t+\delta)-\uu(t)) \, \d x\d \tau\nonumber\\ &\qquad +\int_t^{t+\delta}\!\int_\Omega \rho'(\phi)\partial_\tau \phi(\tau)\uu(\tau)\cdot (\uu(t+\delta)-\uu(t))\, \d x\d \tau\nonumber := \sum_{i=1}^{5}J_i.\nonumber\end{aligned}$$ Observe now $$\begin{aligned} \int_0^{T-\delta} J_1(t)\, \d t &\leq \int_0^{T-\delta}\!\int_{t}^{t+\delta}\!\int_\Omega |\rho'(\phi)||\partial_\tau\phi(\tau)||\uu(t)| (|\uu(t+\delta)|+|\uu(t)|)\, \d x\d\tau\d t\nonumber\\ &\leq \int_0^{T-\delta}(\|\uu(t+\delta)\|_{L^3(\Omega)}+\|\uu(t)\|_{L^3(\Omega)}) \|\uu(t)\|_{L^6(\Omega)}\int_t^{t+\delta}\| \partial_\tau\phi(\tau)\|_{L^2(\Omega)} \, \d \tau \d t\nonumber\\ &\leq C\delta^\frac14\left(\int_0^T \|\nabla \uu(t)\|_{L^2(\Omega)} \, \d t\right) \left(\int_0^{T}\| \partial_t\phi(t)\|_{L^2(\Omega)}^\frac43\d t\right)^\frac34 \leq C\delta^\frac14,\nonumber\end{aligned}$$ and, in a similar manner, $$\begin{aligned} \int_0^{T-\delta} J_5(t) \, \d t &\leq \int_0^{T-\delta}(\|\uu(t+\delta)\|_{L^3(\Omega)}+\|\uu(t)\|_{L^3(\Omega)}) \int_t^{t+\delta}\|\uu(\tau)\|_{L^6(\Omega)}\| \partial_\tau\phi(\tau)\|_{L^2(\Omega)} \, \d \tau \d t\nonumber\\ &\leq C\delta^\frac14\left(\int_0^T \|\nabla \uu(t)\|_{L^2(\Omega)}\, \d t\right) \left(\int_0^{T}\| \partial_t\phi(t)\|_{L^2(\Omega)}^\frac43\d t\right)^\frac34 \leq C\delta^\frac14.\nonumber\end{aligned}$$ Next, we have $$\begin{aligned} &\int_0^{T-\delta} J_2(t) \, \d t\nonumber\\ &\quad \leq \int_0^{T-\delta}\!\int_{t}^{t+\delta} \| \rho(\phi(\tau))\|_{L^\infty(\Omega)}\|\uu(\tau)\|_{L^6(\Omega)} \|\nabla\uu(\tau)\|_{L^2(\Omega)} \, \d \tau (\|\uu(t+\delta)\|_{L^3(\Omega)}+\|\uu(t)\|_{L^3(\Omega)})\, \d t\nonumber\\ &\quad \leq C\delta^\frac12\int_0^{T-\delta}\left(\int_t^{t+\delta} \|\nabla\uu(\tau)\|_{L^2(\Omega)}^2 \, \d \tau\right)^\frac12(\|\uu(t+\delta)\|_{L^3(\Omega)}+\|\uu(t)\|_{L^3(\Omega)}) \, \d t\nonumber\\ &\quad \leq C\delta^\frac12\left(\int_0^T \|\nabla\uu(t)\|_{L^2(\Omega)}^2 \, \d t\right)^\frac12\int_0^T\|\uu(t)\|_{L^3(\Omega)} \, \d t \leq C\delta^\frac12,\nonumber\end{aligned}$$ and $$\begin{aligned} &\int_0^{T-\delta} J_3(t) \, \d t\nonumber\\ &\quad \leq \int_0^{T-\delta} \int_t^{t+\delta} \|\nu(\phi(\tau))\|_{L^\infty(\Omega)}\|D\uu(\tau)\|_{L^2(\Omega)} \, \d \tau (\|D\uu(t+\delta)\|_{\L2}+\|D\uu(t)\|_{\L2})\, \d t\nonumber\\ &\quad \leq C\delta^\frac12 \int_0^{T-\delta} \left(\int_t^{t+\delta} \|\nabla \uu(\tau)\|_{L^2(\Omega)}^2 \, \d \tau\right)^\frac12 (\|D\uu(t+\delta)\|_{\L2}+\|D\uu(t)\|_{\L2}) \, \d t\nonumber\\ &\quad \leq C\delta^\frac12 \left(\int_0^{T} \|\nabla \uu(t)\|_{L^2(\Omega)}^2 \, \d t\right)^\frac12 \int_0^{T} \|\nabla \uu(t)\|_{\L2}\, \d t \leq C\delta^\frac12,\nonumber\end{aligned}$$ Finally, by using we get $$\begin{aligned} \int_0^{T-\delta} J_4(t) \, \d t &\leq \int_0^{T-\delta}\! 
\int_t^{t+\delta} \|\nabla \phi(\tau)\|_{L^4(\Omega)}^2 \, \d \tau (\|\nabla \uu(t+\delta)\|_{\L2} +\|\nabla \uu(t)\|_{\L2}) \, \d t\nonumber\\ &\quad \leq C\delta^\frac14 \int_0^{T-\delta} \left(\int_t^{t+\delta} \| \phi(\tau)\|_{H^2(\Omega)}^\frac43 \, \d \tau\right)^\frac34 (\|\nabla \uu(t+\delta)\|_{\L2} +\|\nabla \uu(t)\|_{\L2})\, \d t\nonumber\\ &\quad \leq C\delta^\frac14\left(\int_0^{T} \| \phi(t)\|_{H^2(\Omega)}^\frac43 \, \d t\right)^\frac34 \int_0^T \|\nabla \uu(t)\|_{\L2} \, \d t \leq C\delta^\frac14.\end{aligned}$$ From the above estimate and the fact that $\rho$ is strictly bounded from below, we obtain the conclusion of the lemma. The proof is complete. **Existence of weak solutions.** With the above *a priori* estimates, we are able to prove the existence of a global weak solution by using a semi-Galerkin scheme similar to [@JLL2017]. More precisely, for any $n\in \mathbb{N}$, we find a local-in-time approximating solution $(\uu_n, \phi_n)$ where $\uu_n$ solves $_1$ as in the classical Galerkin approximation and $\phi_n$ is the (non-discrete) solution to the Allen-Cahn equations $_3$-$_4$ with the velocity $\uu_n$, the singular potential and the nonlocal term. This is achieved via a Schauder fixed point argument. For this approach, one needs to solve separately a convective nonlocal Allen-Cahn equation. This can be done by introducing a family of regular potentials $\lbrace \Psi_\varepsilon \rbrace$ that approximates the original singular potential $\Psi$ by setting (see, e.g., [@GGW2018]) $$\Psi_\varepsilon(s)=F_\varepsilon(s)-\frac{\theta_0}{2}s^2,\quad \forall\, s\in \mathbb{R},\nonumber$$ where $$F_\varepsilon(s)= \begin{cases} \displaystyle{\sum_{j=0}^2 \frac{1}{j!}} F^{(j)}(1-\varepsilon) \left[s-(1-\varepsilon)\right]^j, \qquad\qquad \!\! \forall\,s\geq 1-\varepsilon,\\ F(s), \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \forall\, s\in[-1+\varepsilon, 1-\varepsilon],\\ \displaystyle{\sum_{j=0}^2 \frac{1}{j!}} F^{(j)}(-1+\varepsilon)\left[ s-(-1+\varepsilon)\right]^j, \qquad\ \forall\, s\leq -1+\varepsilon. \end{cases} \nonumber$$ Substituting the regular potential $\Psi_\varepsilon$ into the original Allen-Cahn equation, we are able to prove the existence of an approximating solution $\phi_\varepsilon$ to the resulting regularized equation using the semigroup approach as in [@JLL2017 Lemma 3.2] or simply by the Galerkin method. For the approximating solution $\phi_{\varepsilon}$, we can derive estimates that are uniform in $\varepsilon$ and then pass to the limit as $\varepsilon\to 0$ to recover the case with singular potential. Here, we would like to remark that, thanks to the singular potential, we can show that the phase field takes values in $[-1,1]$ (using an argument similar to the one in [@GGW2018]), without the additional assumption $s\rho'(s)\geq 0$ for $|s|>1$ that was required in [@JLL2017]. Next, thanks to the [*a priori*]{} estimates established above, it follows that the existence time interval of any solution $(\uu_n, \phi_n)$ is independent of $n$. From the same argument, we deduce uniform estimates that yield compactness of the phase field $\phi$. Then, the key issue is to obtain uniform estimates of translations $\int_0^{T-\delta} \|\uu(t+\delta)-\uu(t)\|_{L^2(\Omega)}^2\, \d t$ (see Lemma \[est-tran\]) which yield compactness of the velocity field in the case of unmatched densities (cf. [@Lions]). The above two-level approximating procedure is standard and we omit the details here.
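For the reader's convenience, we only display the form of the discrete velocity problem, for one standard choice of basis (any orthonormal basis of $\H_\sigma$ consisting of sufficiently regular divergence-free fields would serve the same purpose): denoting by $\{\mathbf{w}_i\}_{i\geq 1}$ the eigenfunctions of the Stokes operator $\A$ and by $\mathbb{P}_n$ the orthogonal projection of $\H_\sigma$ onto $\mathrm{span}\{\mathbf{w}_1,\dots,\mathbf{w}_n\}$, one looks for $$\uu_n(t)=\sum_{i=1}^n a_i^n(t)\, \mathbf{w}_i \quad \text{such that} \quad \big( \rho(\phi_n)(\partial_t \uu_n + \uu_n\cdot \nabla \uu_n), \mathbf{w}_i\big)+ \big( \nu(\phi_n) D\uu_n, \nabla \mathbf{w}_i\big)= \big( \nabla \phi_n \otimes \nabla \phi_n, \nabla \mathbf{w}_i\big),$$ for $i=1,\dots,n$, supplemented with $\uu_n(0)=\mathbb{P}_n \uu_0$, while $\phi_n$ solves the (non-discretized) nonlocal Allen-Cahn equation with velocity $\uu_n$.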
**Time continuity and initial condition.** We first observe that the regularity properties and , together with the global bound $\| \phi\|_{L^\infty(0,T;L^\infty(\Omega))}\leq 1$, entail that $\phi \in C([0,T]; L^p(\Omega))$, for any $2\leq p<\infty$ if $d=2,3$. In addition, since $\phi \in L^\infty(0,T;H^1(\Omega))$, we also infer from [@STRAUSS Theorem 2.1] that $\phi \in C([0,T];(H^1(\Omega))_w)$. If $d=2$, since $\phi \in L^2(0,T;H^2(\Omega))\cap W^{1,2}(0,T;L^2(\Omega))$, we deduce that $\phi \in C([0,T];H^1(\Omega))$. Next, the weak formulation of $_1$-$_2$ can be written as $$\begin{aligned} \ddt \l \mathbb{P}(\rho(\phi)\uu), \vv \r=\l \widetilde{\f}, \vv\r,\end{aligned}$$ for all $\vv \in \V_\sigma$, in the sense of distribution on $(0,T)$, where $\mathbb{P}$ is the Leray projection onto $\H_\sigma$ and $$\l \widetilde{\f}, \vv\r= (\rho'(\phi)\partial_t \phi \, \uu, \vv)-(\rho(\phi)(\uu \cdot\nabla) \uu, \vv)- (\nu(\phi) D \uu, \nabla \vv)+ (\nabla \phi \otimes \nabla \phi, \nabla \vv).$$ Arguing similarly to the proof of Lemma \[est-tran\], we observe that $$\begin{aligned} \| \widetilde{\f}\|_{\V_\sigma'} &\leq C \| \partial_t \phi\|_{L^2(\Omega)} \| \uu\|_{L^3(\Omega)}+ C \| \uu\|_{L^3(\Omega)} \| D \uu\|_{\L2}+ C \| D \uu\|_{\L2}+ C \| \nabla \phi\|_{L^4(\Omega)}^2\\ &\leq C \| \partial_t \phi\|_{L^2(\Omega)} \| D \uu\|_{\L2}^\frac12 + C \| D\uu\|_{\L2}^{\frac32}+ \| D \uu\|_{\L2} + C \| \phi\|_{H^2(\Omega)}\\ &\leq C \big( 1+ \| \partial_t \phi\|_{L^2(\Omega)}^\frac43 + \| D \uu\|_{\L2}^2+C \| \phi\|_{H^2(\Omega)}^\frac43 \big).\end{aligned}$$ In light of the regularity of the weak solution, we infer that $\widetilde{\f} \in L^1(0,T;\V_\sigma')$. By definition of the weak time derivative, this implies that $\partial_t \mathbb{P}(\rho(\phi)\uu)\in L^1(0,T;\V_\sigma')$. Observing that $\mathbb{P} (\rho(\phi)\uu) \in L^\infty(0,T;\H_\sigma)$, we have $\mathbb{P} (\rho(\phi)\uu) \in C([0,T];\V_\sigma')$. As a consequence, we deduce from [@STRAUSS Theorem 2.1] that $\mathbb{P} (\rho(\phi)\uu) \in C([0,T]; (\H_\sigma)_w)$. It easily follows from the properties of the Leray operator $\mathbb{P}$ that $\mathbb{P} (\rho(\phi)\uu) \in C([0,T]; (\mathbf{L}^2(\Omega))_w)$. Now, repeating the argument in [@ADG2013 Section 5.2], we deduce that $\rho(\phi)\uu \in C([0,T]; (\mathbf{L}^2(\Omega))_w)$. Therefore, since $\rho(\phi) \in C([0,T];L^2(\Omega))$ and $\rho(\phi)\geq \rho_\ast>0$, we conclude that $\uu \in C([0,T];(\mathbf{L}^2(\Omega))_w)$. Finally, thanks to the time continuity of $\uu$ and $\phi$, a standard argument ensures that $\uu|_{t=0}=\uu_0$, $\phi|_{t=0}=\phi_0$ in $\Omega$. $\square$ Proof of Theorem \[uni2d\] -------------------------- Let us consider two global weak solutions $(\uu_1,\phi_1)$ and $(\uu_2,\phi_2)$ to problem - given by Theorem \[W-S\]. Denote the differences of solutions by $\uu=\uu_1-\uu_2$, $\phi=\phi_1-\phi_2$. Then we have $$\begin{aligned} \l \partial_t \uu, \vv\r + (\uu_1 \cdot \nabla \uu, \vv)&+ (\uu \cdot \nabla \uu_2, \vv)+ (\nu(\phi_1)D\uu,\nabla \vv) + ((\nu(\phi_1)-\nu(\phi_2))D\uu_2,\nabla \vv) \notag \\ &= (\nabla \phi_1 \otimes \nabla \phi, \nabla \vv)+ (\nabla \phi \otimes \nabla \phi_2, \nabla \vv) \label{NS-diff}\end{aligned}$$ for all $ \vv \in \V_\sigma$, almost every $t \in (0,T)$, and $$\partial_t \phi+ \uu_1\cdot \nabla \phi + \uu \cdot \nabla \phi_2-\Delta \phi+ \Psi'(\phi_1)-\Psi'(\phi_2)=\overline{\Psi'(\phi_1)}-\overline{\Psi'(\phi_2)} \label{AC-diff}$$ almost every $(x,t) \in \Omega \times (0,T)$. 
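Before proceeding, recall that the singular potential splits as $\Psi(s)=F(s)-\frac{\theta_0}{2}s^2$ (cf. the definition of $\Psi_\varepsilon$ above), so that $$\Psi'(\phi_1)-\Psi'(\phi_2)= \big(F'(\phi_1)-F'(\phi_2)\big)-\theta_0 \phi.$$ This elementary splitting is where the term $\theta_0 \| \phi\|_{\L2}^2$ appearing on the right-hand side of the estimates below originates.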
Following the same strategy as in [@GMT2019], we take $\vv= \A^{-1}\uu$, where $\A$ is the Stokes operator, and we find $$\begin{aligned} \frac12 \ddt \| \uu\|_{\ast}^2 &+ (\nu(\phi_1) D\uu, \nabla \A^{-1}\uu) = (\uu\otimes \uu_1, \nabla \A^{-1}\uu)+ (\uu_2\otimes \uu, \nabla \A^{-1}\uu)\\ &- ((\nu(\phi_1)-\nu(\phi_2))D\uu_2,\nabla \A^{-1}\uu) + (\nabla \phi_1 \otimes \nabla \phi, \nabla \A^{-1}\uu) +(\nabla \phi \otimes \nabla \phi_2, \nabla \A^{-1}\uu),\end{aligned}$$ where $\| \uu\|_{\ast}= \| \nabla \A^{-1} \uu\|_{\L2}$, which is a norm on $\V'_\sigma$ equivalent to the usual dual norm. Here, we have used the equality $\uu_i \cdot \nabla \uu= \div ( \uu\otimes \uu_i)$, $i=1,2$, due to the incompressibility condition. Multiplying by $\phi$, integrating over $\Omega$ and observing that $$\int_{\Omega} (\uu_1 \cdot \nabla \phi) \, \phi \,\d x= \int_{\Omega} \uu_1 \cdot \frac12 \nabla \phi^2 \,\d x=0, \quad \int_{\Omega} (\overline{\Psi'(\phi_1)}-\overline{\Psi'(\phi_2)} ) \phi \, \d x= (\overline{\Psi'(\phi_1)}-\overline{\Psi'(\phi_2)} ) \overline{\phi} =0,$$ we obtain $$\frac12 \ddt \|\phi\|_{\L2}^2+ \| \nabla \phi\|_{\L2}^2+ \int_{\Omega} (\uu \cdot \nabla \phi_2) \, \phi \, \d x+ \int_{\Omega} (F'(\phi_1)-F'(\phi_2)) \, \phi \, \d x= \theta_0 \| \phi\|_{\L2}^2.$$ By adding the above two equations and using the convexity of $F$, we deduce that $$\begin{aligned} \label{u-est1} & \ddt G(t) + (\nu(\phi_1) D\uu, \nabla \A^{-1}\uu) + \| \nabla \phi\|_{\L2}^2 \notag \\ &\quad \leq (\uu\otimes \uu_1, \nabla \A^{-1}\uu)+ (\uu_2\otimes \uu, \nabla \A^{-1}\uu) - ((\nu(\phi_1)-\nu(\phi_2))D\uu_2,\nabla \A^{-1}\uu) \notag \\ &\qquad + (\nabla \phi_1 \otimes \nabla \phi, \nabla \A^{-1}\uu) +(\nabla \phi \otimes \nabla \phi_2, \nabla \A^{-1}\uu)- (\uu \cdot \nabla \phi_2 , \phi) + \theta_0 \| \phi\|_{\L2}^2,\end{aligned}$$ where $$G(t)= \frac12 \| \uu(t)\|_{\ast}^2+ \frac12 \|\phi(t)\|_{\L2}^2.$$ In order to recover a $L^2(\Omega)$-norm of $\uu$, which is a key term to control the nonlinear terms on the right-hand side, we obtain by integration by parts (see [@GMT2019 (3.9)]) $$\begin{aligned} (\nu(\phi_1) D\uu, \nabla \A^{-1}\uu)&= (\nabla \uu, \nu(\phi_1)D \A^{-1}\uu)\\ &=- (\uu, \div (\nu(\phi_1)D \A^{-1}\uu )\\ &=- (\uu, \nu'(\phi_1) D \A^{-1}\uu \nabla \phi_1) - \frac12 (\uu, \nu(\phi_1) \Delta \A^{-1}\uu).\end{aligned}$$ Here we have used that $\div \nabla^t \vv= \nabla \div \vv$. Notice that, by definition of Stokes operator, there exists a scalar function $p\in L^\infty(0,T;H^1(\Omega))\cap L^2(0,T;H^2(\Omega))$ (unique up to a constant) such that $ -\Delta \A^{-1} \uu+ \nabla p= \uu$ for almost any $(x,t)\in \Omega \times (0,T)$. 
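In the integration by parts above, only a multiple of $\Delta \A^{-1}\uu$ survives from the divergence of the symmetric gradient. Indeed, writing the symmetric part of the gradient explicitly and using the identity $\div \nabla^t \vv= \nabla \div \vv$ recalled above, for any (sufficiently smooth) solenoidal field $\vv$ we have $$\div \big(\tfrac12 (\nabla \vv+ \nabla^t \vv)\big)= \tfrac12 \Delta \vv+ \tfrac12 \nabla \div \vv= \tfrac12 \Delta \vv,$$ which is consistent with the factor $\frac12$ in front of $(\uu, \nu(\phi_1) \Delta \A^{-1}\uu)$.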
Moreover, we have the following estimates from [@Galdi] and [@GMT2019 Appendix B] $$\label{p} \| p\|_{\L2}\leq C \| \nabla \A^{-1} \uu \|_{\L2}^{\frac12} \| \uu\|_{\L2}^{\frac12},\ \| p \|_{H^1(\Omega)}\leq C \|\uu \|_{\L2}, \ \| p\|_{H^2(\Omega)}\leq C \| \nabla \uu\|_{\L2}.$$ Then, we can write $$\begin{aligned} - \frac12 (\uu, \nu(\phi_1) \Delta \A^{-1}\uu)&= \frac12 ( \uu, \nu(\phi_1) \uu ) -\frac12 (\uu, \nu(\phi_1) \nabla p)\\ &= \frac12 ( \uu, \nu(\phi_1) \uu ) + \frac12 (\div (\nu(\phi_1)\uu ), p)\\ &= \frac12 ( \uu, \nu(\phi_1) \uu ) + \frac12 (\nu'(\phi_1) \nabla \phi_1 \cdot \uu, p).\end{aligned}$$ Hence, recalling that $\nu(\cdot)\geq \nu_\ast>0$, we find $$\begin{aligned} (\nu(\phi_1) D\uu, \nabla \A^{-1}\uu) \geq \frac{\nu_\ast}{2} \| \uu\|_{\L2}^2+ \frac12 (\nu'(\phi_1) \nabla \phi_1 \cdot \uu, p)- (\uu, \nu'(\phi_1) D \A^{-1}\uu \nabla \phi_1).\end{aligned}$$ Owing to the above estimate, we rewrite as follows $$\begin{aligned} \label{u-est2} \ddt G(t) &+ \frac{\nu_\ast}{2} \| \uu\|_{\L2}^2 + \| \nabla \phi\|_{\L2}^2 \notag \\ &\leq (\uu\otimes \uu_1, \nabla \A^{-1}\uu)+ (\uu_2\otimes \uu, \nabla \A^{-1}\uu) - ((\nu(\phi_1)-\nu(\phi_2))D\uu_2,\nabla \A^{-1}\uu) \notag \\ &\quad + (\nabla \phi_1 \otimes \nabla \phi, \nabla \A^{-1}\uu) +(\nabla \phi \otimes \nabla \phi_2, \nabla \A^{-1}\uu)- (\uu \cdot \nabla \phi_2 , \phi) \notag \\ &\quad + \theta_0 \| \phi\|_{\L2}^2 + (\uu, \nu'(\phi_1) D \A^{-1}\uu \nabla \phi_1)-\frac12 (\nu'(\phi_1) \nabla \phi_1 \cdot \uu, p).\end{aligned}$$ By the Ladyzhenskaya inequality , together with and the bounds for weak solutions, we have $$\begin{aligned} & (\uu\otimes \uu_1, \nabla \A^{-1}\uu)+ (\uu_2\otimes \uu, \nabla \A^{-1}\uu)\\ &\quad \leq \| \uu\|_{\L2} \big( \| \uu_1\|_{L^4(\Omega)}+ \| \uu_2\|_{L^4(\Omega)}\big) \| \nabla \A^{-1}\uu\|_{L^4(\Omega)}\\ &\quad \leq C \big(\| \uu_1\|_{H^1(\Omega)}^\frac12+ \| \uu_2\|_{H^1(\Omega)}^\frac12\big) \| \uu\|_{\L2}^\frac32 \|\uu \|_{\ast}^\frac12\\ &\quad \leq \frac{\nu_\ast}{20} \| \uu\|_{\L2}^2 + C \big(\| \uu_1\|_{H^1(\Omega)}^2+ \| \uu_2\|_{H^1(\Omega)}^2\big) \|\uu \|_{\ast}^2.\end{aligned}$$ Similarly, we obtain $$\begin{aligned} &(\nabla \phi_1 \otimes \nabla \phi, \nabla \A^{-1}\uu) +(\nabla \phi \otimes \nabla \phi_2, \nabla \A^{-1}\uu) \\ &\quad \leq \big( \| \nabla \phi_1\|_{L^4(\Omega)}+ \| \nabla \phi_2\|_{L^4(\Omega)}\big) \| \nabla \phi\|_{\L2} \| \nabla \A^{-1}\uu\|_{L^4(\Omega)}\\ &\quad \leq C \big(\| \phi_1\|_{H^2(\Omega)}^\frac12+ \| \phi_2\|_{H^2(\Omega)}^\frac12\big) \| \nabla \phi\|_{\L2} \|\uu \|_{\L2}^\frac12 \|\uu \|_{\ast}^\frac12\\ &\quad \leq \frac{\nu_\ast}{20} \| \uu\|_{\L2}^2 + \frac{1}{12} \| \nabla \phi\|_{\L2}^2+ C \big(\| \phi_1\|_{H^2(\Omega)}^2+ \| \phi_2\|_{H^2(\Omega)}^2\big) \|\uu \|_{\ast}^2,\end{aligned}$$ and $$\begin{aligned} (\uu \cdot \nabla \phi_2, \phi) &\leq \|\uu \|_{\L2} \| \nabla \phi_2 \|_{L^4(\Omega)} \|\phi \|_{L^4(\Omega)}\\ &\leq C \|\uu \|_{\L2} \| \nabla \phi_2 \|_{H^1(\Omega)}^\frac12 \|\phi \|_{\L2}^\frac12 \|\nabla \phi \|_{\L2}^\frac12\\ &\leq \frac{\nu_\ast}{20} \| \uu\|_{\L2}^2+ \frac{1}{12} \| \nabla \phi\|_{\L2}^2 +C \|\phi_2 \|_{H^2(\Omega)}^2 \| \phi\|_{\L2}^2,\end{aligned}$$ where we have also used the inequality and the conservation of mass $\overline{\phi}=0$.
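Here and in the subsequent estimates, the factor $\|\uu \|_{\L2}^\frac12 \|\uu \|_{\ast}^\frac12$ arises from the two-dimensional Ladyzhenskaya inequality combined with the $H^2$-regularity theory for the Stokes problem; a brief sketch of this standard step reads $$\| \nabla \A^{-1}\uu\|_{L^4(\Omega)} \leq C \| \nabla \A^{-1}\uu\|_{\L2}^\frac12 \| \A^{-1}\uu\|_{H^2(\Omega)}^\frac12 \leq C \|\uu \|_{\ast}^\frac12 \|\uu \|_{\L2}^\frac12.$$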
Next, since $\nu'$ is bounded, by exploiting we have $$\begin{aligned} (\uu, \nu'(\phi_1) D \A^{-1}\uu \nabla \phi_1) &\leq C \| \uu\|_{\L2} \| D \A^{-1}\uu\|_{L^4(\Omega)} \| \nabla \phi_1\|_{L^4(\Omega)}\\ &\leq C \| \uu\|_{\L2}^\frac32 \| \uu\|_{\ast}^\frac12 \| \nabla \phi_1\|_{H^1(\Omega)}^\frac12\\ &\leq \frac{\nu_\ast}{20} \| \uu\|_{\L2}^2+ C \| \phi_1\|_{H^2(\Omega)}^2\| \uu\|_{\ast}^2.\end{aligned}$$ By using the Stokes operator (i.e. $\A=\mathbb{P}(-\Delta)$) and the integration by parts, we infer that $$\begin{aligned} -\frac12 (\nu'(\phi_1) \nabla \phi_1 \cdot \uu, p) &= \frac12 \big( \Delta \A^{-1} \uu , \mathbb{P} ( \nu'(\phi_1)\nabla \phi_1 p ) \big)\\ &=-\frac12 \int_{\Omega} (\nabla \A^{-1} \uu)^t : \nabla \mathbb{P} ( \nu'(\phi_1)\nabla \phi_1 p ) \, \d x \\ &\quad +\frac12 \int_{\partial \Omega} \big( (\nabla \A^{-1} \uu)^t \mathbb{P}( \nu'(\phi_1)\nabla \phi_1 p ) \big) \cdot \n \, \d \sigma.\end{aligned}$$ Thanks to , , and the properties of the Leray projection, we find $$\begin{aligned} &-\frac12 (\nu'(\phi_1) \nabla \phi_1 \cdot \uu, p) \notag \\ &\quad \leq C \| \nabla \A^{-1}\uu\|_{\L2} \| \nabla \mathbb{P} ( \nu'(\phi_1)\nabla \phi_1 p ) \|_{\L2} + C \| \nabla \A^{-1}\uu\|_{L^2(\partial \Omega)} \| \mathbb{P}( \nu'(\phi_1)\nabla \phi_1 p )\|_{L^2(\partial \Omega)} \notag \\ &\quad \leq C \| \uu\|_{\ast} \| \nu'(\phi_1)\nabla \phi_1 p \|_{H^1(\Omega)} +C \|\uu\|_{\ast}^\frac12 \| \uu\|_{\L2}^\frac12 \| \mathbb{P}( \nu'(\phi_1)\nabla \phi_1 p )\|_{\L2}^\frac12 \| \mathbb{P}( \nu'(\phi_1)\nabla \phi_1 p )\|_{H^1(\Omega)}^\frac12 \notag \\ &\quad \leq C \| \uu\|_{\ast} \| \nu'(\phi_1)\nabla \phi_1 p \|_{H^1(\Omega)} +C \| \uu\|_{\ast}^\frac12 \| \uu\|_{\L2}^\frac12 \| \nu'(\phi_1)\nabla \phi_1 p \|_{\L2}^\frac12 \| \nu'(\phi_1)\nabla \phi_1 p \|_{H^1(\Omega)}^\frac12. 
\label{pterm}\end{aligned}$$ Owing to , , Lemma \[result1\] and , we observe that $$\begin{aligned} \| \nu'(\phi_1)\nabla \phi_1 p \|_{\L2} &\leq C \|\nabla \phi_1 \|_{L^4(\Omega)} \| p\|_{L^4(\Omega)}\\ &\leq C \| \phi_1\|_{H^2(\Omega)}^\frac12 \| p\|_{\L2}^\frac12 \| p\|_{H^1(\Omega)}^\frac12 \\ &\leq C \| \phi_1\|_{H^2(\Omega)}^\frac12 \| \nabla \A^{-1} \uu\|_{\L2}^\frac14 \| \uu\|_{\L2}^\frac34,\end{aligned}$$ and $$\begin{aligned} \| \nu'(\phi_1)\nabla \phi_1 p \|_{H^1(\Omega)} &\leq \| \nu'(\phi_1)\nabla \phi_1 p \|_{L^2(\Omega)} +\| \nu''(\phi_1)\nabla \phi_1 \otimes \nabla \phi_1 p \|_{L^2(\Omega)} \\ &\quad + \| \nu'(\phi_1) \nabla^2 \phi_1 p\|_{L^2(\Omega)} + \| \nu'(\phi_1) \nabla \phi_1 \otimes \nabla p\|_{L^2(\Omega)}\\ &\leq C \| \uu\|_{L^2(\Omega)} \log^\frac12 \Big( C \frac{\|\nabla \uu\|_{L^2(\Omega)} }{\| \uu\|_{L^2(\Omega)}}\Big) + C \| \nabla \phi_1\|_{L^4(\Omega)}^2 \| p\|_{L^\infty(\Omega)}\\ &\quad + C \| \phi_1\|_{H^2(\Omega)} \| p\|_{L^\infty(\Omega)} + C \| \phi_1\|_{H^2(\Omega)} \| p\|_{H^1(\Omega)} \log^\frac12 \Big( C \frac{\| p\|_{H^2(\Omega)}}{\|p \|_{H^1(\Omega)}}\Big)\\ &\leq C \big( 1+ \| \phi_1\|_{H^2(\Omega)}\big) \| \uu\|_{L^2(\Omega)} \log^\frac12 \Big( C \frac{\|\nabla \uu\|_{L^2(\Omega)} }{\| \uu\|_{L^2(\Omega)}}\Big).\end{aligned}$$ Combining the above estimates with , we are led to $$\begin{aligned} -\frac12 (\nu'(\phi_1) \nabla \phi_1 \cdot \uu, p) & \leq C \big( 1+ \| \phi_1\|_{H^2(\Omega)}\big) \| \uu\|_{\ast} \| \uu\|_{L^2(\Omega)} \log^\frac12 \Big( C \frac{\|\nabla \uu\|_{L^2(\Omega)} }{\| \uu\|_{L^2(\Omega)}}\Big)\\ &\quad + C \big( 1+ \| \phi_1\|_{H^2(\Omega)}^\frac34 \big) \| \uu\|_{\ast}^\frac58 \| \uu\|_{L^2(\Omega)}^\frac{11}{8} \log^\frac14 \Big( C \frac{\|\nabla \uu\|_{L^2(\Omega)} }{\| \uu\|_{L^2(\Omega)}}\Big)\\ &\leq \frac{\nu_\ast}{20} \| \uu\|_{\L2}^2+ C \big( 1+ \| \phi_1\|_{H^2(\Omega)}^2 \big) \| \uu\|_{\ast}^2 \log \Big( C \frac{\|\nabla \uu\|_{L^2(\Omega)} }{\| \uu\|_{L^2(\Omega)}}\Big)\\ &\quad +C \big( 1+ \| \phi_1\|_{H^2(\Omega)}^\frac{12}{5} \big) \| \uu\|_{\ast}^2 \log^\frac45 \Big( C \frac{\|\nabla \uu\|_{L^2(\Omega)} }{\| \uu\|_{L^2(\Omega)}}\Big).\end{aligned}$$ In order to handle the logarithmic terms, we recall that $\frac{ C \|\nabla \uu\|_{L^2(\Omega)} }{\| \uu\|_{L^2(\Omega)}}>1$. Since $\frac{C' \| \uu\|_{\L2}}{\|\uu\|_{\ast}}>1$, for some $C'>0$ depending on $\Omega$, we have $$\begin{aligned} \log^\frac45 \Big( C \frac{\|\nabla \uu\|_{L^2(\Omega)} }{\| \uu\|_{L^2(\Omega)}}\Big) & \leq 1+ \log \Big( C \frac{\|\nabla \uu\|_{L^2(\Omega)} }{\| \uu\|_{L^2(\Omega)}}\Big)\\ & \leq 1+ \log \Big( C \frac{ C' \|\nabla \uu\|_{L^2(\Omega)} }{\| \uu\|_{\ast}}\Big)\\ &\leq C+ \log \big(1+\| \nabla \uu\|_{\L2} \big) + \log \Big( \frac{\widetilde{C} }{\| \uu\|_{\ast}}\Big),\end{aligned}$$ where $\widetilde{C}>0$ is a sufficiently large constant such that $\log \Big( \frac{\widetilde{C} }{\| \uu\|_{\ast}}\Big)>1$, which holds true in light of . 
Thus, we obtain $$\begin{aligned} -\frac12 (\nu'(\phi_1) \nabla \phi_1 \cdot \uu, p) & \leq \frac{\nu_\ast}{20} \| \uu\|_{\L2}^2+ C \big( 1+ \| \phi_1\|_{H^2(\Omega)}^\frac{12}{5} \big) \log \big(1+\| \nabla \uu\|_{\L2} \big) \| \uu\|_{\ast}^2\\ &\quad + C \big( 1+ \| \phi_1\|_{H^2(\Omega)}^\frac{12}{5} \big) \|\uu\|_{\ast}^2 \log \Big( \frac{\widetilde{C}}{\| \uu\|_{\ast}}\Big).\end{aligned}$$ Next, by using Lemma \[result1\], we infer that $$\begin{aligned} &-((\nu(\phi_1)-\nu(\phi_2))D\uu_2,\nabla \A^{-1}\uu)\\ &\quad =-\int_{\Omega} \int_0^1 \nu'(\tau \phi_1+ (1-\tau)\phi_2) \, \d \tau \, \phi D \uu_2 : \nabla \A^{-1}\uu \, \d x\\ &\quad \leq C \| D\uu_2\|_{\L2} \| \phi \nabla \A^{-1}\uu\|_{\L2}\\ &\quad \leq C \| \uu_2 \|_{H^1(\Omega)} \| \nabla \phi\|_{\L2} \| \uu\|_{\ast} \log^\frac12 \Big(C\frac{ \| \uu\|_{\L2}}{\| \uu\|_{\ast}} \Big)\\ &\quad \leq \frac{1}{12} \|\nabla \phi \|_{\L2}^2 +C \| \uu_2 \|_{H^1(\Omega)}^2 \| \uu\|_{\ast}^2 \log \Big(\frac{ \widetilde{C}}{\| \uu\|_{\ast}} \Big),\end{aligned}$$ where $\widetilde{C}$ is chosen sufficiently large as above. Summing up, we arrive at the differential inequality $$\label{u-est3} \ddt G(t) + \frac{\nu_\ast}{4} \| \uu\|_{\L2}^2 + \frac12 \| \nabla \phi\|_{\L2}^2 \leq C S(t) G(t) \log \Big( \frac{\widetilde{C}}{G(t)}\Big),$$ where $$S(t)= \Big(1+ \| \uu_1\|_{H^1(\Omega)}^2+ \| \uu_2\|_{H^1(\Omega)}^2 + \|\phi_2 \|_{H^2(\Omega)}^2 +\| \phi_1\|_{H^2(\Omega)}^2 + \|\phi_1 \|_{H^2(\Omega)}^\frac{12}{5} \big( 1+ \log \big(1+\| \nabla \uu\|_{\L2} \big) \big)\Big).$$ Here we have used that the function $s \log \Big(\frac{\widetilde{C}}{s} \Big)$ is increasing on $(0, \frac{\widetilde{C}}{e})$. We observe that $S\in L^1(0,T)$ provided that $\phi_1 \in L^{\gamma}(0,T;H^2(\Omega))$ with $\gamma>\frac{12}{5}$. Indeed, we recall that $\log(1+s)\leq C(\kappa) (1+s)^\kappa$, for any $\kappa>0$. Taking $\kappa= \frac{2(5\gamma -12)}{5\gamma}$, we have $$\begin{aligned} \int_0^T \|\phi_1(\tau) \|_{H^2(\Omega)}^\frac{12}{5}& \log \big(1+\| \nabla \uu (\tau)\|_{\L2} \big) \, \d \tau\\ &\leq C \int_0^T \| \phi_1 (\tau)\|_{H^2(\Omega)}^\frac{12}{5} \big(1+\| \nabla \uu(\tau)\|_{\L2} \big)^\frac{2(5\gamma -12)}{5\gamma} \, \d \tau \\ &\leq C \int_0^T \| \phi_1(\tau)\|_{H^2(\Omega)}^\gamma + \| \nabla \uu_1(\tau)\|_{\L2}^2+ \| \nabla \uu_2 (\tau)\|_{\L2}^2 \, \d \tau.\end{aligned}$$ Throughout the rest of the proof, we will assume that $S\in L^1(0,T)$. Integrating on the time interval $[0,t]$, we find $$G(t) \leq G(0)+C \int_0^t S(s) G(s) \log \Big( \frac{\widetilde{C}}{G(s)} \Big) \, \d s,$$ for almost every $t \in [0,T]$. We observe that $\int_0^1 \frac{1}{s\log(\frac{C}{s})} \, \d s= \infty$. Thus, if $G(0)=0$, applying the Osgood lemma \[Osgood\], we deduce that $G(t)=0$ for all $t\in [0,T]$, namely $\uu_1(t)=\uu_2(t)$ and $\phi_1(t)=\phi_2(t)$. This demonstrates the uniqueness of solutions in the class of weak solutions satisfying the additional regularity $\phi_1 \in L^\gamma(0,T;H^2(\Omega))$ with $\gamma>\frac{12}{5}$. In fact, we are able to deduce a continuous dependence estimate with respect to the initial datum. To this end, we define $\mathcal{M}(s)=\log( \log(\frac{\widetilde{C}}{s}) )$; then, by the Osgood lemma, for $G(0)>0$, we are led to $$\label{u-est4} -\log \Big(\log \Big(\frac{\widetilde{C}}{G(t)}\Big)\Big)+\log \Big(\log \Big(\frac{\widetilde{C}}{G(0)}\Big)\Big)\leq C\int_0^t S(s)\, \d s$$ for almost every $t \in [0,T]$.
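For the reader's convenience, we record the elementary computation behind this last bound: as long as $0<G(t)<\widetilde{C}$, one has $$-\ddt \log \Big(\log \Big(\frac{\widetilde{C}}{G(t)}\Big)\Big)= \frac{\ddt G(t)}{G(t)\log \Big( \frac{\widetilde{C}}{G(t)}\Big)} \leq C S(t),$$ and the above inequality follows by integration over $[0,t]$.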
Taking the double exponential of , we eventually infer the control $$\label{CD} G(t)\leq \widetilde{C} \Big(\frac{G(0)}{\widetilde{C}}\Big)^{ \exp(-C\int_{0}^t S(s)\, \mathrm{d}s)} \quad \forall \, t \in [0,T_0],$$ where $T_0>0$ is defined by $$\log \Big(\log \Big(\frac{\widetilde{C}}{G(0)}\Big)\Big)\geq C \int_0^{T_0} S(s)\, \d s.$$ The proof is complete. $\square$ We note that the same existence result as in Theorem \[W-S\] holds for $\Omega=\mathbb{T}^d$, $d=2,3$. In the particular case $\Omega=\mathbb{T}^2$, the uniqueness of weak solutions can be achieved, without the additional regularity $\phi\in L^\gamma(0,T;H^2(\Omega))$ as in Theorem \[uni2d\]. Indeed, in this case the solutions of the Stokes operator $\A^{-1}\uu$ and $p$ are given by (see [@Temam Chapter 2.2]) $$\A^{-1}\uu= \sum_{k\in \mathbb{Z}^2} g_k {e}^{\frac{2i\pi k \cdot x}{L}}, \quad p= \sum_{k\in \mathbb{Z}^2} p_k {e}^{\frac{2i\pi k \cdot x}{L}},$$ where $$g_k=-\frac{L^2}{4\pi^2|k|^2} \Big( \uu_k-\frac{(k\cdot \uu_k)k}{|k|^2}\Big), \quad p_k= \frac{L k \cdot \uu_k}{2i\pi |k|^2}, \quad k \in \mathbb{Z}^2, \ k\neq 0,$$ $L>0$ is the cell size. Here $\uu_k$ is the $k$-mode of $\uu$. We observe that we only consider the case $k \neq 0$ since $\overline{\uu}$ is conserved for $_1$ on $\mathbb{T}^2$, and so we can choose $\overline{\uu}=0$. Moreover, since $\uu\in \H_\sigma$, we have that $k \cdot \uu_k=0$ for any $k \in \mathbb{Z}^2$, which implies that $p_k=0$ for any $k \in \mathbb{Z}^2$. Thus, following the above proof, we are led to the differential inequality without the last term on the right-hand side, i.e. $-\frac12 (\nu'(\phi_1)\nabla \phi_1\cdot \uu,p)$. Hence, we eventually end up with $$\ddt G(t) + \frac{\nu_\ast}{4} \| \uu\|_{\L2}^2 + \frac12 \| \nabla \phi\|_{\L2}^2 \leq C \widetilde{S}(t) G(t) \log \Big( \frac{\widetilde{C}}{G(t)}\Big),$$ where $$\widetilde{S}(t)= \Big(1+ \| \uu_1\|_{H^1(\Omega)}^2+ \| \uu_2\|_{H^1(\Omega)}^2 + \|\phi_1 \|_{H^2(\Omega)}^2 +\| \phi_2\|_{H^2(\Omega)}^2 \Big).$$ Since $\widetilde{S}(t)\in L^1(0,T)$ for any couple of weak solutions, an application of the Osgood lemma as above entails the uniqueness of weak solutions (without additional regularity) and a continuous dependence estimate with respect to the initial data, i.e. . Mass-conserving Navier-Stokes-Allen-Cahn System: Strong Solutions ================================================================= \[S-STRONG\] This section is devoted to the analysis of global strong solutions to the nonhomogeneous Navier-Stokes-Allen-Cahn system - in two dimensions. The main results are as follows. \[strong-D\] Let $\Omega$ be a bounded smooth domain in $\mathbb{R}^2$. Assume that $\uu_0 \in \V_\sigma$, $\phi_0 \in H^2(\Omega)$ such that $\partial_{\n} \phi_0=0$ on $\partial \Omega$, $F'(\phi_0)\in L^2(\Omega)$, $\|\phi_0 \|_{L^\infty(\Omega)}\leq 1$ and $|\overline{\phi}_0|<1$. - There exists a global strong solution $(\uu,\phi)$ to problem - satisfying, for all $T>0$, $$\begin{aligned} &\uu \in L^\infty(0,T;\V_\sigma)\cap L^2(0,T;\H^2(\Omega))\cap H^1(0,T;\H_\sigma),\\ &\phi \in L^\infty(0,T;H^2(\Omega))\cap L^2(0,T;W^{2,p}(\Omega)), \\ &\partial_t \phi \in L^\infty(0,T;\L2)\cap L^2(0,T;H^1(\Omega)),\\ &F'(\phi) \in L^\infty(0,T;\L2)\cap L^2(0,T;L^p(\Omega))\end{aligned}$$ where $p \in (2,\infty)$. The strong solution satisfies the system almost everywhere in $\Omega \times (0,\infty)$. Besides, $|\phi(x,t)|<1$ for almost any $(x,t)\in \Omega\times(0,\infty)$ and $\partial_{\n} \phi=0$ on $\partial \Omega\times(0,\infty)$. 
- There exists $\eta_1>0$ depending only on the norms of the initial data and on the parameters of the system: $$\eta_1=\eta_1(E(\uu_0,\phi_0), \| \uu_0\|_{\V_\sigma}, \| \phi_0\|_{H^2(\Omega)},\|F'(\phi_0)\|_{\L2},\theta,\theta_0).$$ If, in addition, $\|\rho'\|_{L^\infty(-1,1)}\leq \eta_1$ and $F''(\phi_0)\in L^1(\Omega)$, then, for any $T>0$, we have $$\begin{aligned} F''(\phi)\in L^\infty(0,T;L^1(\Omega)),\quad F'' \in L^q(0,T;L^p(\Omega)),\end{aligned}$$ where $\frac{1}{p}+\frac{1}{q}=1$, $p \in (1,\infty)$, and $$\begin{aligned} \label{F''log} (F''(\phi))^2 \log (1+ F''(\phi)) \in L^1(\Omega\times(0,T)).\end{aligned}$$ In particular, the strong solution satisfying is unique. \[Proreg-D\] Let the assumptions in Theorem \[strong-D\]-(1) be satisfied. Assume that $\| \rho'\|_{L^\infty(-1,1)}\leq \eta_1$. Given a strong solution from Theorem \[strong-D\]-(1), for any $\sigma>0$, there holds $$(F''(\phi))^2 \log (1+ F''(\phi)) \in L^1(\Omega\times(\sigma,T)),$$ and $$\partial_t\uu\in L^\infty(\sigma, T; \H_\sigma)\cap L^2(\sigma, T; \V_\sigma),\quad \partial_t \phi\in L^\infty(\sigma, T; H^1(\Omega))\cap L^2(\sigma, T; H^2(\Omega)).$$ In addition, for any $\sigma>0$, there exists $\delta=\delta(\sigma)>0$ such that $$-1+\delta \leq \phi(x,t) \leq 1-\delta, \quad \forall \, x \in \overline{\Omega}, \ t \geq \sigma.$$ The smallness assumption on $\rho'$ (see below for the explicit form) can be reformulated in terms of the difference of the (constant) densities of the two fluids when $\rho$ is a linear interpolation function. In this case, we have $$\rho(s)= \rho_1\frac{1+s}{2}+ \rho_2 \frac{1-s}{2}, \quad \rho'(s)= \frac{\rho_1-\rho_2}{2} \quad \forall \, s \in [-1,1].$$ Roughly speaking, the results given by Theorem \[strong-D\] and Theorem \[Proreg-D\] imply that uniqueness and further regularity of strong solutions to the nonhomogeneous system hold provided that the two fluids have similar densities ($\rho_1 \approx \rho_2$). \[strong-hom\] It is worth noticing that Theorem \[strong-D\] and Theorem \[Proreg-D\] hold true in the case of constant density $\rho\equiv 1$ (i.e. $\rho_1=\rho_2$) without any smallness assumption. Proof of Theorem \[strong-D\] ----------------------------- We perform higher-order *a priori* estimates that are necessary for the existence of global strong solutions. **Higher-order estimates.** Multiplying $_1$ by $\partial_t \uu$, integrating over $\Omega$, and observing that $$\int_{\Omega} \nu( \phi)D\uu \cdot D \partial_t \uu \, \d x= \frac12 \ddt \int_{\Omega} \nu(\phi) |D\uu|^2 \, \d x - \frac12 \int_{\Omega} \nu'(\phi) \partial_t \phi |D \uu|^2 \, \d x,$$ we obtain $$\begin{aligned} \frac12 &\ddt \int_{\Omega} \nu(\phi) |D\uu|^2 \, \d x + \int_{\Omega} \rho(\phi) |\partial_t \uu|^2 \, \d x \notag \\ &= - ( \rho(\phi) \uu \cdot \nabla \uu, \partial_t \uu) - \int_{\Omega}\Delta \phi \, \nabla \phi \cdot \partial_t \uu \, \d x+ \frac12 \int_{\Omega} \nu'(\phi) \partial_t \phi |D \uu|^2 \, \d x. 
\label{NS1-D}\end{aligned}$$ Next, differentiating $_3$ in time, multiplying the resultant by $\partial_t \phi$ and integrating over $\Omega$, we obtain $$\begin{aligned} \frac12 &\ddt \| \partial_t \phi\|_{\L2}^2+ \int_{\Omega} \partial_t \uu \cdot \nabla \phi \, \partial_t \phi \, \d x + \| \nabla \partial_t \phi\|_{\L2}^2 +\int_{\Omega} F''(\phi) |\partial_t \phi|^2 \, \d x \notag \\ &= \theta_0 \| \partial_t \phi\|_{\L2}^2 - \int_{\Omega} \rho''(\phi) |\partial_t \phi|^2 \frac{|\uu|^2}{2}\, \d x - \int_{\Omega} \rho'(\phi) \uu \cdot \partial_t \uu \, \partial_t \phi \, \d x + \partial_t \xi \int_{\Omega} \partial_t \phi \, \d x. \label{AC1-D}\end{aligned}$$ Since $\overline{\partial_t \phi} =0$, by adding the equations and , we find that $$\begin{aligned} &\ddt H(t) +\rho_\ast \| \partial_t \uu\|_{\L2}^2 + \| \nabla \partial_t \phi\|_{\L2}^2+ \int_{\Omega} F''(\phi)|\partial_t \phi|^2 \, \d x \notag \\ &\quad \leq - (\rho(\phi) \uu \cdot \nabla \uu, \partial_t \uu) - \int_{\Omega}\Delta \phi \, \nabla \phi \cdot \partial_t \uu \, \d x + \frac12 \int_{\Omega} \nu'(\phi) \partial_t \phi |D \uu|^2 \, \d x+ \theta_0 \| \partial_t \phi\|_{\L2}^2\notag \\ &\qquad - \int_{\Omega} \partial_t \uu \cdot \nabla \phi \, \partial_t \phi \, \d x - \int_{\Omega} \rho''(\phi) |\partial_t \phi|^2 \frac{|\uu|^2}{2}\, \d x - \int_{\Omega} \rho'(\phi) \uu \cdot \partial_t \uu \, \partial_t \phi \, \d x, \label{NSAC1-D}\end{aligned}$$ where $$\label{H-D} H(t)= \frac12 \int_{\Omega} \nu(\phi) |D\uu|^2 \, \d x + \frac12 \| \partial_t \phi\|_{\L2}^2.$$ In , we have used that $\rho$ is strictly positive ($\rho(s)\geq \rho_\ast>0$). In addition, we simply infer from that $$\|\partial_t \phi\|_{\L2}\leq C\big( 1+ \| \uu\|_{H^1(\Omega)}\big) \| \phi\|_{H^2(\Omega)}+C \| F'(\phi)\|_{\L2}+ C \| \uu\|_{H^1(\Omega)}^2.$$ Therefore, it follows from the assumptions on the initial data that $H(0)<+\infty$. We proceed to estimate the right-hand side of . By using and , we have $$\begin{aligned} -( \rho(\phi) \uu \cdot \nabla \uu, \partial_t \uu) &\leq\| \rho(\phi)\|_{L^\infty(\Omega)} \| \uu\|_{L^\infty(\Omega)} \| \nabla \uu\|_{\L2} \| \partial_t \uu\|_{\L2}\\ &\leq C \| D\uu\|_{L^2(\Omega)}^2 \log^\frac12 \Big( C \frac{\| \uu\|_{W^{1,p}(\Omega)}}{\| D \uu\|_{L^2(\Omega)}}\Big) \| \partial_t \uu\|_{\L2}\\ &\leq \frac{\rho_\ast}{8} \|\partial_t \uu \|_{\L2}^2+ C \| D \uu\|_{L^2(\Omega)}^4 \log \Big( C \frac{\| \uu\|_{W^{1,p}(\Omega)}}{\| D \uu\|_{L^2(\Omega)}}\Big),\end{aligned}$$ for some $p>2$. 
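The first inequality in this estimate relies on a logarithmic interpolation bound of Brezis-Gallouët-Wainger type, which we record, without proof, in the following form: for $p>2$ there exists a constant $C=C(p,\Omega)$, large enough, such that $$\| f\|_{L^\infty(\Omega)} \leq C \| f\|_{H^1(\Omega)} \log^\frac12 \Big( {e}+ \frac{\| f\|_{W^{1,p}(\Omega)}}{\| f\|_{H^1(\Omega)}}\Big), \quad \forall\, f \in W^{1,p}(\Omega).$$ Combined with Korn's inequality $\|\nabla \uu\|_{\L2}\leq C \| D\uu\|_{\L2}$ for $\uu \in \V_\sigma$, this yields the bound for $\| \uu\|_{L^\infty(\Omega)}$ employed above and in several estimates below.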
Moreover, it holds $$\begin{aligned} - \int_{\Omega}\Delta \phi \, \nabla \phi \cdot \partial_t \uu \, \d x &\leq \| \Delta \phi \|_{\L2} \| \nabla \phi\|_{L^\infty(\Omega)} \| \partial_t \uu\|_{\L2}\\ &\leq C \|\Delta \phi \|_{\L2} \| \nabla \phi\|_{H^1(\Omega)} \log^\frac12 \Big( C \frac{\| \nabla \phi\|_{W^{1,p}(\Omega)}}{\| \nabla \phi\|_{H^1(\Omega)}}\Big) \| \partial_t \uu\|_{\L2}\\ &\leq \frac{\rho_\ast}{8} \|\partial_t \uu \|_{\L2}^2+ C \|\Delta \phi \|_{\L2}^2 \| \nabla \phi\|_{H^1(\Omega)}^2\log \Big( C \frac{\| \nabla \phi\|_{W^{1,p}(\Omega)}}{\| \nabla \phi\|_{H^1(\Omega)}}\Big).\end{aligned}$$ Next, by exploiting Lemma \[result1\], together with and $\overline{\partial_t \phi}=0$, we obtain $$\begin{aligned} \frac12 \int_{\Omega} \nu'(\phi) \partial_t \phi |D \uu|^2 \, \d x &\leq \| \nu'(\phi)\|_{L^\infty(\Omega)} \| \partial_t \phi |D \uu|\|_{\L2} \| D \uu\|_{\L2}\\ & \leq C \| \nabla \partial_t \phi\|_{\L2} \| D \uu\|_{\L2}^2 \log^\frac12 \Big( C \frac{\|D \uu\|_{L^p(\Omega)}}{\| D \uu\|_{\L2}}\Big)\\ &\leq \frac18 \| \nabla \partial_t \phi\|_{\L2}^2+ C \| D \uu\|_{\L2}^4 \log \Big( C \frac{\|D \uu\|_{L^p(\Omega)}}{\| D \uu\|_{\L2}}\Big).\end{aligned}$$ It remains to control the last three terms on the right-hand side of . By using and , we obtain $$\begin{aligned} - \int_{\Omega} \partial_t \uu \cdot \nabla \phi \, \partial_t \phi \, \d x &\leq \| \partial_t \uu\|_{\L2} \| \nabla \phi\|_{L^4(\Omega)} \| \partial_t \phi\|_{L^4(\Omega)}\\ &\leq \| \partial_t \uu\|_{\L2} \| \nabla \phi\|_{\L2}^\frac12 \| \phi\|_{H^2(\Omega)}^\frac12 \| \partial_t \phi\|_{\L2}^\frac12 \| \nabla \partial_t \phi\|_{\L2}^\frac12 \\ &\leq \frac{\rho_\ast}{8} \| \partial_t \uu\|_{\L2}^2 +\frac18 \| \nabla \partial_t \phi\|_{\L2}^2+ C \| \phi\|_{H^2(\Omega)}^2 \| \partial_t \phi\|_{\L2}^2,\end{aligned}$$ $$\begin{aligned} - \int_{\Omega} \rho''(\phi) |\partial_t \phi|^2 \frac{|\uu|^2}{2}\, \d x &\leq C \| \rho''(\phi)\|_{L^\infty(\Omega)} \| \partial_t \phi\|_{L^4(\Omega)}^2 \|\uu\|_{L^4(\Omega)}^2\\ &\leq C \|\partial_t \phi \|_{\L2} \| \nabla \partial_t \phi\|_{\L2} \| \uu\|_{\L2}\| \nabla \uu\|_{\L2}\\ &\leq \frac18 \| \nabla \partial_t \phi\|_{\L2}^2 +C \|\partial_t \phi \|_{\L2}^2\| D\uu\|_{\L2}^2,\end{aligned}$$ and $$\begin{aligned} - \int_{\Omega} \rho'(\phi) \uu \cdot \partial_t \uu \, \partial_t \phi \, \d x &\leq C \| \rho'(\phi)\|_{L^\infty(\Omega)} \| \uu\|_{L^4(\Omega)}\| \partial_t \uu\|_{\L2} \| \partial_t \phi\|_{L^4(\Omega)}\\ &\leq C \| \uu\|_{\L2}^\frac12 \|\nabla \uu\|_{\L2}^\frac12 \| \partial_t \uu\|_{\L2} \| \partial_t \phi\|_{\L2}^\frac12 \| \nabla \partial_t \phi\|_{\L2}^\frac12\\ &\leq \frac{\rho_\ast}{8} \| \partial_t \uu\|_{\L2}^2+ \frac18 \| \nabla \partial_t \phi\|_{\L2}^2 +C \| \partial_t \phi\|_{\L2}^2\| D\uu\|_{\L2}^2.\end{aligned}$$ Combining and the above inequalities, we deduce that $$\begin{aligned} & \ddt H(t) +\frac{\rho_\ast}{2} \| \partial_t \uu\|_{\L2}^2 + \frac12 \| \nabla \partial_t \phi\|_{\L2}^2 \\ &\quad \leq C \| \partial_t \phi\|_{\L2}^2 +C \big(\| D\uu\|_{\L2}^2+\|\phi \|_{H^2(\Omega)}^2\big) \|\partial_t \phi \|_{\L2}^2 + C\| D \uu\|_{L^2(\Omega)}^4 \log \Big( C \frac{\| \uu\|_{W^{1,p}(\Omega)}}{\|D \uu\|_{L^2(\Omega)}}\Big)\\ &\qquad + C \| \Delta \phi\|_{\L2}^2 \| \nabla \phi\|_{H^1(\Omega)}^2 \log \Big( C \frac{\| \nabla \phi\|_{W^{1,p}(\Omega)}}{\| \nabla \phi\|_{H^1(\Omega)}}\Big).\end{aligned}$$ From the inequalities $$\begin{aligned} \label{ineq0} &x^2\log \Big(\frac{C y}{x}\Big)\leq x^2 \log (Cy) +1,\quad \forall\,x,\,y>0,\\ & 
\frac{\nu_\ast}{2} \| D \uu\|_{\L2}^2 +\frac12 \| \partial_t \phi\|_{\L2}^2 \leq H(t)\leq C \Big( \| D \uu\|_{\L2}^2 +\| \partial_t \phi\|_{\L2}^2\Big), \label{Hbb}\end{aligned}$$ and the estimate , we can rewrite the above differential inequality as follows $$\begin{aligned} & \ddt H(t) +\frac{\rho_\ast}{2} \| \partial_t \uu\|_{\L2}^2 + \frac12 \| \nabla \partial_t \phi\|_{\L2}^2 \notag \\ &\quad \leq C\big(1+ H(t) + H^2(t)\big) +C H^2(t) \log \big( C \| \uu\|_{W^{1,p}(\Omega)}\big) \nonumber \\ &\qquad + C \big( 1+H^2(t) \big) \log \big( C \| \phi\|_{W^{2,p}(\Omega)}\big). \label{NSAC2-D}\end{aligned}$$ Let us now estimate the argument of the logarithmic terms on the right-hand side of . First, we rewrite $_1$ as a Stokes problem with non-constant viscosity $$\begin{cases} -\div (\nu(\phi)D \uu)+\nabla P= \f, & \text{ in } \Omega\times (0,T),\\ \div \uu=0, & \text{ in } \Omega\times (0,T),\\ \uu=\mathbf{0}, & \text{ on } \partial \Omega\times (0,T), \end{cases}$$ where $\f= -\rho(\phi) \big( \partial_t \uu + \uu \cdot \nabla \uu \big) - \Delta \phi \nabla \phi$. We now apply Theorem \[Stokes-e\] with the following choice of parameters $p=1+\varepsilon$, $\varepsilon \in (0,1)$, and $r\in (2,\infty)$ such that $\frac{1}{r}=\frac{1}{1+\varepsilon}-\frac12$. We infer that $$\begin{aligned} \| \uu\|_{W^{2,1+\varepsilon}(\Omega)} &\leq C \big( \| \partial_t \uu\|_{L^{1+\varepsilon}(\Omega)}+ \| \uu\cdot \nabla \uu\|_{L^{1+\varepsilon}(\Omega)} +\|\Delta \phi \nabla \phi\|_{L^{1+\varepsilon}(\Omega)} \big)+ C \| D \uu\|_{\L2} \| \nabla \phi\|_{L^r(\Omega)}\\ &\leq C \big( \| \partial_t \uu\|_{\L2}+ \| \uu\|_{L^\frac{2(1+\varepsilon)}{1-\varepsilon}(\Omega)}\|\nabla \uu \|_{\L2}+ \| \nabla \phi\|_{L^\frac{2(1+\varepsilon)}{1-\varepsilon}(\Omega)} \| \Delta \phi\|_{\L2} \big)\\ &\quad + C \| D \uu\|_{\L2} \| \phi\|_{H^2(\Omega)}\\ &\leq C \| \partial_t \uu\|_{\L2}+ C \|D \uu \|_{\L2}^2 + \| \phi\|_{H^2(\Omega)}^2\\ &\leq C \| \partial_t \uu\|_{\L2}+ C(1+H(t)),\end{aligned}$$ where the constant $C$ depends on $\varepsilon$. We recall the Sobolev embedding $W^{2,1+\varepsilon}(\Omega)\hookrightarrow W^{1,p}(\Omega)$ where $\frac{1}{p}=\frac{1}{1+\varepsilon}- \frac12$. Therefore, for any $p\in (2,\infty)$ there exists a constant $C$ depending on $p$ such that $$\begin{aligned} \|\uu\|_{W^{1,p}(\Omega)}\leq C \| \partial_t \uu\|_{\L2} + C(1+H(t)). \label{est-uw1p}\end{aligned}$$ Next, by reformulating the equation $_4$ as the elliptic problem $$\begin{cases} -\Delta \phi+ F'(\phi)=\mu+\theta_0\phi, &\quad \text{in}\ \Omega\times(0,T),\\ \partial_\n \phi=0, &\quad \text{on}\ \partial \Omega\times (0,T). \end{cases}$$ We infer from the elliptic regularity theory (see, e.g., [@A2009 Lemma 2] and [@GMT2019]) that $$\begin{aligned} \| \phi\|_{W^{2,p}(\Omega)} +\|F'(\phi)\|_{L^p(\Omega)} &\leq C (1+\|\phi\|_{L^2(\Omega)}+\|\mu+\theta_0\phi\|_{L^p(\Omega)})\nonumber\\ &\leq C (1+\|\phi\|_{L^p(\Omega)}+\|\mu\|_{L^p(\Omega)}), \label{pw2p}\end{aligned}$$ for $p\in (2,\infty)$. 
On the other hand, from the equation $_3$, we see that $$\mu= -\partial_t \phi-\uu\cdot \nabla \phi- \rho'(\phi) \frac{|\uu|^2}{2}+\displaystyle{\overline{\mu+ \rho'(\phi)\frac{|\uu|^2}{2}}}.$$ Observe now that $$\Big\| -\rho'(\phi) \frac{|\uu|^2}{2}\Big\|_{L^p(\Omega)}\leq C \| \uu\|_{L^{2p}(\Omega)}^2 \leq C \|\nabla \uu \|_{L^2(\Omega)}^2.$$ Then, owing to Sobolev embedding and , we have $$\begin{aligned} \|\mu-\overline{\mu} \|_{L^p(\Omega)} &\leq \| \partial_t \phi\|_{L^p(\Omega)} + \| \uu \cdot \nabla \phi\|_{L^p(\Omega)}+\left\| \rho'(\phi) \frac{|\uu|^2}{2} - \displaystyle{\overline{\rho'(\phi)\frac{|\uu|^2}{2}}}\right\|_{L^p(\Omega)}\\ &\leq C \| \nabla \partial_t \phi\|_{\L2} +C \| \uu\|_{H^1(\Omega)} \| \phi\|_{H^2(\Omega)}+C \|\nabla \uu \|_{L^2(\Omega)}^2.\end{aligned}$$ In light of and , the above inequality yields $$\begin{aligned} \| \mu\|_{L^p(\Omega)} &\leq C \|\mu-\overline{\mu} \|_{L^p(\Omega)} +C |\overline{\mu}| \notag\\ &\leq C \|\mu-\overline{\mu} \|_{L^p(\Omega)}+ C(1+\| \mu-\overline{\mu} \notag \|_{\L2})\\ &\leq C (1+\| \nabla \partial_t \phi\|_{\L2} +H(t)). \label{mu-Lp}\end{aligned}$$ Thus, for any $p>2$, we deduce from the above estimate and that $$\label{estW2p-D} \| \phi\|_{W^{2,p}(\Omega)} \leq C(1+\| \nabla \partial_t \phi\|_{\L2} +H(t)),$$ for some positive constant $C$ depending on $p$. We recall the generalized Young inequality $$\begin{aligned} xy \leq \Phi(x)+ \Upsilon(y), \quad \forall \, x,\,y >0, \label{Young0}\end{aligned}$$ where $$\Phi(s)= s \log s -s+1, \quad \Upsilon(s)= {e}^{s}-1.$$ Then we have $$\begin{aligned} H(t)\log(1+\| \partial_t \uu\|_{\L2}) &\leq H(t)\log H(t)+ 1 + \| \partial_t \uu\|_{\L2}.\end{aligned}$$ Thus, using the above estimate and the elementary inequality $$\log(x+y)<\log(1+x)+\log(1+y),\quad x,y>0,$$ we can estimate the second term on the right-hand side of as follows $$\begin{aligned} &CH^2(t)\log(C\|\uu\|_{W^{1,p}(\Omega)})\nonumber\\ &\quad \leq CH^2(t)\log \big(C \| \partial_t \uu\|_{\L2} + C(1+H(t)) \big)\nonumber\\ &\quad \leq CH^2(t)\big(1+\log(1+\| \partial_t \uu\|_{\L2})+\log(1+H(t))\big)\nonumber\\ &\quad \leq CH^2(t)+ CH(t)\big(H(t)\log H(t)+ 1\big) + CH(t)\| \partial_t \uu\|_{\L2}+ CH^2(t)\log(1+H(t))\nonumber\\ &\quad \leq \frac{\rho_\ast}{4} \| \partial_t \uu\|_{\L2}^2+ C\big(1+H^2(t)\big)+ CH^2(t)\log(e+H(t)). \label{RHD3-D}\end{aligned}$$ In a similar manner, we have $$\begin{aligned} H(t)\log(1+\| \nabla \partial_t \phi\|_{\L2}) &\leq H(t)\log H(t)+ 1 + \| \nabla \partial_t \phi\|_{\L2}.\end{aligned}$$ Then, using , the third term on the right-hand side of can be estimated as follows $$\begin{aligned} &C \big( 1+H^2(t) \big) \log \big( C \| \phi\|_{W^{2,p}(\Omega)}\big)\nonumber\\ &\quad \leq C \big( 1+H^2(t) \big) \log \big(C(1+\| \nabla \partial_t \phi\|_{\L2} +H(t)) \big) \nonumber\\ &\quad \leq C\big(1+H^2(t)\big) +C\log\big(1+\|\nabla \partial_t \phi\|_{L^2(\Omega)}+H(t)\big) +H^2(t)\log\big(1+\| \nabla \partial_t \phi\|_{\L2}+ H(t)\big)\nonumber\\ &\quad \leq C \big( 1+H^2(t) \big) +C \big( 1+\| \nabla \partial_t \phi\|_{\L2}+H(t) \big) + C\| \nabla \partial_t \phi\|_{\L2} H(t)\nonumber\\ &\qquad + H^2(t) \log(1+H(t))\nonumber\\ &\quad \leq \frac{1}{8}\| \nabla \partial_t \phi\|_{\L2}^2+ C\big(1+H^2(t)\big)+ C H(t)\big( e+H(t) \big)\log (e+H(t)). \label{RHD4-D}\end{aligned}$$ Hence, by and , we easily deduce from that $$\begin{aligned} \ddt (e+H(t)) &+\frac{\rho_\ast}{4} \| \partial_t \uu\|_{\L2}^2 + \frac14 \| \nabla \partial_t \phi\|_{\L2}^2\leq C+CH(t) (e+H(t)) \log (e+H(t)). 
\label{NSAC3-D}\end{aligned}$$ Thanks to , , and , we obtain $$\label{intH} \int_t^{t+1} H(\tau) \, \d \tau \leq Q(E_0), \quad \forall \, t \geq 0,$$ where $Q$ is independent of $t$, and $E_0=E(\uu_0,\phi_0)$. We now apply the generalized Gronwall lemma \[GL2\] to and find the estimate $$\sup_{t \in [0,1]} H(t)\leq C \big(e+H(0)\big)^{{e}^{Q(E_0)}}.$$ Moreover, by using the generalized uniform Gronwall lemma \[UGL2\] together with , we infer that $$\sup_{t\geq 1} H(t)\leq C e^{(e+Q(E_0) ) e^{(1+Q(E_0))}}.$$ By combining the above inequalities, we get $$\begin{aligned} \sup_{t \geq 0} H(t)\leq Q(E_0, \| \uu_0\|_{\V_\sigma}, \| \phi_0\|_{H^2(\Omega)}, \| F'(\phi_0)\|_{L^2(\Omega)}). \label{NSAC3h-D}\end{aligned}$$ In addition, integrating on the time interval $[t,t+1]$, we have, for all $t\geq 0$, $$\begin{aligned} \int_t^{t+1} \| \partial_t \uu(\tau)\|_{\L2}^2 + \| \nabla \partial_t \phi(\tau)\|_{\L2}^2\, \d \tau \leq Q(E_0, \| \uu_0\|_{\V_\sigma}, \| \phi_0\|_{H^2(\Omega)}, \| F'(\phi_0)\|_{L^2(\Omega)}). \label{NSAC4}\end{aligned}$$ Then we can deduce that $$\label{str-1} \uu \in L^\infty(0,T; \V_\sigma)\cap H^1(0,T; \H_\sigma) \quad \partial_t \phi \in L^\infty(0,T; L^2(\Omega))\cap L^2(0,T;H^1(\Omega)).$$ Thanks to and , we also get, $$\label{str-2} \sup_{t\geq 0} \| \phi(t)\|_{H^2(\Omega))} \leq Q(E_0, \| \uu_0\|_{\V_\sigma}, \| \phi_0\|_{H^2(\Omega)}, \| F'(\phi_0)\|_{L^2(\Omega)}),$$ and, for all $t\geq 0$, $$\label{str-2'} \int_t^{t+1} \| \phi(\tau)\|_{W^{2,p}(\Omega))}^2 \, \d \tau \leq Q(E_0, \| \uu_0\|_{\V_\sigma}, \| \phi_0\|_{H^2(\Omega)}, \| F'(\phi_0)\|_{L^2(\Omega)}),$$ for any $p \in (2,\infty)$. This entails that $\phi \in L^\infty(0,T;H^2(\Omega))\cap L^2(0,T;W^{2,p}(\Omega))$. According to , and , it follows that $\mu\in L^\infty(0,T;L^2(\Omega))\cap L^2(0,T;L^{p}(\Omega))$ and, as a consequence, $$F'(\phi) \in L^\infty(0,T;\L2)\cap L^2(0,T;L^p(\Omega)).$$ Finally, by exploiting Theorem \[Stokes-e\] with $p=2$ and $r=\infty$, together with the regularity of $\phi$ obtained above, we have, for all $t\geq 0$, $$\label{str-3} \int_t^{t+1} \|\uu(\tau)\|_{H^2(\Omega)}^2 \, \d \tau \leq Q(E_0, \| \uu_0\|_{\V_\sigma}, \| \phi_0\|_{H^2(\Omega)}, \| F'(\phi_0)\|_{L^2(\Omega)}),$$ which yields that $\uu \in L^2(0,T;\mathbf{H}^2(\Omega))$. **Entropy bound in $L^\infty(0,T;L^1(\Omega))$.** First of all, we observe that, for all $s\in (-1,1)$, $$\label{Fder1} F'(s)= \frac{\theta}{2} \log \Big( \frac{1+s}{1-s}\Big), \quad F''(s)= \frac{\theta}{1-s^2}, \quad F'''(s)= \frac{2\theta s}{(1-s)^2(1+s)^2}$$ and $$\label{Fder2} F^{(4)}(s)= \frac{2 \theta(1+3s^2)}{(1-s)^3(1+s)^3}>0.$$ Next, we compute $$\begin{aligned} \ddt \int_{\Omega} F''(\phi) \, \d x&= \int_{\Omega} F'''(\phi) \partial_t \phi \, \d x\\ &=\int_{\Omega} F'''(\phi) \Big( \Delta \phi - \uu \cdot \nabla \phi -F'(\phi)+ \theta_0 \phi - \rho'(\phi) \frac{|\uu|^2}{2}+ \xi\Big) \, \d x.\end{aligned}$$ Since $$\int_{\Omega} F'''(\phi) \uu \cdot \nabla \phi \, \d x= \int_{\Omega} \uu \cdot \nabla ( F''(\phi)) \, \d x=0,$$ and exploiting the integration by parts, we rewrite the above equality as follows $$\begin{aligned} & \ddt \int_{\Omega} F''(\phi) \, \d x + \int_{\Omega} F^{(4)}(\phi) |\nabla \phi|^2 \, \d x + \int_{\Omega} F'''(\phi) F'(\phi) \, \d x\nonumber\\ &\quad = \int_{\Omega} F'''(\phi) \Big( \theta_0 \phi - \rho'(\phi) \frac{|\uu|^2}{2}+\xi\Big) \, \d x. 
\label{EntE-}\end{aligned}$$ In particular, by using , we have $$\label{EntE} \ddt \int_{\Omega} F''(\phi) \, \d x + \int_{\Omega} F'''(\phi) F'(\phi) \, \d x \leq \int_{\Omega} F'''(\phi) \Big( \theta_0 \phi - \rho'(\phi) \frac{|\uu|^2}{2} + \xi\Big) \, \d x.$$ It follows from that $$\label{Young} xy \leq \varepsilon x \log x + {e}^{\frac{y}{\varepsilon}},\quad \forall \, x>0, y>0,\, \varepsilon \in (0,1),$$ which implies $$\begin{aligned} \int_{\Omega} -F'''(\phi) \rho'(\phi) \frac{|\uu|^2}{2} \, \d x&\leq \int_{\Omega} |F'''(\phi)| |\rho'(\phi)| \frac{|\uu|^2}{2} \, \d x \notag \\ & \leq \varepsilon \int_{\Omega} |F'''(\phi)| \log ( |F'''(\phi)| ) \, \d x+ \int_{\Omega} {e}^{\frac{|\rho'(\phi)|}{\varepsilon} \frac{|\uu|^2}{2}}\, \d x. \label{EE1}\end{aligned}$$ We observe that, for all $s\in [0,1)$, it holds $$\begin{aligned} \log (|F'''(s)|) &= \log( F'''(s))= \log \Big( \frac{2\theta s}{(1-s)^2(1+s)^2}\Big)\notag \\ &= 2 \log \Big( \frac{1+s}{1-s} \frac{\sqrt{2\theta s}}{(1+s)^2}\Big) \leq 2 \log \Big( \sqrt{2\theta} \frac{1+s}{1-s}\Big)= \log(2\theta) + \frac{4}{\theta} F'(s).\end{aligned}$$ Since both $F'(s)$ and $F'''(s)$ are odd, we easily deduce that $$\log (|F'''(s)|) \leq C_0+ \frac{4}{\theta} |F'(s)|, \quad \forall \, s \in (-1,1),$$ where $C_0=\log(2\theta)$ (without loss of generality, we assume in the sequel that $C_0>0$). Then, using the fact that $F'''(s)F'(s)\geq 0$ for all $s\in (-1,1)$, we obtain $$\begin{aligned} |F'''(s)|\log(|F'''(s)|)\leq C_0|F'''(s)|+ \frac{4}{\theta} F'''(s) F'(s), \quad \forall \, s \in (-1,1).\end{aligned}$$ Fix the constant $\alpha \in (0,1)$ such that $F'(\alpha)=1$. We infer that $$\label{estF'''} |F'''(s)|\log(|F'''(s)|)\leq C_1+ C_2F'''(s) F'(s), \quad \forall \, s \in (-1,1),$$ where $$C_1= C_0F'''(\alpha), \quad C_2=\frac{4}{\theta} +C_0.$$ Taking $\varepsilon=\frac{1}{2C_2}$ in , we arrive at $$\begin{aligned} \int_{\Omega} -F'''(\phi) \rho'(\phi) \frac{|\uu|^2}{2} \, \d x & \leq \frac{C_1 |\Omega|}{2C_2} + \frac12 \int_{\Omega} F'''(\phi) F'(\phi) \, \d x+ \int_{\Omega} {e}^{C_2 |\rho'(\phi)| |\uu|^2}\, \d x. \label{EntE2}\end{aligned}$$ Arguing in a similar way ($\varepsilon= \frac{1}{4C_2}$), we obtain $$\int_{\Omega} F'''(\phi) \, (\theta_0 \phi+ \xi) \, \d x \leq \frac{C_1 |\Omega|}{4C_2} + \frac14 \int_{\Omega} F'''(\phi) F'(\phi) \, \d x+ \int_{\Omega} {e}^{4C_2 |\theta_0\phi +\xi |}\, \d x.$$ Since $\phi$ is globally bounded ($\|\phi \|_{L^\infty(\Omega \times (0,T))}\leq 1$) and $\|\xi\|_{L^\infty(0,T)}\leq C^\ast_2$, we get $$\label{EntE3} \int_{\Omega} F'''(\phi) \, (\theta_0 \phi+ \xi) \, \d x \leq \frac14 \int_{\Omega} F'''(\phi) F'(\phi) \, \d x+ \frac{C_1 |\Omega|}{4C_2} + {e}^{4 C_2 (\theta_0+C_2^\ast)} |\Omega|.$$ Combining with and , we deduce that $$\begin{aligned} \ddt \int_{\Omega} F''(\phi) \, \d x &+ \frac14 \int_{\Omega} F'''(\phi) F'(\phi) \, \d x\notag \\ &\leq\frac{3C_1 |\Omega|}{4C_2} + {e}^{4 C_2 (\theta_0+C_2^\ast)} |\Omega|+ \int_{\Omega} {e}^{ C_2 |\rho'(\phi)| \| \nabla \uu\|_{\L2}^2 \left(\frac{|\uu|^2}{\|\nabla \uu\|_{\L2}^2}\right)}\, \d x. \label{EntE4}\end{aligned}$$ In order to control the last term on the right-hand side of , we shall use the Trudinger-Moser inequality (see, e.g., [@Moser]). Namely, let $f\in H_0^1(\Omega)$ ($d=2$) be such that $\int_{\Omega} |\nabla f|^2 \,\d x\leq 1$.
Then, there exists a constant $C_{TM}=C_{TM}(\Omega)$ (which depends only on the domain $\Omega$) such that $$\begin{aligned} \int_{\Omega} {e}^{4\pi |f|^2} \, \d x \leq C_{TM}(\Omega).\label{TrM}\end{aligned}$$ Next, as a consequence of , we have the following uniform estimate $$\sup_{t\geq 0 }\| \nabla \uu(t)\|_{\L2}\leq Q(E(\uu_0,\phi_0) ,H(0))=:R_0,$$ where $R_0$ is independent of time. The exact value of $R_0$ can be estimated in terms of the norm of the initial conditions. Now we make the following assumptions: $$\label{Hyp} |\rho'(s)|_{L^\infty(-1,1)}\leq \frac{4 \pi}{C_2 R_0^2}.$$ Thanks to , we conclude that $$\label{EntE5} \ddt \int_{\Omega} F''(\phi) \, \d x + \frac14 \int_{\Omega} F'''(\phi) F'(\phi) \, \d x \leq\frac{3C_1 |\Omega|}{4C_2} + {e}^{4 C_2 (\theta_0+C_2^\ast)} |\Omega| + C_{TM}(\Omega).$$ Observe now that, for $s\in \big[\frac12,1)$, $$\begin{aligned} F''(s)=\frac{\theta}{1-s^2}= \frac{(1-s)(1+s)}{2s} F'''(s)\leq \frac{3}{4F'(\frac12)} F'''(s)F'(s).\end{aligned}$$ This gives $$\label{estF''} F''(s)\leq C_3 + C_4 F'''(s)F'(s), \quad \forall \, s \in (-1,1),$$ where $$C_3= F''\Big(\frac12\Big), \quad C_4=\frac{3}{4F'(\frac12)}.$$ Hence, we are led to $$\ddt \int_{\Omega} F''(\phi) \, \d x + \frac{1}{4 C_4} \int_{\Omega} F''(\phi) \, \d x \leq C_5,$$ where $$C_5=\frac{3C_1 |\Omega|}{4C_2} + {e}^{4 C_2 (\theta_0+C_2^\ast)} |\Omega|+ C_{TM}(\Omega)+ \frac{C_3|\Omega|}{4C_4}.$$ We recall that $F''(\phi_0)\in L^1(\Omega)$. Then, an application of the Gronwall lemma entails that $$\label{EB1} \int_{\Omega} F''(\phi(t)) \, \d x \leq \| F''(\phi_0)\|_{L^1(\Omega)} {e}^{-\frac{t}{4C_4}} + 4 C_4 C_5, \quad \forall \, t \geq 0.$$ In addition, integrating on the time interval $[t,t+1]$, we find $$\label{EB2} \int_t^{t+1}\! \int_{\Omega} F'''(\phi) F'(\phi) \,\d x \d \tau \leq 4 \| F''(\phi_0)\|_{L^1(\Omega)} + C_6, \quad \forall \, t \geq 0,$$ where $$C_6= 4 C_5- \frac{C_3|\Omega|}{C_4}.$$ This allows us to improve the integrability of $F''(\phi)$. Indeed, arguing similarly to , we have for $s\in \big[ \frac12 , 1)$ $$\begin{aligned} (F''(s))^2 \log(1+F''(s))&= \frac{\theta^2}{(1-s)^2(1+s)^2} \log \Big( 1+\frac{\theta}{1-s^2} \Big) \\ &\leq \theta F'''(s) \log \Big( \frac{1+s}{1-s} \frac{1-s^2 +\theta}{(1+s)^2}\Big)\\ &\leq 2F'''(s)F'(s) + \theta F'''(s) \log \Big( \frac12 + \frac{2\theta}{3} \Big) \\ &\leq C_7 F'''(s)F'(s).\end{aligned}$$ Hence, we infer that $$(F''(s))^2 \log(1+F''(s))\leq C_7 F'''(s)F'(s)+ C_8, \quad \forall \, s\in (-1,1).$$ In light of , we deduce . Indeed, we have $$\label{EB3} \int_t^{t+1}\! \int_{\Omega} (F''(\phi))^2 \log ( 1+F''(\phi)) \,\d x \d \tau \leq 4 C_7 \| F''(\phi_0)\|_{L^1(\Omega)} + C_6 C_7 +C_8, \quad \forall \, t \geq 0.$$ We notice that, by keeping the (non-negative) term $F^{(4)}(\phi)|\nabla \phi|^2$ (cf. ) on the left-hand side of in the above argument, we can also deduce that $$\int_{t}^{t+1}\! \int_{\Omega} F^{(4)}(\phi)|\nabla \phi|^2 \, \d x \d \tau \leq C_9, \quad \forall \, t \geq 0,$$ where $C_9$ depends on $\|F''(\phi_0)\|_{L^1(\Omega)}$, $R_0$, $\theta$, $\theta_0$ and $\Omega$. Since $ \left(\frac{s}{\sqrt{1-s^2}}\right)'=(1-s^2)^{-\frac32}$, we infer that $$\int_{t}^{t+1}\! \int_{\Omega} \Big| \nabla \Big( \frac{\phi}{\sqrt{1-\phi^2}}\Big)\Big|^2 \, \d x \d \tau \leq \frac{C_9}{2\theta}, \quad \forall \, t \geq 0.$$ Setting $\psi= \frac{\phi}{\sqrt{1-\phi^2}}$, and observing that $F''(s)= \theta \Big[ \big( \frac{s}{\sqrt{1-s^2}}\big)^2 +1\Big]$, we have (cf. ) $$\| \psi(t)\|_{\L2}^2+ \int_t^{t+1}\! 
\| \nabla \psi(\tau)\|_{\L2}^2 \, \d \tau \leq C_{10}, \quad \forall \, t \geq 0.$$ This implies that $\psi \in L^\infty(0,T;\L2)\cap L^2(0,T;H^1(\Omega))$. By Sobolev embedding, we also have that $\psi \in L^q(0,T;L^p(\Omega))$ where $\frac12=\frac{1}{p}+\frac{1}{q}$, $p \in (2,\infty)$. As a consequence, we conclude that $$\label{F''Lp} \int_t^{t+1}\! \| F''(\phi(\tau))\|_{L^p(\Omega)}^q \, \d \tau \leq C_{11}, \quad \forall \, t \geq 0,$$ where $1=\frac{1}{p}+\frac{1}{q}$, $p\in (1,\infty)$. **Uniqueness of strong solutions.** Let us consider two strong solutions $(\uu_1,\phi_1,P_1)$ and $(\uu_2,\phi_2.P_2)$ to system - satisfying the entropy bound and originating from the same initial datum. The solutions difference $(\uu,\phi, P):=(\uu_1-\uu_2, \phi_1-\phi_2, P_1-P_2)$ solves $$\begin{aligned} \label{D-Diff1} &\rho(\phi_1)\big( \partial_t \uu + \uu_1 \cdot \nabla \uu + \uu \cdot \nabla \uu_2 \big)- \div \big( \nu(\phi_1)D\uu\big)+ \nabla P \notag\\ & = - \Delta \phi_1 \nabla \phi -\Delta \phi \nabla \phi_2 - (\rho(\phi_1)-\rho(\phi_2)) (\partial_t \uu_2 + \uu_2 \cdot \nabla \uu_2) + \div \big( (\nu(\phi_1)-\nu(\phi_2))D\uu_2\big)\end{aligned}$$ and $$\begin{aligned} \label{D-Diff2} &\partial_t \phi +\uu_1\cdot \nabla \phi +\uu \cdot \nabla \phi_2 -\Delta \phi + \Psi' (\phi_1)-\Psi'(\phi_2) \notag\\ &\quad = - \rho'(\phi_1)\frac{|\uu_1|^2}{2}+ \rho'(\phi_2)\frac{|\uu_2|^2}{2} +\xi_1-\xi_2,\end{aligned}$$ for almost every $(x,t) \in \Omega \times (0,T)$, together with the incompressibility constraint $\div \uu=0$. It follows that $\overline{\phi}(t)= 0$. Multiplying by $\uu$ and integrating over $\Omega$, we obtain $$\begin{aligned} &\ddt \int_{\Omega} \frac{\rho(\phi_1)}{2} |\uu|^2 \, \d x + \int_{\Omega} \rho(\phi_1) (\uu_1 \cdot \nabla) \uu \cdot \uu \, \d x +\int_{\Omega} \rho(\phi_1) (\uu\cdot \nabla )\uu_2 \cdot \uu \, \d x +\int_{\Omega} \nu(\phi_1)|D \uu|^2 \, \d x \notag\\ &\quad =-\int_{\Omega} \Delta \phi_1 \nabla \phi \cdot \uu \, \d x - \int_{\Omega} \Delta \phi\nabla \phi_2 \cdot \uu \, \d x - \int_{\Omega} (\rho(\phi_1)-\rho(\phi_2)) (\partial_t \uu_2 + \uu_2 \cdot \nabla \uu_2) \cdot \uu \, \d x \notag \\ &\qquad - \int_{\Omega} (\nu(\phi_1)-\nu(\phi_2))D\uu_2 : D \uu \, \d x + \int_{\Omega} \frac12 |\uu|^2 \rho'(\phi_1) \partial_t \phi_1 \,\d x. \label{D1}\end{aligned}$$ Next, multiplying by $-\Delta \phi$ and integrating over $\Omega$, we find $$\begin{aligned} &\ddt \int_{\Omega} \frac12 |\nabla \phi|^2 \,\d x + \|\Delta \phi \|_{\L2}^2 = \int_{\Omega} (\uu_1 \cdot \nabla \phi) \, \Delta \phi \, \d x + \int_{\Omega} (\uu\cdot \nabla \phi_2) \, \Delta \phi \,\d x \notag\\ & + \int_{\Omega} (F'(\phi_1)-F'(\phi_2)) \Delta \phi \, \d x + \theta_0 \|\nabla \phi\|_{\L2}^2 + \int_{\Omega} \Big( \rho'(\phi_1)\frac{|\uu_1|^2}{2}- \rho'(\phi_2)\frac{|\uu_2|^2}{2} \Big) \Delta \phi \, \d x. \label{D2}\end{aligned}$$ Here we have used the fact that $\overline{\Delta \phi}=0$ which implies that $\int_{\Omega} (\xi_1-\xi_2) \Delta \phi \, \d x=0$. 
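For clarity, the property $\overline{\Delta \phi}=0$ follows from the homogeneous Neumann boundary condition satisfied by both $\phi_1$ and $\phi_2$, while $\xi_1-\xi_2$ depends only on time; indeed, $$\int_{\Omega} (\xi_1-\xi_2) \Delta \phi \, \d x= (\xi_1-\xi_2) \int_{\Omega} \Delta \phi \, \d x= (\xi_1-\xi_2) \int_{\partial \Omega} \partial_{\n} \phi \, \d \sigma=0.$$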
Adding and , together with the bound from below of the viscosity, we have $$\begin{aligned} &\ddt \Big( \int_{\Omega} \frac{\rho(\phi_1)}{2} |\uu|^2 \, \d x + \int_{\Omega} \frac12 |\nabla \phi|^2 \,\d x \Big) + \nu_\ast \|D \uu\|_{\L2}^2 + \|\Delta \phi \|_{\L2}^2 \notag\\ &\leq -\int_{\Omega} \rho(\phi_1) (\uu_1 \cdot \nabla) \uu \cdot \uu \, \d x-\int_{\Omega} \rho(\phi_1) (\uu\cdot \nabla )\uu_2 \cdot \uu \, \d x -\int_{\Omega} \Delta \phi_1 \nabla \phi \cdot \uu \, \d x\notag \\ &\quad - \int_{\Omega} (\rho(\phi_1)-\rho(\phi_2)) (\partial_t \uu_2 + \uu_2 \cdot \nabla \uu_2) \cdot \uu \, \d x - \int_{\Omega} (\nu(\phi_1)-\nu(\phi_2))D\uu_2 : D \uu \, \d x \notag \\ &\quad + \int_{\Omega} \frac12 |\uu|^2 \rho'(\phi_1) \partial_t \phi_1 \,\d x + \int_{\Omega} (\uu_1 \cdot \nabla \phi) \, \Delta \phi \, \d x + \int_{\Omega} (F'(\phi_1)-F'(\phi_2)) \Delta \phi \, \d x \notag\\ &\quad + \theta_0 \|\nabla \phi\|_{\L2}^2+ \int_{\Omega} \Big( \rho'(\phi_1)\frac{|\uu_1|^2}{2}- \rho'(\phi_2)\frac{|\uu_2|^2}{2} \Big) \Delta \phi \, \d x.\end{aligned}$$ We now proceed by estimating the terms on the right hand side of the above differential equality. We would like to mention that most of the bounds obtained below are easy applications of the Sobolev embedding theorem and interpolation inequalities in view of the estimates for global strong solutions that have been obtained before. Nevertheless, less standard is the estimate of the term involving the difference of the nonlinear terms ($F'(\phi_1)-F'(\phi_2)$) which makes use of the entropy bound . By using the regularity of strong solutions, and , we have $$\begin{aligned} -\int_{\Omega} \rho(\phi_1) (\uu_1 \cdot \nabla) \uu \cdot \uu \, \d x &\leq C \| \uu_1\|_{L^\infty(\Omega)}\| \nabla \uu\|_{\L2} \| \uu\|_{\L2}\\ &\leq \frac{\nu_\ast}{12} \| D \uu\|_{\L2}^2 +C \| \uu_1\|_{L^\infty(\Omega)}^2\| \uu\|_{\L2}^2,\end{aligned}$$ $$\begin{aligned} -\int_{\Omega} \rho(\phi_1) (\uu\cdot \nabla )\uu_2 \cdot \uu \, \d x &\leq C \| \nabla \uu_2\|_{\L2} \| \uu\|_{L^4(\Omega)}^2\\ & \leq \frac{\nu_\ast}{12} \|D \uu \|_{\L2}^2+ C\| \uu\|_{\L2}^2,\end{aligned}$$ $$\begin{aligned} -\int_{\Omega} \Delta \phi_1 \nabla \phi \cdot \uu \, \d x &\leq C \|\Delta \phi_1 \|_{L^4(\Omega)} \| \nabla \phi\|_{\L2} \| \uu\|_{L^4(\Omega)}\\ &\leq \frac{\nu_\ast}{12} \| D \uu\|_{\L2}^2+ C \|\Delta \phi_1 \|_{L^4(\Omega)}^2 \| \nabla \phi\|_{\L2}^2,\end{aligned}$$ $$\begin{aligned} &- \int_{\Omega} (\rho(\phi_1)-\rho(\phi_2)) (\partial_t \uu_2 + \uu_2 \cdot \nabla \uu_2) \cdot \uu \, \d x \notag\\ &\quad \leq C \| \phi\|_{L^4(\Omega)} \| \partial_t \uu_2 + \uu_2 \cdot \nabla \uu_2\|_{\L2} \| \uu\|_{L^4(\Omega)}\\ &\quad \leq C \| \nabla \phi\|_{\L2} \| \partial_t \uu_2 + \uu_2 \cdot \nabla \uu_2\|_{\L2} \| \nabla \uu\|_{\L2}\\ &\quad \leq \frac{\nu_\ast}{12} \| D \uu\|_{\L2}^2 + C \| \partial_t \uu_2 + \uu_2 \cdot \nabla \uu_2\|_{\L2}^2 \| \nabla \phi\|_{\L2}^2,\end{aligned}$$ $$\begin{aligned} - \int_{\Omega} (\nu(\phi_1)-\nu(\phi_2))D\uu_2 : D \uu \, \d x &\leq C \|\phi \|_{L^4(\Omega)} \| D \uu_2\|_{L^4(\Omega)} \| D \uu\|_{\L2}\\ &\leq \frac{\nu_\ast}{12} \| D \uu\|_{\L2}^2 + C \| D \uu_2\|_{L^4(\Omega)}^2 \| \nabla \phi\|_{\L2}^2,\end{aligned}$$ $$\begin{aligned} \int_{\Omega} \frac12 |\uu|^2 \rho'(\phi_1) \partial_t \phi_1 \,\d x &\leq C \| \uu\|_{L^4(\Omega)}^2 \| \partial_t \phi_1 \|_{\L2}\\ &\leq \frac{\nu_\ast}{12} \| D \uu\|_{\L2}^2+ C \| \uu\|_{\L2}^2,\end{aligned}$$ $$\begin{aligned} \int_{\Omega} (\uu_1 \cdot \nabla \phi) \, \Delta \phi \, \d x &\leq \|\uu_1 
\|_{L^\infty(\Omega)} \| \nabla \phi\|_{\L2} \| \Delta \phi\|_{\L2}\\ &\leq \frac{1}{6} \| \Delta \phi\|_{\L2}^2+ C \|\uu_1 \|_{L^\infty(\Omega)}^2 \| \nabla \phi\|_{\L2}^2,\end{aligned}$$ $$\begin{aligned} \int_{\Omega} &\Big( \rho'(\phi_1)\frac{|\uu_1|^2}{2}- \rho'(\phi_2)\frac{|\uu_2|^2}{2} \Big) \Delta \phi \, \d x\\ &= \int_{\Omega} \Big( \rho'(\phi_1)-\rho'(\phi_2) \Big) \frac{|\uu_1|^2}{2} \Delta \phi \, \d x + \int_{\Omega} \frac{\rho'(\phi_2)}{2} \Big( \uu_1\cdot \uu+ \uu\cdot \uu_2 \Big) \Delta \phi \, \d x\\ &\leq C \| \phi\|_{L^4(\Omega)}\| \uu_1\|_{L^8(\Omega)}^2 \| \Delta \phi\|_{\L2} + C \| \uu\|_{\L2} (\| \uu_1\|_{L^\infty(\Omega)}+\| \uu_2\|_{L^\infty(\Omega)}) \| \Delta \phi\|_{L^2(\Omega)}\\ &\leq \frac{1}{6} \| \Delta \phi\|^2_{L^2(\Omega)} + C\| \nabla \phi\|_{\L2}^2+ C (\| \uu_1\|_{L^\infty(\Omega)}^2+\| \uu_2\|_{L^\infty(\Omega)}^2) \| \uu\|_{\L2}^2.\end{aligned}$$ Using the generalized Young inequality and the standard Young inequality, for $x>0$, $y>0$, $z>0$ with $Cz>y$, we obtain $$\begin{aligned} x^2 y^2 \log \Big( \frac{Cz}{y} \Big) &\leq xy^2\left(x\log x+\frac{Cz}{y}\right)\nonumber\\ & \leq \varepsilon z^2+ x^2 y^2 \log x +C^2\varepsilon ^{-1}x^2y^2, \quad \forall \, \varepsilon>0. \label{ineqy}\end{aligned}$$ By making use of and , we obtain that $$\begin{aligned} & \int_{\Omega} (F'(\phi_1)-F'(\phi_2)) \Delta \phi \, \d x\\ &\quad = \int_{\Omega} \int_0^1 F''(\tau\phi_1+(1-\tau)\phi_2) \, \d \tau\, \phi \Delta \phi \, \d x\\ &\quad \leq C\big( \| F''(\phi_1)\|_{\L2}+ \| F''(\phi_2)\|_{\L2} \big) \| \phi\|_{L^\infty(\Omega)}\| \Delta \phi\|_{\L2}\\ &\quad \leq C\big( \| F''(\phi_1)\|_{\L2}+ \| F''(\phi_2)\|_{\L2} \big) \| \nabla \phi\|_{\L2} \log^\frac12 \Big( C \frac{\|\Delta \phi \|_{\L2}}{\| \nabla \phi\|_{\L2}} \Big) \| \Delta \phi\|_{\L2}\\ &\quad \leq \frac{1}{12} \| \Delta \phi\|_{\L2}^2 +C \big( \| F''(\phi_1)\|_{\L2}^2+ \| F''(\phi_2)\|_{\L2}^2 \big) \| \nabla \phi\|_{\L2}^2 \log \Big( C \frac{\|\Delta \phi \|_{\L2}}{\| \nabla \phi\|_{\L2}} \Big)\\ &\quad \leq \frac{1}{6} \| \Delta \phi\|_{\L2}^2 + C \| F''(\phi_1)\|_{\L2}^2 \big( 1+ \log \big( \| F''(\phi_1)\|_{\L2} \big) \big) \| \nabla \phi\|_{\L2}^2 \\ &\qquad + C \| F''(\phi_2)\|_{\L2}^2 \big( 1+ \log\big( \| F''(\phi_2)\|_{\L2}\big) \big) \| \nabla \phi\|_{\L2}^2 .\end{aligned}$$ Collecting the above bounds, we find the differential inequality $$\begin{aligned} &\ddt \Big( \int_{\Omega} \frac{\rho(\phi_1)}{2} |\uu|^2 \, \d x + \int_{\Omega} \frac12 |\nabla \phi|^2 \,\d x \Big) + \frac{\nu_\ast}{2} \int_{\Omega} |D \uu|^2 \, \d x +\frac12 \|\Delta \phi \|_{\L2}^2 \notag \\ &\quad \leq W_1(t) \int_{\Omega} \frac{\rho(\phi_1)}{2} |\uu|^2 \, \d x+ W_2(t) \| \nabla \phi\|_{\L2}^2, \label{D-DI}\end{aligned}$$ where $$\begin{aligned} W_1(t)= C\big( 1+ \| \uu_1\|_{L^\infty(\Omega)}^2+ \| \uu_2\|_{L^\infty(\Omega)}^2+ \| \partial_t \uu_2 + \uu_2 \cdot \nabla \uu_2\|_{\L2}^2\big),\end{aligned}$$ and $$\begin{aligned} W_2(t)&= C \Big( 1+ \| \Delta \phi_1\|_{L^4(\Omega)}^2+\| \partial_t \uu_2 + \uu_2 \cdot \nabla \uu_2\|_{\L2}^2+ \| D\uu_2\|_{L^4(\Omega)}^2+ \| \uu_1\|_{L^\infty(\Omega)}^2 \Big)\\ &\quad + C \| F''(\phi_1)\|_{\L2}^2 \log \big( \| F''(\phi_1)\|_{\L2} \big) + C \| F''(\phi_2)\|_{\L2}^2 \log \big( \| F''(\phi_2)\|_{\L2} \big).\end{aligned}$$ Here we have used that $\rho(s)\geq \rho_\ast$ for all $s \in (-1,1)$. 
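For the reader's convenience, we sketch how uniqueness will follow once the functions on the right-hand side are shown to be integrable in time. Introducing (only for this remark) the quantity $Y(t)= \int_{\Omega} \frac{\rho(\phi_1)}{2} |\uu|^2 \, \d x + \frac12 \| \nabla \phi(t)\|_{\L2}^2$, the differential inequality above gives $$\ddt Y(t) \leq \big( W_1(t)+ 2 W_2(t)\big)\, Y(t), \qquad Y(0)=0,$$ so that, as soon as $W_1, W_2 \in L^1(0,T)$, the Gronwall lemma yields $Y\equiv 0$ on $[0,T]$; hence $\uu=\mathbf{0}$ and, since $\overline{\phi}=0$, also $\phi=0$.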
In order to apply the Gronwall lemma, we are left to show that $$\label{F''2log} \int_0^T \| F''(\phi_i)\|_{\L2}^2 \log \big( \| F''(\phi_i)\|_{\L2} \big) \, \d \tau\leq C(T), \quad i=1,2.$$ To this aim, we introduce the function $$g(s)= s \log ( C^\ast s), \quad \forall \, s\in (0,\infty),$$ where $C^\ast$ is a positive constant. It is easily seen that $g$ is continuous and convex ($g''(s)= \frac{1}{s}>0$). By applying Jensen’s inequality, we have $$g \Big( \frac{1}{|\Omega|}\int_{\Omega} |F''(\phi)|^2 \, \d x \Big) \leq \frac{1}{|\Omega|} \int_{\Omega} g(|F''(\phi)|^2) \, \d x.$$ Using the explicit form of $g$, this is equivalent to $$\begin{aligned} \frac{1}{|\Omega|}\|F''(\phi)\|_{\L2}^2 \log \Big( \frac{C^\ast}{|\Omega|} \|F''(\phi)\|_{\L2}^2 \Big) \leq \frac{1}{|\Omega|} \int_{\Omega} |F''(\phi)|^2 \log (C^\ast |F''(\phi)|^2) \, \d x.\end{aligned}$$ Taking $C^\ast= |\Omega|$ and integrating the above inequality over $[0,T]$, we find $$\label{jensen} \int_0^T \|F''(\phi)\|_{\L2}^2 \log \big( \|F''(\phi)\|_{\L2} \big) \, \d \tau \leq \int_{0}^T\! \int_{\Omega} |F''(\phi)|^2 \log ( |\Omega| |F''(\phi)|^2) \, \d x \d \tau.$$ Then, immediately follows from the entropy bounds and . As a consequence, both $W_1$ and $W_2$ belong to $L^1(0,T)$. Finally, an application of the Gronwall lemma entails the uniqueness of strong solutions.$\square$ Notice that the entropy estimate in $L^1(\Omega)$ proved in Theorem \[strong-D\]-(2) can be generalized to the $L^p(\Omega)$ case with $p>1$. More precisely, for any $p\in \mathbb{N}$, there exists $\eta_p>0$ with the latter depending on the norms of the initial data and on the parameters of the system $$\eta_p=\eta_p(E(\uu_0,\phi_0), \| \uu_0\|_{\V_\sigma}, \| \phi_0\|_{H^2(\Omega)},\| F'(\phi_0)\|_{\L2},\theta,\theta_0)$$ such that, if $\|\rho'\|_{L^\infty(-1,1)}\leq \eta_p$ and $F''(\phi_0)\in L^p(\Omega)$, then, for any $T>\sigma$, we have $$\begin{aligned} F''(\phi)\in L^\infty(0,T;L^p(\Omega)),\quad |F''(\phi)|^{p-1}F'''(\phi) F'(\phi)\in L^1(\Omega\times (0,T)).\end{aligned}$$ Such result follows from the above proof by replacing $\ddt \int_{\Omega} F''(\phi) \, \d x$ by $\ddt \int_{\Omega} (F''(\phi))^p \, \d x$, and the observation that, for any $p>2$, there exist two positive constants $C^1_p$ and $C^2_p$ such that $$|(F''(s))^{p-1}F'''(s)| \log \big( |(F''(s))^{p-1}F'''(s)| \big) \leq C^1_p+C^2_p (F''(s))^{p-1}F'''(s) F'(s), \quad \forall \, s \in (-1,1).$$ Proof of Theorem \[Proreg-D\] ----------------------------- We now prove the propagation of entropy bound as stated in Theorem \[Proreg-D\]. For every strong solution given by Theorem \[strong-D\]-(1), we have $$\begin{aligned} \|\uu\cdot \nabla \phi\|_{H^1(\Omega)}&\leq \|\uu\|_{L^4(\Omega)}\|\nabla \phi\|_{L^4}+\|\nabla \uu\|_{L^4(\Omega)}\|\nabla \phi\|_{L^4(\Omega)}+\|\uu\|_{L^\infty(\Omega)}\|\phi\|_{H^2(\Omega)}\nonumber\\ &\leq C+C\|\uu\|_{H^2(\Omega)}^\frac12\|\phi\|_{H^2(\Omega)}^\frac12+C\|\phi\|_{H^2(\Omega)},\end{aligned}$$ and $$\begin{aligned} \|\rho'(\phi)|\uu|^2\|_{H^1(\Omega)} &\leq C\|\uu\|_{L^4(\Omega)}^2+C\|\nabla \phi\|_{L^\infty(\Omega)}\|\uu\|_{L^4(\Omega)}^2+C\|\nabla \uu\|_{L^4(\Omega)}\|\uu\|_{L^4(\Omega)}\nonumber\\ &\leq C+C\|\phi\|_{W^{2,3}(\Omega)}+C\|\uu\|_{H^2(\Omega)},\end{aligned}$$ which imply that $$\int_t^{t+1} \| \uu(\tau)\cdot \nabla \phi(\tau)\|_{H^1(\Omega)}^2+ \|\rho'(\phi(\tau))\frac{|\uu(\tau)|^2}{2}\|_{H^1(\Omega)}^2 \, \d \tau \leq C, \quad \forall \, t \geq 0,$$ for some $C$ independent of $t$. 
In light of , it follows that $$\int_t^{t+1} \|-\Delta \phi(\tau) +F'(\phi(\tau))\|_{H^1(\Omega)}^2 \, \d \tau \leq C, \quad \forall \, t \geq 0.$$ By using [@GGW2018 Lemma 7.4], we infer that, for any $p\geq 1$, there exists $C=C(p)$ such that $$\label{propF''} \| F''(\phi)\|_{L^p(\Omega)}\leq C\Big(1+{e}^{C\| -\Delta \phi +F'(\phi)\|_{H^1(\Omega)}^2}\Big)\quad \text{a.e. in}\ (0,T).$$ Notice that we are not able to conclude that the right-hand side of belongs to $L^1(0,T)$. Nevertheless, since integrable functions are finite almost everywhere, the above inequality entails that there exists some $\sigma \in (0,1)$ (actually $\sigma$ can be taken arbitrarily small but positive) such that $$\begin{aligned} F''(\phi(\sigma))\in L^p(\Omega) \quad\text{with}\quad \| F''(\phi(\sigma))\|_{L^p(\Omega)}\leq C(p,\sigma),\quad \forall\,p \in [1,\infty).\label{FttLp}\end{aligned}$$ Then, under the condition but without the additional assumption $F''(\phi_0)\in L^1(\Omega)$ on the initial datum, we are able to deduce that the previous estimates - hold for $t\geq \sigma>0$. More precisely, we have $$\label{EB-sig} \int_t^{t+1}\! \int_{\Omega} (F''(\phi))^2 \log ( 1+F''(\phi)) \,\d x \d \tau \leq C(\sigma), \quad \forall \, t \geq 0.$$ Differentiating $_1$ with respect to time, testing the resulting equation with $\partial_t\uu$ and integrating over $\Omega$, we have $$\begin{aligned} & \frac12 \int_{\Omega} \rho(\phi) \partial_t |\partial_t \uu|^2 \, \d x +\int_{\Omega} \rho(\phi) \big( \partial_t \uu \cdot \nabla \uu + \uu \cdot \nabla \partial_t \uu\big)\cdot \partial_t \uu \, \d x + \int_{\Omega} \rho'(\phi)\partial_t \phi (\partial_t \uu + \uu \cdot \nabla \uu) \cdot \partial_t \uu \, \d x\\ & + \int_{\Omega} \nu(\phi) |D \partial_t \uu|^2 \, \d x+ \int_{\Omega} \nu'(\phi) \partial_t \phi D \uu : D \partial_t \uu \, \d x= \int_\Omega \partial_t(\nabla \phi\otimes \nabla \phi):\nabla \partial_t\uu \, \d x.\end{aligned}$$ Since $$\begin{aligned} &\frac12 \int_{\Omega} \rho(\phi) \partial_t |\partial_t \uu|^2 \, \d x = \frac12 \ddt \int_{\Omega} \rho(\phi)|\partial_t \uu|^2 \,\d x - \frac12 \int_{\Omega} \rho'(\phi)\partial_t \phi |\partial_t \uu|^2\, \d x,\end{aligned}$$ we find $$\begin{aligned} &\frac12\frac{\d}{\d t} \int_{\Omega} \rho(\phi)|\partial_t \uu|^2 \, \d x +\int_\Omega \nu(\phi)|D\partial_t \uu|^2 \, \d x\nonumber\\ &=-\int_\Omega \rho(\phi)(\partial_t\uu\cdot \nabla \uu+ \uu\cdot \nabla \partial_t \uu)\cdot \partial_t\uu \, \d x - \frac12 \int_{\Omega} \rho'(\phi)\partial_t \phi |\partial_t \uu|^2\, \d x \notag \\ &\quad -\int_{\Omega} \rho'(\phi)\partial_t \phi (\uu \cdot \nabla \uu) \cdot \partial_t \uu \, \d x - \int_\Omega \nu'(\phi)\partial_t \phi D\uu: \nabla \partial_t\uu \, \d x + \int_\Omega \partial_t(\nabla \phi\otimes \nabla \phi):\nabla \partial_t\uu \, \d x. 
\notag\end{aligned}$$ In view of , by using , we have $$\begin{aligned} -\int_{\Omega} \rho(\phi)(\partial_t \uu \cdot \nabla \uu) \cdot \partial_t \uu \, \d x &\leq C \| \partial_t \uu\|_{L^4(\Omega)}^2 \|\nabla \uu\|_{\L2} \notag \\ &\leq \frac{\nu_\ast}{16} \| D \partial_t \uu\|_{\L2}^2+ C\| \partial_t \uu\|_{\L2}^2, \nonumber\end{aligned}$$ and $$\begin{aligned} -\int_{\Omega} \rho(\phi)(\uu \cdot \nabla \partial_t \uu) \cdot \partial_t \uu \, \d x &\leq C \| \uu\|_{L^4(\Omega)} \| \nabla \partial_t \uu\|_{\L2} \| \partial_t \uu\|_{L^4(\Omega)}\\ &\leq \frac{\nu_\ast}{16} \| D \partial_t \uu\|_{\L2}^2+ C\| \partial_t \uu\|_{\L2}^2.\end{aligned}$$ Similarly, we obtain $$\begin{aligned} -\frac12 \int_{\Omega} \rho'(\phi)\partial_t \phi |\partial_t \uu|^2\, \d x &\leq C \| \partial_t \phi\|_{\L2} \| \partial_t \uu \|_{L^4(\Omega)}^2\\ &\leq \frac{\nu_\ast}{16} \| D \partial_t \uu\|_{\L2}^2+ C\| \partial_t \uu\|_{\L2}^2,\end{aligned}$$ and $$\begin{aligned} -\int_{\Omega} \rho'(\phi)\partial_t \phi (\uu \cdot \nabla \uu) \cdot \partial_t \uu \, \d x &\leq C \| \partial_t \phi\|_{L^4(\Omega)} \| \uu \|_{L^4(\Omega)} \| \nabla \u\|_{L^4(\Omega)} \| \partial_t \uu\|_{L^4(\Omega)}\\ &\leq C \| \nabla \partial_t \phi\|_{\L2}^\frac12 \| \uu\|_{H^2(\Omega)}^\frac12 \| \partial_t \uu\|_{\L2}^\frac12 \| D \partial_t \uu\|_{\L2}^\frac12\\ &\leq \frac{\nu_\ast}{16} \| D \partial_t \uu\|_{\L2}^2 + C\| \partial_t \uu\|_{\L2}^2 +C \| \nabla \partial_t \phi\|_{\L2}^2+C\| \uu\|_{H^2(\Omega)}^2.\end{aligned}$$ Besides, by means of , we deduce that $$\begin{aligned} &-\int_{\Omega} \nu'(\phi) \partial_t \phi D \uu : D \partial_t \uu \, \d x\notag\\ &\quad \leq C \| \partial_t \phi\|_{L^\infty(\Omega)} \| D \uu\|_{L^2(\Omega)} \| D \partial_t \uu \|_{\L2} \notag \\ &\quad \leq \frac{\nu_\ast}{16} \| D \partial_t \uu\|_{\L2}^2 +C \| \partial_t \phi\|_{\L2} \| \partial_t \phi\|_{H^2(\Omega)} \notag \\ &\quad \leq \frac{\nu_\ast}{16} \| D \partial_t \uu\|_{\L2}^2 + \frac{1}{14} \| \Delta \partial_t \phi\|_{\L2}^2 +C \| \nabla \partial_t \phi\|_{\L2} ^2, \nonumber\end{aligned}$$ and $$\begin{aligned} &\int_\Omega \partial_t(\nabla \phi\otimes \nabla \phi):\nabla \partial_t\uu \d x\nonumber\\ &\quad \leq \|\nabla \phi\|_{L^4(\Omega)}\|\nabla \partial_t \phi\|_{L^4(\Omega)}\|D \partial_t\uu\|_{L^2(\Omega)}\nonumber\\ &\quad \leq \frac{\nu_\ast}{16} \| D \partial_t \uu\|_{\L2}^2 +C\|\nabla \partial_t\phi\|_{L^2(\Omega)}\|\nabla \partial_t \phi\|_{H^1(\Omega)}\nonumber\\ &\quad \leq \frac{\nu_\ast}{16} \| D \partial_t \uu\|_{\L2}^2 + \frac{1}{14} \| \Delta \partial_t \phi\|_{\L2}^2 +C \| \nabla \partial_t \phi\|_{\L2} ^2 \notag.\end{aligned}$$ Next, we differentiate $_3$ with respect to time, multiply the resultant by $-\Delta \partial_t \phi$, and integrate over $\Omega$ to obtain $$\begin{aligned} & \frac12 \ddt \| \nabla \partial_t \phi\|_{\L2}^2+ \| \Delta \partial_t \phi\|_{\L2}^2 \notag \\ &= \theta_0 \|\nabla \partial_t \phi\|_{\L2}^2+ \int_{\Omega} F''(\phi) \partial_t \phi \Delta \partial_t \phi \, \d x+\int_{\Omega} (\partial_t \uu \cdot \nabla \phi) \Delta \partial_t \phi\, \d x \\ &\quad + \int_{\Omega} (\uu \cdot \nabla \partial_t \phi) \Delta \partial_t \phi\, \d x +\frac12 \int_{\Omega} \rho''(\phi) \partial_t \phi |\uu|^2 \Delta \partial_t \phi \, \d x + \int_{\Omega} \rho'(\phi)( \uu \cdot \partial_t \uu) \Delta \partial_t \phi \, \d x.\end{aligned}$$ Here we have used that $\overline{\Delta \partial_t \phi}=0$ since $\partial_\n \partial_t \phi=0$ on the boundary $\partial \Omega$. 
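Before estimating the terms coming from the differentiated phase-field equation, we record once the two elementary ingredients behind the velocity estimates above; here we assume, as is natural in the viscous case considered in this proof, the no-slip boundary condition for the velocity, so that $\partial_t \uu$ is a divergence-free field in $\mathbf{H}^1_0(\Omega)$. Then the two-dimensional Ladyzhenskaya inequality and a Korn-type identity give $$\| \vv\|_{L^4(\Omega)}^2 \leq C \| \vv\|_{\L2} \| \nabla \vv\|_{\L2} \quad \text{and} \quad \| \nabla \vv\|_{\L2}^2 = 2 \| D \vv\|_{\L2}^2, \quad \text{for all divergence-free } \vv \in \mathbf{H}^1_0(\Omega),$$ so that, after an application of Young’s inequality, any product of the form $\| \partial_t \uu\|_{L^4(\Omega)}^2 M$, with $M$ uniformly bounded by the previous estimates (e.g. $M=\| \nabla \uu\|_{\L2}$ or $M=\| \partial_t \phi\|_{\L2}$), is controlled by $\frac{\nu_\ast}{16} \| D \partial_t \uu\|_{\L2}^2+ C \| \partial_t \uu\|_{\L2}^2$, as claimed in the bounds above.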
Exploiting , we get $$\begin{aligned} \int_{\Omega} F''(\phi) \partial_t \phi \Delta \partial_t \phi \, \d x &\leq \| F''(\phi)\|_{\L2} \| \partial_t \phi\|_{L^\infty(\Omega)} \| \Delta \partial_t \phi \|_{\L2}\\ &\leq \| F''(\phi)\|_{\L2} \| \nabla \partial_t \phi\|_{\L2} \log^\frac12 \Big( C\frac{\| \Delta \partial_t \phi\|_{\L2}}{\| \nabla \partial_t \phi\|_{\L2}}\Big)\| \Delta \partial_t \phi \|_{\L2}\\ &\leq \frac{1}{28} \| \Delta \partial_t \phi \|_{\L2}^2 +C \| F''(\phi)\|_{\L2}^2 \| \nabla \partial_t \phi\|_{\L2}^2 \log \Big( C\frac{\| \Delta \partial_t \phi\|_{\L2}}{\| \nabla \partial_t \phi\|_{\L2}}\Big).\end{aligned}$$ Recalling , we obtain $$\begin{aligned} &\int_{\Omega} F''(\phi) \partial_t \phi \Delta \partial_t \phi \, \d x \notag\\ &\quad \leq \frac{1}{14} \| \Delta \partial_t \phi \|_{\L2}^2 +C \| F''(\phi)\|_{\L2}^2 \log \big( C\| F''(\phi)\|_{\L2} \big)\| \nabla \partial_t \phi\|_{\L2}^2.\label{FFF}\end{aligned}$$ Next, using and , we see that $$\begin{aligned} \int_{\Omega} (\partial_t \uu \cdot \nabla \phi) \Delta \partial_t \phi\, \d x &\leq \|\partial_t \uu \|_{L^4(\Omega)}\|\nabla \phi\|_{L^4(\Omega)}\|\Delta \partial_t \phi\|_{\L2}\nonumber\\ &\leq \frac{\nu_\ast}{16} \| D \partial_t \uu\|_{\L2}^2 + \frac{1}{14}\| \Delta \partial_t \phi\|_{\L2}^2 + C \|\partial_t \uu \|_{L^2(\Omega)}^2, \nonumber\end{aligned}$$ and $$\begin{aligned} \int_{\Omega} (\uu \cdot \nabla \partial_t \phi) \Delta \partial_t \phi\, \d x& \leq \| \uu\|_{L^4(\Omega)} \| \nabla \partial_t \phi\|_{L^4(\Omega)} \| \Delta \partial_t \phi\|_{\L2} \nonumber\\ &\leq \frac{1}{14} \| \Delta \partial_t \phi\|_{\L2}^2 + C \| \nabla \partial_t \phi\|_{\L2}^2. \label{unpt}\end{aligned}$$ Finally, in a similar manner we find that $$\begin{aligned} \frac12 \int_{\Omega} \rho''(\phi) \partial_t \phi |\uu|^2 \Delta \partial_t \phi \, \d x &\leq C \| \partial_t \phi\|_{L^4(\Omega)} \| \uu\|_{L^8(\Omega)}^2 \| \Delta \partial_t \phi\|_{\L2}\\ &\leq \frac{1}{14} \| \Delta \partial_t \phi\|_{\L2}^2 + C \| \nabla \partial_t \phi\|_{\L2}^2,\end{aligned}$$ and $$\begin{aligned} \int_{\Omega} \rho'(\phi)( \uu \cdot \partial_t \uu) \Delta \partial_t \phi \, \d x &\leq C \| \uu\|_{L^4(\Omega)} \| \partial_t \uu \|_{L^4(\Omega)} \| \Delta \partial_t \phi\|_{\L2}\\ &\leq \frac{\nu_\ast}{16} \| D \partial_t \uu\|_{\L2}^2 + \frac{1}{14}\| \Delta \partial_t \phi\|_{\L2}^2 + C \|\partial_t \uu \|_{L^2(\Omega)}^2.\end{aligned}$$ From the above estimates, we deduce that $$\begin{aligned} \frac{\d}{\d t}L(t)+ \frac{\nu_*}{2}\|D\partial_t \uu\|_{\L2}^2 +\frac{1}{2}\| \Delta \partial_t \phi\|_{\L2}^2 \leq CK(t)L(t)+ C \| \uu\|_{H^2(\Omega)}^2,\end{aligned}$$ where $$\begin{aligned} &L(t)=\frac12 \int_{\Omega} \rho(\phi) |\partial_t\uu(t)|^2 \, \d x+\frac12\|\nabla \partial_t \phi(t)\|_{\L2}^2,\\ &K(t)=1+\| F''(\phi)\|_{\L2}^2 \log \big( C\| F''(\phi)\|_{\L2} \big).\end{aligned}$$ Recalling estimates and , we have $$\int_{t}^{t+1} L(\tau) + \| \uu(\tau)\|_{H^2(\Omega)}^2 \, \d \tau \leq C, \quad \forall \, t \geq 0,$$ where $C$ is independent of $t$. As a consequence, there exists $\sigma \in (0,1)$ ($\sigma$ can be chosen arbitrary small but positive) such that $$\begin{aligned} L(\sigma)\leq C(\sigma).\label{Lini}\end{aligned}$$ Notice that, without loss of generality, this value of $\sigma$ can be chosen equal to the one in . Then, by exploiting and the Jensen inequality (cf. ), we obtain $$\int_t^{t+1} K(\tau) \, \d \tau \leq C, \quad \forall \, t \geq \sigma,$$ where $C$ depends on $\sigma$, but is independent of $t$. 
Thus, by using the Gronwall lemma on the time interval $[\sigma,1]$ and the uniform Gronwall lemma for $t\geq 1$, we deduce that $$L(t)+ \int_t ^{t+1} \|D\partial_t \uu\|_{\L2}^2 +\| \Delta \partial_t \phi\|_{\L2}^2 \, \d \tau \leq C(\sigma),\quad \forall\, t\geq \sigma.$$ Hence we have $$\partial_t\uu\in L^\infty(\sigma, T; \H_\sigma)\cap L^2(\sigma, T; \V_\sigma),\quad \partial_t \phi\in L^\infty(\sigma, T; H^1(\Omega))\cap L^2(\sigma, T; H^2(\Omega)).$$ In light of and , we infer that $$\uu \in L^\infty(\sigma,T;\mathbf{W}^{1,p}(\Omega)), \quad \forall \, p \in (2,\infty).$$ An immediate consequence of the above regularity results is that $$\widetilde{\mu}=-\Delta \phi+F'(\phi) \in L^2(\sigma,T;L^\infty(\Omega)).$$ Thanks to [@GGW2018 Lemma 7.2], we deduce that $F'(\phi)\in L^2(\sigma,T;L^\infty(\Omega))$. This property entails that there exists $\sigma' \in (\sigma,\sigma+1)$ such that $$\label{F'sigma} \|F'(\phi(\sigma'))\|_{L^\infty(\Omega)}\leq C(\sigma).$$ Note that $\sigma'$ can also be chosen arbitrarily close to $\sigma$. Now, we rewrite as follows $$\partial_t \phi + \uu \cdot \nabla \phi - \Delta \phi +F'(\phi) = U(x,t),$$ where $U=\theta_0 \phi- \rho'(\phi) \frac{|\uu|^2}{2}+\xi$. Thanks to the above regularity, it easily seen that $U \in L^\infty(0,T;L^\infty(\Omega))$. In particular, $\sup_{t\geq \sigma} \| U(t)\|_{L^\infty(\Omega)}\leq C(\sigma)$. For any $p\geq 2$, we compute $$\begin{aligned} \frac{1}{p} \ddt \int_{\Omega} |F'(\phi)|^p \, \d x &= \int_{\Omega} |F'(\phi)|^{p-2} F'(\phi) F''(\phi) \partial_t \phi \, \d x\\ &= \int_{\Omega} |F'(\phi)|^{p-2} F'(\phi) F''(\phi) \Big( - \uu\cdot \nabla \phi + \Delta \phi -F'(\phi) + U \Big) \, \d x.\end{aligned}$$ Since $$\int_{\Omega} |F'(\phi)|^{p-2} F'(\phi) F''(\phi) \uu \cdot \nabla \phi \, \d x= \int_{\Omega} \uu \cdot \nabla \Big( \frac{1}{p} |F'(\phi)|^p \Big) \, \d x=0,$$ we deduce that $$\begin{aligned} & \frac{1}{p} \ddt \int_{\Omega} |F'(\phi)|^p \, \d x + \int_{\Omega} \Big( (p-1) |F'(\phi)|^{p-2} F''(\phi)^2 + |F'(\phi)|^{p-1} F'(\phi) F'''(\phi) \Big) |\nabla \phi|^2 \, \d x\\ &\quad +\int_{\Omega} |F'(\phi)|^{p} F''(\phi) \, \d x = \int_{\Omega} |F'(\phi)|^{p-2} F'(\phi) F''(\phi) U \, \d x.\end{aligned}$$ We notice that the second term on the left-hand side is non-negative. Next, we observe that $$F''(s)\leq \theta \mathrm{e}^{\frac{2}{\theta} |F'(s)|}, \quad \forall \, s \in (-1,1).$$ Owing to the above inequality, and using the fact that $s \leq \mathrm{e}^s$ for $s\geq 0$, we deduce that $$\begin{aligned} \log \Big(|F'(s)|^{p-1} F''(s) \Big) \leq \log (\theta)+ \Big(1+\frac{2}{\theta} \Big) (p-1) |F'(s)|, \quad \forall \, s \in (-1,1).\end{aligned}$$ Thus, we get $$|F'(s)|^{p-1} F''(s) \log \Big(|F'(s)|^{p-1} F''(s) \Big) \leq C_1 p |F'(s)|^p F''(s) + C_2, \quad \forall \, s \in (-1,1),$$ for some $C_1,C_2>0$ independent of $p$. Recalling $$xy \leq \varepsilon x \log x + {e}^{\frac{y}{\varepsilon}},\quad \forall \, x>0, y>0,\quad \varepsilon \in (0,1),$$ and taking $\varepsilon = \frac{1}{2C_1 p}$, we arrive at $$\begin{aligned} \frac{1}{p} \ddt \int_{\Omega} |F'(\phi)|^p \, \d x + \frac12 \int_{\Omega} |F'(\phi)|^{p} F''(\phi) \, \d x \leq \frac{C_2 |\Omega|}{2C_1}+ \int_{\Omega} {e}^{2C_1 p |U| }\, \d x.\end{aligned}$$ Since $U$ is globally bounded, we obtain $$\frac{C_2 |\Omega|}{2C_1}+ \int_{\Omega} {e}^{2C_1 p |U| }\, \d x \leq \frac{C_2 |\Omega|}{2C_1} + |\Omega| {e}^{2C_3 p} \leq C_4 {e}^{C_5 p},$$ for some $C_4,C_5>0$ independent of $p$ and $t$. 
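Both elementary properties of $F''$ used in this part of the proof, namely the upper bound $F''(s)\leq \theta \mathrm{e}^{\frac{2}{\theta}|F'(s)|}$ above and the lower bound $F''(s)\geq \theta$ invoked right below, can be checked directly for the convex (logarithmic) part of the Flory-Huggins potential; we recall its form only for this verification, under the normalization consistent with the two bounds, namely $$F(s)=\frac{\theta}{2}\Big[ (1+s)\log(1+s)+(1-s)\log (1-s)\Big], \quad F'(s)=\frac{\theta}{2}\log \Big(\frac{1+s}{1-s}\Big), \quad F''(s)=\frac{\theta}{1-s^2}, \quad s\in (-1,1).$$ Then $F''(s)\geq \theta$ is immediate, while $$\theta\, \mathrm{e}^{\frac{2}{\theta}|F'(s)|}= \theta\, \frac{1+|s|}{1-|s|} \geq \frac{\theta}{(1+|s|)(1-|s|)}= F''(s), \quad \forall \, s \in (-1,1).$$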
Observing that $ F''(s)\geq \theta$ for all $s\in (-1,1)$, we rewrite the above differential inequality for $p\geq 2$ as follows $$\ddt \int_{\Omega} |F'(\phi)|^p \, \d x +\theta \int_{\Omega} |F'(\phi)|^{p} \, \d x \leq C_4 p {e}^{C_5 p}.\nonumber$$ By applying the Gronwall lemma on the time interval $[\sigma', \infty)$, we infer that $$\| F'(\phi(t))\|_{L^p(\Omega)}^p \leq \| F'(\phi (\sigma'))\|_{L^p(\Omega)}^p e^{-\theta (t-\sigma')} + \frac{C_4 p {e}^{C_5 p}}{\theta}, \quad \forall \, t \geq \sigma'.$$ We recall the elementary inequality for $q<1$ $$(x+y)^q\leq x^q+y^q, \quad \forall \, x>0,y>0.$$ Choosing $q=\frac{1}{p}$, with $p\geq 2$, we find $$\label{di-sp} \| F'(\phi(t))\|_{L^p(\Omega)} \leq \| F'(\phi(\sigma'))\|_{L^p(\Omega)} {e}^{-\frac{\theta (t-\sigma')}{p}} +\Big( \frac{C_4 p}{\theta}\Big)^{\frac{1}{p}} {e}^{C_5}, \quad \forall \, t \geq \sigma'.\nonumber$$ Recalling and taking the limit as $p\rightarrow +\infty$, we deduce that $$\label{sp1} \| F'(\phi(t))\|_{L^\infty(\Omega)} \leq \| F'(\phi (\sigma'))\|_{L^\infty(\Omega)} + {e}^{C_5}, \quad \forall \, t \geq \sigma'. \nonumber$$ As a result, there exists $\delta=\delta(\sigma)>0$ such that $$-1+\delta \leq \phi(x,t) \leq 1-\delta, \quad \forall \, x \in \overline{\Omega}, \ t \geq \sigma'.$$ The proof is complete. $\square$ Mass-conserving Euler-Allen-Cahn System in Two Dimensions {#EAC-sec} ========================================================= In this section, we study the dynamics of ideal two-phase flows in a bounded domain $\Omega \subset \mathbb{R}^2$ with smooth boundary, which is described by the mass-conserving Euler-Allen-Cahn system: $$\label{EAC} \begin{cases} \partial_t \uu + \uu \cdot \nabla \uu + \nabla P= - \div(\nabla \phi \otimes \nabla \phi),\\ \div \uu=0,\\ \partial_t \phi +\uu\cdot \nabla \phi + \mu= \overline{\mu}, \\ \mu= -\Delta \phi + \Psi' (\phi), \end{cases} \quad \text{ in } \Omega \times (0,T).$$ The above system corresponds to the inviscid NS-AC system (i.e. $\nu\equiv 0$) with matched densities (i.e. $\rho \equiv 1$). The system is subject to the following boundary conditions $$\label{boundaryE} \uu\cdot \n =0,\quad \partial_{\n} \phi =0 \quad \text{ on } \partial\Omega \times (0,T),$$ and initial conditions $$\label{ICE} \uu(\cdot, 0)= \uu_0, \quad \phi(\cdot, 0)=\phi_0 \quad \text{ in } \Omega.$$ The main result of this section is as follows: \[Th-EAC\] Let $\Omega$ be a smooth bounded domain in $\mathbb{R}^2$. - Assume that $\uu_0 \in \H_\sigma\cap \mathbf{H}^1(\Omega)$, $\phi_0\in H^2(\Omega)$ such that $F'(\phi_0)\in \L2$, $\| \phi_0\|_{L^\infty(\Omega)}\leq 1$, $|\overline{\phi}_0|<1$ and $\partial_\n \phi_0=0$ on $\partial \Omega$. Then, there exists a global solution $(\uu,\phi)$ which satisfies the problem - in the sense of distribution on $\Omega \times (0,\infty)$ and, for all $T>0$, $$\begin{aligned} &\uu \in L^\infty(0,T;\H_\sigma \cap \mathbf{H}^1(\Omega)), \quad \phi \in L^\infty(0,T; H^2(\Omega))\cap L^2(0,T; W^{2,p}(\Omega)),\\ &\partial_t \phi \in L^\infty(0,T;L^2(\Omega)) \cap L^2(0,T;H^1(\Omega)),\\ &\phi \in L^\infty(\Omega\times (0,T)) : |\phi(x,t)|<1 \ \text{a.e. in} \ \Omega\times(0,T),\end{aligned}$$ where $p\in (2,\infty)$. Moreover, $\partial_\n \phi=0$ on $\partial\Omega\times(0,\infty)$. 
- Assume that $ \uu_0 \in \H_\sigma\cap \mathbf{W}^{1,p}(\Omega)$, $p \in (2,\infty)$, $\phi_0\in H^2(\Omega) $ such that $F'(\phi_0)\in \L2$, $F''(\phi_0) \in L^1(\Omega)$, $\| \phi_0\|_{L^\infty(\Omega)}\leq 1$, $ |\overline{\phi}_0|<1$, $\partial_\n \phi_0=0$ on $\partial \Omega$, and in addition $\nabla \mu_0= \nabla ( -\Delta \phi_0+F'(\phi_0)) \in \mathbf{L}^2(\Omega)$. Then, there exists a global solution $(\uu,\phi)$ which satisfies the problem - almost everywhere in $\Omega \times (0,\infty)$ and, for all $T>0$, $$\begin{aligned} &\uu \in L^\infty(0,T;\H_\sigma \cap \mathbf{W}^{1,p}(\Omega)), \quad \phi \in L^\infty(0,T; W^{2,p}(\Omega)),\\ &\partial_t \phi \in L^\infty(0,T;H^1(\Omega)) \cap L^2(0,T;H^2(\Omega)),\\ &\phi \in L^\infty(\Omega\times (0,T)) : |\phi(x,t)|<1 \ \text{a.e. in} \ \Omega\times(0,T).\end{aligned}$$ In addition, for any $\sigma>0$, there exists $\delta=\delta(\sigma)>0$ such that $$-1+\delta \leq \phi(x,t) \leq 1-\delta, \quad \forall \, x \in \overline{\Omega}, \ t \geq \sigma.$$ To prove Theorem \[Th-EAC\], we first derive formal estimates leading to the required estimates of solutions. Then the existence results can be proved by a suitable approximation scheme with fixed point arguments and then passing to the limit, which is standard owing to uniform estimates obtained in the first step. Hence, here below we only focus on the *a priori* estimates and omit further details. Case 1 ------ Let us first consider initial datum $(\uu_0,\phi_0)$ such that $$\uu_0 \in \H_\sigma\cap \mathbf{H}^1(\Omega), \quad \phi_0\in H^2(\Omega), \quad \partial_\n \phi_0=0\ \ \text{on}\ \partial\Omega,$$ with $$\| \phi_0\|_{L^\infty(\Omega)}\leq 1, \quad |\overline{\phi}_0|<1 \quad \text{and} \quad F'(\phi_0)\in L^2(\Omega).$$ **Lower-order estimate.** As in the previous section, we have the conservation of mass $$\overline{\phi}(t)= \overline{\phi}_0, \quad \forall \, t \geq 0.$$ By the same argument for , we deduce the energy balance $$\label{EE2} \ddt E(\uu, \phi) + \|\partial_t \phi + \uu \cdot \nabla \phi\|_{L^2(\Omega)}^2=0.$$ Integrating the above relation on $[0,t]$, we find $$E(\uu(t),\phi(t))+ \int_0^t \| \partial_t \phi + \uu \cdot \nabla \phi\|_{\L2}^2 \, \d \tau = E(\uu_0,\phi_0), \quad \forall \, t \geq 0.$$ This implies that $$\label{E1} \uu \in L^\infty(0,T; \H_\sigma), \quad \phi\in L^\infty(0,T;H^1(\Omega)), \quad \partial_t \phi + \uu \cdot \nabla \phi \in L^2(0,T;L^2(\Omega)),$$ where the last property also implies $\mu-\overline{\mu}\in L^2(0,T;L^2(\Omega))$. In addition, it follows from the estimates and that $$\phi \in L^2(0,T;H^2(\Omega)),\ \ \mu\in L^2(0,T; L^2(\Omega))\ \ \text{and}\ \ F'(\phi)\in L^2(0,T;L^2(\Omega)).$$ The latter entails that $\phi \in L^\infty(\Omega \times (0,T))$ such that $|\phi(x,t)|<1$ almost everywhere in $\Omega\times(0,T)$. We remark that in comparison with the viscous case, it is not possible at this stage to prove that $\partial_t \phi\in L^2(\Omega\times (0,T))$. **Higher-order estimates.** In the two dimensional case, it is convenient to consider the equation for the vorticity $\omega= \frac{\partial u_2}{\partial x_1}-\frac{\partial u_1}{\partial x_2}$ that reads as follows $$\label{vort-eq} \partial_t \omega+ \uu \cdot \nabla \omega = \nabla \mu \cdot (\nabla \phi)^\perp,$$ where $\vv^\perp= (v_2,-v_1)$ for any $\vv=(v_1,v_2)$. 
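For the reader's convenience, we also sketch the formal derivation of the vorticity equation from the momentum equation, where the pressure absorbs all exact gradients. Using $\mu=-\Delta \phi+\Psi'(\phi)$, the capillary force can be rewritten as $$-\div (\nabla \phi \otimes \nabla \phi)= -\Delta \phi \, \nabla \phi - \frac12 \nabla |\nabla \phi|^2 = \mu \nabla \phi - \nabla \Big( \Psi(\phi)+ \frac12 |\nabla \phi|^2 \Big),$$ so that the momentum equation takes the form $\partial_t \uu + \uu \cdot \nabla \uu + \nabla \widetilde{P}= \mu \nabla \phi$ with $\widetilde{P}=P+\Psi(\phi)+\frac12 |\nabla \phi|^2$. Taking the two-dimensional curl, and observing that $\curl \nabla \widetilde{P}=0$ and $\curl (\uu\cdot \nabla \uu)= \uu \cdot \nabla \omega$ (thanks to $\div \uu=0$), we arrive at $\partial_t \omega+ \uu \cdot \nabla \omega = \curl (\mu \nabla \phi)= \nabla \mu \cdot (\nabla \phi)^\perp$. Let us also point out that, whenever this equation is tested with $|\omega|^{p-2}\omega$, $p\geq 2$, the transport term gives no contribution, since $$\int_{\Omega} (\uu\cdot \nabla \omega)\, |\omega|^{p-2}\omega \, \d x= \frac1p \int_{\Omega} \uu \cdot \nabla \big( |\omega|^p\big) \, \d x=0,$$ owing to $\div \uu=0$ and $\uu\cdot \n=0$ on $\partial \Omega$; this fact is used both here (with $p=2$) and in Case 2 below.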
Multiplying by $\omega$ and integrating over $\Omega$, we obtain $$\label{Vor1} \frac12 \ddt \| \omega\|_{\L2}^2= \int_{\Omega} \nabla \mu \cdot (\nabla \phi)^\perp \, \omega \, \d x.$$ On the other hand, differentiating $_3$ with respect to time, multiplying by $\partial_t \phi$ and integrating over $\Omega$, we find $$\begin{aligned} \frac12 \ddt \| \partial_t \phi\|_{\L2}^2+ \int_{\Omega} \partial_t \uu \cdot \nabla \phi \, \partial_t \phi \, \d x + \| \nabla \partial_t \phi\|_{\L2}^2 +\int_{\Omega} F''(\phi) |\partial_t \phi|^2 \, \d x= \theta_0 \| \partial_t \phi\|_{\L2}^2. \label{TestAC2}\end{aligned}$$ Here we have used the following equalities $$\int_{\Omega} \uu\cdot \nabla \partial_t \phi \, \partial_t \phi \, \d x= \int_{\Omega} \uu\cdot \nabla \Big( \frac12 |\partial_t \phi|^2 \Big) \, \d x=0 \quad \text{and}\quad \int_{\Omega} \partial_t \phi \, \d x=0.$$ We now define $$H(t) = \frac12 \| \omega\|_{\L2}^2+ \frac12 \| \partial_t \phi\|_{\L2}^2.$$ By adding together and , we infer from the convexity of $F$ (i.e. $F''>0$) that $$\label{In-EAC} \ddt H(t) + \| \nabla \partial_t \phi\|_{\L2}^2 \leq \int_{\Omega} \nabla \mu \cdot (\nabla \phi)^\perp \, \omega \, \d x - \int_{\Omega} \partial_t \uu \cdot \nabla \phi \, \partial_t \phi \, \d x + \theta_0 \| \partial_t \phi\|_{\L2}^2.$$ Before proceeding to control the terms on the right-hand side of , we rewrite the second one using the Euler equation. We first observe that $$\partial_t \uu = \P \big( \mu \nabla \phi - \uu \cdot \nabla \uu \big),$$ where $\mathbb{P}$ is the Leray projection operator. Thus, we write $$\begin{aligned} \int_{\Omega} \partial_t \uu \cdot \nabla \phi \, \partial_t \phi \, \d x &= \int_{\Omega} \P \big( \mu \nabla \phi - \uu \cdot \nabla \uu \big) \cdot \nabla \phi \, \partial_t \phi \, \d x \\ &= \int_{\Omega} \mu \nabla \phi \cdot \P \big( \nabla \phi \, \partial_t \phi\big) \, \d x - \int_{\Omega} (\uu \cdot \nabla \uu) \cdot \P \big( \nabla \phi \, \partial_t \phi\big) \, \d x \\ &= -\int_{\Omega} \mu \nabla \phi \cdot \P \big( \phi \,\nabla \partial_t \phi\big) \, \d x - \int_{\Omega} \div ( \uu \otimes \uu ) \cdot \P \big( \nabla \phi \, \partial_t \phi\big) \, \d x \\ &= -\int_{\Omega} \mu \nabla \phi \cdot \P \big( \phi \,\nabla \partial_t \phi\big) \, \d x + \int_{\Omega} (\uu \otimes \uu) : \nabla \P \big( \nabla \phi \, \partial_t \phi\big) \, \d x \\ &\quad - \int_{\partial \Omega} \uu\otimes \uu \P\big( \nabla \phi \, \partial_t \phi\big) \cdot \n \, \d \sigma \\ &= -\int_{\Omega} \mu \nabla \phi \cdot \P \big( \phi \,\nabla \partial_t \phi\big) \, \d x + \int_{\Omega} (\uu \otimes \uu) : \nabla \P \big( \nabla \phi \, \partial_t \phi\big) \, \d x \\ &\quad - \int_{\partial \Omega} (\uu \cdot \n) \big(\uu \cdot \P\big( \nabla \phi \, \partial_t \phi\big) \big) \, \d \sigma \\ &= -\int_{\Omega} \mu \nabla \phi \cdot \P \big( \phi \,\nabla \partial_t \phi\big) \, \d x + \int_{\Omega} (\uu \otimes \uu) : \nabla \P \big( \nabla \phi \, \partial_t \phi\big) \, \d x.\end{aligned}$$ Here we have used that $\P( \nabla v)=0$ for any $v \in H^1(\Omega)$, the relation $\div (S^t \vv)= S^t : \nabla \vv+ \div S \cdot \vv$ for any $d \times d$ tensor $S$ and vector $\vv$, and the no-normal flow condition $\uu \cdot \n =0$ at the boundary. 
As a consequence, we rewrite as follows $$\begin{aligned} \ddt H(t) + \| \nabla \partial_t \phi\|_{\L2}^2 & \leq \int_{\Omega} \nabla \mu \cdot (\nabla \phi)^\perp \, \omega \, \d x + \int_{\Omega} \mu \nabla \phi \cdot \P \big( \phi \, \nabla \partial_t \phi\big) \, \d x \notag \\ &\quad - \int_{\Omega} (\uu \otimes \uu) : \nabla \P \big( \nabla \phi \, \partial_t \phi\big) \, \d x + \theta_0 \| \partial_t \phi\|_{\L2}^2. \label{In-EAC2}\end{aligned}$$ We now turn to estimate the right-hand side of . By Hölder’s inequality, we have $$\label{I1} \int_{\Omega} \nabla \mu \cdot (\nabla \phi)^\perp \, \omega \, \d x \leq \| \nabla \mu\|_{\L2} \| \nabla \phi\|_{L^\infty(\Omega)} \| \omega\|_{\L2}.$$ By taking the gradient of $_3$, we observe that $$\| \nabla \mu\|_{\L2} \leq \| \nabla \partial_t \phi\|_{\L2}+ \|\nabla^2 \phi\, \uu \|_{\L2} + \| \nabla \uu \, \nabla \phi\|_{\L2}.$$ Recalling the elementary inequality $$\| \vv\|_{H^1(\Omega)}\leq C \Big(\| \vv\|_{\L2} + \| \div \vv\|_{\L2} + \| \curl \vv\|_{\L2}+ \| \vv\cdot \n\|_{H^\frac12(\partial \Omega)}\Big), \quad \forall \, \vv \in \mathbf{H}^1(\Omega),$$ and exploiting Lemma \[result1\] as well as , we find that $$\begin{aligned} \| \nabla \mu\|_{\L2} &\leq \| \nabla \partial_t \phi\|_{\L2} + C \| \uu\|_{H^1(\Omega)} \| \nabla^2 \phi\|_{\L2} \log^\frac12 \Big( C \frac{\| \nabla^2 \phi\|_{L^{p}(\Omega)}}{\| \nabla^2 \phi\|_{\L2}} \Big) \\ &\quad + \| \nabla \uu \|_{\L2} \|\nabla \phi\|_{L^\infty(\Omega)}\\ &\leq \| \nabla \partial_t \phi\|_{\L2} +C (1+ \| \omega\|_{\L2}) \| \nabla^2 \phi\|_{\L2} \log^\frac12 \Big( C \frac{\| \nabla^2 \phi\|_{L^{p}(\Omega)}}{\| \nabla^2 \phi\|_{\L2}} \Big)\\ &\quad + C (1+ \| \omega\|_{\L2}) \|\nabla \phi\|_{H^1(\Omega)}\log^\frac12 \Big( C \frac{\| \nabla \phi\|_{W^{1,p}(\Omega)}}{\| \nabla \phi\|_{H^1(\Omega)}} \Big),\end{aligned}$$ for some $p>2$. Using , we rewrite the above estimate as follows $$\begin{aligned} \| \nabla \mu\|_{\L2} &\leq \| \nabla \partial_t \phi\|_{\L2} +C (1+ \| \omega\|_{\L2}) \Big( \| \nabla \phi\|_{H^1(\Omega)} \log^\frac12 \big( C \|\nabla \phi\|_{W^{1,p}(\Omega)} \big) + 1 \Big).\end{aligned}$$ Then, using again the inequality , can be controlled as follows $$\begin{aligned} \int_{\Omega} \nabla \mu \cdot (\nabla \phi)^\perp \, \omega \, \d x &\leq \| \nabla \partial_t \phi\|_{\L2} \| \omega\|_{\L2} \|\nabla \phi\|_{H^1(\Omega)}\log^\frac12 \Big( C \frac{\| \nabla \phi\|_{W^{1,p}(\Omega)}}{\| \nabla \phi\|_{H^1(\Omega)}} \Big) \\ &\quad + C \| \omega\|_{\L2}(1+ \| \omega\|_{\L2}) \Big( \|\nabla \phi\|_{H^1(\Omega)} \log^\frac12 \big( C \| \nabla \phi\|_{W^{1,p}(\Omega)} \big) + 1 \Big)\\ &\quad \times \|\nabla \phi\|_{H^1(\Omega)}\log^\frac12 \Big( C \frac{\| \nabla \phi\|_{W^{1,p}(\Omega)}}{\| \nabla \phi\|_{H^1(\Omega)}} \Big)\\ &\leq \| \nabla \partial_t \phi\|_{\L2} \| \omega\|_{\L2} \Big( \|\nabla \phi\|_{H^1(\Omega)} \log^\frac12 \big( C \| \nabla \phi\|_{W^{1,p}(\Omega)}\big) +1\Big)\\ &\quad + C (1+\| \omega\|_{\L2}^2) \Big( \| \nabla \phi\|_{H^1(\Omega)}^2 \log \big( C \| \nabla \phi\|_{W^{1,p}(\Omega)} \big) + 1 \Big)\\ &\leq \frac16 \| \nabla \partial_t \phi\|_{\L2}^2+ C (1+\| \omega\|_{\L2}^2) \Big( \| \phi\|_{H^2(\Omega)}^2 \log \big( C \| \phi\|_{W^{2,p}(\Omega)} \big) + 1 \Big),\end{aligned}$$ for some $p>2$. 
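Let us also make explicit where the factor $C(1+\| \omega\|_{\L2})$ comes from: since $\div \uu=0$ in $\Omega$, $\curl \uu= \omega$ and $\uu \cdot \n=0$ on $\partial \Omega$, the inequality recalled above yields $$\| \uu\|_{H^1(\Omega)} \leq C \big( \| \uu\|_{\L2}+ \| \omega\|_{\L2}\big) \leq C \big( 1+ \| \omega\|_{\L2}\big),$$ where the $L^2$-norm of the velocity is uniformly bounded thanks to the lower-order estimates.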
Next, since $\phi$ is globally bounded, we have $$\begin{aligned} \int_{\Omega} \mu \nabla \phi \cdot \P \big( \phi \, \nabla \partial_t \phi\big) \, \d x &\leq C \| \mu\|_{\L2} \| \nabla \phi\|_{L^\infty(\Omega)} \| \phi \nabla \partial_t \phi \|_{\L2}\\ &\leq C \| \mu\|_{\L2} \|\nabla \phi\|_{H^1(\Omega)}\log^\frac12 \Big( C \frac{\| \nabla \phi\|_{W^{1,p}(\Omega)}}{\| \nabla \phi\|_{H^1(\Omega)}} \Big) \| \phi\|_{L^\infty(\Omega)} \| \nabla \partial_t \phi\|_{\L2}\\ &\leq C \| \mu\|_{\L2} \Big( \| \phi\|_{H^2(\Omega)} \log^\frac12 \big( C \| \phi\|_{W^{2,p}(\Omega)} \big) + 1 \Big) \| \nabla \partial_t \phi\|_{\L2},\end{aligned}$$ for some $p>2$. In order to estimate the $L^2$-norm of $\mu$, we notice that $$\begin{aligned} \| \mu-\overline{\mu}\|_{L^2(\Omega)} &\leq \| \partial_t \phi\|_{\L2}+ \| \uu\cdot \nabla \phi\|_{\L2}\\ &\leq \| \partial_t \phi\|_{\L2}+ \|\uu \|_{L^4(\Omega)} \| \nabla \phi\|_{L^4(\Omega)} \\ &\leq \| \partial_t \phi\|_{\L2}+ C \|\uu \|_{\L2}^\frac12 \| \u\|_{H^1(\Omega)}^\frac12 \| \nabla \phi\|_{\L2}^\frac12 \| \phi\|_{H^2(\Omega)}^\frac12 \\ &\leq \| \partial_t \phi\|_{\L2}+ C(1+\| \omega\|_{\L2})^\frac12 (1+\| \mu-\overline{\mu}\|_{\L2})^\frac12 \nonumber\\ &\leq \| \partial_t \phi\|_{\L2}+ C(1+\| \omega\|_{\L2}) +\frac12 \| \mu-\overline{\mu}\|_{L^2(\Omega)}.\end{aligned}$$ Here we have used the equation $_3$, the Ladyzhenskaya inequality, and the estimates , . Since $\|\mu\|_{\L2}\leq C(1+\| \mu-\overline{\mu}\|_{\L2})$ (recalling ), we then infer that $$\| \mu\|_{\L2}\leq C(1+ \| \partial_t \phi\|_{\L2}+ \| \omega\|_{\L2} ).$$ Thus, we can deduce that $$\begin{aligned} \int_{\Omega} &\mu \nabla \phi \cdot \P \big( \phi \, \nabla \partial_t \phi\big) \, \d x\\ &\leq \frac16 \| \nabla \partial_t \phi\|_{\L2}^2 + C \big(1+ \| \partial_t \phi\|_{\L2}^2 + \| \omega\|_{\L2}^2 \big)\Big( \| \phi\|_{H^2(\Omega)}^2 \log \big( C \| \phi\|_{W^{2,p}(\Omega)} \big) + 1 \Big).\end{aligned}$$ Recalling that $\P$ is a bounded operator from $\mathbf{H}^1(\Omega)$ to $\H_\sigma \cap \mathbf{H}^1(\Omega)$, and using the inequalities , , Poincaré’s inequality and Lemma \[result1\], we have $$\begin{aligned} &-\int_{\Omega} (\uu \otimes \uu) : \nabla \P \big( \nabla \phi \, \partial_t \phi\big) \, \d x \\ &\quad \leq \| \uu\|_{L^4(\Omega)}^2 \|\P (\nabla \phi \, \partial_t \phi) \|_{H^1(\Omega)}\\ &\quad \leq C \| \uu\|_{\L2} \| \uu\|_{H^1(\Omega)} \| \nabla \phi \, \partial_t \phi\|_{H^1(\Omega)}\\ &\quad \leq C (1+\| \omega\|_{\L2}) \Big( \| \nabla \phi \, \partial_t \phi\|_{\L2} + \| \nabla^2 \phi \,\partial_t \phi \|_{\L2}+ \| \nabla \phi \, \nabla \partial_t \phi\|_{\L2} \Big) \\ &\quad \leq C (1+\| \omega\|_{\L2}) \Big[ \| \nabla \phi\|_{L^\infty(\Omega)} \| \nabla \partial_t \phi\|_{\L2}\\ &\qquad + \| \nabla \partial_t \phi\|_{\L2} \| \nabla^2 \phi\|_{\L2} \log^\frac12 \Big( C \frac{\| \nabla^2 \phi\|_{L^{p}(\Omega)}}{\| \nabla^2 \phi\|_{\L2}} \Big) \Big] \\ &\quad \leq C (1+\| \omega\|_{\L2}) \| \nabla \partial_t \phi\|_{\L2} \Big( \|\nabla \phi\|_{H^1(\Omega)}\log^\frac12 \Big( C \frac{\| \nabla \phi\|_{W^{1,p}(\Omega)}}{\| \nabla \phi\|_{H^1(\Omega)}} \Big)\\ &\qquad + \| \nabla^2 \phi\|_{\L2} \log^\frac12 \Big( C \frac{\| \nabla^2 \phi\|_{L^{p}(\Omega)}}{\| \nabla^2 \phi\|_{\L2}} \Big) \Big)\\ &\quad \leq C (1+\| \omega\|_{\L2}) \| \nabla \partial_t \phi\|_{\L2} \Big( \| \phi\|_{H^2(\Omega)} \log^\frac12\big( C\| \phi\|_{W^{2,p}(\Omega)}\big)+1 \Big)\\ &\quad \leq \frac16 \| \nabla \partial_t \phi\|_{\L2}^2+ C\big(1+\|\omega \|_{\L2}^2\big) \Big( \| 
\phi\|_{H^2(\Omega)}^2 \log\big( C\| \phi\|_{W^{2,p}(\Omega)}\big)+1 \Big),\end{aligned}$$ for some $p>2$. Combining the above estimates together with , we arrive at the differential inequality $$\begin{aligned} \label{In-EAC3} \ddt H(t) + \frac12 \| \nabla \partial_t \phi\|_{\L2}^2 \leq C (1+H(t)) \Big( \| \phi\|_{H^2(\Omega)}^2 \log\big( C \| \phi\|_{W^{2,p}(\Omega)}\big)+1 \Big).\end{aligned}$$ In order to close the estimate, we are left to absorb the logarithmic term on the right-hand side of the above differential inequality. To this aim, we first multiply $\mu=-\Delta \phi+\Psi'(\phi)$ by $|F'(\phi)|^{p-2}F'(\phi)$, for some $p>2$, and integrate over $\Omega$. After integrating by parts and using the boundary condition for $\phi$, we obtain $$\int_{\Omega} (p-1)|F'(\phi)|^{p-2} F''(\phi) |\nabla \phi|^2 \, \d x+ \| F'(\phi)\|_{L^p(\Omega)}^p= \int_{\Omega} (\mu +\theta_0 \phi)|F'(\phi)|^{p-2}F'(\phi) \, \d x.$$ By Young’s inequality and the fact that $F''>0$, we deduce $$\| F'(\phi)\|_{L^p(\Omega)} \leq C(1+ \|\mu\|_{L^p(\Omega)} ).$$ Using a well-known elliptic regularity result, together with the above inequality and , we obtain that (cf. ) $$\| \phi\|_{W^{2,p}(\Omega)} \leq C (1+\| \mu\|_{L^p(\Omega)}).$$ On the other hand, we infer from equation $_3$ that $$\begin{aligned} \|\mu-\overline{\mu} \|_{L^p(\Omega)} \leq \| \partial_t \phi\|_{L^p(\Omega)} + \| \uu \cdot \nabla \phi\|_{L^p(\Omega)}.\end{aligned}$$ Then by Poincaré’s inequality and a Sobolev embedding theorem, we find $$\begin{aligned} \| \mu\|_{L^p(\Omega)} &\leq C \|\mu-\overline{\mu} \|_{L^p(\Omega)} +C |\overline{\mu}|\\ &\leq C \| \nabla \partial_t \phi\|_{\L2} +C \| \uu\|_{H^1(\Omega)} \| \phi\|_{H^2(\Omega)} + C(1+\| \mu-\overline{\mu}\|_{\L2})\\ &\leq C \| \nabla \partial_t \phi\|_{\L2} +C (1+\| \omega\|_{\L2}) (1+ \| \mu-\overline{\mu}\|_{\L2})\\ &\leq C \| \nabla \partial_t \phi\|_{\L2} +C (1+\| \omega\|_{\L2}) (1+ \| \partial_t \phi\|_{\L2}+ \|\omega\|_{\L2}).\end{aligned}$$ Thus, for $p>2$, we reach $$\| \phi\|_{W^{2,p}(\Omega)} \leq C ( 1+ \| \nabla \partial_t \phi\|_{\L2} + H(t) ),$$ which, in turn, allows us to rewrite as $$\label{In-EAC4} \ddt H(t) + \frac12 \| \nabla \partial_t \phi\|_{\L2}^2 \leq C (1+H(t)) \Big( \| \phi\|_{H^2(\Omega)}^2 \log\Big( C\big( 1+ \| \nabla \partial_t \phi\|_{\L2} + H(t)\big)\Big)+1 \Big).$$ We now observe that, for any $\varepsilon>0$, the following inequality holds $$x\log(C y)\leq \varepsilon y+ x\log\Big(\frac{ C x}{\varepsilon}\Big) \quad \forall \, x, y >0.$$ By using the above inequality with $x=1+H(t)$, $y=1+ \| \nabla \partial_t \phi\|_{\L2} +H(t)$ and $\varepsilon=1$, we deduce that $$\begin{aligned} & \ddt H(t) + \frac12 \| \nabla \partial_t \phi\|_{\L2}^2\nonumber\\ &\quad \leq \| \nabla \partial_t \phi\|_{\L2} \| \phi\|_{H^2(\Omega)}^2 + C ( 1+\| \phi\|_{H^2(\Omega)}^2 ) (1+H(t)) \log \big( C (1+H(t))\big).\end{aligned}$$ By Young’s inequality, we obtain $$\begin{aligned} \ddt H(t) + \frac14 \| \nabla \partial_t \phi\|_{\L2}^2 \leq \| \phi\|_{H^2(\Omega)}^4 + C ( 1+\| \phi\|_{H^2(\Omega)}^2 ) (1+H(t)) \log \big( C (1+H(t))\big).\end{aligned}$$ Recalling that $\| \phi\|_{H^2(\Omega)}^2 \leq C(1+H(t))$, we are finally led to the differential inequality $$\label{In-EAC5} \ddt H(t) + \frac14 \| \nabla \partial_t \phi\|_{\L2}^2 \leq C ( 1+\| \phi\|_{H^2(\Omega)}^2 ) (1+H(t)) \log \big( C (1+H(t))\big).$$ Since $\phi \in L^2(0,T;H^2(\Omega))$, an application of the generalized Gronwall lemma \[GL2\] yields the double exponential bound $$\begin{aligned} \sup_{t \in [0,T]} 
&\Big( \|\partial_t \phi (t)\|_{\L2}^2+ \|\omega(t) \|_{\L2}^2 \Big)\\ &\leq C \big(1+ \| \uu_0\|_{H^1(\Omega)}^2 \| \phi_0\|_{H^2(\Omega)}^2 +\| \phi_0\|_{H^2(\Omega)}^2+ \| \Psi'(\phi_0)\|_{\L2}^2+ \| \uu_0\|_{H^1(\Omega)}^2 \big)^{{e}^{\int_0^T 1+\| \phi(s)\|_{H^2(\Omega)}^2 \, \d s}},\end{aligned}$$ for some constant $C>0$. Here we have used that $$\| \partial_t \phi (0)\|_{\L2} \leq C \| \uu_0\|_{H^1(\Omega)} \| \phi_0\|_{H^2(\Omega)} +C \| \phi_0\|_{H^2(\Omega)}+C \| \Psi'(\phi_0)\|_{\L2}.$$ Hence, we get $$\label{Reg-E1} \partial_t \phi \in L^\infty(0,T;L^2(\Omega)) \cap L^2(0,T;H^1(\Omega)), \quad \omega \in L^\infty(0,T;L^2(\Omega)),$$ which, in turn, entail that $$\label{Reg-E2} \uu \in L^\infty(0,T;\mathbf{H}^1(\Omega)), \quad \phi \in L^\infty(0,T; H^2(\Omega))\cap L^2(0,T; W^{2,p}(\Omega)),$$ for any $p \in [2,\infty)$. Case 2 ------ We now consider an initial condition $(\uu_0,\phi_0)$ such that $$\uu_0 \in \H_\sigma\cap \mathbf{W}^{1,p}(\Omega) , \quad \phi_0\in H^2(\Omega),\quad\partial_\n \phi_0=0\ \ \text{on}\ \partial\Omega,$$ for $p \in (2,\infty)$, with $\| \phi_0\|_{L^\infty(\Omega)}\leq 1$, $|\overline{\phi}_0|<1$ and $$F'(\phi_0)\in \L2, \quad F''(\phi_0)\in L^1(\Omega), \quad \nabla \mu_0= \nabla ( -\Delta \phi_0+F'(\phi_0)) \in L^2(\Omega).$$ Thanks to the first part of Theorem \[Th-EAC\], we have a solution $(\uu,\phi)$ satisfying and . Moreover, repeating the same argument performed in Section \[S-STRONG\], we have (cf. ) $$\ddt \int_{\Omega} F''(\phi) \, \d x + \frac14 \int_{\Omega} F'''(\phi) F'(\phi) \, \d x \leq C,$$ for some positive constant $C$ only depending on $\Omega$ and the parameters of the system. Since $F''(\phi_0)\in L^1(\Omega)$, we learn, in particular, that (cf. ) $$\int_t^{t+1}\! \int_{\Omega} |F''(\phi)|^2 \log ( 1+F''(\phi)) \,\d x \d \tau \leq C, \quad \forall t \geq 0.$$ Multiplying by $|\omega|^{p-2}\omega$ ($p>2$) and integrating over $\Omega$, we obtain $$\frac{1}{p} \ddt \|\omega\|_{L^p(\Omega)}^p = \int_{\Omega} \nabla \mu \cdot (\nabla \phi)^\perp |\omega|^{p-2}\omega \, \d x.$$ By Hölder’s inequality, we easily get $$\frac{1}{p} \ddt \|\omega\|_{L^p(\Omega)}^p \leq \| \nabla \mu \cdot (\nabla \phi)^\perp\|_{L^p(\Omega)} \|\omega\|_{L^p(\Omega)}^{p-1},$$ which, in turn, implies $$\frac12 \ddt \|\omega\|_{L^p(\Omega)}^2 \leq \| \nabla \mu \cdot (\nabla \phi)^\perp\|_{L^p(\Omega)} \| \omega\|_{L^p(\Omega)}.$$ Next, differentiating $_3$ with respect time, then multiplying the resultant by $-\Delta \partial_t \phi$ and integrating over $\Omega$, we obtain $$\begin{aligned} & \ddt \frac12 \| \nabla \partial_t \phi\|_{\L2}^2+ \| \Delta \partial_t \phi\|_{\L2}^2 \notag \\ &= \theta_0 \| \nabla \partial_t \phi\|_{\L2}^2+ \int_{\Omega} F''(\phi) \partial_t \phi \Delta \partial_t \phi \, \d x+\int_{\Omega} (\partial_t \uu \cdot \nabla \phi) \Delta \partial_t \phi\, \d x + \int_{\Omega} (\uu \cdot \nabla \partial_t \phi) \Delta \partial_t \phi\, \d x.\end{aligned}$$ Here we have used the fact that $\overline{\Delta \partial_t \phi}=0$ since $\partial_\n \partial_t \phi=0$ on $\partial \Omega$. 
Collecting the above two estimates, we find that $$\begin{aligned} &\ddt \Big( \frac12 \|\omega\|_{L^p(\Omega)}^2+\frac12 \| \nabla \partial_t \phi\|_{\L2}^2 \Big) + \| \Delta \partial_t \phi\|_{\L2}^2 \\ &\quad \leq \| \nabla \mu \cdot (\nabla \phi)^\perp\|_{L^p(\Omega)} \| \omega\|_{L^p(\Omega)} + \theta_0 \|\nabla \partial_t \phi\|_{\L2}^2+ \int_{\Omega} F''(\phi) \partial_t \phi \Delta \partial_t \phi \, \d x\\ &\qquad +\int_{\Omega} (\partial_t \uu \cdot \nabla \phi) \Delta \partial_t \phi\, \d x + \int_{\Omega} (\uu \cdot \nabla \partial_t \phi) \Delta \partial_t \phi\, \d x.\end{aligned}$$ Notice that, by $_3$, we have the relation $\nabla \mu= \nabla \partial_t \phi + (\nabla \uu)^t \nabla \phi + (\uu\cdot \nabla ) \nabla \phi$. By exploiting this identity, we obtain $$\begin{aligned} &\| \nabla \mu \cdot (\nabla \phi)^\perp\|_{L^p(\Omega)} \| \omega\|_{L^p(\Omega)}\\ &\quad \leq \big( \|\nabla \partial_t \phi\|_{L^p(\Omega)} + \|(\nabla \uu)^t \nabla \phi\|_{L^p(\Omega)} + \| (\uu\cdot \nabla ) \nabla \phi\|_{L^p(\Omega)} \big) \|\nabla \phi\|_{L^\infty(\Omega)} \| \omega\|_{L^p(\Omega)}.\end{aligned}$$ Using the Gagliardo-Nirenberg inequality and the following inequality for divergence free vector fields satisfying the boundary condition $_1$ $$\label{u-v} \| \nabla \u\|_{L^p(\Omega)}\leq C(p) \| \omega\|_{L^p(\Omega)}, \quad p \in [2,\infty),$$ we deduce that $$\begin{aligned} &\| \nabla \mu \cdot (\nabla \phi)^\perp\|_{L^p(\Omega)} \| \omega\|_{L^p(\Omega)}\\ &\leq C \| \nabla \partial_t \phi\|_{\L2}^\frac{2}{p} \| \Delta \partial_t \phi\|_{\L2}^{1-\frac{2}{p}} \| \nabla \phi\|_{L^\infty(\Omega)} \| \omega\|_{L^p(\Omega)} + C \| \nabla \uu\|_{L^p(\Omega)} \| \nabla \phi\|_{L^\infty(\Omega)}^2 \| \omega\|_{L^p(\Omega)}\\ &\quad + \| \uu\|_{L^\infty(\Omega)} \| \phi\|_{W^{2,p}(\Omega)} \| \nabla \phi\|_{L^\infty(\Omega)} \| \omega\|_{L^p(\Omega)}\\ &\leq \frac18 \| \Delta \partial_t \phi\|_{\L2}^2+ C \| \nabla \phi\|_{L^\infty(\Omega)}^{\frac{2p}{p+2}} \| \nabla \partial_t \phi\|_{\L2}^{\frac{4}{p+2}} \| \omega\|_{L^p(\Omega)}^\frac{2p}{p+2}\\ &\quad + C\big( \| \nabla \phi\|_{L^\infty(\Omega)}^2 + \| \phi\|_{W^{2,p}(\Omega)} \| \nabla \phi\|_{L^\infty(\Omega)} \big) \big(1+ \| \omega\|_{L^p(\Omega)}^2\big).\end{aligned}$$ Next, using $_1$ together with the bounds , we have $$\begin{aligned} &\int_{\Omega} \partial_t \uu \cdot \nabla \phi \Delta \partial_t \phi\, \d x\\ &\quad \leq \int_{\Omega} \mathbb{P} \big( -\uu \cdot \nabla \uu -\Delta \phi \nabla \phi \big) \cdot \nabla \phi \Delta \partial_t \phi \, \d x\\ &\quad \leq C \| \mathbb{P}\big( \uu \cdot \nabla \uu\big)\|_{\L2} \|\nabla \phi \Delta \partial_t \phi \|_{\L2} + C \| \mathbb{P}\big( \Delta \phi \nabla \phi\big)\|_{\L2} \| \nabla \phi \Delta \partial_t \phi\|_{\L2}\\ &\quad \leq \frac{1}{8} \| \Delta\partial_t \phi \|_{\L2}^2 +C \| \uu\|_{L^\infty(\Omega)}^2 \| \nabla \uu\|_{\L2}^2 \| \nabla \phi\|_{L^\infty(\Omega)}^2+ C\| \Delta \phi \|_{\L2}^2 \| \nabla \phi\|_{L^\infty(\Omega)}^4\\ &\quad \leq \frac{1}{8} \| \Delta\partial_t \phi \|_{\L2}^2 +C (1+\| \omega\|_{L^p(\Omega)}^2) \| \nabla \phi\|_{L^\infty(\Omega)}^2+C\| \nabla \phi\|_{L^\infty(\Omega)}^4.\end{aligned}$$ Arguing as for and , we have $$\begin{aligned} & \int_{\Omega} F''(\phi) \partial_t \phi \Delta \partial_t \phi \, \d x \leq \frac18 \| \Delta \partial_t \phi \|_{\L2}^2 +C \| F''(\phi)\|_{\L2}^2 \log \big( C\| F''(\phi)\|_{\L2} \big)\| \nabla \partial_t \phi\|_{\L2}^2,\nonumber\end{aligned}$$ $$\begin{aligned} &\int_{\Omega} (\uu \cdot \nabla 
\partial_t \phi) \Delta \partial_t \phi\, \d x \leq \frac{1}{8} \| \Delta \partial_t \phi\|_{\L2}^2 + C \| \nabla \partial_t \phi\|_{\L2}^2. \nonumber\end{aligned}$$ Collecting the above estimates and using Young’s inequality, we arrive at the differential inequality $$\begin{aligned} &\ddt \Big( \|\omega\|_{L^p(\Omega)}^2+ \| \nabla \partial_t \phi\|_{\L2}^2 \Big) + \| \Delta \partial_t \phi\|_{\L2}^2 \leq R_1(t) \Big( \|\omega\|_{L^p(\Omega)}^2+ \| \nabla \partial_t \phi\|_{\L2}^2 \Big) +R_2(t),\end{aligned}$$ where $$R_1= C \Big( 1+ \| \nabla \phi\|_{L^\infty(\Omega)}^2 + \| F''(\phi)\|_{\L2}^2 \log \big( C\| F''(\phi)\|_{\L2}\big) \Big)$$ and $$R_2 = C \| \phi\|_{W^{2,p}(\Omega)}^2 + C \big(\| \nabla \phi\|_{L^\infty(\Omega)}^4+1\big).$$ By using , and recalling , we see that $$\begin{aligned} \| \nabla \phi\|_{L^\infty(\Omega)}^4 \leq C \| \nabla^2 \phi\|_{\L2}^4 \log^2 \big( \| \nabla^2 \phi\|_{L^p(\Omega)} \big) +1 \leq C \log^2 \big( \| \phi\|_{W^{2,p}(\Omega)} \big) +1,\end{aligned}$$ for $p>2$. In light of , we infer that both $R_1$ and $R_2$ belong to $L^1(0,T)$. Thanks to Gronwall’s lemma, we obtain $$\|\omega(t)\|_{L^p(\Omega)}^2+ \| \nabla \partial_t \phi(t)\|_{\L2}^2 \leq \Big( \|\omega(0)\|_{L^p(\Omega)}^2+ \| \nabla \partial_t \phi(0)\|_{\L2}^2 + \int_0^T R_2(\tau)\, \d \tau\Big) {e}^{\int_0^T R_1(\tau)\, \d \tau},$$ for any $t\in [0,T]$. Since $\| \omega (0)\|_{L^p(\Omega)}\leq \| \nabla \uu_0\|_{L^p(\Omega)}$ and $$\begin{aligned} \| \nabla \partial_t \phi(0)\|_{\L2} &\leq \| (\nabla \uu_0)^t \nabla \phi_0\|_{\L2} + \| (\uu_0\cdot \nabla) \nabla \phi_0\|_{\L2} + \|\nabla \mu_0 \|_{\L2}\\ &\leq C \| \nabla \uu_0\|_{L^p(\Omega)} \|\phi_0 \|_{H^2(\Omega)}+ C \| \uu_0\|_{L^\infty(\Omega)} \|\phi_0 \|_{H^2(\Omega)}+\|\nabla \mu_0 \|_{\L2}\\ &\leq C\| \uu_0\|_{W^{1,p}(\Omega)} \|\phi_0 \|_{H^2(\Omega)}+\|\nabla \mu_0 \|_{\L2},\end{aligned}$$ we deduce that for any $p\in (2,\infty)$ $$\omega \in L^\infty(0,T;L^p(\Omega)),\quad \partial_t \phi \in L^\infty(0,T; H^1(\Omega))\cap L^2(0,T;H^2(\Omega)).$$ This, in turn, implies that $$\uu \in L^\infty(0,T;W^{1,p}(\Omega)), \quad \phi \in L^\infty(0,T;W^{2,p}(\Omega)).$$ As a consequence, the above estimates yield that $$\widetilde{\mu}=-\Delta \phi+F'(\phi) \in L^2(0,T;L^\infty(\Omega)).$$ The rest of the proof is the same as that of Theorem \[Proreg-D\] with the choice $\sigma>0$. The proof of Theorem \[Th-EAC\] is complete. Conclusions and Future Developments =================================== In this paper we present a mathematical analysis of some diffuse interface models that describe the evolution of incompressible binary mixtures having (possibly) different densities and viscosities. We focus on the mass-conserving Allen-Cahn relaxation of the transport equation with the physically relevant Flory-Huggins potential. We show the existence of global weak solutions in three dimensions and of global strong solutions in two dimensions. For the latter, we discuss additional properties, such as uniqueness, regularity and the separation property. On the other hand, several still unsolved questions concern the analysis of the complex fluid, Navier-Stokes-Allen-Cahn and Euler-Allen-Cahn systems in the three-dimensional case, which will be the subject of future investigations. 
We conclude by mentioning some interesting open problems related to the results proved in this work: $\bullet$ An important possible development of this work is to show the existence of global solutions to the complex fluids system - originating from small perturbations of some particular equilibrium states. We mention that some remarkable results in this direction have been achieved in [@LLZ2005; @LZ2008; @RWXZ2014] (see also [@LIN2012] and the references therein). In addition, it would be interesting to study the global existence of weak solutions as in [@HL2016] and to generalize Theorem \[CF-T\] to the case with zero viscosity (cf. [@LLZ2005 Theorem 3.1]). $\bullet$ Two possible improvements of this work concern the Navier-Stokes-Allen-Cahn system -. The first question is whether the entropy estimates in Theorem \[strong-D\] can be achieved for strong solutions with small initial data, but without restrictions on the parameters of the system, or even without any condition on the initial data. The second issue is to show the uniqueness of strong solutions given by Theorem \[strong-D\]-(1), without relying on the entropy estimates in Theorem \[strong-D\]-(2). Also, we mention the possibility of considering moving contact lines for the Navier-Stokes-Allen-Cahn system (see [@MCYZ2017] for numerical studies). $\bullet$ Interesting open issues regarding the Euler-Allen-Cahn system - are the existence and the uniqueness of solutions corresponding to an initial datum $\omega_0 \in L^\infty(\Omega)$ as well as the study of the inviscid limit on arbitrary time intervals (cf. [@ZGH2011] for short times). Acknowledgments {#acknowledgments .unnumbered} =============== Part of this work was carried out during the first and second authors’ visit to the School of Mathematical Sciences of Fudan University, whose hospitality is gratefully acknowledged. M. Grasselli is a member of the Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). H. Wu is partially supported by NNSFC grant No. 11631011 and the Shanghai Center for Mathematical Sciences at Fudan University. Compliance with Ethical Standards {#compliance-with-ethical-standards .unnumbered} ================================= The authors declare that they have no conflict of interest. The authors also confirm that the manuscript has not been submitted to more than one journal for simultaneous consideration and the manuscript has not been published previously (partly or in full). Stokes System with Variable Viscosity {#App-0} ===================================== We prove an elliptic regularity result for the following Stokes problem with concentration-dependent viscosity $$\label{Stokes} \begin{cases} -\div (\nu(\phi)D \uu)+\nabla P=\f, \quad &\text{in } \Omega,\\ \div \uu=0, \quad &\text{in } \Omega,\\ \uu=0, \ \quad &\text{on } \partial \Omega. \end{cases}$$ This result is a variant of [@A2009 Lemma 4]. \[Stokes-e\] Let $\Omega$ be a bounded domain of class $\mathcal{C}^2$ in $\mathbb{R}^d$, $d=2,3$. Assume that $\nu\in W^{1,\infty}(\mathbb{R})$ is such that $0<\nu_\ast\leq \nu(\cdot)\leq \nu^\ast$ in $\mathbb{R}$, $\phi \in W^{1,r}(\Omega)$ with $r>d$, and $\f\in \mathbf{L}^p(\Omega)$ with $1<p<\infty$ if $d=2$ and $\frac65\leq p<\infty$ if $d=3$. Consider the (unique) weak solution $\uu\in \V_\sigma$ to such that $(\nu(\phi) D\uu,\nabla \ww)= (\f,\ww)$ for all $\ww\in \V_\sigma$. 
We have: - If $\frac{1}{p}=\frac{1}{2}+\frac{1}{r}$ then there exists $C=C(p,\Omega)>0$ such that $$\label{regstokes} \| \u\|_{\mathbf{W}^{2,p}(\Omega)} \leq C\| \f\|_{L^p(\Omega)}+ C \| \nabla \phi\|_{L^r(\Omega)} \|D \u \|_{L^2(\Omega)}.$$ - Suppose that $\u \in \mathbf{V}_\sigma\cap \mathbf{W}^{1,s}(\Omega)$ with $s> 2$ such that $$\frac{1}{p}=\frac{1}{s}+\frac{1}{r}, \quad r\geq \frac{2s}{s-1}.$$ Then, there exists $C=C(s,p,\Omega)>0$ such that $$\label{regstokes2} \| \u\|_{\mathbf{W}^{2,p}(\Omega)} \leq C\| \f\|_{L^p(\Omega)}+ C \| \nabla \phi\|_{L^r(\Omega)} \|D \u \|_{L^s(\Omega)}.$$ We denote by $B$ the Bogovskii operator. We recall that $B: L^{q}_{(0)}(\Omega) \rightarrow W^{1,q}_0(\Omega)$, $1<q<\infty$, such that $\div B f=f$. It is well-known (see, e.g., [@Galdi Theorem III.3.1]) that, for all $1<q<\infty$, $$\label{Bog-W1p} \| Bf \|_{W^{1,q}(\Omega)}\leq C \| f\|_{L^q(\Omega)},$$ In addition, by [@Galdi Theorem III.3.4], if $f = \div \, \g$, where $\g\in \mathbf{L}^q(\Omega)$, $1<q<\infty$, is such that $\div \g \in L^q(\Omega)$, and $\g\cdot \n=0$ on $\partial \Omega$, we have $$\label{Bog-L2} \| B f \|_{L^q(\Omega)}\leq C \| \g\|_{L^q(\Omega)}.$$ For the sake of simplicity, we start proving the second part of Theorem \[Stokes-e\], and then we show the first part. **Case 2**. Let us take $\vv \in \C^\infty_{0,\sigma}(\Omega)$. As in [@A2009 Lemma 4], we define $\ww=\frac{\vv}{\nu(\phi)}- B[\div \big(\frac{\vv}{\nu(\phi)}\big)]$. We observe that $\ww \in \mathbf{W}_0^{1,r}(\Omega)$ with $\div \ww=0$. In particular, $\ww \in \V_\sigma$. Taking $\ww$ in the weak formulation, we obtain $$\begin{aligned} (D \uu,\nabla \vv)&=\Big(\f, \frac{\vv}{\nu(\phi)}- B\Big[\div \Big(\frac{\vv}{\nu(\phi)}\Big)\Big]\Big) \\ &\quad - \Big( \nu(\phi) D \uu, \vv \otimes \nabla \Big(\frac{1}{\nu(\phi)}\Big) \Big)+ \Big( \nu(\phi) D \uu, \nabla B \Big[ \div \Big(\frac{\vv}{\nu(\phi)}\Big)\Big] \Big)\end{aligned}$$ Since $\frac{2s}{s-1}\leq r$, we deduce that $r\geq p'$ $(\frac{1}{p'}=1-\frac{1}{p})$. This implies that $\div \big( \frac{\vv}{\nu(\phi)} \big) \in L^{p'}(\Omega)$. 
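Before estimating the three terms on the right-hand side, let us briefly record why $\ww$ is an admissible test function and why $B$ can indeed be applied: since $\div \circ B$ is the identity on $L^{q}_{(0)}(\Omega)$, we have $$\div \ww = \div \Big( \frac{\vv}{\nu(\phi)}\Big) - \div B\Big[\div \Big(\frac{\vv}{\nu(\phi)}\Big)\Big]=0, \qquad \int_{\Omega} \div \Big( \frac{\vv}{\nu(\phi)}\Big) \, \d x= \int_{\partial \Omega} \frac{\vv}{\nu(\phi)}\cdot \n \, \d \sigma=0,$$ the latter because $\vv \in \C^\infty_{0,\sigma}(\Omega)$ vanishes near $\partial \Omega$, so that $\div \big( \frac{\vv}{\nu(\phi)}\big)$ has zero mean and belongs to the domain of $B$.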
By using the assumptions on $\nu$ and the estimate with $q=p'$, we find $$\begin{aligned} \Big| \Big(\f, \frac{\vv}{\nu(\phi)}- B\Big[\div \Big(\frac{\vv}{\nu(\phi)}\Big)\Big]\Big)\Big| &\leq\| \f\|_{L^p(\Omega)} \Big( \frac{1}{\nu_\ast} \| \vv\|_{L^{p'}(\Omega)}+ \Big\|B\Big[\div \Big(\frac{\vv}{\nu(\phi)}\Big)\Big]\Big\|_{L^{p'}(\Omega)} \Big)\\ &\leq C \| \f\|_{L^p(\Omega)} \Big( \frac{1}{\nu_\ast} \| \vv\|_{L^{p'}(\Omega)}+ \Big\| \frac{\vv}{\nu(\phi)}\Big\|_{L^{p'}(\Omega)}\Big)\\ &\leq C \| \f\|_{L^p(\Omega)} \| \vv\|_{L^{p'}(\Omega)}.\end{aligned}$$ Also, we have $$\begin{aligned} \Big| \Big( \nu(\phi) D \uu, \vv \otimes \nabla \Big(\frac{1}{\nu(\phi)}\Big) \Big)\Big| &=\Big|\Big(D \uu, \frac{\nu'(\phi)}{\nu^2(\phi)} \vv \otimes \nabla \phi \Big)\Big|\\ &\leq C \| D \uu\|_{L^s(\Omega)} \| \nabla \phi\|_{L^r(\Omega)} \| \vv\|_{L^{p'}(\Omega)}.\end{aligned}$$ Recalling that $\div \vv=0$ and $r>s'$, by using we obtain $$\begin{aligned} \Big| \Big( \nu(\phi) D \uu, \nabla B \Big[ \div \Big( \frac{\vv}{\nu(\phi)} \Big) \Big] \Big)\Big| &\leq \| D \uu\|_{L^s(\Omega)} \Big\| \nabla B \Big[ \nabla \Big( \frac{1}{\nu(\phi)} \Big) \cdot \vv\Big] \Big\|_{L^{s'}(\Omega)}\\ &\leq C \| D \uu\|_{L^s(\Omega)} \Big\| \nabla \Big( \frac{1}{\nu(\phi)} \Big) \cdot \vv \Big\|_{L^{s'}(\Omega)}\\ &\leq C \| D \uu\|_{L^s(\Omega)} \| \nabla \phi\|_{L^r(\Omega)} \| \vv\|_{L^{p'}(\Omega)}.\end{aligned}$$ Therefore, by the Riesz representation theorem and a density argument, we find $$(D \uu, \nabla \vv)= (\widetilde{\f},\vv) \quad \forall \, \vv \in \V_\sigma,$$ where $$\| \widetilde{\f}\|_{L^p(\Omega)}\leq C \| \f\|_{L^p(\Omega)}+ C\| D \uu\|_{L^s(\Omega)} \| \nabla \phi\|_{L^r(\Omega)},$$ for some $C$ depending on $s, p$ and $\Omega$. By the regularity of the Stokes operator (see, e.g., [@Galdi Theorem IV.6.1]), the claim easily follows. **Case 1**. We consider $\phi_n \in C_c^\infty(\overline{\Omega})$ such that $\phi_n \rightarrow \phi$ in $W^{1,r}(\Omega)$ as $n\rightarrow \infty$. For any $n \in \mathbb{N}$, we define $\uu_n$ as the solution to $$(\nu(\phi_n) D \uu_n, \nabla \ww)= (\f,\ww) \quad \forall \, \ww \in \V_\sigma.$$ Since $\nu(\cdot)\geq \nu_\ast>0$, by taking $\ww=\uu_n$, it is easily seen that $\lbrace \uu_n\rbrace_{n\in \mathbb{N}}$ is bounded in $\V_\sigma$ independently of $n$. In addition, recalling that $W^{1,r}(\Omega)\hookrightarrow L^\infty(\Omega)$, we have $\nu(\phi_n)\rightarrow \nu(\phi)$ in $L^\infty(\Omega)$. By uniqueness of the weak solution $\uu$ to , we deduce that $\uu_n \rightharpoonup \uu$ weakly in $\V_\sigma$. Let us take $\ww=\frac{\vv}{\nu(\phi_n)}- B[\div \big(\frac{\vv}{\nu(\phi_n)}\big)]$ with $\vv \in \C^\infty_{0,\sigma}(\Omega)$. Then we find $$\begin{aligned} (D \uu_n,\nabla \vv)&=\Big(\f, \frac{\vv}{\nu(\phi_n)}- B\Big[\div \Big(\frac{\vv}{\nu(\phi_n)}\Big)\Big]\Big) \\ &\quad - \Big( \nu(\phi_n) D \uu_n, \vv \otimes \nabla \Big(\frac{1}{\nu(\phi_n)}\Big) \Big)+ \Big( \nu(\phi_n) D \uu_n, \nabla B \Big[ \div \Big(\frac{\vv}{\nu(\phi_n)}\Big)\Big] \Big)\end{aligned}$$ Note that, by construction, $\frac{\vv}{\nu(\phi_n)} \in W^{1,q}(\Omega)$ for all $q \in [1,\infty]$. Therefore, by repeating the same computations carried out above with $s=2$, we arrive at $$(D \uu_n, \nabla \vv)= (\widetilde{\f},\vv) \quad \forall \, \vv \in \V_\sigma,$$ where $$\| \widetilde{\f}\|_{L^p(\Omega)}\leq C \| \f\|_{L^p(\Omega)}+ C\| D \uu_n\|_{L^2(\Omega)} \| \nabla \phi_n\|_{L^r(\Omega)},$$ for some $C$ depending on $p$ and $\Omega$. 
By the regularity theory of the Stokes operator, we infer $$\begin{aligned} \| \uu_n\|_{W^{2,p}(\Omega)}\leq C \| \f\|_{L^p(\Omega)}+ C\| D \uu_n\|_{L^2(\Omega)} \| \nabla \phi_n\|_{L^r(\Omega)}.\end{aligned}$$ Since $\lbrace \uu_n\rbrace_{n\in \mathbb{N}}$ is bounded in $\V_\sigma$ and $\phi_n \rightarrow \phi$ in $W^{1,r}(\Omega)$, $\uu_n$ is bounded in $\W^{2,p}(\Omega)$ independently of $n$. By the choice of the parameters $r>d$ and $\frac{1}{p}=\frac12 +\frac{1}{r}$, $\W^{2,p}(\Omega)\cap \V_\sigma$ is compactly embedded in $\V_\sigma$. In particular, $\| D \uu_n\|_{L^2(\Omega)} \rightarrow \| D \uu\|_{L^2(\Omega)}$ as $n\rightarrow \infty$. As a consequence, by the lower semi-continuity of the norm with respect to the weak topology, the conclusion follows. The proof is complete. Some Lemmas on ODE Inequalities {#App} =============================== For convenience of the readers, we collect some useful results concerning ODE inequalities that have been used in this paper. First, we report the Osgood lemma. \[Osgood\] Let $f$ be a measurable function from $[0,T]$ to $[0,a]$, $g \in L^1(0,T)$, and $W$ a continuous and nondecreasing function from $[0,a]$ to $\mathbb{R}^+$. Assume that, for some $ c \geq 0$, we have $$f(t)\leq c+ \int_0^t g(s) W(f(s)) \, \d s, \quad \text{for a.e.}\, t\in [0,T].$$ - If $c>0$, then for almost every $t \in [0,T]$ $$-\mathcal{M}(f(t))+\mathcal{M}(c)\leq \int_0^T g(s)\, \d s, \quad \text{where} \quad \mathcal{M}(s)=\int_s^a \frac{1}{W(s)} \, \d s.$$ - If $c=0$ and $\int_0^a \frac{1}{W(s)} \, \d s= \infty$, then $f(t)=0$ for almost every $t\in [0,T]$. Next, we report two generalizations of the classical Gronwall lemma and of the uniform Gronwall lemma. \[GL2\] Let $f$ be a positive absolutely continuous function on $[0,T]$ and $g$, $h$ be two summable functions on $[0,T]$ that satisfy the differential inequality $$\ddt f(t)\leq g(t)f(t)\ln\big( e+ f(t)\big)+h(t),$$ for almost every $t\in [0,T]$. Then, we have $$f(t)\leq \big( e+f(0)\big)^{e^{\int_0^t g(\tau)\, \d \tau}} e^ { \int_0^t e^{\int_\tau^t g(s)\, \d s} h(\tau) \, \d \tau}, \quad \forall \, t \in [0,T].$$ \[UGL2\] Let $f$ be an absolutely continuous positive function on $[0,\infty)$ and $g$, $h$ be two positive locally summable functions on $[0,\infty)$ which satisfy the differential inequality $$\ddt f(t) \leq g(t)f(t)\ln\big( e+ f(t)\big) +h(t),$$ for almost every $t\geq 0$, and the uniform bounds $$\int_t^{t+r} f(\tau)\, \d \tau\leq a_1, \quad \int_t^{t+r} g(\tau)\, \d \tau \leq a_2, \quad \int_t^{t+r} h(\tau)\, \d \tau \leq a_3, \quad \forall \, t \geq 0,$$ for some positive constants $r$, $a_1$, $a_2$, $a_3$. Then, we have $$f(t)\leq e^{\big(\frac{r+a_1}{r}+a_3\big)e^{a_2}}, \quad \forall \, t \geq r.$$ [99]{} , [On generalized solutions of two-phase flows for viscous incompressible fluids]{}, [Interfaces Free Bound.]{} , [On a diffuse interface model for two-phase flows of viscous, incompressible fluids with matched densities]{}, [Arch. Ration. Mech. Anal.]{} , [Existence of weak solutions for a diffuse interface model for two-phase flows of incompressible fluids with different densities]{}, [J. Math. Fluid Mech.]{} , [Thermodynamically consistent, frame indifferent diffuse interface models for incompressible two-phase flows with different densities]{}, [Math. Models Methods Appl. Sci.]{} , [Sharp interface limit for a Stokes/Allen-Cahn system]{}, [Arch. Ration. Mech. Anal.]{} , [Liquid-liquid phase separation in disease]{}, [Ann. Rev. 
Genet.]{} , [A microscopic theory for antiphase boundary motion and its application to antiphase domain coarsening]{}, [Acta Metall.]{} , [Diffuse-interface methods in fluid mechanics]{}, [Annu. Rev. Fluid Mech.]{} , [A generalization of the Navier-Stokes equation to two-phase flows]{}, [J. Physics D (Applied Physics)]{} , [Nonhomogeneous Cahn-Hilliard fluids]{}, [Ann. Inst. H. Poincaré Anal. Non Linéaire]{} , [Polymer physics of intracellular phase transitions]{}, [Nat. Phys.]{} , [Volume-preserving mean curvature flow as a limit of nonlocal Ginzburg-Landau equation]{}, [SIAM J. Math. Anal.]{} , [An $L^\infty$ bound for solutions of the Cahn-Hilliard equation]{}, [Arch. Ration. Mech. Anal.]{} , [Free energy of a nonuniform system. I. interfacial free energy]{}, [J. Chem. Phys.]{} . , [On spinodal decomposition]{}, [Acta Metall.]{} , [Global regularity for the 2D MHD equations with mixed partial dissipation and magnetic diffusion]{}, [Adv. Math.]{} , [A level set formulation of Eulerian interface capturing methods for incompressible fluid flows]{}, [J. Comput. Phys.]{} , [Mass conserving Allen-Cahn equation and volume preserving mean curvature flow]{}, [Interfaces Free Bound.]{} , [Classical solvability of the problem of the motion of two viscous incompressible fluids]{}, [(Russian) Algebra i Analiz]{} Translation in St. Petersburg Math. J. **7** (1996), 755–786. , [Global solutions for a coupled compressible Navier-Stokes/Allen- Cahn system in 1D]{}, [J. Math. Fluid Mech.]{} , [The Cahn-Hilliard model for the kinetics of phase separation]{}, in “Mathematical Models for Phase Change Problems” (ed. J.F. Rodrigues), Internat. Ser. Numer. Math. **88**, 35–73, 1989, Birkhäuser Verlag, Basel. , [On the Cahn-Hilliard equation]{}, [Arch. Ration. Mech. Anal.]{} , [Regularity criteria for Navier-Stokes-Allen-Cahn and related systems]{}, [Front. Math. China]{} , [A energetic variational formulation with phase field methods for interfacial dynamics of complex fluids: advantages and challenges]{}, , [Analysis of a phase-field model for two-phase compressible fluids]{}, [Math. Models Methods Appl. Sci.]{} , [Some theoretical results concerning non-Newtonian fluids of the Oldroyd kind]{}, [Ann. Scuola Norm. Sup. Pisa Cl. Sci.]{} , [Phase-field and Korteweg-type models for the time-dependent flow of compressible two-phase fluids]{}, [Arch. Ration. Mech. Anal.]{} , [On an inviscid model for incompressible two-phase flows with nonlocal interaction]{}, [J. Math. Fluid. Mech.]{} , [Longtime behavior for a model of homogeneous incompressible two-phase flows]{}, [Discrete Contin. Dyn. Syst.]{} , [Trajectory attractors for binary fluid mixtures in 3D]{}, [Chinese Ann. Math. Ser. B]{} , [An Introduction to the Mathematical Theory of the Navier–Stokes Equations]{}, , [Variational modeling and complex fluids]{}, [Handbook of mathematical analysis in mechanics of viscous fluids]{}, , [The Cahn-Hilliard-Hele-Shaw system with singular potential]{}, [Ann. Inst. H. Poincaré Anal. Non Linéaire]{} , [Uniqueness and Regularity for the Navier-Stokes-Cahn-Hilliard system]{}, [SIAM J. Math. Anal.]{} , [The volume-preserving motion by mean curvature as an asymptotic limit of reaction-diffusion equations]{}, [Quart. Appl. Math.]{} , [Computational phase-field modeling]{}, , [Two-phase incompressible flows with variable density: an energetic variational approach]{}, [Discrete Contin. Dyn. Syst.]{} , [On the development and generalizations of Cahn–Hilliard equations within a thermodynamic framework]{}, [Z. Angew. Math. 
Phys.]{} , [On the development and generalizations of Allen–Cahn and Stefan equations within a thermodynamic framework]{}, [Z. Angew. Math. Phys.]{} , [Weak-strong uniqueness for Allen-Cahn/Navier-Stokes system]{}, [Czechoslovak Math. J.]{} , [Global solutions of two-dimensional incompressible viscoelastic flows with discontinuous initial data]{}, [Comm. Pure Appl. Math.]{}, , [Liquid-liquid phase separation in biology]{}, [Annu. Rev. Cell. Dev. Biol.]{} , [Weak solutions of an initial boundary value problem for an incompressible viscous fluid with nonnegative density]{}, [SIAM J. Math. Anal.]{}, , [Strong solutions of the Navier-Stokes equations for a compressible fluid of Allen-Cahn type]{}, [Arch. Rational Mech. Anal.]{} , [The unique solvability of an initial-boundary value problem for viscous incompressible inhomogeneous fluids]{}, [Zap. Naučn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI)]{} , [Course of Theoretical Physics. Vol. 5: Statistical physics]{}, , [Blow-up criterion for an incompressible Navier-Stokes/Allen-Cahn system with different densities]{}, [Discrete Contin. Dyn. Syst. Ser. B]{} , [Strong solutions for an incompressible Navier-Stokes/Allen-Cahn system with different densities]{}, [Z. Angew. Math. Phys.]{} , [On some questions in boundary value problems of mathematical physics. Contemporary developments in continuum mechanics and partial differential equations, pp. 284–346.]{} , [Some analytical issues for elastic complex fluids]{}, [Comm. Pure Appl. Math.]{} , [On hydrodynamics of viscoelastic fluids]{}, [Comm. Pure Appl. Math.]{} , [On the initial-boundary value problem of the incompressible viscoelastic fluid system]{}, [Comm. Pure Appl. Math.]{} . , [Decoupled energy stable schemes for a phase-field model of two-phase Incompressible flows with variable density]{}, [J. Sci. Comput.]{} , [Quasi-incompressible Cahn-Hilliard fluids and topological transitions]{}, [Proc. Roy. Soc. Lond. A]{} , [Numerical approximations for Allen-Cahn type phase field model of two-phase incompressible fluids with moving contact lines]{}, [Commun. Comput. Phys.]{} , [The Cahn-Hilliard Equation: Recent Advances and Applications. CBMS-NSF Regional Conference Series in Applied Mathematics, 95]{}, , [The gradient theory of phase transitions and the minimal interface criterion]{}, [Arch. Ration. Mech. Anal.]{} , [Emulsion patterns in the wake of a liquid-liquid phase separation front]{}, [Proc. Natl. Acad. Sci. USA]{} , [A sharp form of an inequality of N. Trudinger]{}, [Indiana U. Math. J.]{} , [A conservative level set method for two phase flow]{}, [J. Comput. Phys.]{} , [Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations]{} [J. Comput. Phys.]{} , [Generalized solutions to a free boundary problem of motion of a non-Newtonian fluid]{}, [Siberian Math. J.]{} , [On the two-phase Navier-Stokes equations with surface tension]{}, [Interfaces Free Bound.]{} , [Moving Interfaces and Quasilinear Parabolic Evolution Equations]{}, , [Global existence and decay of smooth solution for the 2-D MHD equations without magnetic diffusion]{}, [J. Funct. Anal.]{} , [Nonlocal reaction-diffusion equations and nucleation]{}, [IMA J. Appl. Math.]{} , [Level set methods for fluid interfaces]{}, [Annu. Rev. Fluid Mech.]{} , [Liquid phase condensation in cell physiology and disease]{}, [Science]{} , [On continuity of functions with values in various Banach spaces]{}, [Pacific J. 
Math.]{} , [A level set approach for computing solutions to incompressible two-phase flow]{}, [J. Comput. Phys]{} , [Two-phase free boundary problem for viscous incompressible thermocapillary convection]{}, [Japan J. Mech.]{} , [Large‐time existence of surface waves in incompressible viscous fluids with or without surface tension]{}, [Arch. Rat. Mech. Anal.]{}, , [Navier-Stokes Equations and Nonlinear Functional Analysis]{}, , [A residual-based Allen–Cahn phase field model for the mixture of incompressible fluid flows]{}, [Int. J. Numer. Meth. Fluids]{} , [Sharp interface limit of phase change flows]{}, [Adv. Math. Sci. Appl.]{} , [Well-posedness of a diffuse-interface model for two-phase incompressible flows with thermo-induced Marangoni effect]{}, [European J. Appl. Math.]{} , [Analysis of a diffuse-interface model for the mixture of two viscous incompressible fluids with thermo-induced Marangoni effects]{}, [Commun. Math. Sci.]{} , [Axisymmetric solutions to coupled Navier-Stokes/Allen-Cahn equations]{}, [SIAM J. Math. Anal.]{} , [Numerical simulations of jet pinching-off and drop formation using an energetic variational phase-field method]{}, [J. Comput. Phys.]{} , [Asymptotic stability of superposition of stationary solutions and rarefaction waves for 1D Navier–Stokes/Allen–Cahn system]{}, [J. Differential Equations]{} , [Vanishing viscosity limit for a coupled Navier-Stokes/Allen-Cahn system]{}, [J. Math. Anal. Appl.]{} [^1]: For a system of finite number of molecules $A$ and $B$ occupying a lattice with $M$ sites, the thermodynamic properties of the system of molecules are derived from the partition function $$\label{pf} Z=\sum_{\Omega} \mathrm{e}^{\Big( \frac{H(\sigma_1,\dots,\sigma_M)}{k_B T}\Big)}$$ where the Hamiltonian $H(\sigma_1,\dots,\sigma_M)$ denotes the energy of the arrangement $\sigma_1,\dots, \sigma_M$ ($\sigma_n=1$ if the lattice is occupied by molecule $A$, $\sigma_n=0$ otherwise), and $\Omega$ is the set of all possible arrangements. Here $k_B$ is the Boltzmann constant and $T$ is the temperature. It is common to describe only nearest neighbor interactions between particles, which lead to the particular Hamiltonian $$H(\lbrace \sigma\rbrace)=\frac12 \sum_{m,n} \Big( e_{AA} \sigma_m \sigma_n +e_{BB} (1-\sigma_m)(1-\sigma_n)+e_{AB}\big(\sigma_m(1-\sigma_n)+\sigma_n(1-\sigma_m)\big)\Big),$$ where $e_{AA}$, $e_{BB}$, and $e_{AB}$ are coefficients. In the Mean Field approximation the arrangements $\sigma_n$ and $1-\sigma_n$ are approximated by the probability (average) that a site is occupied by a molecule $A$ and $B$, namely $\phi_A=\frac{N_A}{M}$ and $\phi_B=\frac{N_B}{M}$($N_A$ and $N_B$ are the number of molecules of type $A$ and $B$, and $M=N_A+N_B$). Then, the partition function is given by $$Z= \frac{M!}{N_A! N_B !} \e^{- \frac{H(\phi_A,\phi_B)}{k_B T }}, \quad H(\phi_A,\phi_B)= \frac{z M}{2} \big(e_{AA} \phi_A^2+2e_{AB} \phi_A \phi_B+e_{BB} \phi_B^2 \big),$$ where $z$ is the number of neighbors in a lattice. By using the Stirling approximation, the free energy density reads as $$f(\phi_A,\phi_B)= \frac{-k_B T \ln Z}{M} \approx \frac{k_B T}{\nu} \Big[ \phi_A \ln \phi_A +\phi_B \ln \phi_B\Big]+ \frac{z}{2\nu} \Big[ e_{AA} \phi_A^2+2e_{AB}\phi_A\phi_B+e_{BB}\phi_B^2\Big],$$ where $\nu$ is the volume of molecules. By defining $\phi =\phi_A-\phi_B$ (with range $[-1,1]$), and setting appropriately the constants $\theta$ and $\theta_0$, the Flory-Huggins potential immediately follows. 
As usual, the function $\Psi$ is understood as its continuous extension at the values $s=\pm 1$. Roughly speaking, the logarithmic term accounts for the entropy of mixing and the quadratic perturbation represents the internal energy of mixing. For more details, we refer the reader to [@LL2013]. [^2]: In the case $\theta\geq \theta_0$, mixing prevails over demixing, and no separation takes place. [^3]: This equation differs from the classical Allen-Cahn equation due to the presence of the term $\overline{\mu}$ (see [@RS1992; @YFLS2006], cf. also [@AC1979; @BS1997; @CHL2010; @GO1997; @MHVB2018; @VRC2014]). [^4]: Without loss of generality, we consider the values of the parameters $\sigma=\gamma=1$ in our analysis. [^5]: We define the (mixing) entropy as $F(s)=\frac{\theta}{2}\left[ (1+s)\log(1+s)+(1-s)\log(1-s)\right]$, for $s \in [-1,1]$. This corresponds to the convex part of . [^6]: It is worth pointing out that the initial concentration $\phi_0$ for strong solutions is not separated from the pure phases. Indeed, the imposed conditions $F'(\phi_0)\in L^2(\Omega)$ or $F''(\phi_0)\in L^1(\Omega)$ allow $\phi_0$ to be arbitrarily close to $+1$ and $-1$.
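As a small numerical illustration of the Flory-Huggins construction sketched in the first footnote, the following Python snippet checks that the resulting potential develops a double well exactly when $\theta<\theta_0$, in line with footnote 2. It is only a sketch: the explicit form $\Psi(s)=F(s)-\tfrac{\theta_0}{2}s^2$, built from the mixing entropy $F$ of footnote 5, and the numerical values of $\theta$ and $\theta_0$ are illustrative assumptions rather than quantities taken from the text.

```python
import numpy as np

def entropy_F(s, theta):
    # Mixing entropy from the footnote: F(s) = (theta/2)[(1+s)ln(1+s) + (1-s)ln(1-s)]
    return 0.5 * theta * ((1 + s) * np.log1p(s) + (1 - s) * np.log1p(-s))

def psi(s, theta, theta0):
    # Assumed form of the Flory-Huggins potential: convex entropy part plus the
    # quadratic demixing perturbation -theta0 * s^2 / 2 (an assumption for this sketch).
    return entropy_F(s, theta) - 0.5 * theta0 * s**2

s = np.linspace(-0.999, 0.999, 2001)
for theta, theta0 in [(1.0, 2.0), (2.0, 1.0)]:       # theta < theta0 vs. theta >= theta0
    v = psi(s, theta, theta0)
    n_min = np.sum((v[1:-1] < v[:-2]) & (v[1:-1] < v[2:]))   # interior local minima
    print(f"theta={theta}, theta0={theta0}: {n_min} interior minima")
# Expected: two minima (double well) for theta < theta0, a single minimum otherwise.
```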
--- abstract: | The emergence of robust optimization has been driven primarily by the necessity to address the demerits of the Markowitz model. There has been a noteworthy debate regarding consideration of robust approaches as superior or at par with the Markowitz model, in terms of portfolio performance. In order to address this skepticism, we perform empirical analysis of three robust optimization models, namely the ones based on box, ellipsoidal and separable uncertainty sets. We conclude that robust approaches can be considered as a viable alternative to the Markowitz model, not only in simulated data but also in a real market setup, involving the Indian indices of S&P BSE 30 and S&P BSE 100. Finally, we offer qualitative and quantitative justification regarding the practical usefulness of robust optimization approaches from the point of view of number of stocks, sample size and types of data. [*Keywords: Robust portfolio optimization; Worst case scenario; Uncertainty sets; S&P BSE 30; S&P BSE 100*]{} author: - '[^1]' - '[^2]' - '[^3]' title: 'Can robust optimization offer improved portfolio performance?: An empirical study of Indian market' --- Introduction {#Introduction} ============ The risk associated with individual assets can be reduced through investment in a diversified portfolio comprising of several assets. For optimal allocation of weights in a diversified portfolio, one of the well established methods is the classical mean-variance portfolio optimization introduced by Markowitz [@Markowitz52; @Markowitz59]. Despite being considered as the basic theoretical framework in the field of portfolio optimization, the Markowitz model is not widely accepted among investment practitioners. One of the most major limitations of the mean-variance model is the sensitivity of the optimal portfolios to the errors in the estimation of return and risk parameters. These parameters are estimated using sample mean and sample covariance matrix, which are maximum likelihood estimates (MLEs) (calculated using historical data) under the assumption that the asset returns are normally distributed. According to DeMiguel and Nogales [@DeMiguel09], since the efficiency of MLEs is extremely sensitive to deviations of the distribution of asset returns from the assumed normal distribution, it results in the optimal portfolios being vulnerable to the errors in estimation of input parameters. While referring to the Markowitz model as “estimation-error maximizers”, Michaud [@Michaud89] argues that it often overweights those assets having higher estimated expected return, lower estimated variance of returns and negative correlation between their returns (and vice versa). Best and Grauer [@Best91] study the sensitivity of weights of optimal portfolios with respect to changes in estimated expected returns of individual assets. Upon imposition of no short selling constraint, they observe that small changes in estimated expected return of individual assets can result in the assignment of zero weights to almost half the assets comprising the portfolio (which is counterintuitive), leading to a large adjustment in portfolio weights. In an empirical study, Broadie [@Broadie93] reports evidence of overestimation of expected returns of optimal portfolios obtained using the Markowitz model by observing that the estimated efficient frontier lies above the actual efficient frontier. 
Significant work has been done in the area of robust portfolio optimization in order to address the concerns about the sensitivity of the optimal portfolio to the estimated parameters for the Markowitz model. The robust optimization approach incorporates uncertainty in the input parameters directly into the optimization problem. T[ü]{}t[ü]{}nc[ü]{} and Koenig [@Tutuncu04] describe uncertainty using an uncertainty set that includes almost all possible realizations of the uncertain input parameters. The robust portfolio optimization involves optimizing the portfolio performance under the worst possible realizations of the uncertain input parameters. They conduct numerous experiments applying the robust allocation methods to the market data and conclude that robust optimization can be considered as a viable asset allocation alternative for conservative investors. According to Ceria and Stubbs [@Ceria06], the standard approach of robust optimization is too conservative. They argue that it is too pessimistic to adjust the return estimate of each asset downwards. Accordingly, they introduce new variants of robust optimization and observe superior performance of robust approaches vis-à-vis the mean-variance analysis, in the majority of cases. Utilizing the standard framework of robust optimization, Scherer [@Scherer07] shows that robust methods are equivalent to Bayesian shrinkage estimators and do not lead to significant change in the efficient set. Based on simulations, he observes that robust portfolio underperforms in comparison to the portfolio obtained using the Markowitz model, especially in the case of low risk aversion and high uncertainty aversion. Santos [@Santos10] performs similar experiments to compare two types of robust approaches, namely, the standard robust optimization discussed in Scherer’s work [@Scherer07] and zero net alpha-adjusted robust optimization proposed by Ceria and Stubbs [@Ceria06], with the traditional optimization methods. The empirical results indicate better performance of robust approaches in comparison to the portfolios constructed using mean-variance analysis in the case of simulated data unlike in the case of real market data. The main aim of the paper is to assess the viability of robust optimization as opposed to the mean-variance optimization, from a practitioner’s point of view. In accordance with our motivation, we carry out empirical analysis of the robust models with three uncertainty sets and compare their performance with that of classical Markowitz model, using not only simulated data but also the real market data. We intend to answer various questions related to wider acceptability of robust optimization, both quantitatively and qualitatively from various standpoints. For the purpose of illustration, we have chosen to use the data obtained from the Indian indices of S&P BSE 30 and S&P BSE 100. The rest of the paper is organized as follows. Section \[Robust\_Portfolio\_Optimization\_Approaches\] discusses the robust portfolio optimization methods used in the work. In section \[Computational\_Results\], we present the empirical results observed on comparing the performance of robust optimization models with the Markowitz model. Section \[Discussion\] analyzes the practical usefulness from the point of view of number of stocks and sample size as well as types of data. Finally, we sum up the main takeaways from this work in Section \[Conclusion\]. 
Robust Portfolio Optimization Approaches {#Robust_Portfolio_Optimization_Approaches} ======================================== The determination of the structure of uncertainty sets, so as to obtain computationally tractable solutions, is a key step in robust optimization. In the real world, even the distribution of asset returns has an uncertainty associated with it. In order to address this issue, a frequently used technique is to find an estimate of the uncertain parameter and define a geometric bound around it. Empirically, historical data is used to compute estimates of these uncertain parameters. For a given optimization problem, determining the geometry of the uncertainty set is a difficult task. For the purpose of this work, we will use three types of uncertainty sets, namely, box and ellipsoidal (for expected returns) [@Fabozzi07; @Kim14] and separable (for both expected returns and covariance matrix of returns) [@Lu06]. Accordingly, we first introduce the notations to be used in this work. 1. $N$: Number of assets. 2. $\displaystyle{\mathbf{x}}$: Weight vector for a portfolio. 3. $\displaystyle{\mathbf{a}=\begin{pmatrix} a_{1}, a_{2}, \dots , a_{N} \end{pmatrix}}$: Vector of $N$ uncertain parameters. 4. $\displaystyle{\mathbf{\hat{a}}=\begin{pmatrix}\hat{a}_{1}, \hat{a}_{2}, \dots, \hat{a}_{N} \end{pmatrix}}$: Estimate for $\displaystyle{\mathbf{a}}$. 5. $\displaystyle{\boldsymbol{\mu}}$: Vector for expected return. 6. $\displaystyle{\boldsymbol{{\hat{\mu}}}}$: Estimate for $\displaystyle{\boldsymbol{\mu}}$. 7. $\displaystyle{\Sigma}$: Covariance matrix for asset returns. 8. $\displaystyle{\Sigma_{\mu}}$: Covariance matrix for errors in estimation. 9. $\lambda$: Risk aversion. 10. $\displaystyle{\mathcal{U}_{\mu,\Sigma}}$: General uncertainty set with $\mu$ and $\Sigma$ as uncertain parameters. 11. $\displaystyle{\mathbf{1}}$: Unity vector of length $N$. The classical Markowitz model formulation with no short selling constraint is given by the following problem (hereafter referred to as **Mark**): $$\label{eq:classical_markowitz} \max\limits_{\mathbf{x}}\left\{\boldsymbol{\mu}^{\top}\mathbf{x}-\lambda\mathbf{x^{\top}}\Sigma\mathbf{x}\right\}~\text{such that}~ \mathbf{x^{\top}}\mathbf{1}=1~\text{and}~\mathbf{x}\geq 0.$$ Robust portfolio optimization involves enhancing the robustness of the portfolio obtained using the Markowitz model, by optimizing the portfolio performance in worst-case scenarios. Most of the robust models deal with optimizing a given objective function with a predefined “uncertainty set” for obtaining computationally tractable solutions. 
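Before turning to the uncertainty sets, it may help to fix a computational baseline. The sketch below shows how the problem (**Mark**) in (\[eq:classical\_markowitz\]) can be solved with the convex-optimization package `cvxpy`; the toy returns, the value of $\lambda$ and the choice of `cvxpy` itself are illustrative assumptions, not part of the paper.

```python
import numpy as np
import cvxpy as cp

def markowitz_weights(mu, Sigma, lam):
    """Solve the Mark problem: max mu'x - lam * x'Sigma x  s.t.  1'x = 1, x >= 0."""
    x = cp.Variable(len(mu))
    objective = cp.Maximize(mu @ x - lam * cp.quad_form(x, Sigma))
    constraints = [cp.sum(x) == 1, x >= 0]          # fully invested, no short selling
    cp.Problem(objective, constraints).solve()
    return x.value

# Illustrative inputs (not from the paper): 4 assets with toy daily log-returns.
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(250, 4))
mu_hat = returns.mean(axis=0)
Sigma_hat = np.cov(returns, rowvar=False)
print(markowitz_weights(mu_hat, Sigma_hat, lam=3.0))
```

The robust variants discussed next reuse this template and only modify the objective function.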
For any general uncertainty set $\displaystyle{\mathcal{U}_{\mu,\Sigma}}$, the worst case classical Markowitz model formulation [@Halldorsson03; @Kim14] with no short selling constraint is given as: $$\label{eq:worst_case_classical_markowitz} \max\limits_{\mathbf{x}}\left\{\min\limits_{\left(\boldsymbol{\mu},\boldsymbol{\Sigma}\right)~\in~\mathcal{U}_{\mu,\Sigma}}\boldsymbol{\mu}^{\top}\mathbf{x} -\lambda\mathbf{x^{\top}}\Sigma\mathbf{x}\right\}~\text{such that}~\mathbf{x^{\top}}\mathbf{1}=1~\text{and}~\mathbf{x}~\geq 0.$$ Robust Portfolio Optimization with Box Uncertainty Set ------------------------------------------------------ A general *polytopic* [@Fabozzi07] uncertainty set that resembles a box is defined as $$\label{eqn:box_1} U_{\boldsymbol{\delta}}(\mathbf{\hat{a}})=\left\{\mathbf{a}:\left|a_{i}-\hat{a}_{i}\right| \leq \delta_{i}, i=1,2,3,\dots,N\right\},$$ where $\delta_{i}$ determines the width of the confidence region for asset $i$. As we intend to model the uncertainty in the expected returns $\left(\boldsymbol{\mu}\right)$ using box uncertainty sets, we use $$\label{eqn:box_2} U_{\boldsymbol{\delta}}(\boldsymbol{\hat{\mu}})=\left\{\boldsymbol{\mu}: \left|\mu_{i}-\hat{\mu_{i}}\right|\leq \delta_{i}, i=1,2,3,\dots,N \right\}.$$ Accordingly, using (\[eqn:box\_2\]), the max-min robust formulation (\[eq:worst\_case\_classical\_markowitz\]) reduces to the following maximization problem (hereafter referred to as **Box**): $$\label{eqn:box_markowitz} \max\limits_{\mathbf{x}}\left\{\boldsymbol{\hat{\mu}}^{\top}\mathbf{x}-\lambda\mathbf{x^{\top}}\Sigma\mathbf{x}-\boldsymbol{\delta}^{\top}|\mathbf{x}|\right\} ~\text{such that}~\mathbf{x^{\top}}\mathbf{1}=1~\text{and}~\mathbf{x} \geq 0.$$ While dealing with the box uncertainty set, we assume that the returns follow a normal distribution. Therefore, we define $\delta_{i}$ for a $100(1-\alpha)\%$ confidence level as $$\displaystyle{\delta_{i}=\sigma_{i} z_{\frac{\alpha}{2}} n^{-\frac{1}{2}}},$$ where $z_{\frac{\alpha}{2}}$ denotes the upper $\frac{\alpha}{2}$ quantile of the standard normal distribution, $\sigma_{i}$ is the standard deviation of the returns of asset $i$ and $n$ is the number of observations of returns for asset $i$. Robust Portfolio Optimization with Ellipsoidal Uncertainty Set -------------------------------------------------------------- In order to capture more information from the data, consideration of the second moment gives rise to another class of uncertainty sets, namely, ellipsoidal uncertainty sets. The ellipsoidal uncertainty set for the expected return $\left(\boldsymbol{\mu}\right)$ is expressed as: $$\label{eqn:ellipsoidal} U_{\delta}(\boldsymbol{\hat{\mu}})=\left\{\boldsymbol{\mu}: (\boldsymbol{\mu}-\boldsymbol{\hat{\mu}})^{\top}\Sigma^{-1}_{\boldsymbol{\mu}} (\boldsymbol{\mu}-\boldsymbol{\hat{\mu}})\leq\delta^2 \right\}.$$ Therefore, the max-min robust formulation (\[eq:worst\_case\_classical\_markowitz\]) in conjunction with (\[eqn:ellipsoidal\]) results in the following maximization problem (hereafter referred to as **Ellip**): $$\label{eqn:ellipsoidal_markowitz} \max\limits_{\mathbf{x}}\left\{\boldsymbol{\hat{\mu}}^{\top}\mathbf{x}-\lambda \mathbf{x}^{\top}\Sigma\mathbf{x} -\delta\sqrt{\mathbf{x}^{\top}\Sigma_{\boldsymbol{\mu}}\mathbf{x}}\right\}~\text{such that}~\mathbf{x^{\top}}\mathbf{1}=1~\text{and}~\mathbf{x}\geq 0.$$ If the uncertainty set follows the ellipsoidal model, the confidence level is set using a chi-square ($\chi^{2}$) distribution with the number of assets as the degrees of freedom (df).
Accordingly, for a $100(1-\alpha)\%$ confidence level, $\delta$ is defined as [@Ceria06; @Scherer07]: $$\delta^2=\chi_{N}^2(\alpha)$$ where $\chi_{N}^2(\alpha)$ is the critical value of a chi-square distribution with $N$ degrees of freedom at significance level $\alpha$. Robust Portfolio Optimization with Separable Uncertainty Set ------------------------------------------------------------ The above two robust approaches model only the expected returns using uncertainty sets. Hence, in order to also encapsulate the uncertainty in the covariances, the box uncertainty set for the covariance matrix of returns is defined akin to that for the expected returns. The lower bound $\underline{\Sigma}_{ij}$ and the upper bound $\overline{\Sigma}_{ij}$ can be specified for each entry $\Sigma_{ij}$, resulting in the following box uncertainty set for the covariance matrix [@Tutuncu04]: $$\label{eqn:separable_1} U_{\Sigma}=\{\Sigma: \underline{\Sigma} \leq \Sigma \leq \overline{\Sigma},~\Sigma \succeq 0\}.$$ In the above equation, the condition $\Sigma \succeq 0$ implies that $\Sigma$ is a symmetric positive semidefinite matrix. T[ü]{}t[ü]{}nc[ü]{} and Koenig [@Tutuncu04] define the uncertainty set for the expected returns as $$\label{eqn:separable_2} U_{\boldsymbol{\mu}}=\{\boldsymbol{\mu}:\underline{\boldsymbol{\mu}}\leq\boldsymbol{\mu}\leq\overline{\boldsymbol{\mu}}\},$$ where $\underline{\boldsymbol{\mu}}$ and $\overline{\boldsymbol{\mu}}$ represent the lower and upper bounds on the expected return vector $\boldsymbol{\mu}$, respectively. Consequently, the max-min robust formulation (\[eq:worst\_case\_classical\_markowitz\]) transforms to the following maximization problem (hereafter referred to as **Sep**): $$\label{eqn:separable_markowitz} \max_{\mathbf{x}} \left\{\underline{\boldsymbol{\mu}}^{\top}\mathbf{x}-\lambda \mathbf{x^{\top}}\overline{\Sigma}\mathbf{x}\right\}~\text{such that}~ \mathbf{x^{\top}}\mathbf{1}=1~\text{and}~\mathbf{x}\geq 0.$$ The above approach involves the use of “separable” uncertainty sets [@Lu06], which implies that the uncertainty sets for the expected returns and the covariance matrix are defined independently of each other. Computational Results {#Computational_Results} ===================== In this section, we analyze the performance of the robust portfolio optimization approaches discussed in Section \[Robust\_Portfolio\_Optimization\_Approaches\] vis-à-vis the Markowitz model, using historical data from the market as well as simulated data. For the purpose of this analysis, we consider two scenarios in terms of the number of stocks $N$, namely $31$ and $98$, with the goal of observing the effect of an increase in the number of stocks on the performance of the robust portfolio optimization approaches. These numbers were chosen since they represent the number of stocks in the S&P BSE 30 and S&P BSE 100 indices, respectively. For the first scenario ($N=31$), we make use of the daily log-returns, based on the adjusted daily closing prices of the $31$ stocks comprising the S&P BSE 30, obtained from Yahoo Finance [@yf]. Accordingly, we consider the period of our analysis to be from 18th December, 2017 to 30th September, 2018 (both inclusive), which had a total of $194$ active trading days, *i.e.,* $193$ daily log-returns. Corresponding to this historical data from S&P BSE 30, we generate two sets of simulated data for all the $31$ assets, by sampling returns from a multivariate normal distribution whose mean and covariance matrix are set to those obtained for the historical S&P BSE 30 data.
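To make the three robust variants concrete before moving to the experiments, the following sketch sets up the **Box**, **Ellip** and **Sep** objectives for a given return matrix. It is only a rough illustration: the use of `cvxpy` and `scipy`, the estimation-error covariance $\Sigma_{\mu}=\Sigma/n$ assumed for **Ellip**, the reading of $\chi^{2}_{N}(\alpha)$ as the upper-$\alpha$ critical value, and the eigenvalue clipping used to keep the bootstrapped $\overline{\Sigma}$ positive semidefinite are all assumptions, not details prescribed by the text.

```python
import numpy as np
import cvxpy as cp
from scipy import stats

def _solve(objective, x):
    cp.Problem(cp.Maximize(objective), [cp.sum(x) == 1, x >= 0]).solve()
    return x.value.copy()

def _psd_clip(A):
    # Element-wise bounds need not be PSD: clip negative eigenvalues (an assumption).
    w, V = np.linalg.eigh((A + A.T) / 2)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T + 1e-12 * np.eye(len(w))

def robust_weights(R, lam=3.0, alpha=0.05, n_boot=8000, seed=0):
    """R: (n, N) matrix of daily log-returns. Returns Box, Ellip and Sep weights."""
    n, N = R.shape
    mu_hat, Sigma = R.mean(axis=0), np.cov(R, rowvar=False)
    x = cp.Variable(N)

    # Box: delta_i = sigma_i * z_{alpha/2} / sqrt(n); under x >= 0, |x| equals x.
    delta_box = R.std(axis=0, ddof=1) * stats.norm.ppf(1 - alpha / 2) / np.sqrt(n)
    w_box = _solve(mu_hat @ x - lam * cp.quad_form(x, Sigma) - delta_box @ cp.abs(x), x)

    # Ellip: delta^2 = chi^2_N(alpha); Sigma_mu = Sigma / n is an assumed choice.
    delta = np.sqrt(stats.chi2.ppf(1 - alpha, df=N))
    L = np.linalg.cholesky(Sigma / n)               # sqrt(x' Sigma_mu x) = ||L'x||_2
    w_ellip = _solve(mu_hat @ x - lam * cp.quad_form(x, Sigma)
                     - delta * cp.norm(L.T @ x, 2), x)

    # Sep: element-wise bootstrap percentile bounds for mu and Sigma (beta = n_boot).
    rng = np.random.default_rng(seed)
    boot_mu, boot_S = [], []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        boot_mu.append(R[idx].mean(axis=0))
        boot_S.append(np.cov(R[idx], rowvar=False))
    mu_low = np.percentile(boot_mu, 100 * alpha / 2, axis=0)
    Sigma_up = _psd_clip(np.percentile(boot_S, 100 * (1 - alpha / 2), axis=0))
    w_sep = _solve(mu_low @ x - lam * cp.quad_form(x, Sigma_up), x)

    return w_box, w_ellip, w_sep
```

The default `n_boot=8000` mirrors the value of $\beta$ quoted in the next paragraph; in practice a smaller number of resamples is often sufficient while prototyping.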
Returning to the simulated data, the first set of sample returns contains the same number of samples as the historical data, namely $193$ in this case. On the other hand, the second set comprises a larger number of samples, namely $1000$. The two sets of simulated sample returns of different sizes were used to facilitate the study of the impact of the number of samples in the simulated data on the performance of the robust portfolio optimization approaches. We make a comparative study of the robust portfolio optimization approaches for the historical S&P BSE 30 data as well as the two sets of simulated data, in order to analyze whether the worst case robust portfolio optimization approaches are useful in a real market setup. For the second scenario ($N=98$), we use the daily log-returns, based on the adjusted daily closing prices of the $98$ stocks comprising the S&P BSE 100, obtained from Yahoo Finance [@yf], for the period 18th December, 2016 to 30th September, 2018 (both inclusive) with $443$ trading days, *i.e.,* $442$ daily log-returns. The two sets of simulated data were generated in a manner akin to the scenario of $N=31$ assets. A similar comparative study is performed for the second scenario. For the Box and Ellip models, we construct uncertainty sets for the expected return at the $100(1-\alpha)\%$ confidence level with $\alpha=0.05$. The separable uncertainty set in the Sep model is constructed as a $100(1-\alpha)\%$ confidence interval for both $\mu$ and $\Sigma$ using a non-parametric bootstrap algorithm with the same $\alpha$ as in the other robust models and with $\beta$, *i.e.,* the number of bootstrap simulations, equal to $8000$. The performance analysis of these robust portfolio models vis-à-vis the Mark model is carried out using the *Sharpe Ratio* of the constructed portfolios, with the risk-aversion $\lambda$ varied over the ideal range $\lambda \in [2,4]$ [@Fabozzi07]. Further, since the T-bill rate in India from 2016 to 2018 was observed to oscillate around $6\%$ [@rbi], we have assumed the annualized risk-free rate to be equal to $6\%$. We now present the computational results observed for the two scenarios discussed above. Performance with $N=31$ assets ------------------------------ We begin with the analysis for $N=31$ assets in the case of the simulated data with $1000$ samples and present the results in Figure \[fig:1\] and Table \[tab:1\]. From Figure \[fig:1\] we observe that the efficient frontiers for the Ellip and the Sep models lie below the efficient frontier for the Mark model, which supports the argument made in [@Broadie93] regarding over-estimation of the efficient frontier for the Mark model. Further, the observed overlap of the efficient frontiers for the Mark and the Box models suggests that utilizing box uncertainty sets for robust portfolio optimization does not prove to be of much use in this case. Also, from Figure \[fig:1\], we observe that the Mark model starts outperforming the Sep model in terms of the Sharpe Ratio after the risk-aversion $\lambda$ crosses $3$. The above observations are supported quantitatively by the results tabulated in Table \[tab:1\] as well, since the average Sharpe Ratio for portfolios constructed in the ideal range of risk-aversion $\lambda\in [2,4]$ is the same for both the Mark and the Box models. Also, we infer from Table \[tab:1\] that the Sep model performs on par with the Mark model in terms of the average Sharpe Ratio, with the Ellip model performing the best among all the models.
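For completeness, the data-generation and evaluation protocol described above can be reproduced roughly as follows. This is only a sketch: the 252-day annualization convention used to convert the $6\%$ annual risk-free rate into a daily rate is an assumption (the text only fixes the annual figure), and the variable names are hypothetical.

```python
import numpy as np

def simulate_returns(R_hist, n_samples, seed=0):
    """Draw multivariate-normal returns with the historical mean and covariance."""
    mu = R_hist.mean(axis=0)
    Sigma = np.cov(R_hist, rowvar=False)
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mu, Sigma, size=n_samples)

def sharpe_ratio(weights, R, rf_annual=0.06, periods_per_year=252):
    """Daily Sharpe Ratio of a fixed-weight portfolio of daily log-returns."""
    rf_daily = rf_annual / periods_per_year        # assumed annualization convention
    port = R @ weights
    return (port.mean() - rf_daily) / port.std(ddof=1)

# Hypothetical usage: R_bse30 would hold the 193 x 31 matrix of daily log-returns.
# R_sim_same = simulate_returns(R_bse30, n_samples=193)
# R_sim_1000 = simulate_returns(R_bse30, n_samples=1000)
```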
The analysis with the number of simulated samples being the same as the number of log-returns of the S&P BSE 30 data is presented in Figure \[fig:2\] and Table \[tab:2\]. The efficient frontiers for the Sep and Ellip models lie below that of the Mark model. Upon comparison of the Mark model and the Box model, we observe results similar to the case when $1000$ simulated samples were considered. However, we observe a slight inconsistency in the performance of the Box model, as evident from the plot of the Sharpe Ratio in Figure \[fig:2\]. We also infer that the Ellip model and the Sep model outperform the Mark model in terms of the Sharpe Ratio in the ideal range of risk-aversion $\lambda \in [2,4]$. It is difficult to compare the performance of the Ellip model with that of the Sep model in this case, since the average Sharpe Ratio for both of them is almost the same (Table \[tab:2\]). For the historical market data involving the stocks comprising the S&P BSE 30, we observe from Figure \[fig:3\] that the efficient frontiers for the Mark model and the Box model almost overlap with each other. Further, the efficient frontier for the Sep model lies below that of the Mark model, with the gap widening further in the case of the Ellip model. However, the performance of the Box model in terms of the Sharpe Ratio is quite inconsistent, as evident from Figure \[fig:3\]. We also observe that the Sep model outperforms the Mark model in the ideal range of risk-aversion $\lambda\in [2,4]$ when the Sharpe Ratio is taken as the performance measure. This is not true in the case of the Ellip model, as evident from the Sharpe Ratio plot in Figure \[fig:3\]. Even from Table \[tab:3\], we observe that the average Sharpe Ratio for the Ellip model is only slightly greater than that for the Mark model. We also note that the Sep model outperforms the other three models. *A common observation that can be inferred from the three cases considered in the scenario involving the smaller number of assets ($N=31$) is that the Sep and the Ellip models perform better than, or on par with, the Mark model in the ideal range of risk-aversion.* Performance with $N=98$ assets ------------------------------ We now analyze the scenario involving $N=98$ assets. On applying the robust models along with the Mark model to the simulated data with $1000$ samples, the comparison of the Box model with the Mark model yields results similar to the corresponding case of the previous scenario. This is evident from the coinciding plots of the efficient frontier and the plots of the Sharpe Ratio for both models in Figure \[fig:4\]. However, in contrast to the scenario of $N=31$ assets, we observe that not only the Ellip model but also the Sep model outperforms the Mark model when considering the portfolios constructed in the ideal range of risk-aversion $\lambda\in [2,4]$. Additionally, from Table \[tab:4\], we can infer that the Ellip model exhibits superior performance in comparison to the Sep model, in terms of a greater average value of the Sharpe Ratio. In Figure \[fig:5\] and Table \[tab:5\] we present the results of the study for the simulated data with the number of samples being the same as the number of log-returns of the S&P BSE 100 data. The comparative results observed for the Box model and the Mark model are similar to the previous case of $1000$ simulated samples.
In the ideal range of risk aversion $\lambda\in [2,4]$, one observes that the efficient frontiers for both the Ellip and the Sep models lie below the efficient frontier for the Mark model and that both models perform better than the Mark model in terms of the Sharpe Ratio. Additionally, from the Sharpe Ratio plot in Figure \[fig:5\], any comparative inference between the Sep model and the Ellip model is difficult, since each outperforms the other in a different sub-interval of the risk-aversion range. The similar values of the average Sharpe Ratio in Table \[tab:5\] support the claim of almost equivalent performance of these two models in this case. Finally, the results for the historical market data involving the stocks comprising the S&P BSE 100 are presented in Figure \[fig:6\] and Table \[tab:6\]. While the efficient frontier plot leads to observations similar to the previous case, there is a slight inconsistency in the performance of the Box model, as can be seen from the plot of the Sharpe Ratio in Figure \[fig:6\]. The robust portfolios constructed using the Sep and the Ellip models outperform the ones constructed using the Mark model in the ideal range of risk-aversion $\lambda\in [2,4]$. Additionally, the performance of the Ellip model is marginally better than that of the Sep model, as evident from the Sharpe Ratio plot, an inference that is supported by the marginal difference in the average Sharpe Ratios of the two (Table \[tab:6\]). *We draw a common inference from the three cases considered in the scenario involving the greater number of assets, *i.e.*, the Sep and the Ellip models outperform the Mark model in the ideal range of risk aversion.* Discussion {#Discussion} ========== In this section we analyze the different kinds of scenarios in the context of the trends of the Sharpe Ratio. Recall that we have considered the “adjusted closing prices” data of S&P BSE 30 and S&P BSE 100 to illustrate our analysis. Further, we have also generated simulated samples using the true mean and covariance matrix of the log-returns obtained from the aforesaid actual market data of “adjusted closing prices”. Since the number of instances of market data for the assets comprising the two indices was rather limited, we simulated two sets of samples: one where the number of simulated samples matches the number of instances of real market data available, say $\zeta(<1000)$, and another where the number of simulated samples is large (a constant, which in our case was taken to be $1000$), irrespective of the number of stocks. The motivation behind this setup was to understand whether the market data we obtained (which was limited) is able to capture the trends and result in better portfolio performance. From the Standpoint of Number of Stocks --------------------------------------- We begin with a description of the results summarized in Table \[tab:no\_stocks\], wherein for a particular row and a particular column, we present the maximum average Sharpe Ratio that was obtained for that particular scenario. For example, for the tabular entry corresponding to $N=98$ with $\zeta$ samples simulated using the true mean vector and the true covariance matrix of S&P BSE 100, we refer to Table \[tab:5\] (which reports the simulation corresponding to S&P BSE 100 with $\zeta$ simulated samples) and take the maximum of its last row, *i.e.*, the maximum of the average Sharpe Ratios attained by the available robust and Mark models.
The larger the number of stocks, the better the performance of the portfolios constructed using robust optimization. This claim can be supported both qualitatively and quantitatively. Qualitatively, the number of stocks in a portfolio represents its diversification. According to Modern Portfolio Theory (MPT), investors get the benefit of better performance from diversifying their portfolios, since diversification reduces the risk of relying on only one asset (or a small number of assets) to generate returns. Based on the analysis by Value Research Online [@vro], one observes that, on average, the large-cap funds hold around $38$ shares while the mid-cap funds hold around $50$-$52$ assets; for balanced funds, around $65$-$70\%$ of the assets are held in equity. This is because of the greater stability of returns of companies with large market capitalization, which is not the case for mid-cap companies. Hence, diversification requirements drive greater equity percentages in the case of mid-cap funds. From Table \[tab:no\_stocks\], we can provide quantitative justification by observing that the Sharpe Ratio was higher for portfolios with a larger number of stocks than for portfolios with a smaller number of stocks. However, we observe the opposite behavior for the market data, which can be attributed to the following two reasons: 1. The insufficient availability of market data when it comes to a larger number of stocks. 2. The error in the estimation of the return vector and the covariance matrix accumulating as the number of stocks increases, impacting the performance of the model [@Michaud89]. From the Standpoint of Number of Simulated Samples -------------------------------------------------- We now focus on the performance of the portfolio when different numbers of samples were simulated and tabulate the results in Table \[tab:no\_samples\] in the same way as was done in the preceding discussion. Here several interesting performance trends can be noticed. We observe that, in the case of the smaller number of stocks, the performance when the number of simulated samples is $\zeta (< 1000)$ is better than when a larger number ($1000$) of simulated samples is generated. On the other hand, exactly the opposite trend can be observed when the higher number of stocks is taken into consideration. This observation can be explained as follows: in the case of the real market, the number of available data instances is relatively low. So, when a larger number of samples is generated, we observe a higher Sharpe Ratio as compared to the case of $\zeta$ simulations. However, the reason behind the opposite behavior observed when the smaller number of stocks is considered is not obvious. From the Standpoint of the Kind of Data --------------------------------------- Finally, we discuss the performance of the portfolio from the standpoint of the kind of data used in this work. Accordingly, the relevant results are tabulated in Table \[tab:data\_type\], where the behavior is observed to be fairly consistent. In both cases, the performance for the simulated data is better than for the real market data. This is explained by the fact that the real market data is difficult to model, as it hardly follows any distribution, whereas the simulated data is generated from a multivariate normal distribution with the mean and covariances set to the true values obtained from the data. Conclusion {#Conclusion} ========== Robust optimization is an emerging area of portfolio optimization.
Various questions have been raised on the advantages of robust methods over the Markowitz model. Through a computational analysis of various robust optimization approaches, followed by a discussion from different standpoints, we try to address this skepticism. We observe that robust optimization with the ellipsoidal uncertainty set performs better than, or on par with, the Markowitz model in the case of simulated data, similar to the results reported by Santos [@Santos10]. In addition, we observe favorable results in the case of market data as well. The better performance of the robust formulation with the separable uncertainty set in comparison to the Markowitz model is in line with the previous study on the same robust model by T[ü]{}t[ü]{}nc[ü]{} and Koenig [@Tutuncu04]. The empirical results presented in this work advocate enhanced practical use of the robust models involving the ellipsoidal and separable uncertainty sets; accordingly, these models can be regarded as possible alternatives to the classical mean-variance analysis in a practical setup. [plain]{} Best, M.J. and Grauer, R.R. (1991). On the sensitivity of mean-variance-efficient portfolios to changes in asset means: some analytical and computational results, The Review of Financial Studies, 4(2): 315–342. Broadie, M. (1993). Computing efficient frontiers using estimated parameters. Annals of Operations Research, 45(1): 21–58. Ceria, S. and Stubbs, R.A. (2006). Incorporating estimation errors into portfolio selection: Robust portfolio construction. Journal of Asset Management, 7(2): 109–127. DeMiguel, V. and Nogales, F.J. (2009). Portfolio selection with robust estimation. Operations Research, 57(3): 560–577. Fabozzi, F.J., Kolm, P.N., Pachamanova, D. and Focardi, S.M. (2007). Robust Portfolio Optimization and Management, Wiley. Halldorsson, B.V. and T[ü]{}t[ü]{}nc[ü]{}, R.H. (2003). An interior-point method for a class of saddle-point problems. Journal of Optimization Theory and Applications, 116(3): 559–590. Kim, J.H., Kim, W.C. and Fabozzi, F.J. (2014). Recent developments in robust portfolios with a worst-case approach. Journal of Optimization Theory and Applications, 161(1): 103–121. Lu, Z. (2006). A new cone programming approach for robust portfolio selection. Technical report, Department of Mathematics, Simon Fraser University, Burnaby, BC. Markowitz, H.M. (1952). Portfolio selection. The Journal of Finance, 7(1): 77–91. Markowitz, H.M. (1959). Portfolio selection: Efficient diversification of investments, Yale University Press. Michaud, R.O. (1989). The Markowitz Optimization Enigma: Is ’Optimized’ Optimal? Financial Analysts Journal, 45(1): 31–42. Santos, A.A.P. (2010). The out-of-sample performance of robust portfolio optimization. Brazilian Review of Finance, 8(2): 141–166. Scherer, B. (2007). Can robust portfolio optimization help to build better portfolios? Journal of Asset Management, 7(6): 374–387. T[ü]{}t[ü]{}nc[ü]{}, R.H. and Koenig, M. (2004). Robust asset allocation. Annals of Operations Research, 132(1-4): 157–187.
Portfolios: Do number of stocks matter?, Value Research Online, [<https://www.valueresearchonline.com/story/h2_storyView.asp?str=200396&fbclid=IwAR3YkUaNLCY3keZGXfooehh0WU90zkZ5s94h0Uazj1-rzOFSy5bw6R1E-MY>]{}

Reserve Bank of India – Weekly Statistical Supplement, [<https://www.rbi.org.in/scripts/WSSViewDetail.aspx?TYPE=Section&PARAM1=4%0A>]{}

Yahoo Finance, [<https://in.finance.yahoo.com/>]{}

[.5]{} ![Efficient Frontier plot and Sharpe Ratio plot for different portfolio optimization models in case of Simulated Data with 1000 samples (31 assets)[]{data-label="fig:1"}](30_ef_ideal_range_1000_sim.eps "fig:"){width=".8\linewidth"}

[.5]{} ![Efficient Frontier plot and Sharpe Ratio plot for different portfolio optimization models in case of Simulated Data with 1000 samples (31 assets)[]{data-label="fig:1"}](30_sr_ideal_range_1000_sim.eps "fig:"){width=".8\linewidth"}

[.5]{} ![Efficient Frontier plot and Sharpe Ratio plot for different portfolio optimization models in case of Simulated Data with same number of samples as market data (31 assets)[]{data-label="fig:2"}](30_ef_ideal_range_exact_sim.eps "fig:"){width=".8\linewidth"}

[.5]{} ![Efficient Frontier plot and Sharpe Ratio plot for different portfolio optimization models in case of Simulated Data with same number of samples as market data (31 assets)[]{data-label="fig:2"}](30_sr_ideal_range_exact_sim.eps "fig:"){width=".8\linewidth"}

[.5]{} ![Efficient Frontier plot and Sharpe Ratio plot for different portfolio optimization models in case of Market Data (31 assets)[]{data-label="fig:3"}](30_ef_ideal_range.eps "fig:"){width=".8\linewidth"}

[.5]{} ![Efficient Frontier plot and Sharpe Ratio plot for different portfolio optimization models in case of Market Data (31 assets)[]{data-label="fig:3"}](30_sr_ideal_range.eps "fig:"){width=".8\linewidth"}

[.5]{} ![Efficient Frontier plot and Sharpe Ratio plot for different portfolio optimization models in case of Simulated Data with 1000 samples (98 assets)[]{data-label="fig:4"}](100_ef_ideal_range_1000_sim.eps "fig:"){width=".8\linewidth"}

[.5]{} ![Efficient Frontier plot and Sharpe Ratio plot for different portfolio optimization models in case of Simulated Data with 1000 samples (98 assets)[]{data-label="fig:4"}](100_sr_ideal_range_1000_sim.eps "fig:"){width=".8\linewidth"}

[.5]{} ![Efficient Frontier plot and Sharpe Ratio plot for different portfolio optimization models in case of Simulated Data with same number of samples as market data (98 assets)[]{data-label="fig:5"}](100_ef_ideal_range_exact_sim.eps "fig:"){width=".8\linewidth"}

[.5]{} ![Efficient Frontier plot and Sharpe Ratio plot for different portfolio optimization models in case of Simulated Data with same number of samples as market data (98 assets)[]{data-label="fig:5"}](100_sr_ideal_range_exact_sim.eps "fig:"){width=".8\linewidth"}

[.5]{} ![Efficient Frontier plot and Sharpe Ratio plot for different portfolio optimization models in case of Market Data (98 assets)[]{data-label="fig:6"}](100_ef_ideal_range.eps "fig:"){width=".8\linewidth"}

[.5]{} ![Efficient Frontier plot and Sharpe Ratio plot for different portfolio optimization models in case of Market Data (98 assets)[]{data-label="fig:6"}](100_sr_ideal_range.eps "fig:"){width=".8\linewidth"}

  $\lambda$   $SR_{Mark}$   $SR_{Box}$   $SR_{Ellip}$   $SR_{Sep}$
  ----------- ------------- ------------ -------------- ------------
  2           0.176         0.175        0.201          0.178
  2.5         0.178         0.178        0.2            0.181
  3           0.182         0.182        0.2            0.182
  3.5         0.185         0.185        0.199          0.183
  4           0.186         0.186        0.199          0.183
  Avg         0.181         0.181        0.2            0.182

  : Comparison of different portfolio optimization models in case of Simulated Data with 1000 samples (31 assets)[]{data-label="tab:1"}

  $\lambda$   $SR_{Mark}$   $SR_{Box}$   $SR_{Ellip}$   $SR_{Sep}$
  ----------- ------------- ------------ -------------- ------------
  2           0.2           0.198        0.218          0.213
  2.5         0.203         0.204        0.218          0.213
  3           0.205         0.207        0.217          0.217
  3.5         0.208         0.209        0.216          0.222
  4           0.21          0.21         0.215          0.225
  Avg         0.205         0.206        0.217          0.218

  : Comparison of different portfolio optimization models in case of Simulated Data with same number of samples as market data (31 assets)[]{data-label="tab:2"}

  $\lambda$   $SR_{Mark}$   $SR_{Box}$   $SR_{Ellip}$   $SR_{Sep}$
  ----------- ------------- ------------ -------------- ------------
  2           0.181         0.181        0.193          0.186
  2.5         0.181         0.181        0.192          0.193
  3           0.186         0.191        0.192          0.202
  3.5         0.194         0.195        0.191          0.209
  4           0.201         0.202        0.19           0.213
  Avg         0.189         0.19         0.192          0.2

  : Comparison of different portfolio optimization models in case of Market Data (31 assets)[]{data-label="tab:3"}

  $\lambda$   $SR_{Mark}$   $SR_{Box}$   $SR_{Ellip}$   $SR_{Sep}$
  ----------- ------------- ------------ -------------- ------------
  2           0.185         0.185        0.245          0.191
  2.5         0.185         0.185        0.244          0.202
  3           0.19          0.192        0.244          0.213
  3.5         0.199         0.199        0.243          0.221
  4           0.207         0.207        0.242          0.226
  Avg         0.193         0.194        0.244          0.21

  : Comparison of different portfolio optimization models in case of Simulated Data with 1000 samples (98 assets)[]{data-label="tab:4"}

  $\lambda$   $SR_{Mark}$   $SR_{Box}$   $SR_{Ellip}$   $SR_{Sep}$
  ----------- ------------- ------------ -------------- ------------
  2           0.192         0.192        0.235          0.216
  2.5         0.192         0.192        0.234          0.227
  3           0.202         0.203        0.233          0.233
  3.5         0.212         0.213        0.232          0.237
  4           0.221         0.222        0.232          0.239
  Avg         0.204         0.205        0.233          0.23

  : Comparison of different portfolio optimization models in case of Simulated Data with same number of samples as market data (98 assets)[]{data-label="tab:5"}

  $\lambda$   $SR_{Mark}$   $SR_{Box}$   $SR_{Ellip}$   $SR_{Sep}$
  ----------- ------------- ------------ -------------- ------------
  2           0.175         0.173        0.195          0.193
  2.5         0.178         0.177        0.194          0.193
  3           0.18          0.181        0.194          0.193
  3.5         0.186         0.188        0.193          0.192
  4           0.191         0.192        0.193          0.192
  Avg         0.182         0.182        0.194          0.192

  : Comparison of different portfolio optimization models in case of Market Data (98 assets)[]{data-label="tab:6"}

                                               \#stocks = 31   \#stocks = 98
  -------------------------------------------- --------------- ---------------
  \#generated simulations = 1000               0.2             0.244
  \#generated simulations = $\zeta (<1000)$    0.218           0.233
  Market data                                  0.2             0.194

  : The maximum average Sharpe ratio compared by varying the number of stocks in different kinds of scenarios.[]{data-label="tab:no_stocks"}

                  \#samples = 1000   \#samples = $\zeta (< 1000)$
  --------------- ------------------ ------------------------------
  \#stocks = 31   0.2                0.218
  \#stocks = 98   0.244              0.233

  : The maximum average Sharpe ratio compared by varying the number of simulated samples in different kinds of scenarios.[]{data-label="tab:no_samples"}

                  Simulated data   Real Market data
  --------------- ---------------- ------------------
  \#stocks = 31   0.218            0.2
  \#stocks = 98   0.244            0.194

  : The maximum average Sharpe ratio compared by varying the type of the data in different kinds of scenarios.[]{data-label="tab:data_type"}

[^1]: Indian Institute of Technology Guwahati, Guwahati-781039, Assam, India, e-mail: s.oberoi@iitg.ac.in

[^2]: Indian Institute of Technology Guwahati, Guwahati-781039, Assam, India, e-mail: m.girach@iitg.ac.in

[^3]: Indian Institute of Technology Guwahati, Guwahati-781039, Assam, India, e-mail: pratim@iitg.ac.in, Phone: +91-361-2582606, Fax: +91-361-2582649
--- author: - 'T. Nagel' - 'S. Dreizler' - 'T. Rauch' - 'K. Werner' date: 'Received xx; accepted xx' title: 'AcDc - A new code for the NLTE spectral analysis of accretion discs: application to the helium CV AMCVn' --- Introduction ============ Accretion discs are components of objects as diverse as proto-planetary systems, active galactic nuclei, cataclysmic variables or X-ray binaries. A high fraction of the luminosity of these systems may be generated by the accretion disc itself. To understand these objects and interpret the observational data a model of the accretion disc as reliable as possible is necessary. The aim of our work was the development of a program package for the calculation of synthetic spectra and vertical structures of accretion discs considering the physical processes in the disc as accurately as possible. A fully three-dimensional radiation-hydrodynamic treatment is presently still impossible because of the enormous numerical costs. In the case of a geometrically thin $\alpha$-disc (Shakura & Sunyaev 1973), where the disc thickness is significantly smaller than the disc diameter, the radial and vertical structures can be decoupled. Under the assumption of axial symmetry and by dividing the disc into concentric rings the determination of the vertical structure becomes a one-dimensional problem. The dissipated energy in each disc ring is radiated away at the disc surface, the energy flux can be expressed as effective temperature. As an approximation, it can be assumed that the disc rings are radiating like black bodies. An improvement of the models can be obtained by describing the disc rings by stellar atmosphere models of the same effective temperature, see e.g. Kiplinger (1979), Mayo et al. (1980) or La Dous (1989) in the case of cataclysmic variables and Kolykhalov & Sunyaev (1984) or Sun & Malkan (1989) in the case of AGN. Unfortunately, neither black bodies nor stellar atmosphere models reproduce spectra of accretion discs in an adequate manner (Wade 1988). Meyer & Meyer-Hofmeister (1982), Cannizzo & Wheeler (1984) and Cannizzo & Cameron (1988) calculated the radiative transfer using the diffusion assumption. This assumption is only valid at large optical depths, but the spectrum is generated at optical depths around $\tau\sim 1$, where neither the diffusion assumption nor the assumption of local thermodynamic equilibrium (LTE) is fulfilled. Only solving the radiative transfer equation self-consistently with the structure equations allows the calculation of realistic accretion disc spectra (Kriz & Hubeny 1986, Shaviv & Wehrse 1986, Stoerzer et al. 1994). In the last decade much work in this field was done e.g. by Hubeny & Hubeny (1997, 1998) and Hubeny et al. (2000, 2001). Following the path mentioned above, we have developed our program package [AcDc]{} ([Ac]{}cretion [D]{}isc [c]{}ode) for the detailed calculation of vertical structures and NLTE spectra of accretion discs. For each disc ring the equations of radiative and hydrostatic equilibrium as well as the NLTE rate equations for the population numbers of the atomic levels are solved consistently with the radiation transfer equation under the constraint of particle number and charge conservation. Full metal-line blanketing as well as irradiation of the accretion disc by the central object can be considered. 
By integrating the spectra of the individual disc rings, one obtains a complete disc spectrum for different inclination angles, where the spectral lines are Doppler shifted according to the radial component of the Kepler rotation. This is shown in Sect. 2, whereas in Sect. 3 we show first applications of the developed program package to examine the influence of different parameters on the vertical structure and the spectrum of an accretion disc model for the helium cataclysmic variable AMCVn. Vertical Structure of Accretion Discs ===================================== We assume a geometrically thin (disc thickness is significantly smaller than the disc diameter) stationary accretion disc ($\alpha$-disc, Shakura & Sunyaev 1973). We also assume that the mass of the disc is much smaller than the mass of the central object, so we can neglect self-gravitation. Introducing the surface mass density $\Sigma$ as $$\Sigma\,=\,2\,\int\limits_{0}^{H/2}\rho\,dz\,,$$ with mass density $\rho$, geometrical height $z$ above the midplane and total disc height $H$, the radial dependence of $\Sigma$ reads following Shakura & Sunyaev (1973) $$\nu\,\Sigma(R)\,=\,\frac{\dot{M}}{3\pi}\left(1\,-\,\left(\frac{R_\star}{R}\right)^{1/2} \right)\,.$$ Here, $R$ denotes the distance from the central object, $R_\star$ the radius of the central object, $\dot{M}$ the mass accretion rate and $\nu$ the kinematic viscosity, defined as $$\nu\,=\,\alpha\,c_{\rm{s}}\,H\,, \label{alpha}$$ with sound speed $c_{\rm{s}}$ and the parameter $\alpha$ being a measure of the efficiency of angular momentum transport through the disc. The radial distribution of the effective temperature $T_{\rm eff}$ then, following Shakura & Sunyaev (1973), can be described by $$T_{\rm eff}(R)\,=\,\left[\frac{3GM_\star\dot{M}}{8\pi\sigma R^3}\left(1\,-\,\left(\frac{R_\star}{R}\right)^{1/2} \right) \right]^{1/4} \label{tglg}$$ with $M_\star$ denoting the mass of the central object, $G$ the gravitational constant and $\sigma$ the Stefan-Boltzmann constant. The accretion disc is divided into a set of concentric disc rings (cf. Fig. \[ringe\]). For each ring the vertical structure is calculated by solving the set of equations described in the following two subsections, assuming a plane-parallel geometry. ![\[ringe\]Geometry of the accretion disc, divided into concentric rings.](1522fig1.eps){height="7cm"} Radiation Transfer Equation --------------------------- In order to compute the radiation field, which determines the atomic population numbers, the radiation transfer equation has to be solved. This equation describes the modification of the specific intensity $I_{\nu}$ of a ray due to absorption or emission along its path $ds$ through the accretion disc (cf. Fig. \[rt\]). ![\[rt\]Radiation transfer through the accretion disc layers.](1522fig2.eps){width="48.00000%"} The radiation transfer equation for the shown geometry then reads $$\mu\,\frac{\partial\,I_{\nu}(\nu,\mu,z)}{\partial\,z}\,=\,-\chi_{\nu}(\nu,z)\,I_{\nu}(\nu,\mu,z)\,+\,\eta_{\nu}(\nu,z)\,.$$ with the absorption coefficient $\chi_{\nu}$ and the emission coefficient $\eta_{\nu}$. Introducing the source function $S_{\nu}$ $$S_{\nu}\,=\,\frac{\eta_{\nu}}{\chi_{\nu}}\,$$ the radiation transfer equation is $$\mu\,\frac{\partial\,I_{\nu}(\nu,\mu,z)}{\partial\,z}\,=\,-\chi_{\nu}(\nu,z)\,(I_{\nu}(\nu,\mu,z)\,-\,S_{\nu}(\nu,\mu,z))\,.$$ The solution of this equation is a complicated problem, because the source function depends on the radiation field itself. 
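Before describing how the transfer equation is solved, the radial run of the effective temperature given by the relation above can be illustrated with a few lines of Python. The white-dwarf mass, radius and accretion rate used here are assumed values for an AM CVn-like system, not the parameters adopted later in this paper.

```python
import numpy as np

# Constants (CGS) and illustrative parameters; all system parameters are assumptions.
G      = 6.674e-8                     # cm^3 g^-1 s^-2
sigma  = 5.6704e-5                    # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
M_sun  = 1.989e33                     # g
M_star = 0.8 * M_sun                  # central-object mass (assumed)
R_star = 7.0e8                        # central-object radius in cm (assumed)
M_dot  = 1.0e-9 * M_sun / 3.156e7     # accretion rate: 1e-9 M_sun per year (assumed)

def t_eff(R):
    """Effective temperature of a stationary alpha-disc at radius R (relation [tglg])."""
    f = 1.0 - np.sqrt(R_star / R)
    return (3.0 * G * M_star * M_dot / (8.0 * np.pi * sigma * R**3) * f) ** 0.25

R = np.geomspace(1.05 * R_star, 30.0 * R_star, 6)
for Ri, Ti in zip(R, t_eff(R)):
    print(f"R = {Ri / R_star:5.2f} R_star   T_eff = {Ti:8.0f} K")
```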
Within an iteration scheme (see below), one solves the radiation transfer equation formally assuming the source function known (cf. Mihalas 1978). In our work this is done by a short characteristics method (Olson & Kunasz 1987). Irradiation of the disc by an external source with a given spectrum is accounted for by an appropriate boundary condition. Structure Equations ------------------- In order to obtain the atomic population numbers in NLTE all processes populating or de-populating an atomic level are considered. These are ionisation, recombination, excitation and de-excitation, caused either by radiation or collision. Each level $i$ of each ion has one rate equation describing the modification of the population density $n_i$ with time $t$: $$\frac{\partial n_i}{\partial t}\,=\,n_i\sum_{i\neq j}P_{ij}\,-\,\sum_{j\neq i}n_jP_{ji}\,.$$ $P_{ij}$ denotes the rate coefficients, consisting of radiative and collisional components. Since we assume a hydrostatic stratification, we have $$\frac{\partial n_i}{\partial t}\,=\,0\,.$$ Assuming that the radial component of the gravitation of the central object equals the centrifugal force of the Keplerian rotation of the disc, the hydrostatic equilibrium of the disc only is determined by the vertical component of the gravitation $$\frac{dP}{dz}\,=\,-\frac{G\,M_\star}{R^3}\,z\,\rho\,, \label{hydros}$$ with $P$ denoting the total (gas and radiation) pressure. Another fundamental equation of the vertical structure describes the local energy balance: $$E_{\rm{mech}}\,=\,E_{\rm{rad}}\,+\,E_{\rm{conv}}\,.$$ The viscously generated energy $E_{\rm{mech}}$ is equal to the radiative energy loss $E_{\rm{rad}}$; the convective energy $E_{\rm{conv}}$ term is neglected in the following. For standard $\alpha$-discs, the viscously generated energy reads $$E_{\rm{mech}}\,=\,\nu\Sigma\left(R\frac{d\omega}{dR}\right)^2\,=\,\frac{9}{4}\nu\Sigma\frac{G\,M_\star}{R^3}$$ and $$E_{\rm{rad}}\,=\,4\,\pi\,\int\limits_{0}^{\infty}(\eta(\nu,z)\,-\,\chi(\nu,z)\,J(\nu,z)\,)\,d\nu\,.$$ Introducing now the mass column depth $m$ as $$m(z)\,=\,\int\limits_{z}^{\infty}\rho\,dz\,$$ with the total column mass $M_0$ at the midplane the energy dissipated at each depth is $$E_{\rm{mech}}(m)\,=\,\frac{9}{4}\,\frac{G\,M_\star}{R^3}\,\nu(m)\,\rho\,.$$ Here $\nu(m)$ denotes the depth-dependent kinematic viscosity $$\nu(m)\,=\,a\bar{\nu}(\zeta + 1)\left(\frac{m}{M_0}\right)^\zeta\qquad\mbox{with}\qquad \zeta > 0$$ with the damping factor $\zeta$, which has been introduced by Kriz & Hubeny (1986) to avoid numerical instabilities at the disc surface. $\bar{\nu}$ is the depth-averaged kinematic viscosity with $$\bar{\nu}\,=\,\frac{1}{M_0}\int\limits_{0}^{M_0}\nu(m)\,dm\,.$$ $\bar{\nu}$ corresponds to $\nu$ in Eq. (\[alpha\]) and can be determined by $$\bar{\nu}\,=\,\frac{Rv_{\phi}}{Re}\,=\,\frac{\sqrt{G\,M_\star\,R}}{Re}$$ with $v_{\phi}$ denoting the Keplerian angular velocity and $Re$ the effective Reynolds number (Lynden-Bell & Pringle 1974). The prescription of the viscosity using the Reynolds number has the advantage that no further assumptions concerning first values of disc height and speed of sound have to be made, as would be necessary using the $\alpha$-prescription. Furthermore, it is possible to describe the depth dependency of the viscosity using the column mass as depth variable. The solution of the energy balance is obtained with a generalised Unsöld-Lucy method (Lucy 1964, Dreizler 2003). 
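To give a feeling for the numbers entering these structure equations, the short Python sketch below evaluates the radial run of the effective temperature, Eq. (\[tglg\]), and the depth-averaged viscosity $\bar{\nu}=\sqrt{G\,M_\star\,R}/Re$, for parameters of the order of those adopted for the AMCVn model in Sect. 3. The values are purely illustrative and are not produced by [AcDc]{}.

```python
import numpy as np

# Illustrative evaluation of Eq. (tglg) and of nu_bar = sqrt(G M R)/Re (cgs units).
G     = 6.674e-8                 # gravitational constant
sigma = 5.670e-5                 # Stefan-Boltzmann constant
Msun  = 1.989e33                 # solar mass [g]
yr    = 3.156e7                  # year [s]

Mstar = 1.1 * Msun               # mass of the primary (illustrative)
Rstar = 4.6e8                    # radius of the primary [cm]
Mdot  = 3.0e-9 * Msun / yr       # mass accretion rate [g/s]
Re    = 15000.0                  # effective Reynolds number

def t_eff(R):
    """Effective temperature of a disc ring at radius R (Shakura & Sunyaev 1973)."""
    f = 1.0 - np.sqrt(Rstar / R)
    return (3.0 * G * Mstar * Mdot / (8.0 * np.pi * sigma * R**3) * f) ** 0.25

def nu_bar(R):
    """Depth-averaged kinematic viscosity from the Reynolds-number prescription."""
    return np.sqrt(G * Mstar * R) / Re

for x in (1.4, 4.0, 7.0, 15.0):          # ring radii in units of Rstar
    R = x * Rstar
    print("R = %4.1f Rstar:  Teff = %8.0f K,  nu_bar = %.3e cm^2/s"
          % (x, t_eff(R), nu_bar(R)))
```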
Finally, the total particle density $N$ consists of the sum of the population numbers $n$ of the NLTE and LTE levels $l$ of all ions $i$ of all elements $x$ plus the electron density $n_{\rm{e}}$: $$N\,=\,\sum_{x=1}^{\rm{element}}\,\sum_{i=1}^{\rm{ion}}\,\left(\sum_{l=1}^{\rm{NLTE}}n_{xil}\,+\,\sum_{l=1}^{\rm{LTE}}n_{xil}^* \right)\,+\,n_{\rm{e}}\,.$$ The equation of charge conservation reads $$n_{\rm{e}}\,=\,\sum_{x=1}^{\rm{element}}\,\sum_{i=1}^{\rm{ion}}\,q(i)\left(\sum_{l=1}^{\rm{NLTE}}n_{xil}\,+\,\sum_{l=1}^{\rm{LTE}}n_{xil}^*\right)\,$$ with charge $q(i)$ of the ion $i$. The solution of the system of equations consisting of the radiation transfer equation, the equations of energy balance and hydrostatic equilibrium, the rate equations and the equations of charge and particle conservation is done in an iterative scheme, the so-called Accelerated Lambda Iteration (ALI, Werner & Husfeld 1985; Werner et al. 2003). The input parameters of our models are mass and radius of the central object, radius of the disc ring, mass accretion rate and Reynolds number. Furthermore, the atomic data and an appropriate frequency grid are specified (see e.g. Rauch & Deetjen 2003). LTE Start Models ---------------- To avoid numerical instabilities at the beginning of the NLTE calculations it is necessary to establish suitable start models under the assumption of LTE. In the following, we summarise the main steps creating such models, according to Hubeny (1990). To determine the atomic LTE population numbers $n$ the Boltzmann equation $$\frac{n_i}{n_j}\,=\,\frac{g_i}{g_j}\,e^{-(E_i-E_j)/kT}$$ and the Saha equation $$\frac{n_{\rm{up}}}{n_{\rm{low}}}\,=\,\frac{2}{n_{\rm{e}}}\left(\frac{2\pi m_{\rm{e}}kT}{h^2}\right)^\frac{3}{2}\frac{g_{\rm{up}}}{g_{\rm{low}}}e^{-(E_{\rm{up}}-E_{\rm{low}})/kT}$$ are used instead of the NLTE rate equations. Here, $g$ denotes the statistical weights, $E$ the excitation or ionisation energy, $m_{\rm{e}}$ the electron mass and $h$ the Planck constant. Furthermore, in the case of LTE the source function $S_{\nu}$ equals the Planck function $B_{\nu}$ $$S_{\nu}\,\equiv\,B_{\nu}\,.$$ To get an analytical expression for the vertical temperature structure one combines the first momentum of the specific intensity and the equation of the energy balance. Then the equation for the vertical temperature structure in the case of LTE finally reads $$T^4\,=\,\frac{3}{4}T_{\rm eff}^4\left(\tau_{\rm{R}}\left(1-\frac{\tau_{\rm{R}}}{2\tau_{\rm{tot}}}\right)+\frac{1}{\sqrt{3}}+\frac{1}{3\epsilon\tau_{\rm{tot}}}\frac{w}{\bar{w}} \right)\,$$ with $\epsilon\,=\,\kappa_{\rm{B}}M_0/\tau_{\rm{tot}}$ and $$\kappa_{\rm{B}}\,\,=\,\frac{1}{B}\int\limits_{0}^{\infty}(\kappa_{\nu}/\rho)\,B_{\nu}\,d\nu\,.$$ The equation of the hydrostatic equilibrium has the same form as in the case of NLTE, but can alternatively be transformed into a differential equation of second order: $$\frac{d^2P}{dm^2}\,=\,-\frac{c_{\rm{s}}^2}{P}\frac{G\,M_\star}{R^3}\,. \label{hydros2}$$ The solution of this equation is done numerically. 
The upper boundary condition reads, following Hubeny (1990), $$P(1)\,=\,\frac{m_1c_{\rm{s}}^2}{H_{\rm{g}}}\frac{1}{f(\frac{z-H_{\rm{r}}}{H_{\rm{g}}})}$$ with $$f(x)\,=\,\frac{\sqrt{\pi}}{2}e^{x^2}\,k(x)$$ and $$k(x)\,=\,\frac{2}{\sqrt{\pi}}\int\limits_{x}^{\infty}e^{-t^2}dt\,.$$ Here, $H_{\rm{g}}$ and $H_{\rm{r}}$ denote the gas and radiation pressure scale heights, defined as $$\begin{aligned} H_{\rm{g}}\,&=&\,\sqrt{\frac{2c_{\rm{g}}^2}{GM_\star/R^3}}\,,\\ H_{\rm{r}}\,&=&\,\frac{\sigma}{c}T_{\rm eff}^4\kappa_H\frac{GM_\star}{R^3}\end{aligned}$$ with $c_{\rm{g}}$ denoting the sound speed associated with the gas pressure, $$c_{\rm{g}}^2\,=\,\frac{P_{\rm{g}}}{\rho}\,.$$ The Synthetic Spectrum ---------------------- Having calculated the vertical structures and spectra of the individual disc rings, we integrate the ring spectra to obtain the spectrum of the whole accretion disc: $$I(\nu,i)\,=\,\cos(i)\int\limits_{R_{\rm{i}}}^{R_{\rm{o}}}\int\limits_{0}^{2\pi}I(\nu,i,\phi,r)\,r\,d\phi\,dr\,.$$ Here, $R_{\rm{i}}$ and $R_{\rm{o}}$ denote the inner and outer radius of the disc, and $i$ is the inclination angle (cf. Fig. \[Scheibe2\]). The integration over the azimuthal angle $\phi$ is done in intervals of $1^\circ$. ![Accretion disc geometry.[]{data-label="Scheibe2"}](1522fig3.eps){width="50.00000%"} In this last step, spectral lines become broadened due to the Keplerian rotation of the disc. Synthetic Spectrum for AMCVn ============================ In order to test and gain experience with [AcDc]{}, our first application was the calculation of the synthetic spectrum of AMCVn for comparison with the results of a recent analysis performed by Nasser et al. (2001). AMCVn ----- AMCVn is the prototype of the so-called AMCVn stars or helium cataclysmics, a subgroup of the cataclysmic variables. They are thought to be the end product of the evolution of close binary systems (El-Khoury & Wickramasinghe 2000). The first observations of AMCVn were made by Malmquist (1936) and Humason & Zwicky (1947). Greenstein & Matthews (1957) classified AMCVn as a helium-rich white dwarf, Burbidge et al. (1967) as a quasi-stellar object, and Wampler (1967) as a hot star. After the discovery of periodic variability in the light curve (Smak 1967), Warner & Robinson (1972) proposed AMCVn to be a close binary system with ongoing mass transfer. These systems are now believed to be interacting white dwarf binary systems, consisting of a degenerate C/O white dwarf as primary and a semi-degenerate low-mass secondary, composed of almost pure helium. The secondary fills its Roche volume and loses mass via Roche-lobe overflow onto the primary, generating an accretion disc around it. AMCVn itself has been analysed by Nasser et al. (2001) using TLUSDISC (Hubeny 1990). They showed that the system consists of a primary of about 1.1$\rm M_\odot$ and a secondary of about 0.09$\rm M_\odot$. The radius of the primary is 4600 km, and the mass accretion rate is about $3\cdot 10^{-9}\,\rm M_\odot/yr$. For the calculations, we assumed an inner radius of 1.4$\rm{R_\star}$ and an outer radius of 15$\rm{R_\star}$; both values were varied to explore the influence of the disc size on the spectrum. The Reynolds number was set to 15000 and the damping factor $\zeta$ was chosen to be 0.001. Convective energy transport is not yet included in [AcDc]{}, so we had to neglect convection; fortunately, this did not cause numerical problems in the outer part of the disc.
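As an aside to the synthetic-spectrum step described above, the following Python sketch illustrates the azimuthal integration with Kepler-shifted ring spectra. It assumes that each ring spectrum is given as intensity versus frequency on a common grid and applies a non-relativistic Doppler shift with the line-of-sight component of the Kepler velocity; the ring radii and the toy line profile are placeholders, not output of [AcDc]{}.

```python
import numpy as np

G, c  = 6.674e-8, 2.998e10          # cgs
Mstar = 1.1 * 1.989e33              # g (illustrative)
inc   = np.radians(36.0)            # inclination angle

nu = np.linspace(5.0e14, 8.0e14, 2000)           # common frequency grid [Hz]

def disc_spectrum(ring_radii, ring_intensities, nu, inc, nphi=360):
    """Integrate I(nu,i) = cos(i) int int I(nu,i,phi,r) r dphi dr with
    Doppler-shifted ring spectra (1 deg steps in phi, trapezoid rule in r)."""
    phi  = np.radians(np.arange(0.0, 360.0, 360.0 / nphi))
    spec = np.zeros((len(ring_radii), len(nu)))
    for j, (r, I_ring) in enumerate(zip(ring_radii, ring_intensities)):
        v_kep = np.sqrt(G * Mstar / r)                 # Kepler velocity of the ring
        acc   = np.zeros_like(nu)
        for p in phi:
            v_los = v_kep * np.sin(inc) * np.sin(p)    # line-of-sight component
            # evaluate the ring spectrum at the Doppler-shifted frequency
            acc += np.interp(nu * (1.0 - v_los / c), nu, I_ring)
        spec[j] = acc * (2.0 * np.pi / nphi) * r       # phi integration, weight r
    return np.cos(inc) * np.trapz(spec, ring_radii, axis=0)   # radial integration

# placeholder ring spectra: flat continuum with one absorption line at 6.9e14 Hz
radii = np.linspace(1.4, 13.0, 30) * 4.6e8
rings = [1.0 - 0.5 * np.exp(-0.5 * ((nu - 6.9e14) / 2.0e11) ** 2) for _ in radii]
flux  = disc_spectrum(radii, rings, nu, inc)
print("integrated disc spectrum computed on", flux.size, "frequency points")
```

In [AcDc]{} the azimuthal step is likewise $1^\circ$, but the ring spectra are of course the angle-dependent intensities from the NLTE models rather than the toy profiles used here.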
The number ratio H/He was set to $10^{-5}$, and abundances of carbon, nitrogen, oxygen, and silicon were assumed to be solar. The self-consistent inclusion of metals is an improvement over the Nasser et al. (2001) models. Some details concerning the ions, levels and lines we used in our calculations are shown in Table 1. The atomic data are taken from the Opacity Project (Seaton et al. 1994) and the Kurucz line lists (1991). The Lyman, Balmer and Paschen series of H[i]{}, the Lyman series of He[i]{} and the Lyman, Balmer, Paschen and Brackett series of He[ii]{}, some Lyman and Balmer lines of metals as well as resonance lines are Stark broadened; all other line profiles are Doppler broadened. For the line broadening of H we use VCS tables (Lemke 1997); for He[i]{} we use BCS tables (Barnard, Cooper & Shamey 1969) and Griem tables (Griem 1974); for He[ii]{} we use VCS tables (Schöning & Butler 1989). In total, we calculated 38 individual disc rings.

  ----------------- ------------- ------- ------------ ------------- -------
  Ion               NLTE levels   lines   Ion          NLTE levels   lines
  H[i]{}            16            29      N[ii]{}      2             0
  H[ii]{}           1             -       N[iii]{}     34            67
  He[i]{}           44            31      N[iv]{}      34            53
  He[ii]{}          32            59      N[v]{}       36            56
  He[iii]{}         1             -       N[vi]{}      1             0
  C[ii]{}           16            25      O[ii]{}      26            36
  C[iii]{}          58            124     O[iii]{}     28            37
  C[iv]{}           36            86      O[iv]{}      11            5
  C[v]{}            1             0       O[v]{}       6             1
                                          O[vi]{}      36            48
                                          O[vii]{}     1             0
  ----------------- ------------- ------- ------------ ------------- -------

  : Some details concerning the ions and the number of NLTE levels and lines used in our calculations.[]{data-label="tab_atom"}

Influence of Disc Parameters on the Spectrum -------------------------------------------- First, we examined the influence of different inner and outer radii on the spectrum of the disc. We varied the inner radius from 1.4 to 2$\rm R_\star$ and the outer radius from 11 to 15$\rm R_\star$. ![Optical spectra of three accretion disc models with outer radii of 11$\rm R_\star$ (solid line), 13$\rm R_\star$ (dotted) and 15$\rm R_\star$ (dashed). The mass accretion rate is $3\cdot10^{-9}\,\rm M_\odot/yr$ and the inclination is $10^\circ$.[]{data-label="AMCVn_varRad1"}](1522fig4.ps){width="50.00000%"} ![Detail of Fig. \[AMCVn\_varRad1\] with three accretion disc models with outer radii of 11$\rm R_\star$ (solid line), 13$\rm R_\star$ (dotted) and 15$\rm R_\star$ (dashed).[]{data-label="AMCVn_varRad2"}](1522fig5.ps){width="50.00000%"} Figure \[AMCVn\_varRad1\] shows the optical spectrum of three disc models with outer radii of 11$\rm R_\star$, 13$\rm R_\star$ and 15$\rm R_\star$. The mass accretion rate is $3\cdot10^{-9}\,\rm M_\odot/yr$ and the inclination is $10^\circ$. The total flux clearly increases with increasing outer radius because of the larger radiating surface. Figure \[AMCVn\_varRad2\] shows details of the normalised spectrum. The line cores of He[i]{} become deeper with increasing outer radius because the large outer regions of the disc are cool enough to show strong He[i]{} lines. Variation of the inner radius shows similar, but smaller, effects on the lines. Second, we analysed the influence of different inclination angles on the spectrum. ![Optical spectra of an accretion disc model seen under three different inclination angles, 10$^\circ$ (solid line), 36$^\circ$ (dotted) and 60$^\circ$ (dashed).
The inner radius of the disc is 1.4$\rm R_\star$, the outer radius 15$\rm R_\star$ and the mass accretion rate is $3\cdot10^{-9}\,\rm M_\odot/yr$.[]{data-label="AMCVn_varInk1"}](1522fig6.ps){width="50.00000%"} ![Detail of Fig. \[AMCVn\_varInk1\] with an accretion disc model seen under three different inclination angles, 10$^\circ$ (solid line), 36$^\circ$ (dotted) and 60$^\circ$ (dashed).[]{data-label="AMCVn_varInk2"}](1522fig7.ps){width="50.00000%"} Figure \[AMCVn\_varInk1\] shows a disc model seen under three different inclination angles. The total flux decreases with increasing inclination angle because of the decreasing projected surface. As shown in Fig. \[AMCVn\_varInk2\], narrow lines at small inclination angles become broad lines at high inclination angles because of the increasing rotational broadening. Vertical Structure ------------------ ![Vertical distribution of temperature, density and Rosseland optical depth of three representative disc rings with radii of 1.4$\rm R_\star$ (solid line), 7$\rm R_\star$ (dotted line) and 15$\rm R_\star$ (dashed line).[]{data-label="AMCVn_vertikal"}](1522fig8.ps){height="11cm"} In Fig. \[AMCVn\_vertikal\] the vertical distribution of temperature, density and Rosseland optical depth of three representative disc rings at 1.4$\rm R_\star$, 7$\rm R_\star$ and 15$\rm R_\star$ is shown. According to Eq. (\[tglg\]) disc rings from the outer part of the disc are cooler than inner disc rings. The temperature decreases monotonously from the midplane to the surface of the rings. Disc rings composed only of H and He, but without C, N, and O show an increase of the temperature in the outer layers near the surface. Model vs. Observation --------------------- Finally we compared our accretion disc model spectra with an observed spectrum of AMCVn, obtained at the 6m BAT of the SAO in May 2002. Figure \[AMCVn\_BeobMod1\] shows the observed spectrum and three model spectra with outer radii of 11$\rm R_\star$, 13$\rm R_\star$ and 15$\rm R_\star$, respectively. The inner radius is 1.4$\rm R_\star$, the inclination is 36$^\circ$ and the mass accretion rate is $3\cdot10^{-9}\,\rm M_\odot/yr$. Owing to the larger and at the same time cooler radiating surface, a larger outer radius leads to an increase of the spectral line strengths of neutral helium compared to those of ionized helium. ![Comparison of the observed spectrum (thin line) with model spectra of the AMCVn disc for 3 different outer radii.[]{data-label="AMCVn_BeobMod1"}](1522fig9.ps){width="50.00000%"} Figure \[AMCVn\_BeobMod2\] shows the influence of the inclination angle on the spectrum. The disc models with inclination angles of 23$^\circ$, 48$^\circ$ and 70$^\circ$ extend from 1.4$\rm R_\star$ to 13$\rm R_\star$ and the mass accretion rate is $3\cdot10^{-9}\,\rm M_\odot/yr$. The larger the inclination angle, the stronger the spectral lines are broadened due to the increasing radial component of the Kepler velocity. ![Comparison of the observed spectrum (thin line) with model spectra of the AMCVn disc for 3 different inclination angles.[]{data-label="AMCVn_BeobMod2"}](1522fg10.ps){width="50.00000%"} In Fig. \[AMCVn\_BeobModbest\] our best fit is shown. The disc model extends from 1.4$\rm R_\star$ to 13$\rm R_\star$, the inclination is 36$^\circ$ and the mass accretion rate is $3\cdot10^{-9}\,\rm M_\odot/yr$. The shapes of the He[i]{} lines are in good agreement with the observation. 
![Comparison of the observed spectrum (thin line) with a model spectrum of the AMCVn disc.[]{data-label="AMCVn_BeobModbest"}](1522fg11.ps){width="50.00000%"} Conclusions =========== 1. We developed the new code [AcDc]{} for detailed NLTE calculations of accretion disc spectra of cataclysmic variables and compact X-ray binaries, solving the radiation transfer equation self-consistently together with the structure equations under consideration of full metal line blanketing as well as irradiation of the accretion disc by the central object. 2. Variation of the extension of the disc has a significant influence on the spectrum of the AMCVn disc. Lines of He[i]{} become deeper with increasing outer radius. In addition, the inclination angle of the disc has a clear influence on the spectral lines by Kepler rotation. 3. The comparison of model spectra with an observed spectrum of AMCVn yields 1.4$\rm R_\star$ for the inner radius and 13$\rm R_\star$ for the outer radius, the inclination is found to be 36$^\circ$ and the mass accretion rate $3\cdot10^{-9}\,\rm M_\odot/yr$. This is in agreement with results of Nasser et al. (2001). For future work, we plan to include further physical processes in the program, e.g. Comptonisation as well as convective energy transport, into the program. Furthermore, we have started to study winds from the accretion disc. A depth dependent atomic model will be employed to overcome numerical instabilities in irradiated discs due to extremely weak populated atomic levels at some depths. This research was supported by the Deutsche Forschungsgemeinschaft, DFG grant We 1312/24-1,2 and by the DLR under grant 50OR0201 (TR). We thank Ivan Hubeny for many helpful discussions. Barnard, A.J., Cooper, J., & Shamey, L.J. 1969, A&A, 1, 28 Burbidge, G., Burbidge, M., & Hoyle, F. 1967, ApJ, 147, 1219 Cannizzo, J.K., & Wheeler, J.C. 1984, ApJS, 55, 367 Cannizzo, J.K., & Cameron, A.G.W. 1988, ApJ, 330, 327 Dreizler, S. 2003, in Stellar Atmosphere Modelling, ed. I. Hubeny, D. Mihalas, and K. Werner, ASP Conference Proceedings, 288, 69 El-Khoury, W., & Wickramasinghe, D. 2000, A&A, 358, 154 Greenstein, J.L., & Matthews, M.S. 1957, ApJ, 126, 14 Griem, H.R. 1974, Spectral line broadening by plasmas, Pure and Applied Physics, New York, Academic Press, 1974 Hubeny, I. 1990, ApJ, 351, 632 Hubeny, I., & Hubeny, V. 1997, ApJ, 484, L37 Hubeny, I., & Hubeny, V. 1998, ApJ, 505, 558 Hubeny, I., Agol, E., Blaes, O., & Krolik, J.H. 2000, ApJ, 533, 710 Hubeny, I., Blaes, O., Krolik, J.H., & Agol, E. 2001, ApJ, 559, 680 Humason, M.L., & Zwicky, F. 1947, ApJ, 105, 85 Kiplinger, A.L. 1979, ApJ, 234, 997 Kolykhalov, P.I., & Sunyaev, R.A. 1984, Advances in Space Research, 3, 249 Kriz, S., & Hubeny, I. 1986, Bulletin of the Astronomical Institutes of Czechoslovakia, 37, 129 Kurucz, R. 1991, NATO ASI Series C, 341, 441 La Dous, C. 1989, A&A, 211, 131 Lemke, M. 1997, A&AS, 122, 285 Lucy, L.B. 1964, in 1st Harvard-Smithsonian Conference on Stellar Atmospheres, SAO Special Report No. 167, 93 Lynden-Bell, D., & Pringle, J.E. 1974, MNRAS, 168, 603 Malmquist, K.G. 1936, in Stockholms Observatoriums Annaler Vol. 12, 7, 130 Mayo, S.K., Wickramasinghe, D.T., & Whelan, J.A.J. 1980, MNRAS, 193, 793 Meyer, F., & Meyer-Hofmeister, E. 1982, A&A, 106, 34 Mihalas, D. 1978, Stellar Atmospheres, San Francisco, W. H. Freeman and Co. Nasser, M.R., Solheim, J.-E., & Semionoff, D.A. 2001, A&A, 373, 222 Olson, G.L., & Kunasz, P.B. 1987, JQSRT, 38, 325 Rauch, T., & Deetjen, J.L. 2003, in Stellar Atmosphere Modeling, ed. I. 
Hubeny, D. Mihalas, and K. Werner, ASP Conference Proceedings, 288, 103 Schöning, T., & Butler, K. 1989, A&AS, 78, 51 Seaton, M., Yan, Y., Mihalas, D., & Pradhan, A. 1994, MNRAS, 266, 805 Shakura, N.I., & Sunyaev, R.A. 1973, A&A, 24, 337 Shaviv, G., & Wehrse, R. 1986, A&A, 159, L5 Smak, J. 1967, Acta Astronomica, 17, 255 Stoerzer, H., Hauschildt, P.H., & Allard, F. 1994, ApJL, 437, L91 Sun, W., & Malkan, M.A. 1989, ApJ, 346, 68 Wade, R.A. 1988, ApJ, 335, 394 Wampler, E.J. 1967, ApJL, 149, L101 Warner, B., & Robinson, E.L. 1972, MNRAS, 159, 101 Werner, K., & Husfeld, D. 1985, A&A, 148, 417 Werner, K., Deetjen, J.L., Dreizler, S., Nagel, T., Rauch, T., & Schuh, S. L. 2003, in Stellar Atmosphere Modeling, ed. I. Hubeny, D. Mihalas, and K. Werner, ASP Conference Proceedings, 288, 31
--- abstract: | For an integer $b\geq 2$, let $s_b(n)$ be the sum of the digits of the integer $n$ when written in base $b$, and let $S_b(N)=\sum_{n=0}^{N-1}s_b(n)$. Several inequalities are derived for $S_b(N)$. Some of the inequalities can be interpreted as comparing the average value of $s_b(n)$ over integer intervals of certain lengths to the average value of a beginning subinterval. Two of the main results are applied to derive a pair of “approximate convexity" inequalities for a sequence of Takagi-like functions. One of these inequalities was discovered recently via a different method by V. Lev; the other is new. : 11A63 (primary); 26A27, 26A51 (secondary) : Digital sum, Cumulative digital sum, Takagi function, Approximate convexity. title: 'Digital sum inequalities and approximate convexity of Takagi-type functions' --- Introduction ============ Fix an integer $b\geq 2$, and for $n\in{\mathbb{N}}$, write the $b$-ary representation of $n$ as $n=\sum_{j=0}^\infty \alpha_j(n)b^j$, where $\alpha_j(n):=\alpha_j(n;b)\in\{0,1,\dots,b-1\}$ for each $j$. Define the $b$-ary digital sum and cumulative $b$-ary digital sum respectively by $$s_b(n)=\sum_{j=0}^\infty \alpha_j(n), \qquad n\in{\mathbb{Z}}_+,$$ and $$S_b(N)=\sum_{n=0}^{N-1}s_b(n), \qquad N\in{\mathbb{Z}}_+,$$ where ${\mathbb{Z}}_+$ denotes the set of nonnegative integers, and we make the usual convention that the empty sum is equal to zero. These digital sums have been well investigated in the literature, especially for the case $b=2$. The investigations have mainly focused in two directions: finding exact or asymptotic formulas for $S_b(N)$ (e.g. Trollope [@Trollope], Delange [@Delange]) or determining the probability distribution of $s_b(n)$ as $n$ ranges over certain subsets of the positive integers (e.g. Mauduit and Sárközy [@Mauduit], Rivat [@Rivat] or Drmota, Mauduit and Rivat [@Drmota], among many others). Stolarsky [@Stolarsky] discusses a wide range of applications of digital sums. The aim of the present article is to prove a number of inequalities for $S_b(N)$. One of these inequalities, for the ternary case, came about naturally in the author’s quest to find a simpler proof of a recent result of Lev [@Lev] concerning the “approximate convexity" of a particular continuous but nowhere differentiable function akin to the Takagi function. The other results all concern general $b$. They are either needed in the proof of the above-mentioned inequality, or are further developments of special cases of it. Some of the inequalities are most elegantly stated in terms of the average values $$\bar{s}_b(s,t):=\frac{S_b(t)-S_b(s)}{t-s}=\frac{1}{t-s}\sum_{n=s}^{t-1}s_b(n), \qquad 0\leq s<t. \label{eq:averages}$$ The inequalities of Theorems \[thm:general-bound\] and \[thm:b-times-longer\] below compare the average value of $s_b(n)$ over certain intervals of integers to the average value over a beginning subinterval. The first inequality is in effect a strong form of superadditivity. It is known for the case $b=2$; see, for instance, section 4 of McIlroy [@McIlroy], where the inequality is used to determine the extremal cost in a merging process. \[thm:-strong-super-additivity\] For any nonnegative integers $n$ and $m$, we have $$S_b(m+n)\geq S_b(m)+S_b(n)+\min\{m,n\}. \label{eq:super-addivity}$$ Theorem \[thm:-strong-super-additivity\] is used to prove the following result, which specializes to the case $b=3$ and is a number-theoretic version of Theorem 3 of Lev [@Lev]. 
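Before turning to that result, note that all quantities involved are easy to experiment with numerically. The short Python sketch below computes $s_b(n)$ and $S_b(N)$ directly from the definitions and checks the inequality of Theorem \[thm:-strong-super-additivity\] by brute force on a small range; it is a sanity check only and plays no role in the proofs.

```python
def s(n, b):
    """Sum of the base-b digits of n."""
    total = 0
    while n > 0:
        n, r = divmod(n, b)
        total += r
    return total

def S_table(N, b):
    """S_b(0), S_b(1), ..., S_b(N) computed from the definition."""
    tab = [0]
    for n in range(N):
        tab.append(tab[-1] + s(n, b))
    return tab

# brute-force check of S_b(m+n) >= S_b(m) + S_b(n) + min(m, n)
for b in (2, 3, 5, 10):
    S = S_table(200, b)
    assert all(S[m + n] >= S[m] + S[n] + min(m, n)
               for m in range(100) for n in range(100))
    # equality when n is a power of b and m < n (cf. the remark following the proof)
    assert all(S[m + b**2] == S[m] + S[b**2] + m for m in range(b**2))
print("superadditivity inequality verified on the test range")
```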
See Section \[sec:application\], where this connection is outlined in detail. \[thm:ternary-inequality\] For any integers $k$, $l$ and $m$ with $0\leq l\leq k\leq m$, we have $$S_3(m+k+l)+S_3(m-k)+S_3(m-l)-3S_3(m)\leq 2k+l. \label{eq:ternary-inequality}$$ Two special cases of the above inequality are particularly interesting: the case $l=0$ and the case $l=k$. For $l=0$, (\[eq:ternary-inequality\]) reduces to $$S_3(m+k)+S_3(m-k)-2S_3(m)\leq 2k.$$ In fact, strict inequality holds here, and the factor 2 on the right cannot be replaced by any smaller number. These observations follow from the following, more general result. \[thm:general-bound\] Let $b\geq 2$ be arbitrary. (i) For any nonnegative integers $k$ and $m$ with $k\leq m$, we have $$S_b(m+k)+S_b(m-k)-2S_b(m)\leq \left[\frac{b+1}{2}\right]k, \label{eq:general-bound}$$ where $[x]$ denotes the greatest integer less than or equal to $x$. The constant $[(b+1)/2]$ cannot be replaced by a smaller constant. However, strict inequality holds in (\[eq:general-bound\]) when $b$ is odd. (ii) For any nonnegative integers $n$ and $k$, we have $$\bar{s}_b(n,n+2k)\leq \bar{s}_b(n,n+k)+\frac12\left[\frac{b+1}{2}\right]. \label{eq:general-bound-av}$$ (For the case $b=2$, this result was proved previously by the present author; see [@Allaart].) At the other extreme, the case $l=k$ of (\[eq:ternary-inequality\]) simplifies to $$S_3(m+2k)+2S_3(m-k)-3S_3(m)\leq 3k.$$ Equality obtains when $k=m$ (see Lemma \[lem:base-multiply\] below). Setting $n=m-k$ and dividing by $3k$, the last inequality can be written as $$\bar{s}_3(n,n+3k)\leq \bar{s}_3(n,n+k)+1.$$ This extends to arbitrary $b\geq 2$ as in the following theorem, which states that the average value of $s_b(n)$ over any integer interval of length $bk$ is at most $(b-1)/2$ greater than the average over the first $k$ integers in the interval. \[thm:b-times-longer\] For each $b\geq 2$ and for all $n,k\geq 0$, we have $$\bar{s}_b(n,n+bk)\leq \bar{s}_b(n,n+k)+\frac{b-1}{2}. \label{eq:times-b-average}$$ Moreover, equality obtains for each $k$ when $n=0$. Note that for $b=2$, (\[eq:general-bound-av\]) and (\[eq:times-b-average\]) give the same result. The proofs of Theorems \[thm:-strong-super-additivity\]-\[thm:b-times-longer\] are given in the next section. Two of the theorems are then used in Section \[sec:application\] to derive a pair of inequalities for a sequence of Takagi-like functions. Proofs of the main results ========================== Throughout this section, let $b\geq 2$ be fixed. It is convenient to introduce the notation $$\Sigma_b(s,t):=\sum_{r=s}^{t-1} s_b(r)=S_b(t)-S_b(s), \qquad s<t. \label{eq:Sigma-notation}$$ Thus, $\Sigma_b(s,t)$ is the sum of all the $b$-ary digits needed to write the block of consecutive integers $s,s+1,\dots,t-1$. When there is no confusion possible about the base $b$, the subscript $b$ will frequently be dropped throughout this paper. We first state a useful lemma. \[lem:complementation\] For any nonnegative integers $p, j, k$ and $n$ with $0\leq k\leq n\leq jb^p$ and $j\leq b$, $$\Sigma(jb^p-k,jb^p)-\Sigma(n-k,n)=\Sigma(jb^p-n,jb^p-n+k)-\Sigma(0,k).$$ This follows at once since $s_b(jb^p-r-1)+s_b(r)=(b-1)p+j-1$, independent of $r$, for $0\leq r<jb^p$. We will also use the following, easily verified fact: for any nonnegative integers $n$ and $k$, $s_b(n+b^k)\leq s_b(n)+1$. Applying this repeatedly, we obtain the useful estimate $$s_b\left(n+\sum_{i=1}^k b^{p_i}\right)\leq s_b(n)+k, \qquad n\in{\mathbb{Z}}_+, \quad p_1,\dots,p_k\in{\mathbb{Z}}_+. \label{eq:adding-r-powers}$$ The statement is obvious for the case $m=n=0$. We proceed by induction on $m+n$.
Let $N\in{\mathbb{N}}$, and assume holds for all pairs $(m,n)$ with $m+n<N$. Suppose $m$ and $n$ are such that $m+n=N$. By symmetry we may assume that $m\geq n$. In terms of the notation , we must show that $$\Sigma(m,m+n)\geq \Sigma(0,n)+n.$$ This is trivial when $n=0$, so assume $n\geq 1$. We consider two cases: The range $\{m+1,\dots,m+n-1\}$ does not contain a power of $b$. In this case, there is $p\in{\mathbb{Z}}_+$ such that $b^p\leq m\leq m+n-1<b^{p+1}$. So we can subtract 1 from the first digit of each number $m,\dots,m+n-1$ and obtain $$\begin{aligned} \Sigma(m,m+n)&=n+\Sigma(m-b^p,m+n-b^p)\\ &=n+S(m+n-b^p)-S(m-b^p)\\ &\geq n+S(n)+\min\{m-b^p,n\}\\ &\geq n+\Sigma(0,n).\end{aligned}$$ [*Case 2.*]{} The range $\{m+1,\dots,m+n-1\}$ contains a power of $b$; say $m+j=b^p$, where $1\leq j<n$. Since subtracting $b^p$ maps $\{m+j,\dots,m+n-1\}$ onto $\{0,\dots,n-j-1\}$, we see that $$\Sigma(m+j,m+n)=\Sigma(0,n-j)+n-j. \label{eq:first-part}$$ On the other hand, by Lemma \[lem:complementation\], $$\Sigma(m,m+j)-\Sigma(n-j,n)=\Sigma(m+j-n,m+2j-n)-\Sigma(0,j).$$ Since $j<n$, the induction hypothesis implies $$\begin{aligned} \Sigma(m+j-n,m+2j-n)&=S((m+j-n)+j)-S(m+j-n)\\ &\geq S(j)+j=\Sigma(0,j)+j.\end{aligned}$$ Hence, $$\Sigma(m,m+j)-\Sigma(n-j,n)\geq j. \label{eq:second-part}$$ Combining and yields $$\begin{aligned} \Sigma(m,m+n)-\Sigma(0,n)&=\Sigma(m+j,m+n)+\Sigma(m,m+j)-\Sigma(0,n)\\ &=\Sigma(m,m+j)-\Sigma(n-j,n)+n-j\\ &\geq j+(n-j)=n,\end{aligned}$$ as required. [The inequality is sharp in the sense that equality holds whenever $n$ is a power of $b$ and $m<n$. ]{} The following identity is well known for the case $b=2$; see McIlroy [@McIlroy eq. (4a)]. \[lem:base-multiply\] For each $m\in{\mathbb{N}}$, $$S_b(bm)=bS(m)+\frac{b(b-1)m}{2}.$$ For each number $j\in\{0,\dots,m-1\}$ and $r\in\{0,\dots,b-1\}$, $s_b(bj+r)=s_b(j)+r$. Summing over $r$ and then over $j$ gives the lemma. Note first that can be stated equivalently as $$\Sigma(m,m+k+l)-\Sigma(m-k,m)-\Sigma(m-l,m)\leq 2k+l, \label{eq:ternary-inequality-sigma-form}$$ where the omitted subscript is understood to be $b=3$. We use induction on the sum $m+k+l$. The statement is trivial for all $m$ when $k=l=0$. Let $N\in{\mathbb{N}}$, and assume holds whenever $m+k+l<N$. Suppose $(k,l,m)$ is a triple with $0\leq l\leq k\leq m$ and $m+k+l=N$. If $m\leq 2(k+l)$, then $2m-k-l\leq m+k+l$ and so a double application of Theorem \[thm:-strong-super-additivity\] gives $$\begin{aligned} S(3m)&\geq S(m+k+l)+S(2m-k-l)+(2m-k-l)\\ &\geq S(m+k+l)+S(m-k)+S(m-l)+(m-k)+(2m-k-l)\\ &=S(m+k+l)+S(m-k)+S(m-l)+3m-(2k+l).\end{aligned}$$ On the other hand, $S(3m)=3S(m)+3m$ by Lemma \[lem:base-multiply\], and combining these results gives . In the remainder of the proof we may therefore assume that $m>2(k+l)$. Since $l\leq k$, this implies that $$m+k+l<2(m-l), \label{eq:at-most-double}$$ and $$m+k+l<3(m-k). \label{eq:at-most-triple}$$ Hence, the range $\{m-k,\dots,m+k+l-1\}$ contains at most one power of $3$. [*Case 1.*]{} The range $\{m-k+1,\dots,m-1\}$ does not contain a power of $3$. Then there is $i\in\{1,2\}$ and $p\in{\mathbb{Z}}_+$ such that $3^p i\leq m-k\leq m-1<3^p(i+1)$, so the numbers $m-k,\dots,m-1$ all have leading ternary digit $i$. 
Hence, $$\Sigma(m-k,m)=\Sigma(m-k-3^p i,m-3^p i)+ki,$$ and likewise, $$\Sigma(m-l,m)=\Sigma(m-l-3^p i,m-3^p i)+li.$$ On the other hand, for each $n\in{\mathbb{N}}$ we have $s(n+3^p i)\leq s(n)+i$ in view of , and therefore $$\Sigma(m,m+k+l)\leq\Sigma(m-3^p i,m+k+l-3^p i)+(k+l)i.$$ Hence, setting $m'=m-3^p i$, we have $$\begin{aligned} \Sigma(m,m+k+l)-&\Sigma(m-k,m)-\Sigma(m-l,m)\\ &\leq \Sigma(m',m'+k+l)-\Sigma(m'-k,m')-\Sigma(m'-l,m')\\ &\leq 2k+l,\end{aligned}$$ where the last inequality uses the induction hypothesis. [*Case 2.*]{} The range $\{m-k+1,\dots,m-1\}$ contains a power of $3$. Say $m-j=3^p$, where $0<j<k$. We consider two subcases: The power of $3$ is among $m-k+1,\dots,m-l$, so $l\leq j<k$. By the induction hypothesis (with $j$ in place of $k$), $$\Sigma(m,m+j+l)-\Sigma(m-j,m)-\Sigma(m-l,m)\leq 2j+l. \label{eq:first-half}$$ Next, placing a digit “2" in front of the numbers $m-k,\dots,m-j-1$ increases their digital sums by exactly 2, so that $$\Sigma(m-k,m-j)=\Sigma(m-k+2\cdot 3^p,m-j+2\cdot 3^p)-2(k-j).$$ By , the numbers $m+j+l,\dots,m+k+l-1$ are strictly smaller than $3^{p+1}$. Hence, by Theorem \[thm:-strong-super-additivity\] and Lemma \[lem:complementation\], $$\Sigma(m-k+2\cdot 3^p,m-j+2\cdot 3^p)\geq \Sigma(m+j+l,m+k+l),$$ and so $$\Sigma(m-k,m-j)\geq \Sigma(m+j+l,m+k+l)-2(k-j). \label{eq:second-half}$$ Combining and gives . The power of $3$ is among $m-l+1,\dots,m-1$, so $0<j<l$. By the induction hypothesis (with $j$ in place of $l$), $$\Sigma(m,m+k+j)-\Sigma(m-k,m)-\Sigma(m-j,m)\leq 2k+j. \label{eq:2b-first-half}$$ Now by , the first digit of $m+k+l-1$ must be a “1". We can now place a “1" in front of each number $m-l,\dots,m-j-1$ and use Theorem \[thm:-strong-super-additivity\] and Lemma \[lem:complementation\] to obtain $$\begin{aligned} \Sigma(m-l,m-j)&=\Sigma(m-l+3^p,m-j+3^p)-(l-j)\\ &\geq \Sigma(m+k+j,m+k+l)-(l-j).\end{aligned}$$ Along with , this yields . The proof of Theorem \[thm:general-bound\] uses the following lemma, whose easy proof is left as an exercise for the interested reader. \[lem:power-of-b\] For each $k\in{\mathbb{N}}$, there exist integers $p\geq 0$ and $j\leq[(b+1)/2]$ such that $k\leq jb^p<2k$. Fix $m$. We use induction on $k$. The statement is trivial when $k=0$, so let $1\leq l\leq m$ and assume holds for all $k<l$, with strict inequality in case $b$ is odd. By Lemma \[lem:power-of-b\], there exist integers $p\geq 0$ and $j\leq[(b+1)/2]$ such that $l\leq jb^p<2l$. Thus for each $r\in\{m-l,\dots,m+l-jb^p-1\}$, we have $r<m$, $r+jb^p\in\{m,\dots,m+l-1\}$ and, by , $s_b(r+jb^p)\leq s_b(r)+j$. Hence, $$\begin{aligned} \Sigma(m-l+jb^p,m+l)-\Sigma(m-l,m+l-jb^p)&\leq j(2l-jb^p) \notag \\ &\leq \left[\frac{b+1}{2}\right](2l-jb^p). \label{eq:direct-part}\end{aligned}$$ And the induction hypothesis applied to $k=jb^p-l$ gives $$\Sigma(m,m-l+jb^p)-\Sigma(m+l-jb^p,m)\leq \left[\frac{b+1}{2}\right](jb^p-l), \label{eq:induction-part}$$ since $jb^p-l<l$. Adding inequalities and yields $$\Sigma(m,m+l)-\Sigma(m-l,m)\leq \left[\frac{b+1}{2}\right]l, \label{eq:final-result}$$ so holds also for $k=l$. Statement (ii) of the theorem follows immediately from by rearranging terms and dividing by $2k$. We next demonstrate strict inequality when $b$ is odd. Assume first that $l$ is of the form $l=jb^p$. (This includes the case $l=1$.) If $j<(b+1)/2$ we have strict inequality in , so assume that $j=(b+1)/2$. 
But then $2l>b^{p+1}$, so we can replace $p$ with $p+1$ and $j$ with $1$ in the induction argument above, and once again obtain strict inequality in , since $1<(b+1)/2$ for odd $b$. When $l$ is not of the form $jb^p$, the induction hypothesis is used with $k=jb^p-l>0$, giving strict inequality in . Thus, in both cases, we have strict inequality in . Finally, we show that the inequality is sharp. For even $b$, take $m=k=b^n/2$ for any $n\in{\mathbb{N}}$. It is easy to calculate inductively, using Lemma \[lem:base-multiply\], that $$S_b(b^n)-2S_b\left(\frac{b^n}{2}\right)=\frac{b^{n+1}}{4}=\left[\frac{b+1}{2}\right]\cdot\frac{b^n}{2},$$ obtaining equality in . (The base case $n=1$ is left as an exercise for the interested reader.) When $b$ is odd, the computation is more tedious. Here we take $m=k=k_n:=(b^n-1)/2$, and claim that $$S_b(2k_n)-2S_b(k_n)=\frac{b+1}{2}k_n-\frac{(b-1)n}{2}, \label{eq:difference-formula}$$ so that $$\frac{S_b(2k_n)-2S_b(k_n)}{k_n}\to \frac{b+1}{2}=\left[\frac{b+1}{2}\right], \qquad\mbox{as $n\to\infty$}.$$ To derive we start with the well-known observation that, for any $b$, $$S_b(b^n)=\frac{nb^n(b-1)}{2}, \qquad n\in{\mathbb{N}}.$$ From this, we obtain $$S_b(2k_n)=S_b(b^n)-s_b(b^n-1)=\frac{nb^n(b-1)}{2}-n(b-1). \label{eq:twice-k}$$ The computation of $S_b(k_n)$ may be done inductively, using the recursion $k_{n+1}=bk_n+(b-1)/2$. For $0\leq m<b$ and $k\in{\mathbb{N}}$ we have $$S_b(bk+m)=S_b(bk)+\sum_{j=0}^{m-1}s_b(bk+j),$$ and since $s_b(bk+j)=s_b(k)+j$ for $0\leq j<b$, this leads via Lemma \[lem:base-multiply\] to a recursion for $S_b(k_n)$, noting that $s_b(k_n)=n(b-1)/2$. One can then inductively verify the formula $$S_b(k_n)=\frac{b^n-1}{4}\left(n(b-1)-\frac{b+1}{2}\right).$$ This, together with , leads after some more manipulations to . To prove Theorem \[thm:b-times-longer\], we will demonstrate a slightly stronger result. Define a partial order $\prec_b$ on ${\mathbb{N}}$ by $n\prec_b m$ if and only if $\alpha_i(n;b)\leq \alpha_i(m;b)$ for every $i$. \[thm:b-ary-array\] Fix $b\geq 2$. For each $k\in{\mathbb{N}}$, the numbers $0,1,\dots,bk-1$ can be arranged in a $b\times k$ matrix $A_k=[a_{i,j}]_{i=1,j=1}^{b,k}$ such that: (i) $a_{1,j}=j-1$ for $j=1,\dots,k$; (ii) $a_{1,j}\prec_b a_{i,j}$ for $i=1,\dots,b$ and $j=1,\dots,k$; and (iii) $a_{i,j}-a_{1,j}$ is the sum of exactly $i-1$ powers of $b$; that is, $s_b(a_{i,j})=s_b(a_{1,j})+i-1$, for $i=1,\dots,b$ and $j=1,\dots,k$. An example of such an arrangement for $b=3$ and $k=5$ is $$A_5=\begin{bmatrix} 0 & 1 & 2 & 3 & 4\\9 & 10 & 11 & 6 & 5\\12 & 13 & 14 & 7 & 8 \end{bmatrix}.$$ Note that the arrangement is by no means unique: in the above example we could interchange $6$ and $12$, or $5$ and $7$, etc. We prove Theorem \[thm:b-ary-array\] by describing a simple algorithm for constructing the matrix $A_k$. This requires some terminology and a lemma. Fix $b\geq 2$. Suppose a finite set of pegs are placed in a finite rectangular array of holes. A hole has [*position*]{} $(i,j)$ if it is the $j$th hole (from the left) in the $i$th row (from the top). For $k\in{\mathbb{Z}}_+$, a $b^k$-[*shift*]{} is the move of a peg from any position $(i,j)$ with $j>b^k$ to the new position $(i+1,j-b^k)$. In other words, a $b^k$-shift moves a peg $b^k$ columns to the left and one row down. A [*power shift*]{} is any $b^k$-shift, where $k\in{\mathbb{Z}}_+$. 
A $b^k$-shift from $(i,j)$ to $(i+1,j-b^k)$ is [*permissible*]{} if position $(i+1,j-b^k)$ is not yet occupied and there is $l\in{\mathbb{N}}$ such that $(l-1)b^{k+1}<j-b^k<j\leq lb^{k+1}$. \[lem:arrange-pegs\] For any $n\in{\mathbb{N}}$, a single row of $n$ pegs can be rearranged by a finite sequence of permissible power shifts into a table of $b$ rows and $\lceil n/b\rceil$ columns so that each column except possibly the last contains $b$ pegs, and in the last column no peg is placed below an empty hole. For $k\in{\mathbb{Z}}_+$, let a $k$-[*tableau*]{} be an arrangement of $b$ rows of pegs (possibly empty), aligned on the left and ordered by decreasing length, with the property that each row except perhaps one contains either zero or $b^k$ pegs. We claim that any $k$-tableau can be arranged by permissible power shifts into a table as described in the lemma. This is trivial for $k=0$, as a $0$-tableau already has the required form. Suppose the claim is true for some arbitrary $k\in{\mathbb{Z}}_+$, and let a $(k+1)$-tableau be given. Then some number $f\geq 0$ of rows (at the top of the table) contain $b^{k+1}$ pegs, row $f+1$ contains some number $m$ of pegs ($0\leq m<b^{k+1}$), and the remaining $b-f-1$ rows are empty. Note that in this tableau all $b^k$-shifts to empty holes are permissible. Let $l\in\{1,\dots,b\}$ be such that $(l-1)b^k\leq m<lb^k$. After performing all permissible $b^k$-shifts, the tableau is transformed into a new table with: - $l-1$ rows of $(f+1)b^k$ pegs; followed by - one row of $fb^k+m-(l-1)b^k$ pegs; followed by - $b-l$ rows of $fb^k$ pegs. In this new table, each row is at least $fb^k$ long, and columns $fb^k+1,\dots,(f+1)b^k$ form a $k$-tableau, which by the induction hypothesis can be rearranged as required. Together with the first $fb^k$ columns, this gives a rearrangement of the entire $(k+1)$-tableau as required. The statement of the lemma now follows because a row of $n\geq 2$ pegs can be trivially turned into a $k$-tableau by adding $b-1$ empty rows, where $k$ is the integer such that $b^{k-1}<n\leq b^{k}$. We may apply Lemma \[lem:arrange-pegs\] with $n=bk$ to see that the single row containing the numbers $0,1,\dots,bk-1$ in increasing order may be rearranged into a $b\times k$ matrix $[a_{i,j}]$ by permissible power shifts only. Clearly, the first row of this matrix contains the numbers $0,1,\dots,k-1$ in increasing order (since no numbers are ever moved into the first row by power shifts), so (i) is satisfied. We show (ii) by induction on $i$. Note that (ii) is trivial for $i=1$. Fix $j\in\{1,\dots,k\}$, and suppose $a_{1,j}\prec_b a_{i,j}$. The number $a_{i+1,j}$ was last moved from a position in row $i$ by shifting it some distance $b^r$ to the left. Since this was a permissible move, we have $j-1\prec_b j+b^r-1$, in other words, $a_{1,j}\prec_b a_{1,j+b^r}$. But $a_{i+1,j}$ had arrived at its position in row $i$ by a sequence of permissible moves, so by the induction hypothesis, $a_{1,j+b^r}\prec_b a_{i+1,j}$. Hence, $a_{1,j}\prec_b a_{i+1,j}$. This proves (ii). Property (iii) follows from (ii), as clearly $a_{i,j}-a_{1,j}$ is a sum of $i-1$ powers of $b$. Observe that, in terms of the notation $\Sigma_b(s,t)$, we are to prove that $$\Sigma_b(n,n+bk)\leq b\Sigma_b(n,n+k)+\frac{b(b-1)}{2}k. \label{eq:b-times-longer}$$ Let $[a_{i,j}]_{i=1,j=1}^{b,k}$ be a matrix satisfying the conclusion of Theorem \[thm:b-ary-array\]. 
Note that $$\Sigma_b(n,n+k)=\sum_{j=1}^k s_b(n+j-1)=\sum_{j=1}^k s_b(a_{i,j}+n),$$ and $$\Sigma_b(n,n+bk)=\sum_{j=1}^{bk}s_b(n+j-1)=\sum_{i=1}^b \sum_{j=1}^k s_b(a_{i,j}+n).$$ By property (iii) of Theorem \[thm:b-ary-array\] and , $s_b(a_{i,j}+n)-s_b(a_{1,j}+n)\leq i-1$. Hence, $$\begin{aligned} \Sigma_b(n,n+bk)&\leq \sum_{i=1}^b \sum_{j=1}^k\{s_b(a_{1,j}+n)+i-1\}\\ &=b\sum_{j=1}^k s_b(a_{1,j}+n)+k\sum_{i=1}^b (i-1)\\ &=b\Sigma_b(n,n+k)+\frac{b(b-1)}{2}k,\end{aligned}$$ completing the proof. Note that property (ii) of Theorem \[thm:b-ary-array\] was not needed in the last proof. However, dropping the requirement (ii) from Theorem \[thm:b-ary-array\] does not appear to lead to a simpler proof, whereas including it adds to the independent interest of that theorem. Application to approximate convexity {#sec:application} ==================================== Delange [@Delange] introduced the functions $$h_b(x)=\sum_{n=0}^\infty b^{-n}g_b(b^n x),$$ where for each $b\geq 2$, $g_b$ is the 1-periodic continuous function defined on $[0,1)$ by $$g_b(x)=\int_0^x\left(\frac{b-1}{2}-[bt]\right)\,dt.$$ For the case $b=2$, we have $g_2(x)=(1/2){\operatorname{dist}}(x,{\mathbb{Z}})$, where ${\operatorname{dist}}(x,{\mathbb{Z}})$ denotes the distance from $x$ to the nearest integer, and hence $h_2$ is one-half times the Takagi function [@Takagi]. The relationship between the Takagi function and the binary digital sum $S_2$ was first established by Trollope [@Trollope]. Delange [@Delange] generalized this relationship by showing that, for each $n\in{\mathbb{N}}$, $$S_b(n)=\frac{b-1}{2}n\log_b n+nF(\log_b n), \label{eq:Delange}$$ where $$F(x)=\frac{b-1}{2}(1-\{x\})-b^{1-\{x\}}h_b(b^{\{x\}-1}),$$ in which $\{x\}:=x-[x]$ denotes the fractional part of $x$. (The function $h$ in Delange’s paper is actually $-h_b$; the reason for the present representation is that $h_b$ is actually nonnegative, as is easily verified.) In addition to establishing , Delange [@Delange] proves that $h_b$ is nowhere differentiable for each $b\geq 2$. A different sequence of functions was recently introduced by Lev [@Lev]. For $b\in{\mathbb{N}}$, let $\phi_b(x)=\min\{{\operatorname{dist}}(x,{\mathbb{Z}}),1/b\}$, and define the function $$\omega_b(x):=\sum_{n=0}^\infty b^{-n}\phi_b(b^n x).$$ Lev demonstrates a direct connection between $\omega_b(x)$ and the edge-isoperimetric problem for Cayley graphs of homocyclic groups of exponent $b$. Comparison with Delange’s functions shows that $\omega_2=2h_2$, and $\omega_3=h_3$. After that, the two sequences go their separate ways: For $b\geq 4$, there is no direct relationship between $h_b$ and $\omega_b$, although $\omega_4=(1/2)\omega_2=h_2$. For the Takagi function $\omega_2$, Boros [@Boros] proved the inequality $$\omega_2\left(\frac{x+y}{2}\right)\leq \frac{\omega_2(x)+\omega_2(y)}{2}+\frac{|y-x|}{2}, \label{eq:Boros-Pales}$$ which had been conjectured by Házy and Páles [@Hazy-Pales]. We will show here that all of Delange’s functions satisfy an inequality similar to . \[thm:Delange-approx-convex\] Let $b\geq 2$. For all real $x$ and $y$ with $x<y$, we have $$h_b\left(\frac{x+y}{2}\right)\leq \frac{h_b(x)+h_b(y)}{2}+\frac14\left[\frac{b+1}{2}\right](y-x). \label{eq:Delange-approx-convex}$$ For $\omega_3=h_3$, Lev [@Lev Theorem 3] proves the following interesting inequality, which develops the Boros-Pales inequality in a different but equally natural direction. 
\[thm:lev\] For all real $x$, $y$ and $z$ with $x\leq y\leq z$, we have $$h_3\left(\frac{x+y+z}{3}\right)\leq \frac{h_3(x)+h_3(y)+h_3(z)}{3}+\frac13(z-x). \label{eq:ternary-approximate-convexity}$$ It is straightforward to deduce Theorems \[thm:Delange-approx-convex\] and \[thm:lev\] from Theorems \[thm:general-bound\] and \[thm:ternary-inequality\], respectively. The key is to derive an expression for $h_b$ at points of the form $x=k/b^n$ in terms of $S_b$, and to use the continuity of $h_b$. We first define the partial sums $$h_b^{(n)}(x)=\sum_{k=0}^{n-1} b^{-k}g_b(b^k x),$$ and note that for $k\in{\mathbb{Z}}$, $h_b(k/b^n)=h_b^{(n)}(k/b^n)$. For $x\in[0,1)$, let $x=\sum_{i=1}^\infty {{\varepsilon}}_i(x)b^{-i}$ denote the $b$-ary expansion of $x$, where ${{\varepsilon}}_i(x)\in\{0,1,\dots,b-1\}$. If $x$ is of the form $x=k/b^n$, we take the expansion ending in all zeros. Observe that for each $x\in(0,1)$, the right-hand derivative of $g_b$ at $x$ is $(b-1)/2-{{\varepsilon}}_1(x)$. Hence, by the periodicity of $g_b$, the slope of $h_b^{(n)}$ at any point $x$ not of the form $k/b^n$ is $$\sum_{k=1}^n\left(\frac{b-1}{2}-{{\varepsilon}}_k(x)\right)=\frac{b-1}{2}n-\sum_{k=1}^n {{\varepsilon}}_k(x).$$ This simple observation yields the formula $$h_b\left(\frac{k}{b^n}\right)-h_b\left(\frac{k-1}{b^n}\right) =b^{-n}\left(\frac{b-1}{2}n-s_b(k-1)\right),$$ and hence, $$h_b\left(\frac{k}{b^n}\right)= b^{-n}\left(\frac{b-1}{2}kn -\sum_{i=0}^{k-1}s_b(i)\right) =b^{-n}\left(\frac{b-1}{2}kn-S_b(k)\right). \label{eq:b-ary-expression}$$ Assume first that there exist nonnegative integers $n$, $m$ and $k$ such that $$x=\frac{m-k}{b^n}, \qquad y=\frac{m+k}{b^n}, \label{eq:b-adic-rational}$$ so that $(x+y)/2=m/b^n$. One verifies easily using that $$2h_b\left(\frac{m}{b^n}\right)-h_b\left(\frac{m-k}{b^n}\right)-h_b\left(\frac{m+k}{b^n}\right)=\frac{S_b(m-k)+S_b(m+k)-2S_b(m)}{b^n},$$ since the terms involving $(b-1)/2$ cancel. Thus, Theorem \[thm:general-bound\] gives for $x$ and $y$ of the form , as $k/b^n=(y-x)/2$. But any two real points $x$ and $y$ with $x<y$ can be approximated arbitrarily closely by points $x'$ and $y'$ of the form . Thus, the proof is completed by using the continuity of $h_b$. Let $0\leq x\leq y\leq z$, and put $a=(x+y+z)/3$. By symmetry of $h_3$, we may assume without loss of generality that $y\leq (x+z)/2$, so that $x\leq y\leq a\leq z$. Since $h_3$ is continuous, we may assume further that $x,y,z$ and $a$ are all triadic rational; that is, there exist nonnegative integers $n, m, k$ and $l$ with $m\geq k\geq l$ such that $$a=\frac{m}{3^n}, \qquad x=\frac{m-k}{3^n}, \qquad y=\frac{m-l}{3^n}, \qquad z=\frac{m+k+l}{3^n}.$$ Upon multiplying both sides by $3$, we can write for this case as $$3h_3\left(\frac{m}{3^n}\right)\leq h_3\left(\frac{m-k}{3^n}\right) +h_3\left(\frac{m-l}{3^n}\right) +h_3\left(\frac{m+k+l}{3^n}\right) +\frac{2k+l}{3^n}.$$ By , this is equivalent to $$\begin{gathered} 3(mn-S_3(m))\leq [(m-k)n-S_3(m-k)]+[(m-l)n-S_3(m-l)]\\ +[(m+k+l)n-S_3(m+k+l)]+(2k+l),\end{gathered}$$ and this simplifies to . Note that the number $n$ disappears from the inequality in the end. This suggests that Lev’s approach of induction on $n$ is perhaps not the most natural. While the above proof uses Theorem \[thm:ternary-inequality\], whose proof is quite long, Lev’s original proof is rather lengthy as well, and the present proof seems to be conceptually more pleasing. [6]{} , An inequality for sums of binary digits, with application to Takagi functions, [*J. Math. Anal. 
Appl.*]{} [**381**]{} (2011), no. 2, 689–694. , An inequality for the Takagi function. [*Math. Inequal. Appl.*]{} [**11**]{} (2008), no. 4, 757–765. , Sur la fonction sommatoire de la fonction “somme des chiffres", [*Enseignement Math.*]{} [**21**]{} (1975), 31–47. , [C. Mauduit]{} and [J. Rivat]{}, The sum-of-digits function of polynomial sequences, [*J. Lond. Math. Soc. (2)*]{} [**84**]{} (2011), no. 1, 81–102. and [Zs. Páles]{}, On approximately midconvex functions, [*Bull. London Math. Soc.*]{} [**36**]{} (2004), 339–350. , Edge-isoperimetric problem for Cayley graphs and generalized Takagi function, [*preprint*]{}, arXiv:1202.2566 (2012) and [A. Sárközy]{}, On the arithmetic structure of the integers whose sum of digits is fixed, [*Acta Arith.*]{} [**81**]{} (1997), no. 2, 145–173. , The number of 1’s in binary integers: bounds and extremal properties, [*SIAM J. Comput.*]{} [**3**]{} (1974), no. 4, 255–261. , On Gelfond’s conjecture about the sum of digits of prime numbers, [*J. Théor. Nombres Bordeaux*]{} [**21**]{} (2009), no. 2, 415–423. , Power and exponential sums of digital sums related to binomial coefficient parity, [*SIAM J. Appl. Math.*]{} [**32**]{} (1977), no. 4, 713–730. , A simple example of the continuous function without derivative, [*Phys.-Math. Soc. Japan*]{} [**1**]{} (1903), 176-177. [*The Collected Papers of Teiji Takagi*]{}, S. Kuroda, Ed., Iwanami (1973), 5–6. , An explicit expression for binary digital sums, [*Math. Mag.*]{} [**41**]{} (1968), 21–25.
--- abstract: 'The Stockholm Educational Air Shower Array (SEASA) project has established a network of GPS time-synchronised scintillator detector stations at high-schools in the Stockholm region. The primary aim of this project is outreach. A part of the network comprises a dense cluster of detector stations located at AlbaNova University Centre. This cluster is being used to study the cosmic ray anisotropy around the knee. Each station consists of three scintillator detectors in a triangular geometry which allows multiple timing measurements as the shower front sweeps over the station. The timing resolution of the system has been determined and the angular resolution has been studied using Monte Carlo simulations and is compared to data. The potential of this system to study small and large scale cosmic ray anisotropies is discussed.' author: - title: Cosmic ray anisotropy studies with the Stockholm Educational Air Shower array --- Introduction ============ The ’Stockholm Educational Air Shower Array’ (SEASA) [@seasa] project has established seven detector stations in the Stockholm area, distributed according to Fig. \[pic:array\]. Each detector station consists of three scintillator detectors, separated by approximately 15 m, which are read out by large area photomultipliers placed directly on top of the scintillators. The design of the detectors is explained in detail in [@mylic]. Four of the stations have been built by high school students and are located on the attics or roofs at their high schools. The separation between these stations are in the order of kilometers and the energy threshold to trigger multiple stations is therefore above $10^{18}$ eV. To be able to study lower energetic cosmic rays a more dense cluster of three detector stations has been deployed at the AlbaNova University Area (the *AlbaNova array*). This cluster detects air showers with energies above $~10^{16}$ eV and is being used to study the cosmic ray arrival distribution in this energy regime. To study the cosmic ray anisotropy the direction of the primary cosmic ray must be possible to reconstruct. This can be done as the air shower front travels in essentially the same direction as the primary particle. Assuming a flat shower front, the direction can be determined by measuring the time difference between hit detector stations. By fitting the geometry of the detectors and the trigger times to the shower plane the incident angles of the shower can be reconstructed. With a fixed geometry the accuracy of the reconstruction ultimately depends on the trigger time resolution, set by the GPS system. The time resolution of the GPS systems is therefore investigated in section \[sec:timeres\]. The pointing accuracy, or angular resolution, of the AlbaNova array has been assessed by simulations and this is described in section \[sec:angres\]. The dependence of the angular resolution on the timing accuracy is also investigated here, as well as the trigger efficiency of the array. The final section \[sec:anisotropy\] describes the methods used to study cosmic ray anisotropies. Finally, the hypothesis of a uniform flux of cosmic rays are tested using data taken during approximately six months of operation of the AlbaNova array. ![The seven stations constituting the SEASA air shower array. Four stations are located at high schools and are labeled after the corresponding schools. The three stations at the AlbaNova area are shown in greater detail in the inset picture in the upper right corner. 
The dotted station in this picture is a fourth station planned for installation.[]{data-label="pic:array"}](array){width="3.5in"} Time resolution of the system {#sec:timeres} ============================= To test the performance of a GPS system, the offset between the time-tags produced by the GPS system subject for measurement and a reference system, fed with a common trigger signal, is investigated. The output from the GPS card is a 1 Pulse Per Second (PPS) signal with an accuracy of $\pm$ 25 ns. The PPS can only be emitted on the rising edge of the GPS-cards internal 100 MHz oscillator, which introduces a built in uncertainty. The output from the GPS card also contains a negative sawtooth correction. This correction is a prediction of how early or late the next PPS signal will be due to the limitations of the internal 100 MHz oscillator. With the aid of this correction the PPS should be accurate to within 5 ns according to the developer of the GPS cards [@gps]. The principle of the test is as follows. For every trigger, a time stamp from each GPS card is retrieved. These time stamps should be identical in a perfect system. The time stamp is provided by the sawtooth corrected PPS and a 100 MHz oscillator implemented in the Programmable Logic Array (PLA) in the readout electronics. The information in the PPS signal gives the time within the second and the oscillator determines the trigger time relative to the PPS with a 5 ns resolution. A self calibrating system for the 100 MHz crystals were used to compensate for differences in the crystal frequencies and variations in the crystal frequency. To be self calibrating, the system counts the number of oscillations between PPS’s, and uses this value to calibrate the number of oscillations from the trigger to the PPS. All measurements in the test were done with a satellite mask angle of ten degrees to exclude unreliable time measurements from satellites close to the horizon. The result of the measurement for the AlbaNova E station is shown in Fig. \[gpstest\]. The offset of -18.5 ns for the mean value changes sign when the GPS cards are exchanged, indicating a systematic error between these two. This effect can be canceled by calibrating each card against a “standard card”, and then correcting the time stamps from each card accordingly. The standard deviation of 13.6 ns is the time resolution for this pair of GPS systems. The time resolution of a single GPS system is 9.6 ns considering the GPS systems equal and independent [@mylic]. The time resolution for each of the seven GPS systems in the present array were measured to be less than 15 ns. In the test, the GPS cards mostly tracked the same satellites. It is inevitable that detector stations spread out over a larger area will have different sets of satellites visible to them. A test was therefore conducted where the GPS cards were configured to use independent sets of satellites. The standard deviation then increased by approximately 50 %. Simulation of the angular resolution {#sec:angres} ==================================== The angular resolution of the sub-array at AlbaNova has been assessed with Monte Carlo simulations, using the simulation engine AIRES. Primary particles with energies above $10^{16}$ eV following the power-law $\frac{dE}{dN}\sim E^{-3.0}$ were generated and injected at the top of the atmosphere. The lower energy was chosen considering the energy threshold of the AlbaNova array, known to be slightly higher than this value from simulations (see section \[sec:triggeff\]). 
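Such a power-law energy spectrum can be sampled with a standard inverse-CDF draw. The following Python snippet is a minimal illustration; the upper cut-off of $10^{19}$ eV and the function name are assumptions made here for practicality and are not part of the AIRES simulation described in the text.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_energies(n, gamma=3.0, e_min=1.0e16, e_max=1.0e19):
    """Draw n primary energies from dN/dE ~ E**-gamma between e_min and e_max
    by inverting the cumulative distribution (e_max is an assumed cut-off)."""
    u = rng.random(n)
    a, b = e_min**(1.0 - gamma), e_max**(1.0 - gamma)
    return (a + u * (b - a))**(1.0 / (1.0 - gamma))

energies = sample_energies(2000)           # same sample size as used in the simulation
print("median energy: %.2e eV" % np.median(energies))
print("fraction above 1e17 eV: %.2f" % np.mean(energies > 1.0e17))
```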
The injection angle was sampled from a uniform distribution. A total of 2000 cosmic rays with the above properties were generated and the ground particles from each shower were repeatedly used to hit the ground at different offsets relative to the detector array. The impact coordinates of the shower core were set to follow a $9\times9$ grid with a node separation of 50 m and the origin placed in the centre of the detector array. The number of detected showers outside this area is negligible and can thus be disregarded. A detector is triggered if it is hit by an electron, muon or heavier charged particle. If a photon hits the detector it is triggered with a 1% probability. This value corresponds to the probability that a photon deposits at least 1 MeV in a 1 cm thick scintillator. To simulate imperfections in the GPS time tagging a time jitter $\sigma_t$ sampled from a Gaussian with a standard deviation of 15 ns is added to the time of the hit. This value is based on the time resolution measurements presented in section \[sec:timeres\]. The trigger efficiency {#sec:triggeff} ---------------------- The trigger efficiency of the air shower array has been determined from the simulations simply by dividing the number of detected showers by the number of incident cosmic rays for bins of energy. Fig. \[pic:triggeffsuper\] shows the result of the simulation for the two different trigger criteria: $3/3$ detectors hit or at least $2/3$ detectors hit. The energy threshold is consequently around $10^{16.5}$ eV and the most probable energy slightly higher, $\sim10^{17}$ eV. Thus, SEASA detects cosmic rays with energies *above* the knee with this setup. The trigger efficiency is slightly underestimated in this study as the simulation program does not propagate particles close to the core and a detector station is thus not triggered if an air shower strikes directly on top of it. The underestimation is believed to be around 5 % as the efficiency should saturate at 100 % for large energy showers (see Fig. \[pic:triggeffsuper\]). The underestimation is believed to be equal for all energies. The angular resolution of the AlbaNova array -------------------------------------------- The angular resolution of the detector array is derived by comparing the shower direction reconstructed from the timing information of the hit detectors with the direction of the primary particle input to the simulation. The precision of the reconstruction is measured as the angular distance between the two shower directions, characterised by the parameters $(\theta_1,\phi_1)$ and $(\theta_2,\phi_2)$, where $\theta$ is the zenith angle and $\phi$ the azimuthal angle. The angular distance between two directions is then calculated as $$\Psi = \cos^{-1}\left(\cos\theta_1\cos\theta_2+\sin\theta_1\sin\theta_2\cos(\phi_1-\phi_2)\right).$$ The angular resolution of the detector array is defined as the angular distance which contains $68\%$ of the reconstructed angles. This is the most common way to define the angular resolution and therefore makes it straight-forward to compare the result from SEASA to other air shower arrays. Some experiments use the $50\%$ level as the angular resolution and this is therefore included in the results below. The distribution of the angular distance is plotted in Fig. \[pic:angleres\] below for the $3/3$ trigger criterion. Superimposed on the distribution is the integral of the histogram with the corresponding axis to the right in the plot.
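A minimal sketch (our own illustration, not the analysis code of the paper) of how the angular distance $\Psi$ defined above and the $68\%$ containment radius discussed next can be evaluated for a set of true and reconstructed directions:

```python
import numpy as np

def angular_distance(theta1, phi1, theta2, phi2):
    """Angular separation Psi between directions (theta, phi); all angles in radians."""
    cos_psi = (np.cos(theta1) * np.cos(theta2)
               + np.sin(theta1) * np.sin(theta2) * np.cos(phi1 - phi2))
    return np.arccos(np.clip(cos_psi, -1.0, 1.0))     # clip guards against round-off

# psi = angular_distance(theta_true, phi_true, theta_reco, phi_reco)
# resolution_68 = np.degrees(np.quantile(psi, 0.68))  # 68% containment radius (SEASA definition)
# resolution_50 = np.degrees(np.quantile(psi, 0.50))  # 50% level quoted by some experiments
```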
The vertical dotted line marks the $68\%$ level of the integral and thus the angular resolution. The angular accuracy of the AlbaNova array is summarised in Table \[tab:angres\] for the two trigger criteria. The errors have been calculated by randomly regenerating the histogram of the angular distance a large number of times from the true distribution. The RMS of the derived distribution of the angular resolution is then used as the error for the true angular resolution. The angular accuracy, as shown in Table \[tab:angres\], is the same for both trigger modes, within statistical fluctuations. This is an important conclusion and makes it possible to increase the trigger rate by loosening the trigger criterion without compromising the angular accuracy of the array. Trigg. Crit. Resolution $68\%$ Resolution $50\%$ -------------- ---------------------- ------------------- $\Psi_{2/3}$ [**6.5$\pm$ 0.3**]{} 4.5$\pm$ 0.2 $\Psi_{3/3}$ [**7.0$\pm$ 0.3**]{} 4.5$\pm$ 0.2 : The angular resolution of the detector array for two different trigger criteria and levels. The $68\%$ level is used for the definition of the angular resolution for SEASA.[]{data-label="tab:angres"} The time resolution of 15 ns used in the simulations is only valid for the AlbaNova array, mainly due to two reasons: First, the relative time accuracy between two GPS setups decreases with separation distance because of differences in the atmosphere along the satellite-antenna path lengths. Secondly, the set of visible GPS satellites can change when the separation between the antennas is large, which has a negative influence on the relative time resolution. The effect of the time resolution on the angular resolution has been investigated by varying the Gaussian time jitter $\sigma_t$, introduced in the last section, and the result is presented below in Fig. \[pic:angresvstime\]. The angular accuracy is seen to decrease approximately linearly with the time resolution. Validation of the simulations ----------------------------- To confirm the validity of the simulations the difference between the reconstructed angles by the AlbaNova array and by each station in the array are derived for real and simulated data. These are compared below in Fig. \[pic:comparisonsimreal\] and the agreement is relatively good for AlbaNova W and E. The shift in the histograms between simulated and real data for the azimuthal distributions are likely caused by the crude measurement of the local coordinate system for the detectors in each station. The agreement is however poor for AlbaNova S. This is most likely due to the effect of the roof and walls that surround this station. Simulations that takes this into account will be performed in the future. However, the results in Fig. \[pic:comparisonsimreal\] are a good indicator that the simulations are correct. The difference between the reconstructed angles are in fact slightly smaller for real data indicating that the performance of the array may be better than the simulations show. Anisotropy searches {#sec:anisotropy} =================== This section presents a preliminary study of the methods that can be used by the SEASA experiment to search for small and large scale cosmic ray anisotropies. To date, the collected statistics are poor due to the short period of data taking, the small exposure and of the numerous tests that have been performed during the initial phase of the project. 
SEASA aims to lower the energy threshold in the future, by adding more stations and loosening the trigger criteria, thereby increasing the rate of detected showers. More accurate studies of small and large scale anisotropies will then be feasible. The search for anisotropies relies heavily on the estimation of the number of cosmic rays expected from each direction in the sky assuming a uniform flux over the celestial sphere. Such an estimate is henceforth called a *coverage map*. An unbiased coverage map is crucial in order to separate true anisotropies from acceptance effects. This is relatively straight-forward for ultra high energy cosmic rays ($E>10^{18}$ eV) where the total acceptance almost exclusively depends on the geometrical acceptance of the experiment. The derivation of the coverage map is more complicated at lower energies as variations in the atmospheric conditions then influence the detector acceptance heavily. This is balanced somewhat by the large number of low energy events. The coverage map {#sec:coverage} ---------------- The coverage map for a given data set is obtained by integrating the acceptance of the experiment over the data taking period. The acceptance generally depends on weather conditions and the direction in the sky, characterised by declination and right ascension. This corresponds to ($\theta(t)$,$\phi(t)$) at UTC [^1] t, in horizontal coordinates. Using the simplification that the acceptance is independent of azimuthal angle $\phi$ and weather conditions, the acceptance is a function of zenith angle $\theta$ only. The zenith angle distribution has been shown [@auger_coverage] to be almost unaffected by anisotropies, and this distribution is therefore used as a basis when calculating the acceptance. The function $$P(\theta)=A\cos(\theta)\sin(\theta)\frac{1}{1+\exp\left(\frac{\theta-\theta_0}{\Delta\theta}\right)}$$ is fitted to the measured zenith angle distribution and converted to declination acceptance through the formula $a(\theta)=\frac{P(\theta)}{\sin(\theta)}$ (solid angle effect). A coverage map that depends only on declination is then generated by integrating the acceptance over one sidereal day, $$\label{eqn:oneday} W(\delta)=\int_{0}^{24h}a(\theta(\delta,t))\,dt.$$ The resultant coverage map in galactic coordinates can be seen in Fig. \[pic:coveragemap\]. ![The coverage map in galactic coordinates.[]{data-label="pic:coveragemap"}](mapGal){width="3.5in"} A first measurement of the cosmic ray anisotropy {#sec:measurement} ------------------------------------------------ The hypothesis that the flux of cosmic rays is isotropic can be tested using data from the SEASA experiment. A simple approach is to derive the angular two point correlation function $w(\Phi)$. In its angular form it is defined by the expression $$\label{eqn:twopoint} \delta P = N[1+w(\Phi)]\delta\Omega$$ where $\delta P$ is the probability of finding a second object within an angular distance of $\Phi$ from the primary object within an area $\delta\Omega$ if the mean object density is $N$. The two point correlation function thus represents an “excess probability” above what is expected from an isotropic distribution. The measured sky-plot of cosmic rays is plotted in galactic coordinates in Fig. \[pic:eventmap\]. The two-point correlation distribution is derived by calculating the distance between all possible pairs of events for this data set.
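The pair counting described above, and the coverage-weighted comparison sample introduced next, can be sketched as follows. This is our own illustration, not the SEASA analysis code; in particular we assume that $W(\delta)$ is tabulated on a grid of declinations and interpreted as an exposure per unit solid angle.

```python
import numpy as np

def pair_separations(dec, ra):
    """Angular distance (radians) between every unordered pair of events at (dec, ra)."""
    colat = np.pi / 2.0 - dec                     # convert declination to a colatitude
    i, j = np.triu_indices(len(dec), k=1)         # each pair counted once
    cos_psi = (np.cos(colat[i]) * np.cos(colat[j])
               + np.sin(colat[i]) * np.sin(colat[j]) * np.cos(ra[i] - ra[j]))
    return np.arccos(np.clip(cos_psi, -1.0, 1.0))

def isotropic_sample(n, dec_grid, coverage, rng=None):
    """Draw an isotropic sample modulated by the coverage map W(delta); RA is uniform."""
    rng = np.random.default_rng() if rng is None else rng
    weight = coverage * np.cos(dec_grid)          # cos(dec) converts per-solid-angle to per-dec-bin
    dec = rng.choice(dec_grid, size=n, p=weight / weight.sum())
    ra = rng.uniform(0.0, 2.0 * np.pi, n)
    return dec, ra

# psi_data = pair_separations(dec_events, ra_events)
# psi_iso  = pair_separations(*isotropic_sample(len(dec_events), dec_grid, W))
```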
To compare this to the hypothesis that the arrival distribution is isotropic, a second two-point correlation distribution is derived from a randomly generated isotropic distribution convolved with the coverage map derived in the previous section. Possible deviations of the first distribution from the second then reveal anisotropies of the cosmic ray arrival distribution. Both correlation distributions mentioned above are plotted in Fig. \[pic:corr\]. The probability that the observed flux is a random sampling from an isotropic flux is checked with a Kolmogorov test and it is found to be 82%. The hypothesis of an isotropic flux is therefore supported. ![A sky map in galactic coordinates showing the detected events.[]{data-label="pic:eventmap"}](mapDetectedGal){width="3.5in"} Conclusions =========== A measurement of the cosmic ray anisotropy has been made using the AlbaNova array which forms a sub-array of the SEASA outreach project. The result favors a scenario with a uniform flux of cosmic rays in the energy regime above $10^{16}$ eV, and therefore agrees with previous measurements, for example KASCADE [@kascade]. It is a crude measurement of the cosmic ray anisotropy but it shows that the array is stable and in particular that the GPS timing is reliable. The measurement will be improved in the near future by adding a fourth station at the AlbaNova area which increases the exposure of the array and also improves the shower angle reconstruction. A future improvement is also to enhance the read out electronics with a pulse sampling device. This would make it possible to determine the shower core and curvature and thereby improve the angular resolution of the array. This would allow anisotropy searches on smaller scales than what has been possible up to now. [1]{} SEASA homepage, *http://www.particle.kth.se/SEASA* P. Hofverberg, “Imaging the High Energy Cosmic Ray Sky”, Licentiate Thesis, Royal Institute of Technology (2006). Motorola homepage, *http://www.motorola.com* J.-Ch. Hamilton, “Coverage and large scale anisotropies estimation methods for the Pierre Auger Observatory”, Proc. 29th ICRC, Pune (2005). T. Antoni et al., “Large scale cosmic ray anisotropy with KASCADE”, Astrophys. J., 604:687-692 (2004). [^1]: Coordinated Universal Time
--- author: - | Karel Horák, Branislav Bošanský\ `{horak,bosansky}@agents.fel.cvut.cz` bibliography: - 'main.bib' title: 'Dynamic Programming for One-Sided Partially Observable Pursuit-Evasion Games' --- Introduction ============ Finite-horizon game {#sec:efg} =================== Value iteration {#sec:vi} =============== Conclusion ========== APPENDIX {#appendix .unnumbered} ========
--- abstract: 'We prove the existence of Hall polynomials for prinjective representations of finite partially ordered sets of finite prinjective type. In Section 4 we shortly discuss consequences of the existence of Hall polynomials, in particular, we are able to define a generic Ringel-Hall algebra for prinjective representations of posets of finite prinjective type.' author: - | Justyna Kosakowska[^1]\ *Faculty of Mathematics and Computer Science,\ *Nicolaus Copernicus University,\ *ul. Chopina 12/18, 87-100 Toru$\acute{n}$, Poland,\ *e-mail justus@mat.uni.torun.pl**** title: The existence of Hall polynomials for posets of finite prinjective type --- 2000 MSC: 16G20; 16G60; 16G70. Dedicated to Professor Daniel Simson on the occasion of his 65th birthday\ Introduction ============ Let $K$ be a finite field and let $A$ be a finite dimensional associative, basic $K$-algebra. All modules considered in the present paper are right, finite dimensional $A$-modules. Given $A$-modules $X,Y,Z$, denote by $F^Y_{Z,X}$ the number of submodules $U\subseteq Y$ such that $U\simeq X$ and $Y/U\simeq Z$. Moreover denote by $\Gamma_A$ the Auslander-Reiten quiver of the algebra $A$. The reader is referred to [@ars], [@ass] and to [@ri] for the definitions and the introduction to the theory of representations of algebras. Let $\Gamma=(\Gamma_0,\Gamma_1)$ be a directed Auslander-Reiten quiver, with the set of vertices $\Gamma_0$ and set of arrows $\Gamma_1$. Recall that for any field $K$ and any $K$-algebra $A$ such that $\Gamma_A=\Gamma$, we may identify a function $a:\Gamma_0\to \mathbb{N}$ with the corresponding $A$-module $M(A,a)=M(a)$ (see [@ringel1]). It was proved by C. M. Ringel (in [@ringel1]) that for any directed Auslander-Reiten quiver $\Gamma$ and all functions $a,b,c:\Gamma_0\to \mathbb{N}$, there exist polynomials $\varphi^b_{ca}\in \mathbb{Z}[T]$ with the following property: if $K$ is a finite field, and $A$ a $K$-algebra with $\Gamma_A=\Gamma$ and symmetrization index $r$, then $F^{M(A,b)}_{M(A,c),M(A,a)}=\varphi^b_{ca}(|K|^r)$. The polynomials $\varphi^b_{ca}$ are called [**Hall polynomials**]{}. Moreover, in [@ringel92], C. M. Ringel conjectured the existence of Hall polynomials for every representation finite algebra. In [@peng] it was proved that there exist Hall polynomials for representation-finite trivial extension algebras. The existence of Hall polynomials for cyclic symmetric algebras was proved in [@guo]. Now we present consequences of the existence of Hall polynomials. We restrict our considerations to hereditary algebras. Let $\Delta$ be a Dynkin quiver, $A=K\Delta$ – path algebra of $\Delta$ and $q\in \mathbb{C}$. Following [@ringel] we define ${{\cal H}}_q(\Delta)$ to be the free abelian group with basis $(u_M)_{[M]}$, indexed by the set of isomorphism classes of finite dimensional right $A$-modules. ${{\cal H}}_q(\Delta)$ is an associative ring with identity $u_0$, where the multiplication is defined by the formula $$u_{X_1}u_{X_2}=\sum_{[X]}\varphi^X_{X_1,X_2}(q)u_X,$$ and sum runs over all isomorphism classes of $A$-modules. We call ${{\cal H}}_q(\Delta)$ [**the Ringel-Hall algebra**]{} of $A$. The motivation for the study of Hall polynomials and Hall algebras comes from their connection with generic extensions, Lie algebras and quantum groups (see [@ringel], [@ringel1], [@ringel92], [@reineke2001]). 
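As a small illustration of this multiplication (a standard example, not taken from the results of this paper), let $\Delta$ be the quiver with one vertex and no arrows (Dynkin type $A_1$), so that $A=K\Delta=K$ and every $A$-module is a direct sum of copies of the unique simple module $S=K$. For $Y=S^2$ and $X=Z=S$, the submodules $U\subseteq Y$ with $U\simeq S$ and $Y/U\simeq S$ are exactly the one-dimensional subspaces of $K^2$, of which there are $|K|+1$. Hence $$\varphi^{S^2}_{S,S}(T)=T+1 \quad\mbox{ and }\quad u_{S}u_{S}=(q+1)\,u_{S^2}\ \mbox{ in } {{\cal H}}_q(\Delta).$$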
It is known that ${{\cal H}}_1(\Delta)\otimes_{\mathbb{Z}}\mathbb{C}$ is isomorphic with the universal enveloping algebra $U({\bf n}_+)$ of ${\bf n}_+$, where ${\bf g}={\bf n}_-\oplus {\bf h}\oplus {\bf n}_+$ is a triangular decomposition of the semisimple complex Lie algebra ${\bf g}$ of type $\Delta$ (see [@ringel]). In the present paper we are interested in an analogous problem of the existence of Hall polynomials for prinjective modules over incidence algebras of posets of finite prinjective type (see Section 2 for definition). We define also (Section 4) prinjective Ringel-Hall algebras for such posets. The paper is organised as follows. In Section 2 we prove some results concerning injective and surjective homomorphisms between prinjective modules and we recall main definitions and results concerning prinjective modules. In Section 3 the existence of Hall polynomials for prinjective representations of posets of finite prinjective type is proved. Section 4 contains consequences of the existence of Hall polynomials. In particular we give there a definition of prinjective Ringel-Hall algebra. Concluding remarks are also presented in Section 4. The motivations for the study of prinjective $KI$-modules is the fact that many of the representation theory problems can be reduced to the corresponding problems for poset representations and prinjective modules (see [@arn], [@ri], [@s92], [@s93], [@s95]). Prinjective $KI$-modules play an important role in the representation theory of finite dimensional algebras (see [@ri], [@s92 Chapter 17]) and lattices over orders (see [@s92 Chapter 13], [@s93], [@s95], [@s97a]). Moreover the study of prinjective modules is equivalent to the study of a class of bimodule matrix problems in the sense of Drozd (see [@ps], [@s92 Chapter 17]). [**Acknowledgment.**]{} The author would like to thank S. Kasjan for careful reading of this paper and helpful remarks. The main results of this paper were presented on the X ICRA in Patzcuaro (Mexico) 2004, on the seminar in Bielefeld (Germany) 2004 and on the NWDR Workshop in Muenster (3th December 2004) during the stay of the author supported by Lie Grits (C 0105704). Counting surjective homomorphisms ================================= Let $I=(I,\preceq)$ be a finite poset (i.e. partially ordered set) with the partial order $\preceq$. Let $\max\, I$ denote the set of all maximal elements of $I$ and $I^-=I\setminus \max\, I$. Given a field $K$ we denote by $KI$ the [**incidence**]{} $K$[**-algebra**]{} of the poset $I$, that is, $$KI=\{(\lambda_{ij})\in \mathbb{M}_I(K)\; ;\;\; \lambda_{ij}=0 \mbox{ if } i\npreceq j\mbox{ in } I\} \subseteq \mathbb{M} _I(K)$$ (see [@s92], [@s93]). The reader is referred to [@s92], [@s93], [@s95], [@s97a] for a discussion of incidence algebras and their applications to the integral representation theory. A $KI$-module $X$ may be identified with the representation $(X_i,\varphi_{ij})_{i\preceq j\in I}$ of the poset $I$ (i.e. $X_i$ is a $K$-vector space for any $i\in I$ and, for all relations $i\preceq j$ in $I$, $\varphi_{ij};X_i\to X_j$ are linear maps satisfying $\varphi_{jk}\varphi_{ij}=\varphi_{ik}$ if $i\preceq j\preceq k$). Recall that the dimension vector $\bdim\, X\in \mathbb{Z}^I$ of $X$ is defined by $(\bdim\,X)(i)=\dim_K X_i$ for all $i\in I$. Denote by $P(i)$ the projective $KI$-module corresponding to the vertex $i$. 
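To fix the notation on a toy example (ours, not one used later in the paper), take $I=\{1,2,3\}$ with the only nontrivial relations $1\preceq 3$ and $2\preceq 3$. Then $\max I=\{3\}$, $I^-=\{1,2\}$ and $$KI=\left\{\left[\begin{array}{ccc}\lambda_{11}&0&\lambda_{13}\\ 0&\lambda_{22}&\lambda_{23}\\ 0&0&\lambda_{33}\end{array}\right]\; ;\;\; \lambda_{ij}\in K\right\}\subseteq \mathbb{M}_I(K).$$ A $KI$-module, viewed as a representation of $I$, is a triple of $K$-vector spaces $(X_1,X_2,X_3)$ together with two linear maps $\varphi_{13}:X_1\to X_3$ and $\varphi_{23}:X_2\to X_3$, and $P(3)$, for instance, is the one-dimensional representation concentrated at the vertex $3$.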
Without loss of generality we may assume that $I\subseteq \mathbb{N}$ and that the order $\preceq$ in $I$ is such that $i\preceq j$ in $I$ implies $i\leq j$ in the natural order. In this case the algebra $KI$ has the following [**bipartition**]{} $$KI=\left[\begin{array}{cc}KI^-&M\\ 0&K(\max\, I)\end{array}\right], \leqno(2.1)$$ where $M$ is a $KI^-$-$K(\max\,I)$-bimodule. It is well-known (see [@s90],[@ars III.2]) that a finitely generated $KI$-module $X$ may be also identified with the triple $$X=(X',X'',\varphi:X'\otimes_{KI^-}M\to X''),$$ where $X'$ is a $KI^-$-module, $X''$ is a $K(\max\, I)$-module and $\varphi$ is a $K(\max\,I)$-module homomorphism. A homomorphism $f:X\to Y=(Y',Y'',\psi)$ of $KI$-modules is identified with a pair $(f',f'')$, where $f':X'\to Y'$ is a $KI^-$-module homomorphism, $f'':X''\to Y''$ is a $K(\max\, I)$-module homomorphism and $f''\varphi=\psi (f'\otimes \id)$. Equivalently, we may identify $X$ with the triple $$X=(X',X'',\overline{\varphi}:X'\to\Hom_{K(\max\, I)}(M,X'')),$$ where $X'$ is a $KI$-module, $X''$ is a $KI^-$-module and $\overline{\varphi}$ is the $KI^-$-module homomorphism adjoint to $\varphi$. A homomorphism $f:X\to Y=(Y',Y'',\psi)$ of $KI$-modules, in this case, is identified with a pair $(f',f'')$, where $f':X'\to Y'$ is a $KI^-$-module homomorphism, $f'':X''\to Y''$ is a $K(\max\, I)$-module homomorphism and $\overline{\psi} f'=\Hom_B(M,f'')\overline{\varphi}$. In the present paper we use and need these three presentations of a $KI$-module $X$. Let $\mod(KI)$ denotes the category of all finite dimensional right $KI$-modules. A $KI$-module $X$ is said to be [**prinjective**]{} if the $KI^-$-module $X'$ is projective. Let us denote by $\prin(KI)$ the full subcategory of $\mod(KI)$ consisting of prinjective $KI$-modules. Note that any projective $KI$-module is prinjective. The algebra $KI$ is said to be of [**finite prinjective type**]{} if the category $\prin(KI)$ is of finite representation type, i.e. there exist only finitely many isomorphism classes of indecomposable prinjective $KI$-modules. [Remark.]{} If the poset $I$ is of finite prinjective type, the $K$-algebra $KI$ may be of infinite representation type (even wild). Moreover the category of prinjective modules is not closed under submodules. Therefore the problem of the existence of Hall polynomials for prinjective modules does not reduce to the corresponding one for representation directed algebras and Ringel’s arguments given in [@ringel1] does not apply directly in our case. In this section wee present a reduction which allows us, in Section 3, to develop Ringel’s arguments in our case. Let us denote by $\modsp(KI)$ the full subcategory of $\mod(KI)$ consisting of socle projective modules, i.e. modules $X$ which have projective socle $\soc(X)$. Following [@s90] we define the functor $$\Theta:\prin(KI)\to\modsp(KI)$$ by $$(X',X'',\varphi)\mapsto (\Im \overline{\varphi},X'', \overline{j_\varphi})=(\Theta(X'),\Theta(X)'',\overline{j_\varphi}),$$ where $\overline{j_\varphi}$ is the adjoint map to the inclusion $j_\varphi:\Im \overline{\varphi}\hookrightarrow \Hom_{K(\max\, I )}(M,X'')$. Let us collect some properties of these categories and functor. [Lemma 2.2.]{} *A $KI$-module $X=(X',X'',\varphi)$ belongs to the category $\modsp(KI)$ if and only if $\soc(X)$ has the form $(0,Y,0)$, where $Y$ is a $K(\max\, I)$-module.* The functor $\Theta$ is full and dense with $\Ker\, \Theta =[(P,0,0)\, ;\; P\mbox{ {\rm projective} } \allowbreak KI^-\mbox{\rm -module}]$. 
Moreover $\Theta$ establishes a bijection between indecomposable modules which are not in $\Ker\,\Theta$ and indecomposable modules in $\modsp(KI)$. [**Proof.**]{} See [@s90] and [@ps]. Now we prove some facts about surjective and injective homomorphisms of $KI$-modules. These facts are essentially used in Section 3. [Lemma 2.3.]{} *Let $X=(X',X'',\overline{\varphi})$, $Y=(Y',Y'',\overline{\psi})$ be modules in $\prin(KI)$ and let $f=(f',f''):X\to Y$ be an injective (resp. surjective) $KI$-homomorphism. Then $\Theta(f)$ is an injective (resp. surjective) $KI$-homomorphism.* Let $X=(X',X'',\overline{\varphi})$, $Y=(Y',Y'',\overline{\psi})$ be modules in $\prin(KI)$ and let $f:X\to Y$ be a $KI$-homomorphism such that $\Theta(f)=(g',g''):\Theta(X)\to \Theta(Y)$ is surjective. If $Y$ has no direct summand of the form $(P,0,0)$, where $P$ is a projective $KI^-$-module, then $f$ is surjective. [**Proof.**]{} (a) Let $f:X\to Y$ be a homomorphism and $$g=(g',g'')=\Theta(f)=(\Hom_{K(\max\,I)}(M,f'')|_{\Im\overline{\varphi}},f'').$$ Assume that $f$ is injective. Then the morphisms $f'$ and $f''=g''$ are injective. Note that $g'$ is injective, because $f''$ is injective and the functor $\Hom_{K(\max\, I)}(M,-)$ is left exact. Now let $f$ be surjective. Then $f'$, $f''=g''$ are surjective. We have to show that $g':\Im \overline{\varphi}\to \Im \overline{\psi} $ is surjective. Note that $$g'(\Im \overline{\varphi})= \Hom_{K(\max\, I )}(M,f'')(\Im\overline{\varphi})=\Hom_{K(\max\, I )}(M,f'')\overline{\varphi}(X')=\overline{\psi} f'(X').$$ Since $f'$ is surjective we have $$\overline{\psi} f'(X')=\overline{\psi} (Y')=\Im\overline{\psi}.$$ Therefore $g'$ and $g$ are surjective. This finishes the proof of (a). \(b) Let $X=(X',X'',\overline{\varphi})$, $Y=(Y',Y'',\overline{\psi})$ be modules in $\prin(KI)$ and let $\Theta(f)=(\Hom_{K(\max\, I)}(M,f'')|_{\Im\overline{\varphi}},f'')=(g',g''):\Theta(X)\to \Theta(Y)$ be surjective. It follows that $g'\overline{\varphi}:X'\to \Theta(Y)'$ and $\overline{\psi}f'=g'\overline{\varphi}:X'\to \Theta(Y)'$ are surjective. Moreover, let $Y$ has no direct summand of the form $(P,0,0)$, where $P$ is a projective $KI^-$-module. By [@ps Lemma 3.3], $\overline{\psi}:Y'\to \Theta(Y)'=\Im \overline{\psi}$ is the projective cover of $\Theta(Y)'$ in $\mod (KI^-)$. Since $\overline{\psi}f'$ is surjective and $\overline{\psi}$ is the projective cover, the morphism $\overline{\psi}$ is essential, and therefore $f'$ is surjective and we are done. Let $|X|$ denotes the cardinality of a finite set $X$. Moreover, given $KI$-modules $X$, $Y$, let ${\rm Epi}_{KI}(X,Y)$ be the set of all surjective $KI$-homomorphisms $f:X\to Y$ and ${\rm Ker}\,\Theta(X,Y)$ be the set of all homomorphisms $f:X\to Y$ which are in $\Ker \,\Theta$ (in the case $X$, $Y$ are prinjective). [Corollary 2.4.]{} *Let $K$ be a finite field and $X=(X',X'',\overline{\varphi})$, $Y=(Y',Y'',\overline{\psi})$ be modules in $\prin(KI)$. If $Y$ has no direct summand of the form $(P,0,0)$, then* $$|{\rm Epi}_{KI}(X,Y)|= |{\rm Epi}_{KI}(\Theta(X),\Theta(Y)) | \cdot |\Ker\,\Theta (X,Y)|.$$ [**Proof.**]{} By Lemma 2.2(b) and Lemma 2.3 the functor $\Theta$ induces the surjective $K$-linear map $$\Theta :{\rm Epi}_{KI} (X,Y) \to {\rm Epi}_{KI} (\Theta(X),\Theta(Y))$$ by attaching to any surjective homomorphism $f:X\to Y$ the surjective homomorphism $\Theta (f)$. Lemma 2.3(a) finishes the proof. 
[Lemma 2.5.]{} *Let $K$ be a finite field and $X=(X',X'',\overline{\varphi})$, $Y=(Y',Y'',\overline{\psi})$, $Z=(Z',0,0)$ be modules in $\prin(KI)$. Assume that $Y$ has no direct summand of the form $(P,0,0)$, where $P$ is a projective $KI^-$-module.* If there exists a surjective homomorphism $f:X\to Y$, then there exists the unique (up to isomorphism) projective $KI^-$-module $U'$ such that $${\bf dim}\, U'={\bf dim}\,X'-{\bf dim}\, Y'.$$ If there is no surjective homomorphism $f:X\to Y$, then there is no surjective homomorphism $g:X\to Y\oplus Z$. Let $U'$ be the module defined in if there is a surjective homomorphism $f:X\to Y$ and $U'=0$ otherwise. Then $$|{\rm Epi}_{KI}(X,Y\oplus Z)|=|{\rm Epi}_{KI}(X, Y) |\cdot |{\rm Epi}_{KI^-}(U', Z') |\cdot | \Hom_{KI^-}(Y',Z')|.$$ [**Proof.**]{} (a) Let $f:X\to Y$ be a surjective homomorphism and consider $U=\Ker\, f=(U',U'',\phi)$. Since the $KI^-$ modules $X'$, $Y'$ are projective, the $KI^-$-module $U'$ is projective. Moreover ${\bf dim}\, U'={\bf dim}\,X'-{\bf dim}\, Y'$ and $U'$ is uniquely determined by its dimension vector (see [@ri pp 77]). The statement (b) is clear. \(c) If there is no surjective homomorphism $g:X\to Y$, then by (b) the formula given in (c) is clear. Let $g=\left[\matrix{g_1\cr g_2}\right] :X\to Y\oplus Z$ be a surjective homomorphism such that $g_1:X\to Y$, $g_2:X\to Z$ and let $U=\Ker\, g_1$. It follows that $g_1,g_2$ are surjective. Note that $X'$ may be identified with $U'\oplus Y'$, because $X'$, $Y'$ are projective $KI^-$-modules and $g_1':X'\to Y'$ is surjective with kernel isomorphic to $U'$. Therefore the condition ${\bf dim}\, U'={\bf dim}\,X'-{\bf dim}\, Y'$ is satisfied. By [@kos03 Lemma 2.3] there is an isomorphism of $K$-vector spaces $\Hom_{KI}(V,Z)\simeq \Hom_{KI^-}(V',Z')$ for any $KI$-module $V$. This isomorphism is given by $(f',f'')\mapsto f'$ and is based on the observation that $f''=0$ if $Z=(Z',0,0)$. Therefore $g_2$ may be identified with $g_2=[g_{21},\, g_{22}]:U'\oplus Y'\to Z'$, where $g_{21}:U'\to Z'$, $g_{22}:Y'\to Z'$. Consider the following commutative diagram with exact rows $$\begin{array}{ccccccccc} 0& \to & U& \hookrightarrow & X & \longarr{g_1}{15} &Y &\to & 0 \vspace{1ex} \\&& \mapdown{g_{21}} && \mapdown{\scriptsize{\left[\matrix{g_1\cr g_2}\right]}} && \mapdown{\scriptsize{\id}} && \\ 0& \to & Z& \longarr{\scriptsize{\left[\matrix{0\cr 1}\right]}}{15} & Y\oplus Z & \longarr{[1\,0]}{15} &Y &\to & 0\, .\end{array}$$ Since $g$ is surjective, by the Snake Lemma $g_{21}$ is surjective. So, with any surjective $KI$-homomorphism $g:X\to Y\oplus Z$ we associate two surjective $KI$-homomorphisms $g_1:X\to Y$, $g_{21}:U\to Z$ (identified with the surjective $KI^-$-homomorphism $g_{21}:U'\to Z'$) and a $KI^-$-homomorphism $g_{22}:Y'\to Z'$. Conversely, let $g_1:X\to Y$ be a surjective $KI$-homomorphism and $U=\Ker\,g_1$. Note that $X'\simeq U'\oplus Y'$, because $U'$, $X'$ and $Y'$ are projective $KI^-$-modules. Let $g_{21}:U'\to Z'$ be a surjective $KI^-$-homomorphism and $g_{22}:Y'\to Z'$ any $KI^-$-homomorphism. Then $g_2=[g_{21},g_{22}]:X\to Z$ is surjective (identified with $g_2:U'\oplus Y'\to Z'$). Finally we get a surjective $KI$-homomorphism $g=\left[ \matrix{g_1\cr g_2} \right]:X\to Y\oplus Z$. Indeed, let $(y,z)\in Y\oplus Z$. Let us fix the decomposition of $X\simeq U'\oplus Y'\oplus X''$ as a $K$-linear space. Since $g_1$ is surjective and $g_1(U)=0$, there exists $x_1=(0,x_1',x_1'')$ such that $g_1(x_1)=y$. 
Moreover $g_{21}$ is surjective, then there exists $x_2\in U'\subseteq X$ such that $g_{21}(x_2)=z-g_{22}(x_1')$. Let $x=(x_2,x_1',x_1'')$, therefore $g(x)=(g_1(x_1),z-g_{22}(x_1')+g_{22}(x_1'))=(y,z)$ and lemma follows. [Lemma 2.6.]{} [*Let $I$ be an arbitrary finite poset, and $KI$ - its incidence $K$-algebra. Let $P=\bigoplus_{i\in I}P(i)^{n_i}$, $n_i\geq 0$, $Q=\bigoplus_{i\in I}P(i)^{m_i}$, $m_i\geq 0$ be projective $KI$-modules. Then $\dim_K\Hom_{KI}(P,Q)=\sum_{i\in I}(\sum_{j\preceq i}n_im_j)$. In particular $\dim_K\Hom_{KI}(P,Q)$ is independent on the base field $K$.*]{} [**Proof.**]{} Let us recall that $\dim_K\Hom_{KI}(P(i),X)=\dim_KX_i$ (see [@ri pp 68]). Moreover $P(i)_j\simeq K$ if $i\preceq j$ in $I$ and $P(i)_j=0$ otherwise. Therefore lemma follows easily. Hall polynomials for posets of finite prinjective type ====================================================== Let $I$ be a poset of finite prinjective type and let $KI$ be its incidence $K$-algebra. In this section we prove the existence of Hall polynomials for prinjective $KI$-modules. Given finite dimensional $KI$-modules $X$, $Y$, $Z$ we define $F^Y_{Z,X}$ to be the number of modules $U\subseteq Y$ such that $U\simeq X$ and $Y/U\simeq Z$. It follows from [@s93], [@hosim] that the Auslander-Reiten quiver $\Gamma_I=\Gamma(\prin(KI))$ (resp. $\Gamma_{I-{\rm sp}}=\Gamma(\modsp(KI))$) of the category $\prin(KI)$ (resp. $\modsp(KI)$) is directed and coincides with its preprojective component. Moreover $\Gamma_I$ and $\Gamma_{I-{\rm sp}}$ do not depend on the base field $K$ (see [@s92 Chapter 11]). Let us recall that, by the definition, the vertices of Auslander-Reiten quiver corresponds bijectively to the isomorphism classes of indecomposable modules. For a given vertex $x\in (\Gamma_I)_0$ (resp. $x\in (\Gamma_{I-{\rm sp}})_0$) we denote by $M(K,x)$ (resp. $M_{\rm sp}(K,x)$) the corresponding indecomposable prinjective (resp. socle projective) $KI$-module. Moreover for any function $a:(\Gamma_I)_0\to \mathbb{N}$ (resp. $a:(\Gamma_{I-{\rm sp}})_0\to \mathbb{N}$) let $M(K,a)=\bigoplus_{x\in (\Gamma_I)_0}M(K,x)^{a(x)}$ (resp. $M_{\rm sp}(K,a)=\bigoplus_{x\in (\Gamma_{I-{\rm sp}})_0}M_{\rm sp}(K,x)^{a(x)}$) (see [@ringel1] for details). Moreover given a function $a\in {{\mathcal{B}}}$ we denote by $\Theta(a)\in {{\mathcal{B}}}_{sp}$ the function corresponding to the socle projective $KI$-module $\Theta(M(a))$. It follows from [@s92], [@s93], [@hosim] and [@ps] that the dimension vectors $\bdim M(K,a)$ and $\bdim M_{\rm sp}(K,a)$ depend only on the Auslander-Reiten quiver, so they do not depend on $K$. For the sake of simplicity we write $M(a)$ (resp. $M_{\rm sp}(a)$) instead of $M(K,a)$ (resp. $M_{\rm sp}(K,a)$) if the base field $K$ is known from the context. Denote by ${{\mathcal{B}}}$ (resp. ${{\mathcal{B}}}_{\rm sp}$) the set of all functions $a:(\Gamma_I)_0\to \mathbb{N}$ (resp. $a:(\Gamma_{I-{\rm sp}})_0\to \mathbb{N}$). It is clear that ${{\mathcal{B}}}$ (resp. ${{\mathcal{B}}}_{\rm sp}$) can be identified with the set of all finite dimensional prinjective (resp. socle projective) $KI$-modules. Given an arbitrary $KI$-module $M$ we denote by ${{\cal S}}(M)$ the set of all $KI$-modules $N$ such that $\bdim N< \bdim M$ (i.e. $\bdim N\neq \bdim M$ and $(\dim N)(i)\leq (\bdim M)(i)$ for all $i\in I$). [Lemma 3.1.]{} [*Let $I$ be a poset of finite prinjective type. For any $a,b\in {{\mathcal{B}}}$ (resp. $\overline{a},\overline{b}\in {{\mathcal{B}}}_{\rm sp}$) the natural number $h(a,b)=\dim_K\Hom_{KI}(M(a),M(b))$ (resp. 
$h(\overline{a},\overline{b})=\dim_K\Hom_{KI}(M_{\rm sp}(\overline{a}),\allowbreak M_{\rm sp}(\overline{b}))$) does not depend on the field $K$.*]{} [**Proof.**]{} Since the Auslander-Reiten quivers $\Gamma_I$ and $\Gamma_{I-{\rm sp}}$ are directed, the arguments given in [@ringel1] prove our lemma. For $a,b\in {{\mathcal{B}}}$ (resp. $\overline{a},\overline{b}\in{{\mathcal{B}}}_{sp}$) we define polynomial $\gamma_{ab}=T^{h(a,b)}\in\mathbb{Z}[T]$ (resp. $\gamma_{\overline{a}\overline{b}}=T^{h(\overline{a},\overline{b})}\in\mathbb{Z}[T]$). Note that $\gamma_{ab}(|K|)=|\Hom_{KI}(M(a),M(b))|$ (resp. $\gamma_{\overline{a}\overline{b}}(|K|)=|\Hom_{KI}(M_{sp}(\overline{a}),M_{sp}(\overline{b}))|$). [Lemma 3.2.]{} *Let $a,b\in {{\mathcal{B}}}$ and let $\overline{a},\overline{b}\in {{\mathcal{B}}}_{\rm sp}$ be such that $\Theta(M(a))=M(\overline{a})$ and $\Theta(M(b))=M(\overline{b})$.* $|\Ker\,\Theta(M(a),M(b))|=|K|^{h(a,b)-h(\overline{a},\overline{b})}$. There exists a polynomial $\omega _{ab}\in \mathbb{Z}[T]$ such that for any finite field $K$ we have $\omega_{ab}(|K|)=|\Ker\, \Theta(M(a),M(b))|$. [**Proof.**]{} (a) By Lemma 3.1 the natural numbers $h(a,b)$ and $h(\overline{a},\overline{b})$ are independent on the base field $K$. So let us fix a finite field $K$. By Lemma 2.2(b) we have $$|\Hom_{KI}(M(a),M(b))|=|\Hom_{KI}(M(\overline{a}),M(\overline{b}))|\cdot |\Ker\,\Theta(M(a),M(b))|.$$ To finish the prove of (a) we have only to observe that $|\Hom_{KI}(M(a),M(b))|=|K|^{h(a,b)}$ and $|\Hom_{KI}(M(\overline{a}),M(\overline{b}))|=|K|^{h(\overline{a},\overline{b})}$. \(b) Put $\omega_{ab}=T^{h(a,b)-h(\overline{a},\overline{b})}$. Then (b) follows from (a). [Theorem 3.3.]{} [*Let $I$ be a poset of finite prinjective type and let $a\in {{\mathcal{B}}}$ (resp. $\overline{a}\in{{\mathcal{B}}}_{\rm sp}$). There exists a monic polynomial $\alpha_{a}\in \mathbb{Z}[T]$ (resp. $\alpha_{\overline{a}}\in \mathbb{Z}[T]$) such that for any finite field $K$ $$|\Aut_{KI}(M(a))|=\alpha_{a}(|K|), \;\; (\mbox{resp. } |\Aut_{KI}(M_{\rm sp}(\overline{a}))|=\alpha_{\overline{a}}(|K|)).$$* ]{} [**Proof.**]{} We may follow the proof given in [@ringel1]. This theorem also follows from [@peng Proposition 2.1]. Given functions $x,y,z\in {{\mathcal{B}}}\cup {{\mathcal{B}}}_{sp}$, etc., for the sake of simplicity, we denote by capital letters $X$, $Y$, $Z$, etc. the $KI$-modules $M(K,x)$, $M_{sp}(K,x)$, $M(K,y)$, $M(K,z)$, respectively. However we should remember that $KI$-modules are identified with functions from the sets ${{\mathcal{B}}}$, ${{\mathcal{B}}}_{sp}$ and depend on the base field $K$. Moreover given a function $x\in {{\mathcal{B}}}$ we denote by $\Theta(x)$ the function in ${{\mathcal{B}}}_{sp}$ corresponding to the module $\Theta(X)$. [Lemma 3.4.]{} *Let $I$ be a poset of finite prinjective type. Let $x,y\in{{\mathcal{B}}}_{sp}$. There exist polynomials $\sigma ^y_x,\eta^y_x,\mu^y_x,\varepsilon^y_x\in \mathbb{Z}[T]$ such that for any finite field $K$:* $\sigma ^y_x(|K|)$ equals the number of submodules $U\subseteq Y$, such that $U\simeq X$, $\eta ^y_x(|K|)$ equals the number of submodules $U\subseteq Y$, such that $Y/U\simeq X$, $\mu^y_x(|K|)$ equals the number of injective homomorphisms $X\to Y$, $\varepsilon^y_x(|K|)$ equals the number of surjective homomorphisms $Y\to X$. [**Proof.**]{} One can prove this lemma by developing Ringel’s arguments given in [@ringel1]. For the convenience of the reader we outline the proof. If ${\bf dim}\, X\nleqslant {\bf dim}\, Y$, we set $\sigma^y_x=0=\eta^y_x$. 
Let ${\bf dim}\, X\leqslant {\bf dim}\, Y$. We apply induction on ${\bf dim}\, Y$. If ${\bf dim}\, Y=0$, then $X=0=Y$ and $\sigma^y_x=1=\eta^y_x$. Let $Y\neq 0$ and we start with induction on ${\bf dim}\, X$. Define two polynomials $\mu^y_x=\gamma_{xy}-\sum_{U\in {{\cal S}}(X)}\eta^x_u\alpha_u\sigma^y_u$, $\;\varepsilon^y_x=\gamma_{yx}-\sum_{U\in {{\cal S}}(X)}\eta^y_u\alpha_u\sigma^x_u$. Since the category $\modsp(KI)$ is closed under submodules, we may assume that $U$ arising in these sums is socle projective, because otherwise $\sigma^y_u=0=\sigma^x_u$. Moreover these sums are finite, because the poset $I$ is of finite prinjective type. All summands on the right side are defined by induction hypothesis. We claim that $\eta^x_u\alpha_u\sigma^y_u(|K|)$ equals the number of morphisms $f:X\to Y$ such that $\Im f\simeq U$. Indeed, for a given submodule $V\subseteq X$ such that $X/V\simeq U$ we fix a surjective homomorphism $g_V:X\to U$ with $\Ker\, g_V=V$. Similarly, if $W\subseteq Y$ is a submodule such that $W\simeq U$, we fix an injective homomorphism $h_W:U\to Y$ with $\Im h_W=W$. Homomorphisms $X\to Y$ with kernel $V$ and image $W$ correspond bijectively to automorphisms of $U$. This bijection is given by attaching to any automorphism $f:U\to U$ the following homomorphism $X\to Y$: $$X\longarr{g_V}{30}U\longarr{f}{30}U\longarr{h_W}{30}Y.$$ A homomorphism $X\to Y$ is injective if and only if its image is not isomorphic to any $U$ with ${\bf dim}\, U< {\bf dim}\, X$. Therefore $\mu^y_x(|K|)$ is the number of injective homomorphisms $X\to Y$. Dually, $\varepsilon^y_x(|K|)$ is the number of surjective homomorphisms $Y\to X$. Note that for all finite fields $K$, $\mu^y_x(|K|)(\alpha_x(|K|))^{-1}$ equals the number of submodules $U\subseteq Y$ with $U\simeq X$ and therefore it is an integer. By [@ringel1 page 441] the polynomial $\alpha_x$ divides $\mu^y_x$ in $\mathbb{Z}[T]$. Similarly, $\alpha_x$ divides $\varepsilon^y_x$ in $\mathbb{Z}[T]$. We put $\sigma^y_x=\mu^y_x(\alpha_x)^{-1}$ and $\eta^y_x=\varepsilon^y_x(\alpha_x)^{-1}$. This finishes the proof. [Lemma 3.5.]{} *Let $I$ be an arbitrary poset and let $X$, $Y$ be projective $KI$-modules there exist polynomials $\eta^y_x, \varepsilon^y_x\in \mathbb{Z}[T]$ such that for any finite field $K$:* $\eta ^y_x(|K|)$ equals the number of submodules $U\subseteq Y$ such that $Y/U\simeq X,$ $\varepsilon ^y_x(|K|)$ equals the number of surjective homomorphisms $Y\to X$. [**Proof.**]{} Let $X$, $Y$, $Z$ be $KI$-modules. By [@riedtmann Section 4], the number of submodules $U\subseteq Y$, such that $U\simeq Z$ and $Y/U\simeq X$, equals $$F^Y_{X,Z}=\frac{|\Ext^1_{KI}(X,Z)_Y||\Aut_{KI}(Y)|}{|\Aut_{KI}(Z)||\Aut_{KI}(X)| |\Hom_{KI}(Z,X)|},\leqno(*)$$ where $\Ext^1_{KI}(X,Z)_Y$ is the set of all exact sequences in $\Ext^1_{KI}(X,Z)$ with the middle term $Y$. Let us assume that $Y$ and $X$ are projective $KI$-modules. Let us fix a submodule $Z\subseteq Y$ such that $Y/Z\simeq X$. Since the category of projective modules is closed under kernels of surjective homomorphisms, the submodules $U\subseteq Y$ with $Y/U\simeq X$ are projective. Moreover $U\simeq Z$, because any exact sequence $0\to U\to Y\to X\to 0$ splits. Therefore $F^Y_{X,Z}$ equals the number of submodules $U\subseteq Y$ such that $Y/U\simeq X$. Note also that $\Ext^1_{KI}(X,Z)=0$ and therefore $|\Ext^1_{KI}(X,Z)_Y|=1$. By Lemma 2.6 the number $h(z,x)=\dim_K\Hom_{KI}(Z,X)$ is independent on the base field $K$ and the number of $KI$-homomorphisms $f:Z\to X$ equals $\gamma_{z,x}(|K|)$. 
We define $$\eta^y_x=\frac{\alpha_y}{\alpha_z\alpha_x\gamma_{z,x}}.$$ By Theorem 3.3 and $(*)$, $F^Y_{X,Z}=\eta^y_x(|K|)$ for any finite field $K$. Then the number $$\alpha_z(|K|)\alpha_x(|K|)\gamma_{z,x}(|K|)$$ divides $\alpha_y(|K|)$ for infinitely many finite fields $K$. Since the polynomial $\alpha_z\alpha_x\gamma_{z,x}$ is monic, it follows from [@ringel1 page 441] that it divides the polynomial $\alpha_y$ in $\mathbb{Z}[T]$ and therefore $\eta^y_x\in \mathbb{Z}[T]$. Consequently $\eta^y_x (|K|)$ equals the number of submodules $U\subseteq Y$ such that $Y/U\simeq X$. We put $\varepsilon^y_x=\eta^y_x\alpha_x\in \mathbb{Z}[T]$. Note that $\varepsilon^y_x(|K|)$ equals the number of surjective homomorphisms $f:Y\to X$. This finishes the proof. [Corollary 3.6.]{} *Assume that $I$ is of finite prinjective type and $x,y\in {{\mathcal{B}}}$. There exists a polynomial $\varepsilon^y_x\in \mathbb{Z}[T]$ such that for any field $K$:* $\varepsilon ^y_x(|K|)={\rm Epi}_{KI}(Y, X)$. [**Proof.**]{} If there is no surjective homomorphism $f:Y\to X$ for any field $K$, we put $\varepsilon^y_x=0$. Otherwise, let $X=\overline{X}\oplus Z$, where $Z=(P,0,0)$ with projective $KI^-$-module $P$ and $\overline{X}$ has no direct summand of the form $(P,0,0)$. Then $\Theta(X)=\Theta(\overline{X})$. In our case there exists a surjective homomorphism $f:Y\to X$ for some field $K$. Let $U'\simeq Y'/\overline{X}'$ be the unique (up to isomorphism) projective $KI^-$-module such that $\bdim\, U'=\bdim\, Y'-\bdim\, \overline{X}'$. By Lemma 3.4, there exists a polynomial $\varepsilon^{\Theta(y)}_{\Theta(x)}\in \mathbb{Z}[T]$ such that $\varepsilon ^{\Theta(y)}_{\Theta(x)}(|K|)$ equals the number of surjective homomorphisms $\Theta(Y)\to \Theta(X)$. By Lemma 3.5, there exists a polynomial $\varepsilon ^{u'}_{z'}\in \mathbb{Z}[T]$ such that $\varepsilon ^{u'}_{z'}(|K|)$ equals the number of surjective homomorphisms $U'\to Z'$. Put $$\varepsilon^y_x= \varepsilon ^{\Theta(y)}_{\Theta(x)}\cdot T^{h(y,\overline{x})-h(\Theta(y),\Theta(x))} \cdot T^{h(x',z')}\cdot \varepsilon^{u'}_{z'}.$$ By Corollary 2.4, Lemma 2.5 and Lemma 3.2, $\varepsilon^y_x$ is the required polynomial. [Corollary 3.7.]{} *Let $I$ be a poset of finite prinjective type and let $x,y\in {{\mathcal{B}}}$. There exists a polynomial $\eta^y_x\in \mathbb{Z}[T]$ such that for any finite field $K$:* $\eta ^y_x(|K|)$ equals the number of submodules $U\subseteq Y$, such that $Y/U\simeq X$. [**Proof.**]{} By Corollary 3.6, there exists a polynomial $\varepsilon^y_x\in \mathbb{Z}[T]$ such that $\varepsilon ^y_x(|K|)={\rm Epi}_{KI}(Y,X)$ for any finite field $K$. Note that, for any finite field $K$, the number $\varepsilon^y_x(|K|)\cdot \alpha_x(|K|)^{-1}$ is an integer, because it counts the number of submodules $U\subseteq Y$ such that $Y/U\simeq X$. Since $\alpha_X$ is a monic polynomial, it follows from [@ringel1 page 441] that $\alpha_x$ divides $\varepsilon^y_x$ in $\mathbb{Z}[T]$. Therefore $\eta^y_x=\varepsilon^y_x\cdot \alpha_x^{-1}\in \mathbb{Z}[T]$ is the required polynomial. [Theorem 3.8.]{} *Let $I$ be a poset of finite prinjective type and $x$, $y$, $z$ be functions in ${{\mathcal{B}}}$ (resp. $\overline{x},\overline{y},\overline{z}\in{{\mathcal{B}}}_{sp}$). There exist polynomials $\varphi ^y_{xz}\in \mathbb{Z}[T]$ (resp. $\varphi ^{\overline{y}}_{\overline{x}\overline{z}}\in \mathbb{Z}[T]$) such that for any finite field $K$:* $\varphi ^y_{xz}(|K|)=F^Y_{XZ}$ (resp. 
$\varphi ^{\overline{y}}_{\overline{x}\overline{z}}(|K|)=F^{\overline{Y}}_{\overline{X}\overline{Z}}$). [**Proof.**]{} We prove this theorem developing arguments given in [@ringel1] and facts proved in Sections 2 and 3. If ${\bf dim}\, Y\neq {\bf dim}\, Z+{\bf dim}\, X$ we put $\varphi ^y_{xz}=0$. Let ${\bf dim}\, Y= {\bf dim }\, Z+{\bf dim}\, X$. We apply induction on ${\bf dim}\, Z$. If ${\bf dim}\, Z=0$ we put $\varphi^x_{x0}=1$ and $\varphi^y_{x0}=0$ if $X\not\simeq Y$. Assume that $Z\neq 0$ and $Z=U_1\oplus U_2$, where $U_1\neq 0$, $U_1\simeq W^m$, $W$ is indecomposable, $W$ is not a direct summand of $U_2$ and no indecomposable direct summand of $U_2$ is a predecessor of $W$ in $\Gamma_I$ (resp. $\Gamma_{I-\rm sp}$ in the “socle projective” case). Let us consider two cases: [**Case 1.**]{} $U_2\neq 0$. We define $$\varphi^y_{xz}=\sum_d\varphi ^d_{xu_1}\varphi^y_{du_2},$$ where the sum runs over all modules $D$ such that ${\bf dim}\, D={\bf dim}\, X+{\bf dim}U_1$. Note that this sum is finite and runs over prinjective modules (resp. socle projective modules), because the category of prinjective modules (resp. socle projective modules) is closed under extensions and the poset $I$ is of finite prinjective type. Moreover the right side is already defined by induction hypothesis. One can prove that $\varphi^y_{xz}(|K|)=F^Y_{XZ}$ (see [@ringel1]). [**Case 2.**]{} $U_2=0$. We define $$\varphi^y_{xz}=\eta^y_x-\sum_{d\not\simeq z}\varphi^y_{xd},$$ where $d$ runs over all modules such that ${\bf dim}\, D={\bf dim}\, Z$. Since the category of prinjective is closed under kernels of epimorphisms and the category of socle projective modules is closed under submodules, we may assume that the modules $D$ are prinjective (resp. have projective socle). Note that $D$ is not a direct power of indecomposable, because $Z$ is a direct power of indecomposable, $Z\not\simeq D$ and $\bdim\, Z=\bdim\, D$ (see [@ars IX.2.1]). Therefore the polynomials $\varphi^y_{xd}$ are defined in Case 1. The polynomials $\eta^y_x$ are defined in Corollary 3.7 for prinjective modules and in Lemma 3.4 for socle projective modules. It is clear that $\varphi ^y_{xz}(|K|)=F^Y_{XZ}$ and this finishes the proof. The polynomials $\varphi^y_{xz}$ are called [**Hall polynomials**]{}. In the last chapter we present consequences of the existence of Hall polynomials for prinjective modules. Prinjective Ringel-Hall algebras ================================ We denote by ${{\cal H}}_{prin}(I)$ the free $\mathbb{Q}(T)$-module with basis $\{u_x\}_{x\in {{\mathcal{B}}}}$, indexed by the elements of the set ${{\mathcal{B}}}$. ${{\cal H}}_{prin}(I)$ is equipped with a multiplication defined by the formula: $$u_{x_1}u_{x_2}=\sum_{x\in {{\mathcal{B}}}}\varphi^x_{x_1x_2}u_x.$$ Note that this sum is finite, because the poset $I$ is of finite prinjective type and $\varphi^x_{x_1,x_2}\neq 0$ only if $\bdim X=\bdim X_1+\bdim X_2$. By [@ringel1 Proposition 4], ${{\cal H}}_{prin}(I)$ is an associative ring and the element $u_0$ is the identity element of ${{\cal H}}_{prin}(I)$. By the results of Section 3 this ring depends only on the poset $I$. We call ${{\cal H}}_{prin}(I)$ the [**prinjective generic Ringel-Hall algebra**]{} for the poset $I$. [**Concluding remarks.**]{} (1) In the forthcoming paper [@kos05b] description of ${{\cal H}}_{prin}(I)$ by generators and relations is given. Moreover in [@kos05b] we show connections of the prinjective Ringel-Hall algebra with Lie algebras and Kac-Moody algebras. 
\(2) In [@kos04] the existence of generic extensions for prinjective modules over posets of finite prinjective type is proved. It would be interesting to find connections between the monoid of generic extensions of prinjective modules and some specialization of prinjective Ringel-Hall algebra. Such a connection, for Dynkin quivers, one ca find in [@reineke2001]. \(3) In the paper [@kos05b] generators of prinjective Ringel-Hall algebra are given. Most of these generators are in the kernel of the functor $\Theta$. We can’t see natural candidates for generators in the “socle projective case”, therefore the category of prinjective modules is more convenient in our considerations. \(4) In [@kub] the existence of Hall polynomials for representations of finite type bisected posets is proved. However, in our case, it solves only the problem of the existence of Hall polynomials for socle projective modules over posets of finite prinjective type with exactly one maximal element. [999]{} D. M. Arnold, [*Representations of partially ordered sets and abelian groups*]{}, Contemporary Math., 87 (1989), 91-109. I. Assem, D. Simson and A. Skowroński, [*Elements of Representation Theory of Associative Algebras*]{}, London Math. Soc., Student Texts 65, Cambridge Univ. Press, Cambridge-New York, 2005, in press. M. Auslander, I. Reiten and S. Smalø, [*Representation Theory of Artin Algebras; Cambridge Studies in Advenced Mathematics 36*]{}, Cambridge University Press, Cambridge 1995. J. Y. Guo, [*The Hall polynomials of a cyclic serial algebras*]{}, Comm. Algebra 23 (1995), 743-751. H.-J. von Höhne and D. Simson, [*Bipartite posets of finite prinjective type*]{}, J. Algebra 201 (1998), 86-114. J. Kosakowska, [*Degenerations in a class of matrix problems and prinjective modules*]{}, J. Algebra 263 (2003), 262-277. J. Kosakowska, [*Generic extensions of prinjective modules*]{}, Algebras and Representations Theory, (2006) 9: 557-568. J. Kosakowska, [*The Ringel-Hall algebras for posets of finite prinjective type*]{}, 2005, preprint. O. V. Kubichka, [*Composition algebras for representations of finite type posets*]{}, Bull. Univ. Kiev, 4, 2002 (in Ukrainian). J. A. de la Peña and D. Simson, [*Prinjective modules, reflection functors, quadratic forms and Auslander-Reiten sequences*]{}, Trans. Amer. Math. Soc., 329 (1992), 733-753. L. Peng, [*Some Hall polynomials for representation-finite algebras*]{}, J. Algebra 197 (1997), 1-13. M. Reineke, [*Generic extensions and multiplicative bases of quantum groups at $q=0$*]{}, An Electronic Journal of the Amer. Math. Soc., Vol. 5 (2001), 147-163. Ch. Riedtmann, [*Lie algebras generated by indecomposables*]{}, J. Algebra 170 (1994), 526-546. C. M. Ringel, [*Tame Algebras and Integral Quadratic Forms*]{}, Lecture Notes in Mathematics, Vol. 1099 (Springer-Verlag, Berlin, Heidelbegr, New York, Tokyo 1984). C. M. Ringel, [*Hall algebras and quantum groups*]{}, Invent. Math. 101 (1990), 583-592. C. M. Ringel, [*Hall algebras*]{}, Banach Center Publications, Vol. 26, Warsaw 1990, 433-447. C. M. Ringel, [*Lie algebras arising in representation theory*]{}, London Math. Soc. Lecture Note Series, 168 (1992), 284-291. D. Simson, [*Module categories and adjusted modules over traced rings*]{}, Dissertationes Math. 269 (1990). D. Simson, [*“Linear Representations of Partially Ordered Sets and Vector Space Categories”*]{}, Algebra, Logic and Applications, Vol. 4, Gordon & Breach Science Publishers, London, 1992. D. Simson, [*Posets of finite prinjective type and a class of orders*]{}, J. 
Pure Appl. Algebra 90 (1993), 71-103. D. Simson, [*A reduction functor, tameness and Tits form for a class of orders*]{}, J. Algebra 174 (1995), 430-452. D. Simson, [*Representation types, Tits reduced quadratic forms and orbit problems for lattices over orders*]{}, in: Trends in Representation Theory of Finite Dimensional Algebras (Seattle, 1997), Contemp. Math. 229, Amer. Math. Soc., 1998, 307-342. [^1]: Partially supported by Polish KBN Grant 5 P03A 015 21
--- abstract: 'We perform cosmological simulations of galaxies forming at $z=3$ using the hydrodynamics grid code, *Enzo*. By selecting the largest galaxies in the volume to correspond to Lyman-break galaxies, we construct observational spectra of the HI flux distribution around these objects, as well as column densities of CIV and OVI throughout a refined region. We successfully reproduce the most recent observations of the mean HI flux in the close vicinity of Lyman-break galaxies but see no evidence for the proximity effect in earlier observations. While our galaxies do return metals to the IGM, their quantity and volume appears to be somewhat less than observed. We conclude that either we do not adequately resolve galactic winds, or that at least some of the intergalactic metal enrichment is by early epoch objects whose mass is smaller than our minimum resolved halo mass.' author: - 'Elizabeth J. Tasker, Greg L. Bryan' title: 'The environmental impact of Lyman-break galaxies' --- Introduction ============ The diverse structure and myriad of elements in the intergalactic medium (IGM) are witness to the strong role of stellar feedback in galaxies. What is not clear is which class of objects is primarily responsible for the enrichment. One set of likely candidates are the early forming Lyman-break galaxies (LBGs). These galaxies are found to be highly clustered, with high star formation rates and possess strong winds with velocities up to 775km s$^{-1}$ [@Pettini2001; @Pettini2002]. Whether LBGs are the cause of the intergalactic metals is hotly contested. The issue hangs on whether the metals were produced by massive galaxies such as the LBGs, or whether they originated in smaller objects at even earlier times, such as dwarf galaxies or population III stars. There is evidence to support both sides. Results from @Adelberger2003 [@Adelberger2005] (hereafter A03 and A05) find a cross-correlation between CIV systems and galaxies that is very similar to the autocorrelation function between LBGs. Additionally, gas that lies within 40kpc of LBGs contains strong CIV absorption lines and there is a similar association with the OVI systems. This is evidence that the LBGs might be the source of the metal systems. However, studies of the CIV column density between the redshifts of $1.5-5$ [@Songaila2001; @Schaye2003] seem to show relatively little variation in the amount of carbon in the IGM over time, suggesting that the IGM metals were already in place at the highest observable redshifts, and must therefore stem from the second option of small, early epoch objects. One query surrounding LBGs is whether they are able to thrust the metals they produce out of their gravitational pull and into the IGM. Observational studies of the LBGs have suggested that outflows from the galaxies are strong enough to bodily displace the gas and A03 observed a possible void in the absorbing gas at distances out to 1h$^{-1}$Mpc from the galaxy. This *proximity effect* results in an increase in the transmitted flux near the galaxy, where one would intuitively expect to see a decrease as the galaxy is approached. The most likely mechanism proposed for this was strong kinetic winds [@Croft2002], which seems plausible based on the observations of winds by @Pettini2002. However, recent simulation work has shown these winds have little effect on the absorption at these distances [@Kollmeier2005; @Bruscoli2003] and later observational work in this area (A05) has failed to confirm the increase in flux. 
Whether LBGs are therefore capable of dispersing metals into the IGM remains an open question. In this letter we perform cosmological simulations of galaxies forming at $z=3$. By constructing spectra that would be observed along multiple lines of sight from a distant quasi-stellar object, we measure the flux close to the galaxies and the column densities of HI, CIV and OVI in the IGM. By comparing these results with observations, we search for signs of a proximity effect surrounding LBGs and aim to determine the most likely cause of metal enrichment in the IGM. Computational Methods ===================== Our simulations were performed using the hydrodynamics adaptive mesh code, *Enzo* [@Bryan1999; @OShea2004]. We used a box with a comoving length of $20$h$^{-1}$Mpc and a $\Lambda$CDM cosmology with $(\Omega_\Lambda, \Omega_{\rm DM}, \Omega_{\rm b}, h, \sigma_8) = (0.7, 0.26, 0.04, 0.67, 0.9)$. The simulation was evolved through to $z=3$ and the location of the halos was found using a halo finder developed by @Eisenstein1998. Initially, a low resolution run of the whole box was performed and the halo positions located. The simulation was then repeated with a section of approximately 5h$^{-1}$Mpc resolved to a maximum resolution of 1.8kpc with a dark matter particle mass of $5\times 10^6$M$_\odot$. This translates into a minimum resolved halo mass of approximately $5\times 10^9$M$_\odot$. The star particles formed in the simulation return thermal energy and metals back to the baryons. The energy is put back into the thermal energy of the gas over a dynamical time and is equal to $10^{-5}$ of the rest mass energy of generated stars, equivalent to one supernova of $10^{51}$ ergs per 55M$_\odot$ formed. It is known that some of this energy is radiated away, although previous work with this code [@Tassis2003] has shown that it can eject gas from halos. Metals are returned to the gas over the same time period with a yield of 0.02. Radiative cooling is computed including metal line cooling based on the local metal content. To convert the metallicity into the ionization fractions for CIV and OVI, we used CLOUDY [@Ferland1998], assuming ionization spectra from @Haardt1996 and, for the softer UV background, @Haardt2001. ![Projections of the baryon density (left) and the fractional metallicity, $\rho_{\rm metal}/\rho_{\rm total}$. The white circles on the baryon density image show the location of the six Lyman-break galaxies. Note that the apparent close proximity of the central two galaxies is misleading, due to the image being projected down the $y$-axis. Image size is $5.31\times 4.69$h$^{-1}$Mpc with an integrated depth of $4.38$h$^{-1}$Mpc. Scales are logarithmic with baryon units M$_\odot$Mpc$^{-2}$.[]{data-label="fig:projections"}](fig1.eps){width="8.5cm"} Ideally, the LBGs in our simulation box would be chosen based on a calculation of their spectrum in the U-band, as with observations. However, for simplicity we selected the halos whose mass was of the same order as the observed halo masses of LBGs, $10^{11.5\pm 0.3}$M$_\odot$ (A05). With this lower limit, we get a space density over the simulation box of $6.75\times 10^{-3}$h$^3$Mpc$^{-3}$ in good agreement with the observed value of $8\times 10^{-3}$h$^3$Mpc$^{-3}$ [@Adelberger1998]. Within our refined region (which is actually overdense with respect to the rest of the box), we get 6 halos that are above the cutoff mass. These are circled in white in the left-hand panel of Figure \[fig:projections\].
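As a quick sanity check of the numbers quoted above (our own back-of-the-envelope arithmetic, not part of the *Enzo* configuration), the feedback normalisation and the halo space density can be verified in a few lines:

```python
# Back-of-the-envelope checks (cgs units); constants rounded, purely illustrative.
M_SUN = 1.989e33           # g
C_LIGHT = 2.998e10         # cm/s

# 1e-5 of the rest-mass energy of 55 Msun of stars is roughly one 1e51 erg supernova:
e_feedback = 1e-5 * 55 * M_SUN * C_LIGHT**2
print(f"feedback per 55 Msun formed: {e_feedback:.2e} erg")   # ~9.8e50 erg

# Space density of LBG-mass halos implied over the full 20 h^-1 Mpc box:
box_volume = 20.0**3                                          # (h^-1 Mpc)^3
print(f"halos above the mass cut in the box: {6.75e-3 * box_volume:.0f}")  # ~54
```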
Results ======= Figure \[fig:projections\] shows the baryon density (left panel) and metallicity of the refined region containing 6 halos we classify as LBGs. We can see that the LBGs are situated at the intersection of multiple gas filaments, and so mark out the densest areas of the simulation box in agreement with the observed highly biased distribution [@Adelberger1998]. They are also a source of metals, with each region round the LBG being surrounded by a high metallicity bubble, as shown in the right-hand panel of Figure \[fig:projections\]. Flux properties --------------- To analyse the distribution of the HI near to the LBGs in our simulation, we computed the optical depth along 2000 randomly chosen lines of sight. By calculating the observed distance to each galaxy along the lines of sight, the average flux, $\left<F\right>=e^{-\tau}$, as a function of distance from the galaxy, $q$, was measured. The flux was divided and averaged into just under 50 bins and these results are presented in Figure \[fig:flux\]. Since the exact amplitude of the background ionizing radiation is uncertain, the results are scaled to match Adelberger’s 2005 results at $q=3.25$h$^{-1}$Mpc. ![Mean transmitted flux as a function of distance from the Lyman-break galaxies. The symbols mark out the observational results from A03 (open circles) and A05 (filled circles). Our simulation results are shown by the solid line and agree well with the most recent data set. All plots are normalised to agree with the 2005 (A05) observations at $q/h^{-1}= 3.25$Mpc.[]{data-label="fig:flux"}](fig2.ps){width="7.0cm"} From Figure \[fig:flux\], we see a clear difference in the trends of the observational results at low $q$. The earlier observations of A03 (open circles) show a definite increase in the transmitted flux, which is missing in the later results. In their paper, A05 note that the most likely reasons for this are the increase in sample size between A03 and A05, or an evolution in the LBGs between the epochs of the two data sets, taken at $z\sim 3$ and $z\sim 2$. Our simulation results (black line) show no proximity effect like that in A03, but agree closely with A05, indicating that sample size was the cause of the discrepancy, but not evolution since our data is at the earlier epoch of $z\sim 3$. ![Comparison between the simulated and observed Lyman-$\alpha$ absorption within 1Mpc of LBGs. The abscissa shows the flux decrement $1-<F_{1\rm Mpc}>$, where $<F_{1\rm Mpc}>$ is the average flux within 1Mpc of the galaxy. The left-hand ordinate shows the relative (in the case of simulated data) number of measurements at a given decrement interval. The right-hand ordinate on the top panel shows the actual number of measurement for the *Enzo* simulation. The observational and SPH simulation results were taken from A05. []{data-label="fig:flux_decrement"}](fig3.eps){width="7.0cm"} The distribution in absorbing gas can be further studied by looking at how it varies within 1h$^{-1}$Mpc of the LBGs. Figure \[fig:flux\_decrement\] shows the distribution of the flux decrement, $1-\left<F\right>$, within this volume for the simulation results from *Enzo*, the recent observations from A05 and the simulation results performed by Kollmeier et al., (taken from A05). Of the 2000 lines-of-sight that we passed through the simulation refined region, 884 passed within 1h$^{-1}$Mpc of an LBG. The average flux decrement within this area was calculated for each of these lines and scaled as for Figure \[fig:flux\]. 
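The per-sightline average just described can be sketched in a few lines. The snippet below is a minimal illustration of the calculation, not the actual analysis pipeline: the array names, shapes, toy data and bin choices are assumptions, and the rescaling to the A05 normalisation is omitted.

```python
import numpy as np

def flux_decrements(tau, q, q_max=1.0):
    """For each sightline, average F = exp(-tau) over the pixels lying within
    q_max (h^-1 Mpc) of an LBG and return the flux decrement 1 - <F>."""
    flux = np.exp(-tau)                      # tau, q have shape (n_los, n_pixels)
    near = q < q_max
    decs = []
    for f, m in zip(flux, near):
        if m.any():                          # keep only sightlines passing within q_max
            decs.append(1.0 - f[m].mean())
    return np.array(decs)

# toy usage: 2000 sightlines with random optical depths and LBG distances
rng = np.random.default_rng(0)
tau = rng.exponential(0.3, size=(2000, 400))
q = rng.uniform(0.0, 5.0, size=(2000, 400))
dec = flux_decrements(tau, q)
hist, edges = np.histogram(dec, bins=20, range=(0.0, 1.0))  # decrement distribution
```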
A normalised scale, with the total number of measurements the same as Adelberger’s observations, is marked on the left-hand axis while the actual numbers are shown on the right. The observations indicate a bimodal distribution, with gas either very dense near to the galaxy, or largely absent. This, A05 argues, is consistent with the idea that anisotropic winds are present in these systems which are clearing a path in the gas which roughly half the lines-of-sight are passing through. If true, this could explain the discrepancy between the results, with the earlier 2003 observations picking out voids on one side of the galaxy that the larger sample set in 2005 averages out. The *Enzo* simulation shows some indication of a bimodal distribution, but with very few transparent lines-of-sight. In addition, there are a larger number of lines-of-sight with very high decrements. Kollmeier’s results are similar, also seeing a lack of voids in the gas, but with no indication of a bimodal distribution at all. The simulations are therefore not seeing as strong a population of galaxies with the anisotropic voids. Intergalactic metals -------------------- To measure the quantity of metals being ejected into the IGM, we calculate the column density of HI, CIV and OVI along 500 lines-of-sight. These are plotted in Figure \[fig:metals\] along with observational data. We note that we do not attempt to model observational effects such as confusion between HI and OVI lines, which will lead to systematic uncertainties at the low column-density end. ![Column density distributions for HI (top panel), CIV and OVI (middle panel) and for CIV and OVI calculated with a softer UV background along with observational data (as listed).[]{data-label="fig:metals"}](fig4.eps){width="7.0cm"} The top panel of Figure \[fig:metals\] shows the column density of HI in our simulation (black line) alongside the observational results from @Petitjean1993 and @Hu1995. Our results seem to be systematically slightly higher at medium to high column densities and curve away at lower values. The former is due to our choice of a high density area of the simulation box, where a number of LBG candidates were placed. The fall off at low densities is most likely due to a limitation in resolution of the simulation [@BryanMachacek1999] and low column density absorbers (which correspond to small filaments in under-dense regions) are underestimated. In the middle panel of Figure \[fig:metals\], the column densities for CIV and OVI are plotted with the observational data for CIV from @Scannapieco2005 at $z=2.7$ and $z=1.9$. The match between our results and the observations at first seems extremely good, pointing towards the objects in the region producing the required metal enrichment of the IGM. However, as we have seen from the hydrogen distribution above, we have picked an over-dense region to study and therefore on average, we would expect the metal content to be lower. This implies that either the metals are more clustered around the galaxies than observed, or not enough metals are being ejected into the IGM. Softening the UV background, as shown in the bottom panel of Figure \[fig:metals\], improves things slightly, but not enough to match the HI over-density. Further softening, or a supersolar yield, could resolve the difference, but appear to be unlikely [@Aguirre2005]. Therefore, we conclude that either our galaxies do not have powerful enough outflows to drive the metals into the IGM, or they are not the main source of IGM enrichment. 
If they are not the source, it opens the question as to what is causing the metal enrichment. As can be seen from Figure \[fig:projections\], contributions to our measured metal density come not just from the six LBG halos discussed in the previous section, but from all resolved galaxies. Therefore, if these are not enough, then smaller objects, below our minimum resolved halo mass of approximately $5\times 10^9$M$_\odot$, must be responsible. This minimum lies at the dwarf galaxy mass scale, so the responsible objects would have to be dwarf-sized or smaller.

If this result is correct and LBGs are not the source of IGM enrichment, then the question remains why there seems to be a correlation between LBGs and CIV systems (A03, A05). A solution to this was suggested by @Porciani2005, who propose early-forming dwarf galaxies, with the same biased distribution as LBGs, as the metal source. The LBGs are gravitationally drawn to the same over-dense sites and form in bubbles of old metals. @Madau2001 also find that metal enrichment is more likely to come from pregalactic outflows than later ones from LBGs. Additionally, simulations performed by @Scannapieco2005 tried multiple schemes of metal injection but found that the observations were only replicated well when the metals were placed in bubbles around the galaxies. They point out that outflows from the shallower gravitational potentials of dwarf galaxies are a more likely candidate for the bubbles than the LBGs.

An alternative explanation for the LBGs' weak effect on the IGM is that our feedback prescription is not able to correctly reproduce outflows of gas and metals. It is clear that we cannot resolve the full multi-phase nature of the ISM, although we are currently working to understand and improve our treatment of feedback [@Tasker2006]. A visual inspection of our galaxies reveals several outflows, with the maximum velocity in the region being $750$km s$^{-1}$, encouragingly close to the observed outflows of $775$km s$^{-1}$ by @Pettini2002. However, only two of our six halos showed strong outflows and only in one direction, indicating our outflows are neither homogeneous nor ubiquitous. Whether this is a problem is hard to determine; work performed by @Kollmeier2005 and @Theuns2002 on the effect of winds on absorption indicates that the lack of strong winds is probably not the cause of the decreased flux seen in Figure \[fig:flux\]. @Kollmeier2005 found that the gas in the IGM is largely unaffected by the presence of winds, which have only a small impact on the optical depth close to LBGs. @Theuns2002 and @Bruscoli2003 likewise found that the winds preferentially expanded into the voids, leaving the hydrogen filaments intact. It therefore seems likely that we are correctly modeling the HI around the LBGs. The shortfall in intergalactic metals is more difficult to judge. From Figure \[fig:projections\], the metals appear closely centered around their source, an effect that might be genuine, due to pressure from the IGM [@Ferrara2005], or due to weak feedback. By using an extremely soft UV background and preventing heated gas from cooling for $10^7$yr, @Theuns2002 is able to reproduce the observed CIV density well, implying that galactic outflows at $z=3$ can enrich the IGM if feedback is more efficient than presented here. However, @Aguirre2005 find that the metals are too inhomogeneously distributed compared to observations, a symptom that would not change in the presence of strong winds. They therefore argue that a second, high-$z$ enrichment mechanism is still needed.
Conclusions =========== We performed detailed simulations of galaxies formed at $z=3$ using an adaptive-mesh code which included recipes for star formation, metal production and SN feedback. From this data, we looked at the transmitted HI flux in simulated QSO spectra near large galaxies and the production of metals that are returned to the IGM. Our calculated HI flux in the neighborhood of the galaxies agrees well with the most recent observational results (A05) in which the mean transmitted flux decreases near Lyman-break galaxies. While the mean properties agree well, there is an indication that the distribution of transmitted fluxes does not show the same strong bimodality seen in the observations. We also generate simulated CIV and OVI absorption lines and find that, in our simulation box, the column density distributions for these tracers are somewhat lower than observed. A visual inspection of the simulations shows the metals to be quite concentrated around galaxies (both large and small). We conclude that while we do find a significant amount of the IGM polluted with metals, the amount falls short of observations, implying that either our treatment of feedback does not generate sufficiently strong winds, or that objects smaller than our minimum resolution of about $ 5\times 10^9$M$_\odot$ are responsible for the remaining IGM enrichment. EJT and GLB acknowledge support from PPARC and the Leverhulme Trust. This work was partially supported by NSF grant AST-0507161. We thank the National Center for Supercomputing Applications for computing resources. Adelberger, K. L., Steidel, C. C., Giavalisco, M., Dickinson, M., Pettini, M., & Kellogg, M.  1998, , 505, 18 Adelberger, K. L., Steidel, C. C., Shapley, A. E., & Pettini, M. 2003, , 584, 45 Adelberger, K. L., Shapley, A. E., Steidel, C. C., Pettini, M., Erb, D. K., & Reddy, N. A.  2005, , 629, 636 Aguirre, A., Schaye, J., Hernquist, L., Kay, S., Springel, V., & Theuns, T. 2005, , 620, L13 Bruscoli, M., Ferrara, A., Marri, S., Schneider, R., Maselli, A., Rollinde, E., & Aracil, B.  2003, , 343, L41 Bryan,G.L. Comp. Phys. and Eng. 1999, 1:2, p. Bryan, G. L., Machacek, M., Anninos, P., & Norman, M. L. 1999, , 517, 13 Croft, R. A. C., Hernquist, L., Springel, V., Westover, M., & White, M. 2002, , 580, 634 Eisenstein, D. J., & Hut, P. 1998, , 498, 137 Ferland, G. J., Korista, K. T., Verner, D. A., Ferguson, J. W., Kingdon, J. B., & Verner, E. M. 1998, , 110, 761 Ferrara, A., Scannapieco, E., & Bergeron, J. 2005, , 634, L37 Haardt, F., & Madau, P. 2001, Clusters of Galaxies and the High Redshift Universe Observed in X-rays, Haardt, F., & Madau, P. 1996, , 461, 20 Hu, E. M., Kim, T.-S., Cowie, L. L., Songaila, A., & Rauch, M. 1995, , 110, 1526 Kollmeier, J. A., Miralda-Escude, J., Cen, R., & Ostriker, J. P. 2005, ArXiv Astrophysics e-prints, arXiv:astro-ph/0503674 Madau, P., Ferrara, A., & Rees, M. J. 2001, , 555, 92 O’Shea, B. W., Bryan, G., Bordner, J., Norman, M. L., Abel, T., Harkness, R., & Kritsuk, A. 2004, ArXiv Astrophysics e-prints, astro-ph/0403044 Petitjean, P., Webb, J. K., Rauch, M., Carswell, R. F., & Lanzetta, K. 1993, , 262, 499 Pettini, M., Shapley, A. E., Steidel, C. C., Cuby, J.-G., Dickinson, M., Moorwood, A. F. M., Adelberger, K. L., & Giavalisco, M. 2001, , 554, 981 Pettini, M., Rix, S. A., Steidel, C. C., Adelberger, K. L., Hunt, M. P., & Shapley, A. E. 2002, , 569, 742 Porciani, C., & Madau, P. 2005, , 625, L43 Scannapieco, E., Pichon, C., Aracil, B., Petitjean, P., Thacker, R. 
J., Pogosyan, D., Bergeron, J., & Couchman, H. M. P. 2005, ArXiv Astrophysics e-prints, arXiv:astro-ph/0503001 Schaye, J., Aguirre, A., Kim, T.-S., Theuns, T., Rauch, M., & Sargent, W. L. W., 2003, , 596, 768 Songaila, A. 2001, , 561, L153 Tasker, E. & Bryan, G.L. 2006, , in press Tassis, K., Abel, T., Bryan, G. L., & Norman, M. L., 2003, , 587, 13 Theuns, T., Viel, M., Kay, S., Schaye, J., Carswell, R. F., & Tzanavaris, P. 2002, , 578, L5
UAB-FT-645\ FERMILAB-PUB-08-152-T\ ANL-HEP-PR-08-32\ EFI-08-15 [**The Effective Theory of the Light Stop Scenario**]{} [**M. Carena$^{\,a}$, G. Nardini$^{\,b}$, M. Quirós$^{\,b,c}$, C.E.M. Wagner$^{\,d,e}$**]{}\ ${}^a\!\!$ [*[Theoretical Physics Department, Fermilab, P.O. Box 500, Batavia, IL 60510]{}*]{} ${}^b\!\!$ [*[ IFAE, Universitat Aut[ò]{}noma de Barcelona, 08193 Bellaterra, Barcelona (Spain)]{}*]{} ${}^c\!\!$ [*[Instituciò Catalana de Recerca i Estudis Avançats (ICREA)]{}*]{} ${}^d\!\!$ [*[HEP Division, Argonne National Laboratory, Argonne, IL 60439]{}*]{} ${}^e\!\!$ [*[EFI, KICP and Physics Deparment, Univ. of Chicago, Chicago, IL 60637]{}*]{} **Abstract** > Electroweak baryogenesis in the minimal supersymmetric extension of the Standard Model may be realized within the light stop scenario, where the right-handed stop mass remains close to the top-quark mass to allow for a sufficiently strong first order electroweak phase transition. All other supersymmetric scalars are much heavier to comply with the present bounds on the Higgs mass and the electron and neutron electric dipole moments. Heavy third generation scalars render it necessary to resum large logarithm contributions to perform a trustable Higgs mass calculation. We have studied the one–loop RGE improved effective theory below the heavy scalar mass scale and obtained reliable values of the Higgs mass. Moreover, assuming a common mass ${\widetilde m}$ for all heavy scalar particles, and values of all gaugino masses and the Higgsino mass parameter about the weak scale, and imposing gauge coupling unification, a two-loop calculation yields values of the mass ${\widetilde m}$ in the interval between three TeV and six hundred TeV. Furthermore for a stop mass around the top quark mass, this translates into an upper bound on the Higgs mass of about 150 GeV. The Higgs mass bound becomes even stronger, of about 129 GeV, for the range of stop and gaugino masses consistent with electroweak baryogenesis. The collider phenomenology implications of this scenario are discussed in some detail. Introduction ============ The minimal supersymmetric extension of the Standard Model (MSSM) has become the preferred candidate for the ultraviolet completion of the Standard Model (SM) beyond the TeV scale. The MSSM description may be extended up to a high (GUT or Planck) scale, and the search for supersymmetric particles is therefore one of the main experimental goals at the forthcoming Large Hadron Collider (LHC) at CERN. Among its main virtues, on top of solving the hierarchy problem of the Standard Model, the MSSM leads to a natural unification of the gauge couplings consistent with precision electroweak data and provides a natural candidate for the Dark Matter of the Universe (namely the lightest neutralino). On the other hand electroweak baryogenesis [@early] is a very elegant mechanism for generating the baryon asymmetry of the Universe that can be tested at present accelerator energies and, in particular, at the future LHC. It turns out that electroweak baryogenesis can not be realized within the Standard Model [@Shaposhnikov:1987tw; @Gavela:1993ts], while it is not a generic feature of the MSSM for arbitrary values of its parameters [@Giudice:1992hh; @Espinosa:1993yi]. However, a particular region in the space of supersymmetric parameters was found in the MSSM, where electroweak baryogenesis has a chance of being successful [@Carena:1996wj], dubbed under the name of light stop scenario (LSS). 
Since the generation of the BAU in the LSS is challenging other alternatives (where the right-handed stop is not singled out) have been explored in the literature. In particular in the context of split supersymmetry, and if one allows $R_p$-violating couplings, it was proven in Ref. [@Huber:2005iz] that superheavy squarks can produce enough baryon asymmetry when they decay out-of-equilibrium, while some splitting between left and right-handed mass squarks is required by the gluino cosmology. Moreover beyond the MSSM there are plenty of other possibilities. The simplest one is introducing singlets in the MSSM light spectrum (the so-called NMSSM [@Pietroni:1992in] or nMSSM [@Menon:2004wv]), or even adding an extra $Z'$ gauge boson [@Kang:2004pp], which easily triggers a strong first order phase transition. Since the generation of the BAU in the MSSM has inherent uncertainties of order one, large variations in the final results appear due to the different approaches which have been considered in the literature [@CQRVW] [^1]. According to these results, it looks possible to achieve the proper baryon asymmetry fulfilling all experimental bounds and in view of the forthcoming LHC running, it is worth refining the predictions of the LSS. In this paper we will then consider the effective theory of the LSS while in a companion paper [@CNQW2] the phase transition will be analyzed in great detail using the results provided by the present analysis. The light stop scenario of the MSSM is characterized by a light right-handed stop (with a mass near the top quark mass) while all other squarks and sleptons should be heavy enough in order to cope with present LEP bounds on the Higgs mass and to avoid large flavor, CP violation and electric dipole moment effects [@EDM; @edm1]. On the other hand, supersymmetric fermions (Higgsinos and gauginos) are required to be at the electroweak (EW) scale (this fact can be technically natural as a consequence of some partly conserved $R$-symmetry) in order to trigger the required CP-violating currents needed for baryogenesis [@CQRVW], as well as providing a Dark Matter candidate [@Balazs]. Moreover even if the LSS is consistent with a light CP-odd Higgs boson, a large splitting between the lightest CP-even Higgs and the CP-odd Higgs masses helps to avoid all phenomenological constraints, because it emulates the Standard Model Higgs sector at low energy (LE). In practice we will consider all heavy scalars (sleptons, non-SM Higgs bosons and squarks, except for the right-handed stop) at a common scale $\tilde m$ and study the LE Effective Theory (ET) below that scale. We will use the $\overline{MS}$ renormalization scheme and resum the large logarithms which will appear in the calculation of various observables by using Renormalization Group Equations (RGE) techniques. In particular we will make use of the run-and-match technique [@Georgi:1994qn] by which every particle decouples at its mass scale using the step-function approximation. The high-energy (HE) and LE theories, with different RGE in both regions, should match at the decoupling scale providing (finite) thresholds for the various couplings. In this way, considering a common decoupling scale is an approximation which amounts to neglect possible thresholds corresponding to the mass differences around $\tilde m$, and that should not affect our results in a significant way. 
For very large values of $\tilde m$ the model is a variant of Split Supersymmetry [@ArkaniHamed:2004fb], where the right-handed stop is also (light) in the LE theory. Thus in the spirit of Split Supersymmetry every light particle is required by one particular experimental input: apart from the light Higgs, required by electroweak symmetry breaking, the light stop is required to trigger a strong enough first order phase transition while light charginos and neutralinos are required to generate enough baryon asymmetry and to become dark matter candidates. On the other hand gauge coupling unification, which works reasonably well in the MSSM, is an important issue. As we will see a two-loop analysis points towards values of $\tilde m$ between ten and one hundred TeV for the case where all gauginos are at the electroweak scale, and around one order of magnitude larger for hierarchical gaugino masses as required by gaugino mass unification and by electroweak baryogenesis. The outline of this paper is as follows. In Section \[effective\] we present our LE effective theory below $\tilde m$ as well as the matching conditions between the couplings of LE and HE theories, the threshold conditions for the different couplings and the $\beta$-functions in the LE Effective Theory. The technical details of the calculation of threshold conditions are presented in Appendix \[thresholds\] and those about the RGE in Appendix \[RGE\]. In section \[numerical\] we present the numerical results based on the calculation of the previous section. In particular, the predictions of different parameters in the LE effective theory and the corresponding value of the Higgs mass. In Section \[unification\] we consider the issue of gauge coupling unification. We show that the unification scale is $M_{GUT}= 1\div2 \times 10^{16}$ GeV while imposing the experimental value for the strong coupling leads to values of the heavy sfermion masses ${\widetilde m}$ in good agreement with the values of the parameters required to fulfill the electric dipole moment constraints in the EWBG scenario within the MSSM [@EDM; @edm1; @Balazs]. In Section \[pheno\] we present some ideas for the experimental detection of $\tilde t_R$ in our model, as well as the possibility of having a Dark Matter candidate. Finally in Section \[conclusion\] we present our conclusions. The effective theory {#effective} ==================== The theory at an energy scale $\tau$ between the EW scale and ${\widetilde m}$, at which supersymmetry is broken, contains all the SM particles and the Bino, Winos and Higgsinos, as well as the light stop. All other squarks and sleptons are heavy, with masses about ${\widetilde m}$, and decouple from the low energy theory. The gluino, with a mass $M_3$ much below ${\widetilde m}$, may be much heavier than the other gauginos and, in this case, when $\tau < M_3$ it will decouple too. Therefore the corresponding low energy effective Lagrangian is given by \_[eff]{} &=& m\^2 H\^H-( H\^H)\^2 - h\_t + Y\_t\ &&-\_[g]{} [g]{}\^a [g]{}\^a -\^A [W]{}\^A - -\_u\^T\_d - M\_U\^2 |[t\_R]{}|\^2\ && - \_[g]{} G  [t\_R]{} [g]{}\^a \^a t\_R + J  [t\_R]{} [B]{} t\_R - K |[t\_R]{}|\^2 |[t\_R]{}|\^2 - Q   |[t\_R]{}|\^2 |[H]{}|\^2  +[h.c.]{}\ && +( g\_u \^a [W]{}\^a + g’\_u [B]{} ) [H]{}\_u +( - g\_d \^a [W]{}\^a + g’\_d [B]{} ) [H]{}\_d +   , \[lagreff\] where the gluino decoupling is taken into account by the symbol $\Theta_{\tilde g}$ which is equal to 1 (0) for $\tau\geq M_3 (\tau< M_3)$. 
For simplicity in (\[lagreff\]) we do not write the kinetic terms explicitly and we approximate the Lagrangian by taking into account only interactions of the SM fields, charginos, neutralinos and the right-handed stop coming from renormalizable high energy terms proportional to the gauge couplings $g', g, g_3$ or the supersymmetric top Yukawa coupling $\lambda_t$ without considering flavour mixing. Furthermore in (\[lagreff\]) the field $H$ is defined as the light projection of the MSSM Higgs bosons, given by $H_u \to {\sin\beta}H\,,H_{d,i} \to {\cos \beta}\epsilon_{ij} H^*_j$, with ${\tan \beta}\equiv \langle H_u^0 \rangle/\langle H_d^0 \rangle$. At the energy scale ${\widetilde m}$ the effective Lagrangian (\[lagreff\]) has to describe the physics of the HE theory, which implies that the following one–loop [*matching conditions*]{} have to be satisfied \[matchQ\] Q(m )-Q&=&(\_t\^2 (m )\^2+  g’\^2([m]{}) 2) (1-[12]{} Z\_Q) ,\ \[matchLamb\] (m ) -&=& \^22(1-[12]{} Z\_) , \[lambda\]\ K (m )-K &=& (g\_3\^2 ([m]{})+ g’\^2([m]{}))(1-[12]{} Z\_K) ,\[matchK\]\ G (m )-G&=&g\_3 (m ) (1-[12]{}Z\_G) ,\ h\_t(m )-&=&\_t (m )(1-[12]{}Z\_[h\_t]{}) , \[matchht\] Y\_t (m )-&=& \_t (m )(1-[12]{}Z\_[Y\_t]{}) , \[matchY\]\ g\_u([m]{})=g([m]{}) , && g\_d([m]{})=g([m]{}) , \[weak1\]\ g’\_u([m]{})=g’([m]{}) , && g’\_d([m]{})=g’([m]{}) ,\[weak2\]\ J (m )=g’(m )  ,&& \[match\] where the quantities $\Delta Q$, $\Delta \lambda$, $\Delta K$, $\Delta G$, $\Delta h_t$, $\Delta Y_t$ and $\Delta Z_i$ are the [*threshold*]{} functions. In particular $\Delta Z_i$ are the wave function thresholds coming from the matching of low and high energy propagators and the canonical normalization of ET kinetic terms while the others come directly from the matching of the low and high energy proper vertices (details of the calculation are given in Appendix \[thresholds\]). In this work we will consider for the threshold and $\beta$-functions the leading contributions and thus we will use the approximation of neglecting the one–loop corrections proportional to $g'$, $g$ and the Yukawa couplings other than that of the top-quark (as well as the low energy couplings correlated to those). Following this criterion we consider no threshold in the matchings (\[weak1\])-(\[match\]) since they do not appear at tree-level and would correspond to the one–loop corrections that we are neglecting. The same analysis has to be redone when the renormalization scale $\tau$ becomes lower than $M_3$ and the gluino decouples. In this case the interaction term of (\[lagreff\]) involving the coupling $G$ disappears and the following matching conditions relate the values of the couplings before and after the gluino decoupling: \[matchgl\] Q(M\_3\^-)&=&Q(M\_3\^+) (1-’ Z\_[[t\_R]{}]{})  ,\ K(M\_3\^-)&=&K(M\_3\^+) (1-2 ’ Z\_[[t\_R]{}]{}) + ’ K ,\ h\_t(M\_3\^-)&=& h\_t(M\_3\^+) (1-’ Z\_[t\_R]{} /2)  ,\ Y\_t(M\_3\^-)&=& Y\_t(M\_3\^+) (1-’ Z\_[[t\_R]{}]{}/2)  ,\ M\_U\^2(M\_3\^-)&=&M\_U\^2(M\_3\^+) (1-2 ’ Z\_[[t\_R]{}]{}) + ’ M\_U\^2 ,where $\Delta' Z_{{\tilde t_R}}$ ($\Delta' Z_{t_R}$) is the wave function threshold of the right stop (top) and $\Delta' K$ and $\Delta' M_U^2$ are the proper vertex threshold. The matching conditions of the couplings absent from (\[matchgl\]) are trivial since they have no threshold discontinuity when the renormalization scale crosses $M_3$. Readers interested in the explicit form of the thresholds of (\[matchQ\])-(\[matchgl\]) can find them in Appendix \[thresholds\], Eqs. (\[deltaQ\])-(\[wfgl\]) and (\[thresholdM3\]). 
For energy scales between the top mass and $\tilde{m}$, at which all scalars apart from the right-handed stop and the Standard Higgs doublet are decoupled, one can compute the one-loop $\beta$-functions of the gauge constants [^2] in a straightforward way (4)\^2 \_[g\_i]{} = g\_i\^3 b\_i         b=( , - , -+2 \_[g]{} )  , where we have used the GUT convention $g_1^2=(5/3)g'^{2}$. For the RGE of the other couplings we will only report their expressions and leave the calculation details to Appendix \[RGE\]. For the dimensionless couplings we obtain \[stopbeta\] &&(4)\^2 \_[g\_u]{}= g\_u (3 h\_t\^2 + Y\_t\^2)  ,    (4)\^2 \_[g\_d]{}= 3  g\_d  h\_t\^2  ,\ &&(4)\^2 \_[g’\_u]{}= g’\_u (3 h\_t\^2 + Y\_t\^2)  ,   (4)\^2 \_[g’\_d]{}= 3  g’\_d  h\_t\^2  ,\ &&(4)\^2 \_[J]{}=  J (h\_t\^2+2 Y\_t\^2+ G\^2 \_[g]{} -4g\_3\^2)  ,\ &&(4)\^2 \_[Y\_t]{}= Y\_t ( h\_t\^2 + 8 Y\_t\^2 + G\^2 \_[g]{}-8g\_3\^2 )  ,\ &&(4)\^2 \_G = G ( 9G\^2 + 2h\_t\^2 - 26 g\_3\^2 + 4 Y\_t\^2 )  ,\ &&(4)\^2 \_[h\_t]{} = h\_t ( h\_t\^2 + Y\_t\^2 + G\^2 \_[g]{} -8 g\_3\^2)  ,\ &&(4)\^2 \_= 12 \^2 +6 Q\^2 -12 h\_t\^4 +12 h\_t\^2  ,\ &&(4)\^2 \_Q = - G\^2 h\_t\^2 \_[g]{}- 4 Y\_t\^2 h\_t\^2 +Q (K + 3 + 4 Q+ 6 h\_t\^2 + 4 Y\_t\^2 + G\^2 \_[g]{}- 8 g\_3\^2 )  ,\ &&(4)\^2 \_K = 12 Q\^2 +13 g\_3\^4 - G\^4 \_[g]{}-24 Y\_t\^4 + K (K + 8 Y\_t\^2 + G\^2 \_[g]{} - 16 g\_3\^2 )  , and for the dimensionful ones \[massbeta\] &&(4)\^2 \_=    Y\_t\^2 ,\ &&(4)\^2 \_[M\_1]{}= [O]{}(g\_1\^2) ,    (4)\^2 \_[M\_2]{}= [O]{}(g\_2\^2)  ,\ &&(4)\^2 \_[M\_3]{}= M\_3 (-18 g\_3\^2 + G\^2)  ,\ &&(4)\^2 \_[m]{}= -6 Q M\_U\^2 + 6 m\^2 h\_t\^2  ,\ &&(4)\^2 \_[M\_U\^2]{}= M\_U\^2 ( K + 4 Y\_t\^2 + G\^2 -8 g\_3\^2 )- M\_3\^2 G\^2 \_[g]{} -4 m\^2 Q -4 Y\_t\^2 \^2  , where $\beta_G$ and $\beta_{M_3}$ make sense only for $\tau\geq M_3$. Numerical results on the Higgs mass {#numerical} =================================== In this section we will apply the previous results to obtain in an appropriate way the values of the LE couplings and the Higgs mass at the EW scale for any large value of the cutoff scale $\tilde m$ and for different values of the HE supersymmetric parameters. Running of couplings -------------------- We need to know all the couplings of (\[lagreff\]) at the EW scale that we identify here with the top-quark mass $m_t=172.5\pm 2.7$ GeV [@Yao:2006px] (corresponding to $h_t(m_t)\simeq 0.95$). All the mass parameters $M_U^2,\mu,M_3$ are free inputs of the theory and thus we choose them directly at low energy by fixing $M_U^2(m_t), \mu(m_t),M_3(M_3)$. Moreover at the low scale also the SM couplings $g({\widetilde m}), g'({\widetilde m}), g_3({\widetilde m}),h_t({\widetilde m})$ and $m^2(m_t)$ are fixed experimentally [^3]. On the contrary the non-SM couplings are defined by (\[matchQ\])-(\[match\]) at high energy as functions of the previous couplings, run up to the scale ${\widetilde m}$, and the free quantities ${\widetilde m}, {\tan \beta}$ and $A_t({\widetilde m})$. Therefore in order to get the non-SM couplings at the EW scale we have to solve a system of linear differential equations \[the RGE (\[stopbeta\])-(\[massbeta\])\] with boundary condition in $\tau=m_t,M_3,{\widetilde m}$. Equations must be solved numerically and iteratively because the conditions at the boundary ${\widetilde m}$ (\[matchQ\])-(\[match\]) depend in turn on the evolution of the parameters. The implicit resummation of the leading logarithms renders our estimation of the ET couplings reliable, even for large values of ${\widetilde m}$. 
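To make the iterative run-and-match procedure concrete, the following is a minimal sketch in Python. The $\beta$-functions and the matching relation used here are simplified placeholders standing in for Eqs. (\[stopbeta\])–(\[massbeta\]) and (\[matchQ\])–(\[match\]) (only $g_3$, $h_t$ and $\lambda$ are evolved, thresholds are dropped, and the weak couplings are frozen), so the printed number only illustrates the technique, not the results of the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

mt, msusy, M3, tanb = 172.5, 1.0e5, 500.0, 2.0     # GeV; illustrative inputs

def betas(t, y):
    """Schematic one-loop running in t = ln(mu); Theta switches on the gluino above M3."""
    g3, ht, lam = y
    theta = 1.0 if t >= np.log(M3) else 0.0         # step-function gluino decoupling
    k = 1.0 / (16.0 * np.pi**2)
    return [k * (-7.0 + 2.0 * theta) * g3**3,
            k * ht * (4.5 * ht**2 - 8.0 * g3**2),
            k * (12.0 * lam**2 + 12.0 * lam * ht**2 - 12.0 * ht**4)]

def lam_matching():
    """Tree-level-like boundary condition at msusy (threshold corrections omitted)."""
    g2, gp2 = 0.42, 0.13                            # frozen weak couplings squared
    return 0.25 * (g2 + gp2) * np.cos(2.0 * np.arctan(tanb))**2

lam_mt, t_lo, t_hi = 0.10, np.log(mt), np.log(msusy)
for _ in range(20):
    up = solve_ivp(betas, [t_lo, t_hi], [1.16, 0.95, lam_mt], rtol=1e-8)
    down = solve_ivp(betas, [t_hi, t_lo],
                     [up.y[0, -1], up.y[1, -1], lam_matching()], rtol=1e-8)
    if abs(down.y[2, -1] - lam_mt) < 1e-6:          # boundary value stable -> stop iterating
        break
    lam_mt = down.y[2, -1]
print(f"lambda(mt) ~ {lam_mt:.3f}   (illustrative only)")
```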
Using this procedure the values of the LE couplings at the EW scale will basically depend on two different factors: the matching conditions and the running evolution. Focusing on the former, in the upper–left panel of Fig. \[PlotThresh\] we analyze the relevance of the thresholds by plotting, for every coupling, the ratio (defined as $\delta$ in the plot) of its value over the one without the threshold contribution, both evaluated at the scale ${\widetilde m}$, as functions of $A_t/{\widetilde m}$ for ${\tan \beta}=2$, $M_U=200$ GeV, $M_3=500$ GeV, $\mu=100$ GeV, and ${\widetilde m}=100$ TeV. It is remarkable that the threshold contributions to the couplings $\lambda, Q$ and $K$ can easily reach a value $\sim 10\%$ and beyond, unlike the $h_t, Y, G$-thresholds which are almost $A_t$-independent and remain below $\sim 2\%$ [^4]. For this reason it is sensible to neglect the threshold effects of $h_t, Y, G$, since their contributions are within the uncertainty of our approximations.

The relevance of the running is also exhibited in Fig. \[PlotThresh\] where the ratios $\rho({\widetilde m})/\rho(X)\equiv \Theta[\rho]$ for all couplings $\rho$ are plotted as functions of ${\widetilde m}$ (where $X=M_3$ for $\rho=M_3,\, G$ and $X=m_t$ for the rest of the couplings) for $A_t=0.6\,{\widetilde m}$ and keeping the rest of the parameters fixed as in the upper–left plot. In particular in the upper–right panel we plot masses and in the lower panels all dimensionless couplings. For example we can compare from the lower–right figure how the couplings $h_t$ and $Y$ evolve differently, even if we had neglected their different threshold effects.

The Higgs mass
--------------

Once we have computed the values of the couplings in (\[lagreff\]) it is straightforward to obtain the Higgs effective potential in which the leading logarithms are resummed. Since this potential is strongly dependent on the renormalization scale we need to consider the one–loop part of the effective potential calculated in the LE theory. We are adding to the SM fields only the contribution from $\tilde t_R$ since the contribution from charginos and neutralinos (which is numerically small) would spoil the scale invariance of the effective potential in our approximation where we are neglecting electroweak gauge couplings in the LE $\beta$-functions.
The one–loop contributions to the effective potential then read as $$\label{potForm}
V_{\rm 1-loop}(\phi_c)= \frac{1}{64\pi^2}\sum_i n_i\, m_i^4(\phi_c)\left(\ln\frac{m_i^2(\phi_c)}{\tau^2} -C_i\right)\,,\qquad i=W,Z,h,\chi,{\tilde t_R}, t\,,$$ where $C_W=C_Z=5/6$, $C_h=C_\chi=C_{{\tilde t_R}}=C_t=3/2$ and $n_W=n_{{\tilde t_R}}=6,\,n_Z=3,\,$ $n_h=1,\,n_\chi=3,\,n_t=-12$, and the field-dependent masses are $$\begin{aligned}
m_W^2=\frac{g^2}{4}\,\phi_c^2\,, \qquad & m_Z^2=\frac{g^2+g'^2}{4}\,\phi_c^2\,,\\
m_h^2=\frac{\lambda}{2}\left(3\phi_c^2-v^2\right)\,, \qquad & m_\chi^2=\frac{\lambda}{2}\left(\phi_c^2-v^2\right)\,,\\
m_{\tilde t_R}^2=M_U^2 + \frac{Q}{2}\,\phi_c^2\,, \qquad & m_t^2= \frac{h_t^2}{2}\,\phi_c^2\,,
\label{stopmass}\end{aligned}$$ with the renormalization scale conventionally chosen to be $m_t$. Notice that by this renormalization scale choice, and thanks to the use of the LE theory, the logarithms of (\[potForm\]) are always small. Moreover the addition of the one–loop contribution (\[potForm\]) eliminates the scale dependence of the potential proportional to strong–like or Yukawa–like couplings up to the one–loop order. The second derivative of the potential at the EW minimum provides the Higgs mass within the one-loop renormalization group improved effective theory.

The numerical result is shown in Fig. \[plotStH\] where we plot the Higgs mass $m_H$ (solid line). We also introduce the parameter $\kappa_{\tilde t}\equiv 10 \sqrt{m_{\tilde t_R}/ \textrm{GeV}}$ (dashed line) which parameterizes the lightest stop mass. The parameter $\kappa_{\tilde t}$ has the advantage of being related in a simple way to the stop mass, and since it acquires values similar to the Higgs mass (in GeV units), it may be represented together with it on a linear scale. Observe that $\kappa_{\tilde t} = 100$ is equivalent to $m_{\tilde t_R} = 100$ GeV and $m_{\tilde t_R} < m_t$ corresponds to $\kappa_{\tilde t} \lesssim 130$. Values of $\kappa_{\tilde t} \lesssim 100$ are therefore excluded by LEP searches. We plot both variables as functions of $A_t/{\widetilde m}$ \[upper panels: on the left (right) panel ${\widetilde m}=3$ (100) TeV\] and ${\widetilde m}$ \[lower panels: on the left (right) panel $A_t=0\, (1.3)\, {\widetilde m}$\] for several values of ${\tan \beta}$. For $m_H$ and $\kappa_{\tilde t}$ the different values of ${\tan \beta}$ are encoded by different colours (level of line darkness) presented in the legend. Since a change of ${\tan \beta}$ does not appreciably modify $\kappa_{\tilde t}$ we mark only the extremal curves corresponding to ${\tan \beta}=15$ and ${\tan \beta}=2$. In all the plots we have fixed $M_U=200$ GeV, $\mu=100$ GeV and $M_3=500$ GeV.

Some comments on the different masses can be easily drawn from Fig. \[plotStH\]. We notice that, because of the experimental bound on the Higgs mass, $m_H > 114.7$ GeV [@Yao:2006px] (dotted straight line in Fig. \[plotStH\]), the model with ${\widetilde m}\sim 1$ TeV requires ${\tan \beta}> 3$, and in general the smaller ${\tan \beta}$ is, the closer to $1.3\,{\widetilde m}$ the trilinear coupling $A_t$ has to be, since the Higgs mass has a maximum there. This requirement is relaxed if ${\widetilde m}$ is increased, but scales as large as ${\widetilde m}\simeq 10^7$ TeV are necessary to overcome the Higgs mass bound for any ${\tan \beta}\gtrsim 2$ independently of $A_t$. On the contrary, if we allow $A_t\simeq 1.3\, {\widetilde m}$ the model is experimentally safe for ${\tan \beta}\geq 2$ already at ${\widetilde m}=5$ TeV.
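As an illustration of how the Higgs mass follows from the potential, the sketch below builds the tree-level potential plus only the numerically dominant top and $\tilde t_R$ terms of Eq. (\[potForm\]) and extracts $m_H$ from the numerical second derivative at the electroweak minimum. All coupling values are illustrative placeholders (in the text they are outputs of the RG procedure, not free inputs), so the printed number should not be read as one of the predictions of the paper.

```python
import numpy as np

v, tau = 246.0, 172.5                                 # EW vev and renormalisation scale (GeV)
lam, ht, Q, MU2 = 0.26, 0.95, 0.90, -(100.0)**2       # illustrative LE couplings

def V1(phi):
    """One-loop CW terms for the top (n = -12) and right-handed stop (n = 6), C_i = 3/2."""
    out = 0.0
    for n, m2 in ((-12, 0.5 * ht**2 * phi**2), (6, MU2 + 0.5 * Q * phi**2)):
        out += n * m2**2 * (np.log(m2 / tau**2) - 1.5) / (64.0 * np.pi**2)
    return out

def d(f, x, h=1e-2):                                  # central finite difference
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Fix m^2 by requiring the minimum of V = -m^2 phi^2/2 + lam phi^4/8 + V1 to sit at phi = v,
# then m_H^2 is the second derivative of V evaluated there.
m2 = 0.5 * lam * v**2 + d(V1, v) / v
d2V = -m2 + 1.5 * lam * v**2 + d(lambda x: d(V1, x), v)
print(f"m_H ~ {np.sqrt(d2V):.1f} GeV   (illustrative couplings)")
```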
On the other hand, if we require the right-handed stop to be lighter than the top quark ($\kappa_{\tilde t}\lesssim 130$) with $M_U^2\sim (200~ \textrm{GeV})^2$, a large $A_t$ is needed. Another way to maintain the stop lighter than the top quark is by decreasing $M_U^2$, which lowers the stop mass. For $M_U^2\lesssim (100~ \textrm{GeV})^2$ the maxima of the Higgs mass curves are excluded by the LEP bound on the stop mass. Consequently the bounds on ${\tan \beta}$, $A_t$ and ${\widetilde m}$ become even stronger in this case. The latter result applies, in particular, to the conditions which are favorable to electroweak baryogenesis (EWBG), where $M_U^2<0$ is needed. As an example, in Fig. \[plot100\] we choose the same parameters as in Fig. \[plotStH\] but with a right-handed stop mass parameter $M_U^2 = -(100$ GeV$)^2$ and ${\widetilde m}=100$ (1000) TeV in the upper–left (right) plot. We can see from the right panel of Fig. \[plot100\] that there exists an upper bound $A_t \lesssim 0.6 \; {\widetilde m}$ coming from the experimental bounds on the stop mass. Notice that, independently of the experimental bounds, larger values of $A_t/{\widetilde m}$ would lead to an instability of the electroweak minimum.

Moreover, values of $A_t/{\widetilde m}\lesssim 0.5$ are also required in order to obtain a strong enough electroweak phase transition [@Carena:1996wj]. On the other hand the rough estimate ${\tan \beta}\lesssim 10$ [@EDM; @Balazs], coming from the requirement of generation of the observed baryon asymmetry of the Universe, pushes the parameter ${\widetilde m}$ towards values ${\widetilde m}\gg 1$ TeV, which justifies a posteriori the study of the effective theory with resummed logarithms. Electroweak baryogenesis in the present model will be thoroughly analyzed in Ref. [@CNQW2]. Let us stress that values of ${\widetilde m}\gg 1$ TeV are consistent with those necessary in order to suppress the one-loop contributions to the electric dipole moment of the electron and the neutron in the light stop scenario [@edm1]. Finally, let us observe that all previous comments, which apply for a gluino mass of 500 GeV, can also be extended to other values of the gluino mass. In particular we have checked that for $M_3\simeq 1$ TeV the Higgs mass only decreases by a few percent with respect to the case of gluino masses at the EW scale.

Gauge coupling unification {#unification}
==========================

In this section we will consider the issue of gauge coupling unification in the theory where below the scale ${\widetilde m}$ there is the ET which has been considered in Section \[effective\] and beyond ${\widetilde m}$ the MSSM. In the extreme case where ${\widetilde m}$ is at the EW scale, the condition of gauge coupling unification yields low energy values for the strong gauge coupling $\alpha_3$ consistent with those obtained in low energy MSSM scenarios. The MSSM prediction, however, depends strongly on the possible threshold corrections to the gauge couplings at the GUT scale, as well as on the additional threshold corrections induced by the weak scale supersymmetric particles. Ignoring high-energy threshold corrections, one obtains a range of values $\alpha_3(M_Z) =$ 0.120–0.135, with the exact value depending on the precise MSSM spectrum.
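For orientation, the corresponding one-loop exercise can be reproduced in a few lines with the standard MSSM coefficients $b=(33/5,1,-3)$ and the measured values of $\sin^2\theta_{\rm eff}$ and $\alpha_{\rm EM}^{-1}$ quoted later in this section. It is only meant to illustrate how the prediction is obtained; the 0.120–0.135 range above comes from the full two-loop analysis with sparticle thresholds.

```python
import numpy as np

# One-loop unification with all superpartners at M_Z (MSSM coefficients b = (33/5, 1, -3)).
MZ, s2w, aEM_inv = 91.19, 0.2312, 127.906
a1_inv = 0.6 * (1.0 - s2w) * aEM_inv          # GUT normalisation, g1^2 = (5/3) g'^2
a2_inv = s2w * aEM_inv
b1, b2, b3 = 33.0 / 5.0, 1.0, -3.0

lnR = 2.0 * np.pi * (a1_inv - a2_inv) / (b1 - b2)   # ln(M_GUT / M_Z) from alpha_1 = alpha_2
aGUT_inv = a1_inv - b1 * lnR / (2.0 * np.pi)
a3 = 1.0 / (aGUT_inv + b3 * lnR / (2.0 * np.pi))

print(f"M_GUT ~ {MZ * np.exp(lnR):.1e} GeV,  alpha_3(M_Z) ~ {a3:.3f}")
# ~2e16 GeV and alpha_3 ~ 0.117 at one loop; two-loop running and sparticle thresholds
# move the prediction into the 0.120-0.135 range quoted above.
```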
This range of values is compatible with experimental data, but with some tension towards a predicted high value. When ${\widetilde m}$ is increased the predicted value of $\alpha_3(M_Z)$ coming from the requirement of gauge coupling unification moves towards lower values. Therefore for a given low energy spectrum one can find agreement with the experimental values for a certain range of values of ${\widetilde m}$. In this sense it is possible to make a grand unification “prediction” for the parameter ${\widetilde m}$. High energy threshold corrections would lead to an uncertainty on this range of ${\widetilde m}$ values. In this section, we will quantify these issues after considering the two-loop RG evolution of the gauge couplings. The two-loop renormalization-group equation for the gauge couplings are [@Machacek:1983tz] &&(4)\^2 g\_i=g\_i\^3b\_i\ &&+ \[gauger\] where $t=\ln \tau$, $\tau$ is the renormalization scale and we use the convention $g_1^2=(5/3)g^{\prime 2}$. Eq. (\[gauger\]) is scheme-independent up to the two-loop order. In the effective theory below $\tilde{m}$, the $\beta$-function coefficients are &&b=(,-,-+2\_[g]{})  ,    B= &&\ & &12\ &&-+48\_[g]{} ,\ &&d\^u=(,,2)  ,    d\^G=(,0,)\_[g]{}  ,    d\^J=(,0,1) ,\ &&d\^W=(,,0)  ,    d\^B=(,,0) , while above ${\widetilde m}$ one has the MSSM result \[replacing in (\[gauger\]) the SM Yukawa $h_t$ by the MSSM one $\lambda_t$ related to the former by (\[matchht\])\] [@Martin:1993zk] &&b=(,1,-3)  ,    B= &&\ & 25&24\ &9&14 ,\ &&d\^u=(,6,4)  ,    d\^G=d\^J=d\^W=d\^B=0 . Finally the one-loop RGE of the Yukawa–like and gauge–like couplings are given in Eq. (\[stopbeta\]), while for the supersymmetric Yukawa coupling [@Martin:1993zk] (4)\^2 = \_t  . We will consider the following experimental inputs [@Yao:2006px] &&\^2\_(M\_Z)=0.23120.0002  ,\ && \^[-1]{}\_[EM]{}(M\_Z)=127.9060.019  ,\ &&\_3(M\_Z)=0.11760.0020  , and by imposing unification of $\alpha_1(M_{GUT})=\alpha_2(M_{GUT})$ we obtain a prediction for $\alpha_3(M_Z)$ as it is shown in the left panel of Fig. \[unif\]. The solid black line in the left panel of Fig. \[unif\] \[\]\[bl\][$\alpha_3(M_Z)$]{} \[\]\[bl\][${\widetilde m}~~~[GeV]$]{} \[\]\[bl\][${\widetilde m}~~~[GeV]$]{} represents the two-loop result for values of all gaugino masses about the weak scale, while the dashed black line represents the one-loop result. The grey lines are corresponding plots for a gluino mass $M_3=500$ GeV, which roughly follows the gaugino mass unification relation $M_3/M_2 \simeq 3$. In the figure the experimental value of $\alpha_3(M_Z)$ within $2\,\sigma$ is marked by a (yellow) band. For our two choices of $M_3$ the gluino decoupling almost does not modify the curves of $M_{GUT}$ and $\alpha_{GUT}$ as function of ${\widetilde m}$ and for this reason in the right panel we do not differentiate between both cases. Using the experimental value for the strong coupling one can get for the case where all gauginos are at the EW scale the $1\,\sigma$ prediction for ${\widetilde m}$ as [m]{}10\^[1.60.6]{} [TeV]{}   . \[pred1\] If one considers instead the standard unification relation between the gaugino masses the predicted values of $\alpha_3(M_Z)$ would be shifted to larger values, $\Delta \alpha_3(M_Z) \simeq 0.005$ and the resulting values of ${\widetilde m}$ will be shifted up to [m]{}10\^[3.30.6]{} [TeV]{}   . 
\[pred2\] For these two ranges of ${\widetilde m}$ the unification scale $M_{GUT}$ turns out to be $1\div2 \times 10^{16}$ GeV in all cases, where the smaller value corresponds to the largest available ${\widetilde m}$. The numerical results may be analytically understood by considering the modifications of the two-loop predictions for $\alpha_3(M_Z)$ by one-loop threshold corrections induced by the supersymmetric particles, $$\left.\left. \alpha_3(M_Z) = \alpha_3(M_Z)\right|_{\rm MSSM} - \alpha_3^2(M_Z)\right|_{\rm MSSM} \frac{19}{28 \pi} \ln \left( \frac{T_{\rm MSSM}}{M_Z} \right),$$ where [@CPW] $$T_{MSSM} = |\mu| \left( \frac{M_2}{M_3} \right)^{28/19} \left(\frac{M_2}{|\mu|} \right)^{4/19} \left(\frac{{\widetilde m}}{m_{\tilde{t}}}\right)^{5/19} \left(\frac{{\widetilde m}}{|\mu|}\right)^{3/19}. \label{TMSSM}$$ In the above, $\alpha_3(M_Z)|_{\rm MSSM} \simeq 0.127$ is the value that would be obtained if all supersymmetric particles had masses equal to $M_Z$. The second-to-last and last terms in Eq. (\[TMSSM\]) represent the effects of separating the stop mass with respect to the other sfermion masses and of increasing the non-SM Higgs doublet mass, respectively. In particular, for the case in which all gaugino masses, $|\mu|$ and the light stop are of the order of the weak scale, one can reproduce the $1\sigma$ prediction for ${\widetilde m}$ given in Eq. (\[pred1\]), while in the case of the standard unification relation of gaugino masses Eq. (\[pred2\]) is recovered if equal values of $|\mu|$ and $M_2$ of the order of the weak scale are assumed. These low energy supersymmetric threshold effects may be compensated by thresholds at the GUT scale, which are strongly model dependent, but are naturally of the same order as the low energy supersymmetric thresholds (see, for instance, Ref. [@Langacker]). Therefore, for hierarchical gaugino masses, as the ones required by electroweak baryogenesis, the natural values of ${\widetilde m}$ necessary to achieve unification and consistent at 95 % C.L. with present experimental values of $\alpha_3(M_Z)$ are about 100 TeV. Somewhat lower values of ${\widetilde m}$, of the order of a few tens of TeV, may be obtained by pushing $|\mu|$ to larger values.

The values of ${\widetilde m}$ consistent at 95 % C.L. with unification of couplings (for light gluinos $3\,{\rm TeV}<{\widetilde m}<600\,{\rm TeV}$ and for standard gaugino unification $100\,{\rm TeV}<{\widetilde m}<3\times10^4\,{\rm TeV}$) have an impact on the Higgs mass predictions. From Fig. \[plotStH\], we can see that values of ${\widetilde m}$ larger than a few TeV lead to consistency with the LEP Higgs mass constraints for a large range of values of $\tan\beta$ when $A_t/{\widetilde m}$ is suitably chosen. For the (positive) value of $M_U^2$ used in Fig. \[plotStH\], $M_U^2= (200~ {\rm GeV})^2$, values of $\tilde m$ of the order of 600 TeV ($3\times10^4$ TeV) lead to maximum values of the Higgs mass around 144 (150) GeV. On the other hand, as shown in Fig. \[plot100\], for negative values of $M_U^2$ and $A_t\lesssim 0.5~{\widetilde m}$, as required by electroweak baryogenesis, the values ${\widetilde m}\simeq 600$ TeV ($3\times 10^4$ TeV) lead to maximum values of the Higgs boson mass around $125 ~(129)$ GeV.

Model cosmology and collider phenomenology {#pheno}
==========================================

The cosmology and collider phenomenology of the light stop scenario have been the subject of study in several articles.
For masses below 135 GeV, as preferred by the electroweak baryogenesis scenario, the light stop mass will be in general smaller than the sum of the $W$ mass, the $b$-quark mass and the lightest neutralino mass, and therefore its three body decay channels will be suppressed. Under these conditions, the main stop decay channel may be a loop-induced two body decay channel into a charm quark and the lightest neutralino. Searches for such a light stop at LEP put a bound on its mass of about 100 GeV [@Kraan:2003it]. Current searches at the Tevatron collider for a stop decaying into charm jets and neutralinos lead to a final state of two jets and missing energy. The jets should be sufficiently energetic for the Tevatron to be able to trigger on those events, what in practice demands mass differences between the stops and the neutralinos of about 30 GeV or larger [@Aaltonen:2007sw; @Abazov:2008rc]. Therefore the Tevatron collider cannot set any constraints on direct production of stops for mass differences smaller than 30 GeV. Searches for light stops in direct pair production of these particles, will be equally difficult at the LHC. Small mass differences between the stop and the neutralino define a particularly interesting region of parameters since they are helpful in providing the proper dark matter density in scenarios with heavy fermions. Indeed, for mass differences of about 20 GeV, the co-annihilation between the stop and the lightest neutralino leads to a neutralino dark matter density consistent with experimental observations [@Balazs]. Searches for light stops at the LHC may proceed through additional production channels. For instance, the light stops may be produced from the decay of heavy gluinos. Assuming that the right-handed stops are the only squarks with masses below the gluino mass, as happens in the light stop scenario discussed in this article, the gluinos being Majorana particles may decay into a stop and an anti-top or into an anti-stop and a top-quark. One can then consider the decay of a pair of gluinos into two equal sign top-quarks and two stops (two charm jets and missing energy). It has been shown [@Sabine] that under these conditions, the light stops may be found even for small mass differences, of about 5 GeV, provided the heavy gluinos are lighter than about 900 GeV. One would be interested in finding a method of stop detection that would be independent of the exact masses of other sparticles and which would work for small mass differences. A possibility is to analyze the possible production of light stops in association with a photon or a gluon (jet). The photon signatures are particularly clean, and for small mass differences they may be used as a complementary channel for the search for light stops at hadron colliders leading to a final state of two photons, soft jets and missing energy. Although not as clean as the photon signatures, due to larger rates, the jet plus missing energy signature may allow a further extension of the LHC reach for light stops. An analysis in this direction is in progress [@Ayres]. Conclusions {#conclusion} =========== In this article we analyzed the light stop scenario in which all squarks and sleptons, apart from a mainly right-handed stop, are significant heavier than the weak scale. The large values of the scalars imply that the low energy effective theory predictions may only be evaluated in a precise way by resummation of the large leading logarithms associated with the decoupling of the heavy scalars. 
Since supersymmetry is broken at scales below the heavy scalar mass ${\widetilde m}$, the Yukawa couplings associated with gauginos and Higgsinos must be computed, starting with their boundary values given by the gauge and supersymmetric Yukawa couplings respectively. Similarly the quartic couplings of the Higgs boson and the light top-squark may be computed through their RG evolution to lower energies. We have applied the low energy effective theory to obtain a reliable computation of the lightest CP-even Higgs boson mass for large values of ${\widetilde m}$. In the extreme case where ${\widetilde m}$ is close to the EW scale, logarithm resummation is unnecessary and we have checked that our calculation of the SM-like Higgs is consistent with earlier calculations in the literature [@Carena:1995bx]. Since the quartic coupling is bounded by its relation with the weak gauge couplings plus finite threshold corrections at high energies, the Higgs mass remains bounded to small values, smaller than about 133 GeV for negative values of $M_U^2 \simeq -(100\; {\rm GeV})^2$ and $A_t\lesssim0.5~{\widetilde m}$ even for large values of ${\widetilde m}$. This has important implications for the realization of the electroweak baryogenesis scenario. In a general light stop scenario with no EWBG mechanism built in, this bound on the Higgs mass may be relaxed for positive values of $M_U^2$, for which the trilinear mass parameter $A_t$ may be pushed to larger values, leading to masses that may be as large as 152 GeV for large values of ${\widetilde m}$. We have also analyzed the issue of unification of gauge couplings. We have shown that the corrections induced by the heavy spectrum are helpful in rendering the measured values of the gauge couplings consistent with the unification conditions even for relatively large values of ${\widetilde m}$. For instance considering universal values of the gaugino masses at low energies and values of $|\mu|$ of about 100 GeV, one obtains that appropriate unification is achieved for values of ${\widetilde m}\simeq 10^{1.6 \pm 0.6}$ TeV. A similar result is obtained for the standard unification relation between the gaugino masses, for a Higgsino mass parameter of about 1 TeV. If $|\mu|$ takes values close to 100 GeV, instead, the value of ${\widetilde m}$ consistent with the unification of couplings is pushed up to ${\widetilde m}\simeq 10^{3.3 \pm 0.6}$ TeV. This ranges of masses have implications on the predicted Higgs mass. For positive values for $M_U^2 \simeq (200\,{\rm GeV})^2$, the ranges of values of ${\widetilde m}$ compatible at 95 % C.L. with gauge coupling unification lead to an upper bound on the Higgs mass of about $150$ GeV. For negative values of $M_U^2$ and $A_t\lesssim0.5~{\widetilde m}$, as the ones required for baryogenesis, the upper bound becomes even stronger, of about $129$ GeV. The resulting phenomenology of the light stop scenario was also discussed in some detail. If a light Higgs, with mass $m_h\lesssim 133$ GeV is found, the next step to confirm the EWBG scenario within the MSSM would be the discovery of a light stop, with a mass below the top quark mass. Light stop searches at the Tevatron may lead to an experimental confirmation of this scenario, but may not be successful if the mass difference between the stop and the neutralino is smaller than about 30 GeV. 
Unfortunately, these small mass differences may be the ones required to obtain the proper dark matter relic density by means of coannihilation between the stop and the lightest neutralino. Searches at the LHC may be able to test the coannihilation region in case the gluino is lighter than about 900 GeV. For heavier gluino masses alternative methods of detection at the LHC via the production of light stops in association with photons or jets, are currently being studied and seem to be promising. Acknowledgments {#acknowledgments .unnumbered} --------------- We would like to thank C. Balazs, A. Freitas, T. Konstandin, D. Morrisey and A. Menon for useful discussions and interesting observations. Work supported in part by the European Commission under the European Union through the Marie Curie Research and Training Networks “Quest for Unification" (MRTN-CT-2004-503369) and “UniverseNet" (MR-TN-CT-2006-035863). Work at ANL is supported in part by US DOE, Division of HEP, Contract DE-AC-02-06CH11357 and Fermilab is operated by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the United States Department of Energy. The work of M.Q. was partly supported by CICYT, Spain, under contract FPA 2005-02211. Appendix {#appendix .unnumbered} ======== Thresholds ========== In this appendix we will give some details about the calculation of the different thresholds which appear in Section \[effective\]. At the renormalization scale ${\widetilde m}$ the ET (\[lagreff\]) has to describe the same physics as its corresponding HE theory, the MSSM. Here we will match both theories at one–loop in the Landau gauge using the step–function approximation. Graphically the matching is presented in Figs. \[figQ\]-\[figK\]. Since the matching has to be performed order by order in perturbation theory, we will start by considering the effective coupling $Q$, which is the only one having not trivial matching at tree–level (see Fig. \[figQ\]). This matching can be easily solved by neglecting heavy field kinetic–terms and solving for their equation of motion. The result is given in (\[deltaQ\]). At one–loop we cannot follow the same procedure because in dimensional regularization no kinetic–term can be neglected inside the loop. In such a case we can compute the diagrams in the LE and HE theories and, after using the tree–level matching conditions, impose the equivalence of the two results [^5]. We will follow this diagrammatic approach only for the $h_t,~Y_t,~G$ proper vertex matching, Figs. \[fight\]-\[figG\], and the wave function contribution furnished by each external leg (as an example we draw the case of ${{\tilde t_R}}$ and $t_R$ in Figs. \[figst\] and \[figt\]). In fact in these cases identifying the threshold sources is straightforward. The corresponding HE diagrams are shown in Figs. \[fight\]-\[figt\]. The resulting proper vertex thresholds are given in (\[deltaht\])-(\[deltaY\]) and the wave function ones in (\[wfgl\]). Finally for the proper vertex threshold $\Delta\lambda$ ($\Delta K$) it is easier to match the one–loop Higgs (stop) LE and HE effective potentials, instead of performing the matching diagrammatically (see Fig. \[figK\]) [^6]. 
Let us start by explicitly analyzing the case of $\Delta\lambda$, for which we have to impose the equivalence of the terms proportional to $\phi_c^4$ after the expansion of the HE and LE effective potentials \[LamPots\] -\_c\^2+\_c\^4 + V\_[LE]{}\^h(\_c) = ’-\_c\^2+ \_c\^4 \^2 2+ V\_[HE]{}\^h(\_c) +[O]{}(\_c\^5)\ where $m^2$ and $\Lambda$ are equal to $m'^2$ and $\Lambda'$ up to threshold effects coming from the difference between the one-loop contributions $V_{HE}^h(\phi_c)$ and $V_{LE}^h(\phi_c)$ \[diffPot\] && V\_[HE]{}\^h(\_c)-V\_[LE]{}\^h(\_c)\ &&= \_[r=t\_1,t\_2]{} m\_[r]{}\^4 ( -) -  m\_[t\_R]{}\^4 ( -) , where the renormalization scale is fixed to $\tau={\widetilde m}$, $m_{{{\tilde t_R}}}$ is explicit in (\[stopmass\]) and $\tilde t_1,\tilde t_2$ are the eigenvalues of \[m2st\] ( [cc]{} [m]{}\^2 + \_t\^2 (\_c\^2/2) \^2& -\_t A\_t (\_c/2)\ \ -\_t A\_t (\_c/2) & M\_U\^2 + \_t\^2 (\_c\^2/2) \^2 )  , with $\tilde A_t=A_t-\mu/{\tan \beta}$. The threshold $\Delta\lambda$ can be derived by extracting the coefficient of the term $\phi_c^4/8$ from the right-hand side of (\[diffPot\]). Finally, remembering that ${\widetilde m}^2 \gg M_U^2$, we obtain the relation (\[deltalam\]).

(Figs. \[figQ\]–\[figt\]: one–loop matching between the HE and LE theories of the proper vertices $Q$, $h_t$, $Y_t$ and $G$, and of the $\tilde t_R$ and $t_R$ wave functions; Fig. \[figK\]: matching of the quartic $H$ and $\tilde t_R$ self-interactions. The diagrams, with internal $\tilde q$, $\tilde t_{L,R}$, $t_{L,R}$, $\tilde H_u$, $H_h$ and $\tilde g$ lines, are not reproduced here.)

Following the same idea we can also obtain $\Delta K$. 
We give a constant background $s_c$ to the real third colour component of $\tilde t_R$, [*i.e.*]{} $\langle\tilde t_{R_3}\rangle=s_c/\sqrt{2}$, which breaks the $SU(3)_c$ and $U(1)_Y$ symmetries, and we impose the equivalence of its one–loop effective potential in the LE and HE theory at the scale ${\widetilde m}$ \[Kpots\] +s\_c\^2 + s\_c\^4 + V\_[LE]{}\^[[t\_R]{}]{}(s\_c) = ’+s\_c\^2 + s\_c\^4 + V\_[HE]{}\^[[t\_R]{}]{}(s\_c) + [O]{}(s\_c\^5)   , where \[diffpotK\] V\^[[t\_R]{}]{}\_[HE]{}-V\^[[t\_R]{}]{}\_[LE]{} = \_[r=1]{}\^5 m\_[r]{}\^4 ( -) -  \_H\^4 ( -)  , with $\tau={\widetilde m}$ and $\nu^2_{H}=\lambda_t^2 \sin^2\beta (1-\frac{\tilde A_t^2}{{\widetilde m}^2}) \frac{s_c^2}{2}$. Moreover for $r=1,2,3$  the masses $m_r^2$ are the eigenvalues of the squared mass matrix of $\tilde q_3$, $H$ and $H_h$ (the heavy projection of the Higgses: $H_u^\dagger\rightarrow {\sin\beta}H_h^t \epsilon $ and $H_d\rightarrow {\cos \beta}\epsilon H_h^*$) &=& ( [ccc]{} (\_t\^2-) +[m]{}\^2 & \_t B\_t & \_t A\_t\ \_t B\_t & [m]{}\^2 +\_t\^2 \^2 & \_t\^2 2\ \_t A\_t & \_t\^2 2 & \_t\^2 \^2 )  , with $\tilde B_t=A_t+\mu {\tan \beta}$, and finally $m_4^2\equiv m_{\tilde q_1}^2=m_5^2\equiv m_{\tilde q_2}^2={\widetilde m}^2+ \frac{s_c^2}{12}$. Extracting from the right-hand side of (\[diffpotK\]) the coefficient of the term $s_c^4/24$ we obtain $\Delta K$ as expressed in (\[deltaK\]). To conclude here we collect all the proper vertex thresholds \[deltaQ\] Q=- \_t\^2 \^2  , h\_t= g\_3\^2 \_tA\_t M\_3b\_1  , \[deltaht\] G = g\_3 \_t\^2 (-1+ )  , \[deltaG\] Y\_t = g\_3\^2 \_t (1- ) \[deltaY\]  , \[deltalam\] = (\_t )\^4 A\_t\^4   , \[deltaK\] K = c\_0 +c\_1 + c\_2 + c\_3   , where b\_1&=&  ,\ c\_0&=&([3 \_t\^4 \^2 2]{}+[2 \_t\^2 \^2]{}+[\_t\^4 \^4  ]{})  ,\ c\_1&=&-  ,\ c\_2&=&([8 \_t\^2 \^2 ]{}+[3 \_t\^4 \^2 2  ]{})  ,\ c\_3&=&-  , along with the wave function threshold contributions of each external leg [lr]{} Z\_[t\_R]{}=2 (\_t B\_t)\^2 F([m]{}\^2) +2 (\_t A\_t)\^2 F(0) &,\ Z\_H= (\_t A\_t)\^2 F(M\_U\^2) &\[q  t\_R\],\ Z\_[t\_R]{}= 2 (\_t )\^2 E(0) +2 \_t\^2  E(\^2) &\[ H\_h q+qH\_u\],\ Z\_[t\_L]{}= (\_t )\^2 E(0) +g\_3\^2  E(M\_3\^2) & \[H\_h t\_R+q g\],\ Z\_[H\_u]{}=\_t\^2  E(0) &\[q t\_R\],\ Z\_[g]{}=11 g\_3\^2  E(0) &\[q q\], \[wfgl\] where the particles propagating in the loops are indicated inside squared brackets and the functions $F(m^2)$ and $E(m^2)$ are defined by F(m\^2)&& -  ,\ E(m\^2)&& -  , with $a^2=m^2/{\widetilde m}^2$. Because of the thresholds (\[wfgl\]) the kinetic terms of the effective theory would not be canonically normalized if these wave function thresholds were not absorbed in a redefinition of the effective fields. This implies that any generic effective coupling $\rho$ has also gotten a wave function threshold dependence coming from its field redefinitions as 1-Z\_1-\_i Z\_i   , where $i$ runs over the fields of the interaction $\rho$. A last remark concerns the gauge couplings. They have no threshold because Ward identities impose a cancellation between the proper vertex threshold and the non-vector fields wave function ones. Therefore a possible threshold could only come from the vector boson wave function threshold but the latter is zero when evaluated at the renormalization scale ${\widetilde m}$. Finally let us observe that the mass thresholds are not necessary for our aim. In fact the LE masses only appear inside one–loop thresholds in which a possible one–loop mass thresholds would only contribute at two–loop. 
Finally if we assume the gluino mass heavy enough (but below ${\widetilde m}$), it is necessary to also integrate it out and repeat at the scale $\tau=M_3$ the procedure just described. The gluino decoupling affects the proper vertex $K$ and the right–handed top and stop propagators, which produce the right–handed top and stop wave function thresholds and the mass threshold $\Delta'M_U^2$ [^7]. Concerning the wave function thresholds, the matching conditions at $\tau=M_3$ lead to ’Z\_[t\_R]{} &=&  ,\ ’Z\_[[[t\_R]{}]{}]{} &=&  , where $b^2=M_U^2/M_3^2$. In order to calculate the proper vertex threshold $\Delta'K$ and $\Delta'M_U^2$ we use the procedure of matching the stop effective potential in the presence of a background field. After giving a VEV to the third colour stop, $\langle\tilde t_{R_3}\rangle=s_c/\sqrt{2}$, mixing mass terms between right top and gluino are generated but, after diagonalizing, only $t_R^{(3)}$ and $\tilde g^{(8)}$ have masses depending on $s_c$; explicitly $r_{\pm}=M_3\pm\sqrt{G^2 s_c^2\, 4/3 +M_3^2}$ . Therefore the thresholds can come only from the contribution to the effective potential of the heaviest fermionic eigenstate, which results - = + s\_c\^2 - s\_c\^4 + [O]{}(s\_c\^6)  , and thus \[thresholdM3\] ’ K &=& -  ,\ ’ M\_3 &=&  . Renormalization group equations {#RGE} =============================== In this appendix we sketch the calculation of the one–loop RGE in the ET [^8]. In order to present our result it is useful to define \[beta\] \_= \_\^[(v)]{} + \_i \_[\_i]{}   , where $\tau$ is the renormalization scale, $\eta$ is the coupling between different fields $\rho_i$ with multiplicity $n_i$ where the index $i$ runs over the fields which are involved in the particular vertex, and the functions $\beta_\eta^{(v)}$ and $\eta \gamma_{\rho_i}$ are the respective contributions from the renormalization of the proper vertex and the anomalous dimension of each external leg. In the same way as for the threshold effects the $\beta_{\eta}$ and $\gamma_{\rho_i}$ functions are computed in the $\overline {MS}$ renormalization scheme and using the Landau gauge. We will implicitly consider renormalization scales $\tau$ larger than any fermionic mass, in particular the gluino mass $M_3$, and for $\tau<M_3$ the correct results are obtained by simply erasing the couplings $G$ and $M_3$ and disregarding $\beta_{\tilde g}$ and $\gamma_{\tilde g}$. By using the diagrammatic procedure we find \[betaF\] &&(4)\^2 \_[t\_R]{} = 4 Y\_t\^2 + G\^2 -8 g\_3\^2  ,\ &&(4)\^2 \_[q\_L]{} = h\_t\^2 + Y\_t\^2  ,\ &&(4)\^2 \_[t\_R]{} = 2 h\_t\^2 + G\^2  ,\ &&(4)\^2 \_H = 6 h\_t\^2  ,\ && (4)\^2 \_[H\_u]{} = 3 Y\_t\^2  ,\ &&(4)\^2 \_[g]{} = G\^2  .\ &&\_[W]{}=\_[B]{}=\_[H\_d]{}=0 and \[betaV1\] &&\_[g\_u]{}\^[(v)]{} = \_[g’\_u]{}\^[(v)]{}=\_[g\_d]{}\^[(v)]{}=\_[g’\_d]{}\^[(v)]{} = \_[J]{}\^[(v)]{}= \_[Y\_t]{}\^[(v)]{} = 0  ,\ &&(4)\^2 \_[G]{}\^[(v)]{} = -9   g\_3\^2 G  ,\ &&(4)\^2 \_[h\_t]{}\^[(v)]{} = - 8   h\_t g\_3\^2   . On the other hand to compute $\beta_{\lambda}^{(v)}$, $\beta_{Q}^{(v)}$ and $\beta_{K}^{(v)} $ we have found it very convenient to use the effective potential method [@Gato:1984ya]. In order to do that we introduce background fields $\phi_c$ and $s_c$ for $H$ and ${\tilde t_R}$ defined as $$H \rightarrow 1/\sqrt{2}\left(\begin{array}{c} \phi_2 +i \phi_3 \\ h+\phi_c + i \phi_1 \end{array}\right)$$ $${\tilde t_R}^{(\omega)} \rightarrow 1/\sqrt{2} \left({\tilde t_{1R}^{(\omega)}}+ \delta^{j3} s_c +i {\tilde t_{2R}^{(\omega)}} \right) ~,$$ where $\omega$ is the color index. 
In this background some fields acquire a mass and, in particular, the bosonic mass spectrum becomes $$\label{bosonic} \begin{array}{rccrl} g^{a} : & m^2 = 0 &&& a=1,2,3 \vspace{1.8mm}\\ g^{a} : & m^2 = s_c^2 ~g_3^2/4 &&& a=4,5,6,7 \vspace{1.8mm}\\ g^{a} : & m^2 = s_c^2 ~g_3^2/3 &&& a=8 \vspace{1.8mm}\\ \phi_\omega : & m^2 = \phi_c^2 \lambda/2 + s_c^2 Q/2 &&& \omega=1,2,3 \vspace{1.8mm}\\ {\tilde t_{1R}^{(\omega)}},{\tilde t_{2R}^{(\alpha)}} : & m^2 = \phi_c^2 Q/2 + s_c^2 K/6 &&& \omega=1,2,3 ~ \alpha=1,2 \vspace{1.8mm}\\ \left(\begin{array}{c} {\tilde t_{1R}^{(3)}} \\ h \end{array}\right) : & m^2 = \frac{1}{2} \left(\begin{array}{cc} s_c^2 K + \phi_c^2 Q & 2 s_c \phi_c Q\\ 2 s_c \phi_c Q & 3 \phi_c^2 \lambda +s_c^2 Q\\ \end{array}\right) \end{array}$$ where we have written only the terms which depend on $\phi_c$ and/or $s_c$. Analogously the fermionic mass spectrum looks like $$\left(\begin{array}{ll} b_L^{(3)} \\ {\tilde H_u^+} \end{array}\right) ~:~m = \left( \begin{array}{cc} 0 & {Y'} \\ {Y'} & 0 \end{array} \right)$$ $$\left(\begin{array}{llll} t_L^{(1)} \\ t_R^{(1)\dag} \\ g^{(4)} \\ g^{(5)} \\ \end{array}\right) ~:~m = \left( \begin{array}{cccc} 0 & h' & 0 & 0 \\ h' & 0 & G' & -i G' \\ 0 & G' & 0 & 0 \\ 0 & -i G' & 0 & 0 \end{array} \right)$$ $$\left( \begin{array}{ll} t_L^{(2)} \\ t_R^{(2)\dag} \\ g^{(6)} \\ g^{(7)} \end{array}\right) ~:~m = \left(\begin{array}{cccc} 0 & h' & 0 & 0 \\ h' & 0 & G' & -i G'\\ 0 & G' & 0 & 0 \\ 0 & -i G' & 0 & 0 \end{array}\right)$$ $$\left(\begin{array}{llll} t_L^{(3)} \\ t_R^{(3)\dag} \\ {\tilde H_u^0} \\ g^{(8)} \end{array}\right) ~:~m = \left( \begin{array}{cccc} 0 & h' & {Y'} & 0 \\ h'& 0 & 0 & -\frac{2 G'}{\sqrt{3}} \\ {Y'} & 0 & 0 & 0 \\ 0 & -\frac{2 G'}{\sqrt{3}} & 0 & 0 \end{array} \right) \label{fermionic}$$ where $h'=\phi_c h_t / \sqrt{2}$ , $Y'=s_c Y_t / \sqrt{2}$ and $G'=s_c G /2$. Using the property of invariance of the effective potential with respect to the renormalization scale we can write \[invScale\] && =\ && ( V\_0(\_c,s\_c) + +)\ &&= + \_\^[(v)]{} \_c\^4 + \_K\^[(v)]{} s\^4 + \_Q\^[(v)]{} \_c\^2 s\_c\^2 - =0  , where the ellipses stand for terms we are not interested in, $V_0(\phi_c,s_c)$ is the tree–level scalar potential of (\[lagreff\]) in the presence of the background fields $\phi_c$ and $s_c$, $\mathcal M^2(\phi_c,s_c)$ is the mass spectrum of the fields written in (\[bosonic\])-(\[fermionic\]) and, finally, for a given function $f(\mathcal M^2)$ of the squared mass matrix of all fields in the theory, $\mathrm{STr} f(\mathcal M^2) \equiv\mathrm{Tr}\sum_J(-1)^{2J}(2J +1) f(M_J^2)$. Therefore in (\[invScale\]) $\beta^{(v)}_\lambda$, $\beta^{(v)}_Q$ and $\beta^{(v)}_K$ are put easily in evidence by the expansion in powers of $s_c$ and $\phi_c$ of $\mathrm{STr}\mathcal M^4(\phi_c,s_c)$. Furthermore for our purposes only the terms $M_J^4(\phi_c,s_c)$ proportional to $s_c^4$, $\phi_c^4$ or $s_c^2 \phi_c^2$ are interesting and consequently we can ignore in $M_J^2(\phi_c,s_c)$ the dependence on dimensional couplings, as we have done in (\[bosonic\])-(\[fermionic\]). After performing the corresponding expansions we get the $\beta$-functions \[betaV2\] &&(4)\^2 \_\^[(v)]{} = 12 \^2 +6 Q\^2 -12 h\_t\^4  ,\ &&(4)\^2 \_Q\^[(v)]{} = K Q +3 Q+ 4 Q\^2 - G\^2 h\_t\^2 - 4 Y\^2 h\_t\^2  ,\ &&(4)\^2 \_K\^[(v)]{} = 12 Q\^2+K\^2 +13 g\_3\^4 - G\^4 -24 Y\_t\^4  . Finally by plugging the results (\[betaF\]), (\[betaV1\]) and (\[betaV2\]) into (\[beta\]), we find the result which was anticipated in (\[stopbeta\]). 
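For orientation, the structure of Eq. (\[beta\]), namely a proper-vertex piece plus an anomalous-dimension term for each external leg, can be put into a short numerical sketch. The $\beta$- and $\gamma$-functions, the leg multiplicities and the boundary values used below are simplified placeholders rather than the expressions derived in this appendix; the sketch only illustrates how effective couplings are run from the matching scale ${\widetilde m}$ down to $m_t$.

``` python
import numpy as np
from scipy.integrate import solve_ivp

# Schematic sketch of one-loop running with beta functions of the form (\[beta\]):
#   d eta / d ln(tau) = [ beta_eta^(v) + eta * sum_i n_i gamma_i ] / (16 pi^2).
# The vertex pieces, anomalous dimensions, multiplicities and boundary values below
# are simplified placeholders, NOT the expressions derived in this appendix; only the
# structure and the running from tau = mtilde down to tau = m_t are illustrated.

def rges(t, y):                              # t = ln(tau), y = (g3, h_t, lambda)
    g3, ht, lam = y
    gamma_H = 6.0 * ht**2                    # placeholder anomalous dimension of H
    gamma_q = ht**2                          # placeholder anomalous dimension of q_L, t_R
    beta_g3 = -7.0 * g3**3                   # placeholder beta function for g3
    beta_ht_v = -8.0 * ht * g3**2            # placeholder proper-vertex piece for h_t
    beta_lam_v = 12.0 * lam**2 - 12.0 * ht**4   # placeholder proper-vertex piece for lambda
    pref = 1.0 / (16.0 * np.pi**2)
    return [pref * beta_g3,
            pref * (beta_ht_v + ht * (gamma_H + 2.0 * gamma_q)),   # one H leg, two quark legs
            pref * (beta_lam_v + lam * 4.0 * gamma_H)]             # four H legs

mtilde, mt = 1.0e4, 173.0                    # GeV, placeholder scales
y_match = [1.06, 0.94, 0.28]                 # placeholder boundary values at tau = mtilde
sol = solve_ivp(rges, [np.log(mtilde), np.log(mt)], y_match, rtol=1e-8)
g3_mt, ht_mt, lam_mt = sol.y[:, -1]
print(f"g3(mt) ~ {g3_mt:.3f},  h_t(mt) ~ {ht_mt:.3f},  lambda(mt) ~ {lam_mt:.3f}")
```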
In order to complete our renormalization picture we will compute now the running of the masses. By using standard diagrammatic methods we obtain \[betaV3\] &&\_[M\_1]{}\^[(v)]{} = \_[M\_2]{}\^[(v)]{} = \_\^[(v)]{}= 0  ,\ &&(4)\^2 \_[M\_3]{}\^[(v)]{} = -18 g\_3\^2 M\_3  ,\ &&(4)\^2 \_[m\^2]{}\^[(v)]{} = -6 Q m\_U\^2  ,\ &&(4)\^2 \_[M\_U\^2]{}\^[(v)]{} = - M\_3\^2 G\^2 +K M\_U\^2 -4 m\^2 Q -4 Y\_t\^2  \^2  , and using (\[beta\]) and (\[betaF\]) we find the expressions in Eq. (\[massbeta\]). [99]{} V. A. Kuzmin, V. A. Rubakov and M. E. Shaposhnikov, Phys. Lett.  B [**155**]{}, 36 (1985); A. G. Cohen, D. B. Kaplan and A. E. Nelson, Nucl. Phys.  B [**349**]{}, 727 (1991); M. Joyce, T. Prokopec and N. Turok, Phys. Lett.  B [**339**]{} (1994) 312; M. Quiros, Helv. Phys. Acta [**67**]{}, 451 (1994); Acta Phys. Polon.  B [**38**]{}, 3661 (2007); V. A. Rubakov and M. E. Shaposhnikov, Usp. Fiz. Nauk [**166**]{}, 493 (1996) \[Phys. Usp.  [**39**]{}, 461 (1996)\] \[arXiv:hep-ph/9603208\]; M. S. Carena and C. E. M. Wagner, arXiv:hep-ph/9704347; A. Riotto and M. Trodden, Ann. Rev. Nucl. Part. Sci.  [**49**]{}, 35 (1999) \[arXiv:hep-ph/9901362\]. M. E. Shaposhnikov, Nucl. Phys.  B [**287**]{}, 757 (1987). M. B. Gavela, P. Hernandez, J. Orloff and O. Pene, Mod. Phys. Lett.  A [**9**]{} (1994) 795 \[arXiv:hep-ph/9312215\]; Nucl. Phys.  B [**430**]{}, 382 (1994) \[arXiv:hep-ph/9406289\]; P. Huet and E. Sather, Phys. Rev.  D [**51**]{}, 379 (1995) \[arXiv:hep-ph/9404302\]. G. F. Giudice, Phys. Rev.  D [**45**]{}, 3177 (1992); S. Myint, Phys. Lett.  B [**287**]{}, 325 (1992) \[arXiv:hep-ph/9206266\]. J. R. Espinosa, M. Quiros and F. Zwirner, Phys. Lett.  B [**307**]{}, 106 (1993) \[arXiv:hep-ph/9303317\]; A. Brignole, J. R. Espinosa, M. Quiros and F. Zwirner, Phys. Lett.  B [**324**]{}, 181 (1994) \[arXiv:hep-ph/9312296\]. M. S. Carena, M. Quiros and C. E. M. Wagner, Phys. Lett.  B [**380**]{}, 81 (1996) \[arXiv:hep-ph/9603420\]; D. Delepine, J. M. Gerard, R. Gonzalez Felipe and J. Weyers, Phys. Lett.  B [**386**]{}, 183 (1996) \[arXiv:hep-ph/9604440\]; M. S. Carena, M. Quiros and C. E. M. Wagner, Nucl. Phys.  B [**524**]{}, 3 (1998) \[arXiv:hep-ph/9710401\]. S. J. Huber, JCAP [**0602**]{}, 008 (2006) \[arXiv:hep-ph/0508208\]. M. Pietroni, Nucl. Phys.  B [**402**]{}, 27 (1993) \[arXiv:hep-ph/9207227\]; S. J. Huber and M. G. Schmidt, Nucl. Phys.  B [**606**]{}, 183 (2001) \[arXiv:hep-ph/0003122\]. A. Menon, D. E. Morrissey and C. E. M. Wagner, Phys. Rev.  D [**70**]{}, 035005 (2004) \[arXiv:hep-ph/0404184\]; S. J. Huber, T. Konstandin, T. Prokopec and M. G. Schmidt, Nucl. Phys.  B [**757**]{}, 172 (2006) \[arXiv:hep-ph/0606298\]; C. Balazs, M. S. Carena, A. Freitas and C. E. M. Wagner, JHEP [**0706**]{}, 066 (2007) \[arXiv:0705.0431 \[hep-ph\]\]. J. Kang, P. Langacker, T. j. Li and T. Liu, Phys. Rev. Lett.  [**94**]{}, 061801 (2005) \[arXiv:hep-ph/0402086\]. M. Carena, M. Quiros, A. Riotto, I. Vilja and C. E. M. Wagner, Nucl. Phys. B [**503**]{}, 387 (1997); J. M. Cline and K. Kainulainen, Phys. Rev. Lett.  [**85**]{}, 5519 (2000) \[arXiv:hep-ph/0002272\]; J. M. Cline, M. Joyce and K. Kainulainen, JHEP [**0007**]{}, 018 (2000) \[arXiv:hep-ph/0006119\]; M. S. Carena, J. M. Moreno, M. Quiros, M. Seco and C. E. M. Wagner, Nucl. Phys.  B [**599**]{}, 158 (2001) \[arXiv:hep-ph/0011055\]; M. S. Carena, M. Quiros, M. Seco and C. E. M. Wagner, Nucl. Phys.  B [**650**]{}, 24 (2003) \[arXiv:hep-ph/0208043\]. T. Konstandin, T. Prokopec, M. G. Schmidt and M. Seco, Nucl. Phys. 
B [**738**]{}, 1 (2006) \[arXiv:hep-ph/0505103\]; V. Cirigliano, S. Profumo and M. J. Ramsey-Musolf, JHEP [**0607**]{}, 002 (2006) \[arXiv:hep-ph/0603246\]. D. J. H. Chung, B. Garbrecht, M. J. Ramsey-Musolf and S. Tulin, arXiv:0808.1144 \[hep-ph\]. M. Carena, G. Nardini, M. Quiros and C. E. M. Wagner, arXiv:0809.3760 \[hep-ph\]. D. Chang, W. Y. Keung and A. Pilaftsis, Phys. Rev. Lett.  [**82**]{}, 900 (1999) \[Erratum-ibid.  [**83**]{}, 3972 (1999)\] \[arXiv:hep-ph/9811202\]. B. C. Regan, E. D. Commins, C. J. Schmidt and D. DeMille, Phys. Rev. Lett.  [**88**]{}, 071805 (2002); S. Abel, S. Khalil and O. Lebedev, Nucl. Phys.  B [**606**]{}, 151 (2001) \[arXiv:hep-ph/0103320\]. C. Balazs, M. S. Carena and C. E. M. Wagner, Phys. Rev.  D [**70**]{}, 015007 (2004) \[arXiv:hep-ph/0403224\]; C. Balazs, M. S. Carena, A. Menon, D. E. Morrissey and C. E. M. Wagner, Phys. Rev.  D [**71**]{}, 075002 (2005) \[arXiv:hep-ph/0412264\]. H. Georgi, Ann. Rev. Nucl. Part. Sci.  [**43**]{}, 209 (1993). N. Arkani-Hamed and S. Dimopoulos, JHEP [**0506**]{}, 073 (2005) \[arXiv:hep-th/0405159\]; G. F. Giudice and A. Romanino, Nucl. Phys.  B [**699**]{}, 65 (2004) \[Erratum-ibid.  B [**706**]{}, 65 (2005)\] \[arXiv:hep-ph/0406088\]. W. M. Yao [*et al.*]{} \[Particle Data Group\], “Review of particle physics,” J. Phys. G [**33**]{}, 1 (2006). M. E. Machacek and M. T. Vaughn, Nucl. Phys.  B [**222**]{}, 83 (1983); M. E. Machacek and M. T. Vaughn, Nucl. Phys.  B [**236**]{}, 221 (1984); M. E. Machacek and M. T. Vaughn, Nucl. Phys.  B [**249**]{}, 70 (1985). S. P. Martin and M. T. Vaughn, Phys. Rev.  D [**50**]{}, 2282 (1994) \[arXiv:hep-ph/9311340\]. M. S. Carena, S. Pokorski and C. E. M. Wagner, Nucl. Phys.  B [**406**]{}, 59 (1993) \[arXiv:hep-ph/9303202\]. P. Langacker and N. Polonsky, Phys. Rev.  D [**47**]{}, 4028 (1993) \[arXiv:hep-ph/9210235\]. A. C. Kraan, arXiv:hep-ex/0305051. T. Aaltonen [*et al.*]{} \[CDF Collaboration\], Phys. Rev.  D [**76**]{}, 072010 (2007) \[arXiv:0707.2567 \[hep-ex\]\]. V. M. Abazov [*et al.*]{} \[D0 Collaboration\], arXiv:0803.2263 \[hep-ex\]. S. Kraml and A. R. Raklev, Phys. Rev.  D [**73**]{}, 075002 (2006) \[arXiv:hep-ph/0512284\]; AIP Conf. Proc.  [**903**]{}, 225 (2007) \[arXiv:hep-ph/0609293\]. M. Carena, A. Freitas and C. E. M. Wagner, arXiv:0808.2298 \[hep-ph\]. M. S. Carena, A. Menon, R. Noriega-Papaqui, A. Szynkman and C. E. M. Wagner, Phys. Rev.  D [**74**]{}, 015009 (2006) \[arXiv:hep-ph/0603106\]. M. S. Carena, J. R. Espinosa, M. Quiros and C. E. M. Wagner, Phys. Lett.  B [**355**]{}, 209 (1995) \[arXiv:hep-ph/9504316\]; M. S. Carena, M. Quiros and C. E. M. Wagner, Nucl. Phys.  B [**461**]{}, 407 (1996) \[arXiv:hep-ph/9508343\]; S. Heinemeyer, W. Hollik, and G. Weiglein, Phys. Rev. D [**58**]{}, 091701 (1998) \[arXiv:hep-ph/9803277\]; S. Heinemeyer, W. Hollik, and G. Weiglein, Phys. Lett. B [**440**]{}, 296 (1998) \[arXiv:hep-ph/9807423\]; S. Heinemeyer, W. Hollik, and G. Weiglein, Eur. Phys. J. C [**9**]{}, 343 (1999) \[arXiv:hep-ph/9812472\]; J. R. Espinosa and R. J. Zhang, J. High Energy Phys. [**0003**]{}, 026 (2000) \[arXiv:hep-ph/9912236\]; J. R. Espinosa and R. J. Zhang, Nucl. Phys. B [**586**]{}, 3 (2000) \[arXiv:hep-ph/0003246\]; M. Carena, H. E. Haber, S. Heinemeyer, W. Hollik, C. E. M. Wagner, and G. Weiglein, Nucl. Phys. B [**580**]{}, 29 (2000) \[arXiv:hep-ph/0001002\]; G. Degrassi, P. Slavich, and F. Zwirner, Nucl. Phys. B [**611**]{}, 403 (2001) \[arXiv:hep-ph/0105096\]; A. Brignole, G. Degrassi, P. Slavich, and F. Zwirner, arXiv:hep-ph/0112177; S. P. 
Martin, Phys. Rev. D [**67**]{}, 095012 (2003) \[arXiv:hep-ph/0211366\]. B. Gato, J. Leon, J. Perez-Mercader and M. Quiros, Nucl. Phys.  B [**253**]{}, 285 (1985). [^1]: For instance, a possible contribution to the baryon asymmetry coming from light sbottoms and staus has been recently explored in Ref. [@Chung:2008ay]. [^2]: The two-loop beta functions will be given in Section \[unification\], where the issue of gauge coupling unification is considered. [^3]: The parameter $m^2(m_t)$ is fixed by the condition that the minimum of the SM-like Higgs one–loop potential be at $v=246.22$ GeV at the scale $m_t$. [^4]: It has been checked that this estimate also holds for other values of ${\widetilde m}$. [^5]: Clearly, this operation is well defined only after fixing the subtraction scheme, in our case the $\overline{MS}$ scheme. [^6]: In the $Q$ matching condition we do not consider the one–loop proper vertex threshold since the tree–level one dominates. [^7]: We also consider the mass thresholds because we want to know the evolution of the masses beyond the decoupling scale $M_3$. [^8]: We have checked that our results are consistent with the MSSM [@Martin:1993zk] and Split Supersymmetry [@ArkaniHamed:2004fb] limits.
--- abstract: 'Let $G$ be $SL(n, {{\mathbb C}})$. This paper aims to describe the Zhelobenko parameters and the spin-lowest $K$-types of the scattered representations of $G$, which lie at the heart of $\widehat{G}^d$—the set of all the equivalence classes of irreducible unitary representations of $G$ with non-vanishing Dirac cohomology. As a consequence, we will verify a couple of conjectures of the first-named author for $G$.' address: - 'Mathematics and Science College, Shanghai Normal University, Shanghai 200234, P. R. China' - 'School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, Guangdong 518172, P. R. China' author: - 'Chao-ping Dong' - Kayue Daniel Wong title: 'Scattered representations of $SL(n, {{\mathbb C}})$' --- Introduction {#sec:intro} ============ Preliminaries on complex simple Lie groups ------------------------------------------ Let $G$ be a complex connected simple Lie group, and $H$ be a Cartan subgroup of $G$. Let ${{\mathfrak g}}_0$ and ${{\mathfrak h}}_0$ be the Lie algebra of $G$ and $H$ respectively, and we drop the subscripts to stand for the complexified Lie algebras. We adopt a positive root system $\Delta^+({{\mathfrak g}}_0, {{\mathfrak h}}_0)$, and let $\varpi_1, \dots, \varpi_{\mathrm{rank}({{\mathfrak g}}_0)}$ be the corresponding fundamental weights with $\rho=\varpi_1+\cdots+\varpi_{\mathrm{rank}({{\mathfrak g}}_0)}$ being the half sum of positive roots. Fix a Cartan involution $\theta$ on $G$ such that its fixed points form a maximal compact subgroup $K$ of $G$. Then on the Lie algebra level, we have the Cartan decomposition $${{\mathfrak g}}_0={{\mathfrak k}}_0+{{\mathfrak p}}_0.$$ We denote by $\langle\cdot, \cdot\rangle$ the Killing form on ${{\mathfrak g}}_0$. This form is negative definite on ${{\mathfrak k}}_0$ and positive definite on ${{\mathfrak p}}_0$. Moreover, ${{\mathfrak k}}_0$ and ${{\mathfrak p}}_0$ are orthogonal to each other under $\langle\cdot, \cdot\rangle$. We shall denote by $\|\cdot\|$ the norm corresponding to the Killing form. Let $H=TA$ be the Cartan decomposition of $H$, with ${{\mathfrak h}}_0={{\mathfrak t}}_0+{{\mathfrak a}}_0$. We make the following identifications: $$\label{identifction} {{\mathfrak h}}\cong {{\mathfrak h}}_0\times {{\mathfrak h}}_0, \quad {{\mathfrak t}}=\{(x, -x): x\in{{\mathfrak h}}_0\}, \quad {{\mathfrak a}}\cong\{(x, x): x\in {{\mathfrak h}}_0\}.$$ Take an arbitrary pair $(\lambda_L, \lambda_R)\in {{\mathfrak h}}_0^*\times {{\mathfrak h}}_0^*$ such that $\mu:=\lambda_L-\lambda_R$ is integral. Denote by $\{\mu\}$ the unique dominant weight to which $\mu$ is conjugate under the action of the Weyl group $W$. Write $\nu:=\lambda_L + \lambda_R$. We can view $\mu$ as a weight of $T$ and $\nu$ a character of $A$. Put $$I(\lambda_L, \lambda_R):={\rm Ind}_B^G({{\mathbb C}}_{\mu}\otimes {{\mathbb C}}_{\nu}\otimes {\rm triv})_{K-{\rm finite}},$$ where $B$ is the Borel subgroup of $G$ determined by $\Delta^+({{\mathfrak g}}_0, {{\mathfrak h}}_0)$. It is not hard to show that $V_{\{\mu\}}$, the $K$-type with highest weight $\{\mu\}$, occurs exactly once in $I(\lambda_L, \lambda_R)$. Let $J(\lambda_L, \lambda_R)$ be the unique irreducible subquotient of $I(\lambda_L, \lambda_R)$ containing $V_{\{\mu\}}$. By [@Zh], every irreducible admissible $({{\mathfrak g}}, K)$-module has the form $J(\lambda_L, \lambda_R)$. 
Indeed, up to equivalence, $J(\lambda_L, \lambda_R)$ is the unique irreducible admissible $({{\mathfrak g}}, K)$-module with infinitesimal character the $W \times W$ orbit of $(\lambda_L, \lambda_R)$, and lowest $K$-type $V_{\{\lambda_L-\lambda_R\}}$. We will refer to the pair $(\lambda_L, \lambda_R)$ as the [*Zhelobenko parameter*]{} for the module $J(\lambda_L, \lambda_R)$. Dirac cohomology ---------------- Fix an orthonormal basis $Z_1, \dots, Z_l$ of ${{\mathfrak p}}_0$ with respect to the inner product on ${{\mathfrak p}}_0$ induced by $\langle\cdot, \cdot\rangle$. Let $U({{\mathfrak g}})$ be the universal enveloping algebra of ${{\mathfrak g}}$, and put $C({{\mathfrak p}})$ as the Clifford algebra of ${{\mathfrak p}}$. One checks that $$\label{Dirac-operator} D:=\sum_{i=1}^{l} Z_i\otimes Z_i \in U({{\mathfrak g}})\otimes C({{\mathfrak p}})$$ is independent of the choice of the orthonormal basis $Z_1, \dots, Z_l$. The operator $D$, called the *Dirac operator*, was introduced by Parthasarathy [@P1]. By construction, $D^2$ is a natural Laplacian on $G$, which gives rise to the Parthasarathy’s Dirac inequality (see below). The inequality is very effective for detecting non-unitarity of $({{\mathfrak g}},K)$-modules, but is by no means sufficient to classify all (non-)unitary modules. To sharpen the Dirac inequality, and to offer a better understanding of the unitary dual, Vogan formulated the notion of Dirac cohomology in 1997 [@V2]. Let ${\rm Ad}: K\rightarrow SO({{\mathfrak p}}_0)$ be the adjoint map, ${\rm Spin}\ {{\mathfrak p}}_0$ be the spin group of ${{\mathfrak p}}_0$, and denote by $p: {\rm Spin}\ {{\mathfrak p}}_0\rightarrow SO({{\mathfrak p}}_0)$ the spin double covering map. Put $$\widetilde{K}:=\{(k,s)\in K\times {\rm Spin} \, {{\mathfrak p}}_0\mid {\rm Ad}(k)=p(s)\}.$$ As in the case of $K$-types, we will refer to an irreducible $\widetilde{K}$-type with highest weight $\delta$ as $V_{\delta}$. Let $\pi$ be any admissible $({{\mathfrak g}}, K)$-module, and $S$ be the spin module of $C({{\mathfrak p}})$. Then $U({{\mathfrak g}})\otimes C({{\mathfrak p}})$, in particular the Dirac operator $D$, acts on $\pi\otimes S$. Now the *Dirac cohomology* is defined as the $\widetilde{K}$-module $$\label{Dirac-cohomology} H_D(\pi):={\rm Ker} D/({\rm Ker} D \cap {\rm Im} D).$$ It is evident from the definition that Dirac cohomology is an invariant for admissible $({{\mathfrak g}}, K)$-modules. To compute this invariant, the Vogan conjecture, proved by Huang and Pandžić [@HP1], says that whenever $H_D(\pi) \neq 0$, one would have $$\label{thm-HP} \gamma+\rho=w\Lambda,$$ where $\Lambda$ is the infinitesimal character of $\pi$, $\gamma$ is the highest weight of any $\widetilde{K}$-type in $H_D(\pi)$, and $w$ is some element of $W$. It turns out that many interesting $({{\mathfrak g}},K)$-modules $\pi$, such as some $A_q(\lambda)$-modules and all the highest weight modules, have non-zero Dirac cohomology (see [@HKP], [@Ko]). One would therefore like to classify all representations with non-zero Dirac cohomology. Spin-lowest $K$-type -------------------- From now on, we set $\pi$ as an irreducible unitary $({{\mathfrak g}}, K)$-module with infinitesimal character $\Lambda$. In order to get a clearer picture on $H_D(\pi)$, the first-named author introduced the notion of spin-lowest $K$-types. 
Given an arbitrary $K$-type $V_{\delta}$, its spin norm is defined as $$\label{spin-norm} \|\delta\|_{\rm spin}:=\|\{\delta-\rho\}+\rho\|.$$ Then a $K$-type $V_{\tau}$ occurring in $\pi$ is called a *spin-lowest $K$-type* of $\pi$ if it achieves the minimum spin norm among all the $K$-types showing up in $\pi$. As an application of spin-lowest $K$-types, note that $D$ is self-adjoint on the unitarizable module $\pi\otimes S$. By writing out $D^2$ carefully, and by using the *PRV-component* [@PRV], we can rephrase *Parthasarathy’s Dirac operator inequality* [@P2] as follows: $$\label{Dirac-inequality} \|\delta\|_{\rm spin}\geq \|\Lambda\|,$$ where $V_{\delta}$ is any $K$-type. Moreover, one can deduce from [@HP2 Theorem 3.5.3] that $H_D(\pi)\neq 0$ if and only if the spin-lowest $K$-types $V_{\tau}$ attain the lower bound of this inequality. In such cases, $V_{\{\tau - \rho\}}$ will show up in $H_D(\pi)$. Put differently, the spin-lowest $K$-types of $\pi$ are exactly the $K$-types contributing to $H_D(\pi)$ whenever the cohomology is non-vanishing (see Proposition 2.3 of [@D1] for more details). Scattered representations {#scattered} ------------------------- Based on the studies [@BP; @DD], we are interested in the following irreducible unitarizable $({{\mathfrak g}}, K)$-modules $J(\lambda, -s\lambda)$ such that - the weight $2\lambda$ is dominant integral, i.e., $2\lambda=\sum_{i=1}^{\mathrm{rank}({{\mathfrak g}}_0)}c_i\varpi_i$, where each $c_i$ is a positive integer; - the element $s\in W$ is an involution such that each simple reflection $s_i$, $1\leq i\leq \mathrm{rank}({{\mathfrak g}}_0)$, occurs in one (thus in each) reduced expression of $s$; - the module has non-zero Dirac cohomology, i.e., $H_D(J(\lambda, -s\lambda))\neq 0$, or equivalently, there exists a $K$-type $V_{\tau}$ in $J(\lambda, -s\lambda)$ such that $$\label{spin=2lambda} \|\tau\|_{\rm spin} = \|(\lambda, -s\lambda)\| = \|2\lambda\|.$$ According to [@DD], there are only finitely many such representations, which are called the *scattered representations*. These representations lie at the heart of $\widehat{G}^d$ — the set of all the irreducible unitary $({{\mathfrak g}}, K)$-modules of $G$ with non-zero Dirac cohomology, up to equivalence. Namely, by Theorem A of [@DD], any member of $\widehat{G}^d$ is either a scattered representation, or it is cohomologically induced from a scattered representation tensored with a suitable unitary character of the Levi factor of a certain proper $\theta$-stable parabolic subgroup. In the latter case, one can easily trace the spin-lowest $K$-types along with the Dirac cohomology of the modules before and after induction. It is therefore of interest to have a good understanding of scattered representations. Overview -------- In this manuscript, we focus on Lie groups $G$ of Type $A$. For convenience, we will start from the group $GL(n, {{\mathbb C}})$, written as $GL(n)$ for short. In this case, Vogan classified the unitary dual. The part that we need can be described as follows: \[unitary\] All irreducible unitary representations of $GL(n)$ with regular half-integral infinitesimal characters are of the form $\displaystyle \pi = {\rm Ind}_{\prod_{i=0}^m GL(a_i)}^{GL(n)}(\bigotimes_{i=0}^m {\det}^{p_i})$ for some $a_i \in \mathbb{Z}_{>0}$ and $p_i \in \mathbb{Z}$. Using [@BP Theorem 2.4], all such $\pi$ have non-zero Dirac cohomology. 
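Since the Weyl group acting here is $S_n$ (for $K=U(n)$), taking the dominant conjugate $\{\cdot\}$ amounts to sorting coordinates, so the spin norm and the equality $\|\tau\|_{\rm spin}=\|2\lambda\|$ detecting non-zero Dirac cohomology are easy to test by machine. The following sketch is purely illustrative and is not used in any proof: it checks the equality for a unitary character $\det^p$ of $GL(n)$, whose only $K$-type has highest weight $(p,\dots,p)$ and whose parameter satisfies $2\lambda=(p+n-1, p+n-3, \dots, p-n+1)$, as follows from the chain description recalled in the next section; the helper names and the particular values of $n$, $p$ are arbitrary choices.

``` python
import numpy as np

# Illustrative sketch: for K = U(n) the dominant conjugate {x} is obtained by sorting
# the coordinates in decreasing order, so the spin norm is a two-line computation.
# The example checks ||tau||_spin = ||2*lambda|| for the unitary character det^p of GL(n).

def dominant(x):
    """Dominant S_n-conjugate: coordinates sorted in decreasing order."""
    return np.sort(np.asarray(x, dtype=float))[::-1]

def spin_norm(delta, rho):
    """||delta||_spin = ||{delta - rho} + rho||; shifting rho by a constant does not change it."""
    rho = np.asarray(rho, dtype=float)
    return np.linalg.norm(dominant(np.asarray(delta, dtype=float) - rho) + rho)

n, p = 5, 3
rho = np.arange(n - 1, -1, -1, dtype=float)                      # a convenient representative of rho
tau = np.full(n, p, dtype=float)                                 # the only K-type of det^p
two_lambda = np.array([p + n - 1 - 2 * i for i in range(n)], dtype=float)
print(spin_norm(tau, rho), np.linalg.norm(two_lambda))           # equal, so H_D(det^p) != 0
```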
Moreover, [@BDW] proved Conjecture 4.1 of [@BP], which says that $$H_D(\pi) = 2^{[\frac{\mathrm{rank}({{\mathfrak g}}_0)}{2}]}\, V_{\{\tau - \rho\}},$$ where $V_{\tau}$ is the [*unique*]{} spin-lowest $K$-type appearing in $\pi$ [*with multiplicity one*]{}. However, it is not clear from the calculations in [@BDW] what $V_{\tau}$ looks like. In Section 2, we will give an algorithm to compute $V_{\tau}$ for all such $\pi$ (see Proposition \[prop-spin-lowest\]). In Section 3, we will see how the calculations for $GL(n)$ in Section 2 can be translated to $SL(n)$, which gives a combinatorial description of the scattered representations of $SL(n)$ (Proposition \[prop:scattered\]). As a result, we prove the following: - The spin-lowest $K$-type of each scattered representation of $SL(n)$ is [*unitarily small*]{} in the sense of Salamanca-Riba and Vogan [@SV] (Corollary \[cor-u-small\]); and - The number of scattered representations of $SL(n)$ is equal to $2^{n-2}$ (Corollary \[cor-number\]). These results verify Conjecture C of [@DD] in the case of $SL(n)$ and prove Conjecture 5.2 of [@D2], respectively. It is worth noting that for any scattered representation, its spin-lowest $K$-type lives deeper than, and differs from, the lowest $K$-type. We hope the effort here will shed some light on the real case in the future. An algorithm predicting the spin-lowest $K$-types ================================================= In this section, we give an algorithm to find the spin-lowest $K$-types of the irreducible unitary modules of $GL(n)$ given by Theorem \[unitary\]. We use a [**chain**]{} $$\mathcal{C} := \{c, c-2, \dots, c-(2k-2), c-2k\},$$ where $c, k \in \mathbb{Z}$ with $k \geq 0$, to denote the Zhelobenko parameter $$\begin{pmatrix} \lambda \\ -w_0\lambda \end{pmatrix} = \begin{pmatrix} \frac{c}{2} & \frac{c}{2} - 1& \dots & \frac{c}{2} - (k-1) & \frac{c}{2} - k \\ -\frac{c}{2} + k & -\frac{c}{2} + (k-1) & \dots & -\frac{c}{2} + 1 & -\frac{c}{2} \\ \end{pmatrix}.$$ Note that the entries of $\mathcal{C}$ are precisely equal to $2\lambda$. Also, this parameter corresponds to the one-dimensional module ${\det}^{c-k}$ of $GL(k+1)$. Consequently, Theorem \[unitary\] implies that the Zhelobenko parameters of all irreducible unitary modules with regular half-integral infinitesimal character can be expressed by the chains $$(\lambda,-s\lambda) = \bigcup_{i=0}^m \mathcal{C}_i,$$ where the entries of the $\mathcal{C}_i$ are pairwise disjoint. In order to understand the spin-lowest $K$-types of these modules of $GL(n)$, we make the following definitions: - Two chains $\mathcal{C}_1 = \{A, \dots, a\}$, $\mathcal{C}_2 = \{B, \dots, b\}$ are [**linked**]{} if the entries of $\mathcal{C}_1$ and $\mathcal{C}_2$ are disjoint and satisfy $$A > B > a \ \ \ \ \text{or} \ \ \ \ B > A > b.$$ - We say a union of chains $\displaystyle \bigcup_{i \in I} \mathcal{C}_i$ is [**interlaced**]{} if for all $i \neq j$ in $I$, there exist indices $i = i_0, i_1, \dots, i_m = j$ in $I$ such that $\mathcal{C}_{i_{l-1}}$ and $\mathcal{C}_{i_{l}}$ are linked for all $1 \leq l \leq m$. (By convention, a single chain $\mathcal{C}_1$ is also regarded as interlaced.) For example, the parameter $\{9,7,5\} \cup \{6,4,2\} \cup \{3,1\}$ is interlaced, while the parameter $\{10,8\} \cup \{9,7\} \cup \{6,4\} \cup \{5,3,1\}$ is not interlaced. We are now in a position to describe the spin-lowest $K$-types of the unitary modules in Theorem \[unitary\] using chains. 
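Before turning to the algorithm, we remark that the two notions just introduced are easy to test by machine. The small sketch below (illustrative only, not needed for any argument) encodes a chain by its set of entries, tests the linking condition on largest and smallest entries, and decides interlacedness by checking that the graph whose edges are given by linkage is connected; it reproduces the two examples above.

``` python
def linked(c1, c2):
    """Chains are linked if their entries are disjoint and A > B > a or B > A > b."""
    A, a, B, b = max(c1), min(c1), max(c2), min(c2)
    return set(c1).isdisjoint(c2) and (A > B > a or B > A > b)

def interlaced(chains):
    """A union of chains is interlaced iff the graph with edges given by 'linked' is connected."""
    reached, frontier = {0}, [0]
    while frontier:
        i = frontier.pop()
        for j in range(len(chains)):
            if j not in reached and linked(chains[i], chains[j]):
                reached.add(j)
                frontier.append(j)
    return len(reached) == len(chains)       # a single chain is interlaced by convention

print(interlaced([[9, 7, 5], [6, 4, 2], [3, 1]]))          # True
print(interlaced([[10, 8], [9, 7], [6, 4], [5, 3, 1]]))    # False
```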
\[alg:spinlkt\] Let $J(\lambda,-s\lambda)$ be an irreducible unitary module of $GL(n)$ in Theorem \[unitary\] with $(\lambda,-s\lambda) = \bigcup_{i=0}^m \mathcal{C}_i$, where $$\mathcal{C}_i := \{k_i + (d_i -1), \dots, k_i - (d_i - 1)\} = \{C_{i,1},\dots,C_{i,d_i}\}$$ is a chain with average value $k_i$ and length $d_i$. Then the lowest $K$-type is equal to (a $W$-conjugate of) $(\mathcal{T}_0, \dots, \mathcal{T}_m)$, where $$\mathcal{T}_i := (\underbrace{k_i, \dots, k_i}_{d_i}).$$ By re-indexing the chains when necessary, we may and we will assume that $$\label{eq:order} \mbox{ for any } 0\leq i<j\leq m, \quad k_i > k_j \ \ \text{or}\ \ d_i < d_j\ \text{if}\ \ k_i = k_j.$$ Let us change the coordinates of $\mathcal{T}_i$ and $\mathcal{T}_j$ for all pairs of linked chains $\mathcal{C}_i$ and $\mathcal{C}_j$ such that $i<j$ by the following rule: - If $C_{i,1} > C_{j,1} \geq C_{j,d_j} > C_{i,d_i}$, i.e. $$\begin{aligned} \{C_{i,1},\ \ \dots,\ \ C_{i,d_i-p}&,\ \ \overbrace{C_{i,d_i-p+1},\ \dots \dots,\ \ C_{i,d_i}}^{p} \} \\ &\{C_{j,1}, \ \ \dots,\ \ C_{j,d_j}\}\end{aligned}$$ with $C_{j,1} = C_{i,d_i} + 2p-1$ and $d_j \leq p$, then we change the coordinates of $\mathcal{T}_i$ and $\mathcal{T}_j$ into: $$\boxed{\begin{aligned} \mathcal{T}_i' &: (*,\ \ \dots,\ \ *,\ \overbrace{k_i+p,\ k_i+(p-1),\ \dots,\ k_i+(p - d_j + 1),\ * ,\ \dots,\ *}^{p} ) \\ \mathcal{T}_j' &: \ \ \ \ \ \ \ \ \ \ \ \ (k_j - p,\ k_j - (p-1),\ \dots,\ k_j-(p - d_j+1)), \end{aligned}}$$ where the entries marked by $*$ remain unchanged. - If $C_{i,1} > C_{j,1} > C_{i,d_i} > C_{j,d_j}$, i.e. $$\begin{aligned} \{C_{i,1},\ \ \dots,\ \ C_{i,d_i-p}&,\ \ \overbrace{C_{i,d_i-p+1},\ \ \dots,\ \ C_{i,d_i}}^{p} \} \\ &\{C_{j,1}, \ \ \ \dots\dots,\ \ \ C_{j,p},\ \ \ \ C_{j,p+1},\ \ \dots, \ \ C_{j,d_j}\}\end{aligned}$$ with $C_{j,1} = C_{i,d_i} + 2p-1$ and $d_j > p$, then we change the coordinates of $\mathcal{T}_i$ and $\mathcal{T}_j$ into: $$\boxed{\begin{aligned} \mathcal{T}_i' &: (*,\dots,\ *,\ \overbrace{k_i+1,\ \dots,\ k_i+p}^{p} ) \\ \mathcal{T}_j' &: \ \ \ \ \ \ \ \ \ \ (k_j-1, \ \dots,\ k_j-p,\ *,\ \ \dots, \ \ *). \end{aligned}}$$ where the entries marked by $*$ remain unchanged. - If $C_{j,1} > C_{i,1} > C_{j,d_j}$, then since $k_i \geq k_j$ one also have $C_{j,1} > C_{i,1} \geq C_{i,d_i} > C_{j,d_j}$ i.e. $$\begin{aligned} \{C_{i,1}, \ \ \dots ,\ \ &C_{i,d_i}\} \\ \{\underbrace{C_{j,1},\ \ \ \ \ \ \dots, \ \ \ \ \ \ C_{j,q}}_{q},&\ \ \ \ \ \ C_{j,q+1}, \ \ \dots, \ \ C_{j,d_j}\}\end{aligned}$$ with $C_{j,1} = C_{i,d_i} + 2q-1$, then we change the coordinates of $\mathcal{T}_i$ and $\mathcal{T}_j$ into: $$\boxed {\begin{aligned} \mathcal{T}_i'&: \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (k_i + (q-d_0+1),\ \dots,\ k_i + (q-1),\ k_i + q) \\ \mathcal{T}_j'&: (\underbrace{*,\ \dots,\ *,\ k_j - (q-d_0+1),\ \dots,\ k_j - (q-1),\ k_j -q}_{q},\ *, \ \ \dots, \ \ *), \end{aligned}}$$ where the entries marked by $*$ remain unchanged. In the above three cases, we only demonstrate the situation that $\mathcal{C}_i$ is in the first row and $\mathcal{C}_j$ is in the second row. The rule is the same when $\mathcal{C}_j$ is in the first row while $\mathcal{C}_i$ is in the second row. After running through all pairs of linked chains, $V_{\tau}$ is defined as the $K$-type with highest weight $\tau$ given by (a $W$-conjugate of) $\bigcup_{i=0}^m \mathcal{T}_i'$. \[eg:spinlowest\] Consider $(\lambda,-s\lambda) = \begin{aligned} \{10 && && 8\} && && \{6\} && && \{4\} \\ && \{9 && && 7 && && 5 && && 3 && && 1\} \end{aligned}$. 
Then the lowest $K$-type of $J(\lambda,-s\lambda)$ is $$\begin{aligned} (9 && && 9) && && (6) && && (4) \\ && (5 && && 5 && && 5 && && 5 && && 5)\end{aligned}$$ To compute $V_{\tau}$, let us label the chains so that holds: $$\mathcal{T}_0=(9 \quad 9),\quad \mathcal{T}_1=(6), \quad \mathcal{T}_2=(5 \quad 5\quad 5 \quad 5 \quad 5),\quad \mathcal{T}_3=(4).$$ Then we apply (a) to the pair $\mathcal{T}_2$, $\mathcal{T}_3$, apply (b) to the pair $\mathcal{T}_0$, $\mathcal{T}_2$, and apply (c) to the pair $\mathcal{T}_1$, $\mathcal{T}_2$. This gives us $$\begin{aligned} (9 && && 10) && && (8) && && (2) \\ && (4 && && 3 && && 5 && && 7 && && 5).\end{aligned}$$ Thus $\tau = (10,9,8,7,5,5,4,3,2)$. Let $J(\lambda,-s\lambda)$ be an unitary module of $GL(n)$ in Theorem \[unitary\], and $V_{\tau}$ is obtained by Algorithm \[alg:spinlkt\]. Then $[J(\lambda,-s\lambda):V_{\tau}] > 0$. Let $\displaystyle J(\lambda,-s\lambda) = {\rm Ind}_{\prod_{i=0}^m GL(a_i)}^{GL(n)}(\bigotimes_{i=0}^m V_{(k_i,\dots,k_i)})$. By rearranging the Levi factors, one can assume the chains $\mathcal{C}_0$, $\dots$, $\mathcal{C}_m$ satisfy Equation . We are interested in studying $$\begin{aligned} \left[{\rm Ind}_{\prod_{i=0}^m GL(a_i)}^{GL(n)}(\bigotimes_{i=0}^m V_{(k_i,\dots,k_i)}): V_{\tau}\right] &= \left[\bigotimes_{i=0}^m V_{(k_i,\dots,k_i)}: V_{\tau}|_{\prod_{i=0}^m GL(a_i)}\right] \\ &= \left[\bigotimes_{i=0}^m V_{(k_i+t,\dots,k_i+t)}: V_{\tau}|_{\prod_{i=0}^m GL(a_i)} \otimes \bigotimes_{i=0}^m V_{(t,\dots,t)}\right] \\ &= \left[\bigotimes_{i=0}^m V_{(k_i+t,\dots,k_i+t)}: V_{\tau}|_{\prod_{i=0}^m GL(a_i)} \otimes V_{(t,\dots,t)}|_{\prod_{i=1}^m GL(a_i)}\right] \\ &= \left[\bigotimes_{i=0}^m V_{(k_i+t,\dots,k_i+t)}: V_{\tau+(t,\dots,t)}|_{\prod_{i=0}^m GL(a_i)}\right] \\ &= \left[{\rm Ind}_{\prod_{i=0}^m GL(a_i)}^{GL(n)}(\bigotimes_{i=1}^m V_{(k_i+t,\dots,k_i+t)}): V_{\tau+(t,\dots,t)}\right]\end{aligned}$$ So we can assume $k_i > 0$ for all $i$ without loss of generality. We prove the theorem by induction on the number of Levi components. The theorem obviously holds when there is only one Levi component – the irreducible module is a unitary character of $GL(n)$. Now suppose we have the hypothesis holds when there are $m$ Levi factors, i.e. $$\left[{\rm Ind}_{\prod_{i=0}^{m-1} GL(a_i)}^{GL(n')}(\bigotimes_{i=0}^{m-1} V_{(k_i,\dots,k_i)}) : V_{\tau_{m-1}}\right] > 0,$$ where $n' = n - a_m$, and $\tau_{m-1}$ is obtained by applying Algorithm \[alg:spinlkt\] on $\bigcup_{i=0}^{m-1} \mathcal{C}_i$. Suppose now $\tau_{m}$ is obtained by applying Algorithm \[alg:spinlkt\] on $\bigcup_{i=0}^m \mathcal{C}_i$. Then $$\begin{aligned} &\ \left[{\rm Ind}_{\prod_{i=0}^{m} GL(a_i)}^{GL(n)}(\bigotimes_{i=0}^{m} V_{(k_i,\dots,k_i)}) : V_{\tau_{m}}\right] \\ = &\ \left[{\rm Ind}_{GL(n') \times GL(a_m)}^{GL(n)}\left({\rm Ind}_{\prod_{i=0}^{m-1} GL(a_i)}^{GL(n')}(\bigotimes_{i=1}^m V_{(k_i,\dots,k_i)}) \otimes V_{(k_m,\dots,k_m)}\right) : V_{\tau_{m}}\right] \\ \geq &\ \left[{\rm Ind}_{GL(n') \times GL(a_m)}^{GL(n)}(V_{\tau_{m-1}} \otimes V_{(k_m,\dots,k_m)}) : V_{\tau_{m}}\right] \\ =&\ c_{\tau_{m-1}, (k_m,\dots,k_m)}^{\tau_{m}}\end{aligned}$$ Here $c_{\mu,\nu}^{\lambda}$ is the Littlewood-Richardson coefficient, and the last step uses Theorem 9.2.3 of [@GW]. Suppose $\tau_{m-1} = \bigcup_{i=0}^{m-1} \mathcal{T}_i''$. Here these $\mathcal{T}_i''$ are obtained by applying Algorithm \[alg:spinlkt\] on $\mathcal{C}_0$, $\dots$, $\mathcal{C}_{m-1}$. 
Then $\tau_m$ is obtained from applying Algorithm \[alg:spinlkt\] on $\mathcal{T}_i''$ and $\mathcal{T}_m = (k_m,\dots,k_m)$ for all linked $\mathcal{C}_i$ and $\mathcal{C}_m$. More precisely, by applying Rules (a) – (c) in Algorithm \[alg:spinlkt\], $\tau_m$ is obtained from $\tau_{m-1}$ by the following: - Construct a new partition $\tau_{m-1} \cup (k_m,\dots,k_m)$. - For each linked $\mathcal{C}_i$ and $\mathcal{C}_m$, add $(0,\dots,0, A, A-1,\dots,a+1,a,0,\dots,0)$ on the rows of $\tau_{m-1}$ corresponding to $\mathcal{T}_i''$, and subtract $(0,\dots,0, A, A-1,\dots,a+1,a,0,\dots,0)$ on the corresponding rows of $(k_m,\dots,k_m)$. - $\tau_m$ is obtained by going through (ii) for all $\mathcal{C}_i$ linked with $\mathcal{C}_m$. By the above construction of $\tau_m$, it is obvious from the Littlewood-Richardson rule [@GW] that $c_{\tau_{m-1}, (k_m,\dots,k_m)}^{\tau_{m}} > 0$. Consequently, the result follows. \[prop-spin-lowest\] Let $J(\lambda,-s\lambda)$ be a unitary module of $GL(n)$ in Theorem \[unitary\], and $V_{\tau}$ be the $K$-type obtained by Algorithm \[alg:spinlkt\]. Then $\tau$ satisfies $$\{\tau - \rho\} = 2\lambda - \rho.$$ Consequently, $V_{\tau}$ is a spin-lowest $K$-type of $J(\lambda,-s\lambda)$ by Equation . We prove by induction on the number of chains in $(\lambda,-s\lambda) = \bigcup_{i=0}^m \mathcal{C}_i$, where the chains are arranged so that Equation holds. Suppose that the proposition holds for $\bigcup_{i=0}^{m-1} \mathcal{C}_i$. There are two possibilities when adding $\mathcal{C}_m$: - There exists $\mathcal{C}_i$ such that $\mathcal{C}_i$ and $\mathcal{C}_m$ is related by Rule (a) in Algorithm \[alg:spinlkt\]: $$\begin{aligned} \{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \mathcal{C}_i\ \ \ \ \ \ \ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \} \\ &\{\ \ \mathcal{C}_{m}\ \ \}\end{aligned}$$ - There exist $\mathcal{C}_j$ and $\mathcal{C}_{r}$, $\dots$, $\mathcal{C}_{m-1}$, such that $\mathcal{C}_j$ and $\mathcal{C}_m$ are related by Rule (b), and $\mathcal{C}_{l}$, $r \leq l \leq m-1$ and $\mathcal{C}_m$ are related by Rule (c) in Algorithm \[alg:spinlkt\]: $$\begin{aligned} \{\ \ \ \ \ &\mathcal{C}_j\ \ \ \ \ \} \ \ \ \ \ \{\ \ \mathcal{C}_{r}\ \ \} \ \ \ \dots \ \ \ \{\ \ \mathcal{C}_{m-1}\ \ \} \\ &\{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \mathcal{C}_m\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \}\end{aligned}$$ We will only study the second case, and the proof of the first case is simpler. Suppose the chains in the second case are interlaced in the following fashion: $$\label{eq:interlaced} \begin{aligned} \{\ \ \ \ &\ \mathcal{C}_{j}\ \ \ \ \ \ \ \ \ \} &&\ \ \{\overbrace{\ \ \mathcal{C}_{r}\ \ }^{d_{r}}\} \ \ \ \ \ \ \ \ \ \ \cdots\cdots &&\ \ \{\overbrace{\ \ \ \mathcal{C}_{m-1}\ \ \ }^{d_{m-1}}\}\\ \{&\underbrace{C_{m,1}, \cdots}_{p}\ \ \underbrace{\cdots\cdots }_{a_r} &&\underbrace{\cdots\ \ \cdots }_{d_{r}} \ \ \underbrace{\cdots\cdots}_{a_{r+1}} \ \ \cdots\cdots &&\underbrace{\cdots \ \ \ \ \cdots }_{d_{m-1}},\ \underbrace{\cdots,\ C_{m,d_m}}_{a_m}\} \end{aligned}$$ for some $j+1 \leq r \leq m-1$, and the chains $\mathcal{C}_{j+1}$, $\dots$, $\mathcal{C}_{r-1}$—which have not been shown in —are linked with $\mathcal{C}_j$ under Rule (a) of Algorithm \[alg:spinlkt\]. To simplify the calculations below, we introduce the notation $$(a)^{\epsilon}_d:=\underbrace{a, a+\epsilon, \dots, a+(d-1)\epsilon}_d.$$ Then $2\lambda$ is equal to the entries in Equation . 
Since the values of the adjacent entries within the same chain differ by $2$, and the values of the interlaced entries differ by $1$, one can calculate $2\lambda - \rho$ up to a translation of a constant on all coordinates as follows: $$\label{2lambda} \begin{aligned} \{\cdots&\ (A_{r-1})^0_p\} &&\ \ \{(A_{r})^0_{d_{r}}\} \ \ \ \ \ \ \ \ \ \ \cdots\cdots &&\ \ \{(A_{m-1})^0_{d_{m-1}}\}\\ \dots\ \ \{&(A_{r-1})^0_p\ \ (A_r)^{-1}_{a_r} &&(A_{r})^0_{d_{r}}\ \ (A_{r})^{-1}_{a_{r+1}} \ \ \cdots\cdots &&(A_{m-1})^0_{d_{m-1}}\ \ (A_{m-1})^{-1}_{a_m}\} \end{aligned}$$ where $\displaystyle A_x := \sum_{l=x}^{m-1} a_{l+1}$ for $r-1 \leq x\leq m-1$ (note that the smallest entry of is $1$, appearing at the rightmost entry of the bottom chain). On the other hand, the calculation in Algorithm \[alg:spinlkt\] gives $\tau$ as follows: $$\begin{aligned} (\cdots & \ \ (k_j)^{0}_{p}) \ \ \ \ \ \ \ \ \ \ \ \ \ \ (k_{r})^{0}_{d_{r}} \ \ \ \ \ \ \ \cdots\ \ \ \ \ \ \ (k_{m-1})^{0}_{d_{m-1}}\\ \dots \ \ (&(k_m)^{0}_{p}\ \ (k_m)^{0}_{a_r}\ \ (k_m)^{0}_{d_{r}}\ (k_m)^{0}_{a_{r+1}} \ \cdots\ (k_m)^{0}_{d_{m-1}}(k_m)^{0}_{a_m}) \end{aligned} = \bigcup_{i=0}^m \mathcal{T}_i \ \longrightarrow\ \ \bigcup_{i=0}^m \mathcal{T}_i' = \tau,$$ where $\bigcup_{i=0}^m \mathcal{T}_i'$ is given by $$\label{tau} \begin{aligned} (\ \cdots &\ (k_j+1)^{1}_{p}) \ \ \ \ \ \ \ \ \ \ \ (k_{r}+(q_r-d_{r}+1))^1_{d_{r}} \ \ \ \ \ \ \cdots\ \ \ \ \ \ \ (k_{m-1}+(q_{m-1}-d_{m-1}+1))^1_{d_{m-1}}\\ \dots \ \ (&(k_m-1)^{-1}_{p} (k_m)^{0}_{a_r} (k_m-(q_r-d_{r}+1))^{-1}_{d_{r}}(k_m)^{0}_{a_{r+1}} \cdots(k_m-(q_{m-1}-d_{m-1}+1))^{-1}_{d_{m-1}}(k_m)^{0}_{a_m}) \end{aligned}$$ and $q_i$ are obtained by Rule (c) of Algorithm \[alg:spinlkt\]. For instance, $q_r=p+a_r+d_{r}$. Note that $$k_j-(d_j-1)=k_{r}+(d_{r}-1)+2a_r+2.$$ Therefore, $$k_j-d_j=k_{r}+d_{r}+2a_r.$$ From this, one deduces easily that $k_j\geq k_{r}+q_r+1$. Thus it makes sense to talk about the interval $[k_{r}+q_r+1, k_j]$. Before we proceed, we pay closer attention to the coordinates of $\mathcal{T}_{j}'$, which is the leftmost chain on the top row of Equation . More precisely, it consists of three parts: - As mentioned in the paragraph after Equation , by applying Rule (a) of Algorithm \[alg:spinlkt\] between $\mathcal{C}_j$ and each of $\mathcal{C}_{j+1}$, $\dots$, $\mathcal{C}_{r-1}$, one can check that $$\bigcup_{i=j+1}^{r-1}\mathcal{T}_{i}' \subset [k_{r}+q_r+1, k_j].$$ Suppose there are $\delta \geq 0$ coordinates in $\bigcup_{i=j+1}^{r-1}\mathcal{T}_{i}'$, then there will be exactly $\delta$ coordinates in $\mathcal{T}_{j}'$ having coordinates strictly greater than $k_j + p$. - By applying Algorithm to $\mathcal{C}_j$ and $\mathcal{C}_m$, we have $p$ coordinates $(k_j+1)^1_p$ in $\mathcal{T}_j'$ as in Equation . - The other coordinates of $\mathcal{T}_j'$ are either equal to $k_j$, or smaller than $k_j$ if they are linked with $\mathcal{C}_t$ with $t < j$. In conclusion, the coordinates of $\mathcal{T}_j'$ are given by $(\overbrace{\sharp\ \dots\ \sharp}^{\delta} ; (k_j+1)^1_p; \overbrace{\flat\ \dots\ \flat}^{d_j - \delta - p})$, where $\sharp\ \dots\ \sharp$ has coordinates greater than $k_j+p$, and $\flat\ \dots\ \flat$ has coordinates smaller than $k_j + 1$. 
We now arrange the coordinates of $\bigcup_{i = j}^{m} \mathcal{T}_i'$ in Equation as follows: $$\begin{aligned} &\overbrace{\sharp\ \dots\ \sharp}^{\delta} > \overbrace{(k_j+1)^{1}_{p}}^{p} > \overbrace{\flat \cdots \flat}^{d_j-p-\delta} > \overbrace{\bigcup_{i=j+1}^{r-1} \mathcal{T}_i'}^{\delta} > \mathcal{T}_r' > \dots > \mathcal{T}_{m-1}' > (k_m)^{0}_{a_r} = \dots = (k_m)^{0}_{a_m} \\ & > (k_m-1)^{-1}_{p} > (k_m-(q_r-d_{r}+1))^{-1}_{d_{r}} > \dots > (k_m-(q_{m-1}-d_{m-1}+1))^{-1}_{d_{m-1}}\end{aligned}$$ Here elements in the blocks $\mathcal{T}'_r, \dots, \mathcal{T}'_{m-1}$ are still kept in the increasing manner. Note that if $x < y$, then $\mathcal{T}_x' > \mathcal{T}_y'$ in terms of their coordinates. We index the coordinates of $\tau$ shown in Equation using the above ordering, with the smallest coordinate indexed by $1$: $$\label{rho} \begin{aligned} (\dots &\ (d_m+D_r+ d_j - p+1)^1_p) \ \ ((d_m+D_{r+1}+1)^1_{d_{r}}) \ \ \ \ \ \cdots\ \ \ \ \ \ ((d_m+1)^1_{d_{m-1}})\\ (&(D_r+p)^{-1}_p (D_r + p + 1)^1_{a_r}(D_r)^{-1}_{d_{r}} (D_r + p + a_r + 1)^1_{a_{r+1}} \cdots (D_{m-1})^{-1}_{d_{m-1}} (D_r + p + \sum_{l=r}^{m-1} a_l + 1)^1_{a_m}), \end{aligned}$$ where $\displaystyle D_x := \sum_{l = x}^{m-1} d_{l}$ for $r\leq x\leq m-1$. Note that the coordinates of the last row read as $$\begin{aligned} (D_r + p, \dots, 2, 1)&=((D_r+p)^{-1}_p;\ (D_r)^{-1}_{d_{r}};\ \dots;\ (D_{m-1})^{-1}_{d_{m-1}}),\\ (D_r + p + 1, \dots, d_m-1, d_m)&=\\ ((D_r + p + 1)^1_{a_r};\ \dots ;&\ (D_r + p + \sum_{l=r}^{x-1} a_l + 1)^1_{a_x};\ \dots;\ (D_r + p + \sum_{l=r}^{m-1} a_l + 1)^1_{a_m}).\end{aligned}$$ Up to a translation of a constant of all coordinates, the difference between Equation and gives (a $W$-conjugate of) $\{\tau - \rho\}$, which is of the form: $$\label{taurho} \begin{aligned} (\cdots &\ (\beta_j)^0_p) \ \ \ \ \ \ \ \ \ \ \ (\beta_r)^0_{d_{r}} \ \ \ \ \ \ \cdots\ \ \ \ \ \ (\beta_{m-1})^0_{d_{m-1}}\\ (&(\alpha_j)^0_p\ \ {\bf *\ *\ *\ }(\alpha_r)^0_{d_{r}}\ {\bf *\ *\ *}\ \cdots\ (\alpha_{m-1})^0_{d_{m-1}}\ {\bf *\ *\ *}) \end{aligned}$$ Our goal is to show and are equal up to a translation of a constant of all coordinates. So we need to show the following: - $\alpha_j = \beta_j$: We need to show $$k_m - 1 - (D_r + p) = k_j + 1 - (d_m + D_r + d_j-p + 1).$$ In fact, we have $$\begin{aligned} C_{m,1} &= C_{j,d_j} + 2p - 1 \\ k_m + (d_m - 1) &= k_j - (d_j-1) + 2p - 1 \\ k_m - p -1 &= k_j - d_j + p - d_m \\ k_m - 1 - (D_r + p) &= k_j + 1 - (d_m + D_r + d_j -p + 1)\end{aligned}$$ as required. 
- $\alpha_x = \beta_x$ for all $r \leq x \leq m-1$: This is the same as showing $$k_m - (q_x - d_{x} + 1) - D_x = k_{x} + (q_x - d_{x} + 1) - (d_m + D_{x+1}+1).$$ As in (i), we consider $$\begin{aligned} C_{m,1} &= C_{x,d_{x}} + 2q_x - 1 \\ k_m + (d_m - 1) &= k_{x} - (d_{x}-1) + 2q_x - 1 \\ k_m - q_x + d_{x} -1 &= k_{x} + q_x - d_m \\ k_m - q_x + d_{x} -1 - D_x + D_{x+1} + d_{x} &= k_{x} + (q_x+1) - (d_m + 1) \\ k_m - q_x + d_{x} -1 - D_x &= k_{x} + (q_x-d_{1}+1) - (d_m + D_{x+1} + 1)\end{aligned}$$ as we wish to show.\ - $\alpha_j - \alpha_x = A_{r-1} - A_{x}$ for all $r \leq x \leq m-1$: In other words, we need to show $$[(k_m - 1) - (D_r +p)] - [(k_m - (q_x-d_{x}+1)) - D_x] = A_{r-1} - A_{x} = a_r + \dots + a_x$$ Indeed, by looking at Equation and applying Rule (c) of Algorithm \[alg:spinlkt\], one gets $$\begin{aligned} p + (a_r + \dots + a_x) + (d_r + \dots + d_x) &= q_x \\ q_x - p &= (A_{r-1} - A_{x}) + (D_r - D_{x+1}) \\ (k_m - 1) - (k_m - 1) + q_x - p - D_r + D_{x+1} &= A_{r-1} - A_{x} \\ [(k_m - 1) - (D_r +p)] - (k_m -1) + q_x + (D_x - d_x) &= A_{r-1} - A_{x} \\ [(k_m - 1) - (D_r +p)] - [(k_m - (q_x-d_x+1)) - D_x] &= A_{r-1} - A_{x}\end{aligned}$$ so the result follows.\ - Collecting the $*\ *\ *$ entries of Equation consecutively from left to right gives $$\underbrace{\alpha_j,\dots,\alpha_r+1}_{a_r};\ \cdots\cdots;\ \underbrace{\alpha_x,\dots,\alpha_{x+1}+1}_{a_{x+1}};\ \cdots\cdots;\ \underbrace{\alpha_{m-1},\dots,\alpha_{m-1} - (a_m-1)}_{a_m}$$ In order for the above expression to make sense, one needs $\alpha_x - \alpha_{x+1} = a_x$ for all $r\leq x \leq m-1$ for instance. This is indeed the case, since $\alpha_x - \alpha_{x+1} = A_{x} - A_{x+1}$ by (iii), and the latter is equal to $a_{x+1}$ by the definition of $A_x$ for $r-1 \leq x\leq m-1$. So it suffices to check $\displaystyle k_m - (D_r + p + \sum_{l=r}^{x} a_l + 1) = \alpha_x.$ To see it is the case, one can check that the leftmost entry of the second row of Equation is equal to $$\begin{aligned} \alpha_j &= k_m - 1 - (D_r + p) \\ \alpha_x + A_{r-1} - A_{x} &= k_m - (D_r + p + 1) \ \ \ \ \ \ \ \ \ \text{(by (iii))} \\ \alpha_x + \sum_{l = r}^{x} a_l &= k_m - (D_r + p + 1) \\ \alpha_x &= k_m - (D_r + p + \sum_{l=r}^{x} a_l + 1)\end{aligned}$$ as follows. Combining (i) – (iv), Equation can be rewritten as $$\begin{aligned} (\cdots&\ (\alpha_j)^0_p) &&\ \ \ ((\alpha_r)^0_{d_r}) \ \ \ \ \ \ \ \ \ \ \cdots\cdots &&\ \ ((\alpha_{m-1})^0_{d_{m-1}})\\ (&(\alpha_j)^0_p\ \ \ \ \ \ \ (\alpha_j)^{-1}_{a_r} &&(\alpha_r)^0_{d_r}\ \ \ \ \ (\alpha_r)^{-1}_{a_{r+1}} \ \ \ \ \cdots\cdots &&(\alpha_{m-1})^0_{d_{m-1}}\ \ (\alpha_{m-1})^{-1}_{a_m}) ,\end{aligned}$$ whose coordinates are in descending order from left to right. So it is equal to $\{\tau - \rho\}$ up to a translation of a constant. Moreover, by comparing it with Equation , we have shown that all coordinates of $2\lambda - \rho$ and $\{\tau - \rho\}$ differ by a constant (note that the other coordinates on the left of $\mathcal{C}_j$ are taken care of by induction hypothesis). 
To see they are exactly equal to each other, we calculate the [*true*]{} values of $A_{m-1}$ and $\alpha_{m-1}$ in $2\lambda - \rho$ and $\tau$ respectively on the entry marked by $\circledast$ below: $$\begin{aligned} \{\dots, &\ \ *, \dots ,* \} &&\ \ \ \{*,\dots,*\} \ \ \ \ \ \ \cdots\ &&\ \ \{*,\dots, *\}\\ \{&*,\dots, *;\ \ \ \ *,\dots,*;\ &&*, \dots,*;\ \ \ *,\dots,*;\ \ \ \cdots;\ &&*,\dots,\circledast;\ \ \underbrace{*,\dots,*}_{a_m}\}\end{aligned}$$ For $2\lambda - \rho$, $\circledast$ takes the value $$C_{m, d_m - a_m} - \rho_{a_m + 2},$$ where $\rho = (\rho_n, \dots, \rho_2, \rho_1)$ with $\rho_i = \rho_1 + (i-1)$. So it can be simplified as $$\begin{aligned} C_{m,d_m - a_m} - \rho_{a_m + 2} &= k_m - (d_m - 1) + 2a_m - \rho_{a_m + 2} \\ &= k_m - d_m + 1 + 2a_m - \rho_1 - (a_m +1)\\ &= k_m - d_m + a_m - \rho_1\end{aligned}$$ On the other hand, for $\{\tau - \rho\}$, $\circledast$ takes the value $$k_m - q_{m-1} - \rho_{1}$$ (Recall that we had $\alpha_{m-1} = k_m - q_{m-1} - 1$ for $\circledast$ in our previous calculation). By looking at Equation and applying Rule (c) of Algorithm \[alg:spinlkt\] again, one has $q_{m-1} = d_m - a_m$, hence $2\lambda - \rho$ and $\{\tau - \rho\}$ take the same value on the $\circledast$ coordinate. Since we have seen that their coordinates differ by the same constant, one can conclude that $2\lambda - \rho = \{\tau - \rho\}$. For the interlaced chain in Example \[eg:spinlowest\], the translate of $2\lambda - \rho$ in Equation is equal to $$\begin{aligned} \ &\begin{aligned} \{10-8 && && 8-6\} && && \{6-4\} && && \{4-2\} \\ && \{9-7 && && 7-5 && && 5-3 && && 3-1 && && 1-0\} \end{aligned} \\ =\ &\begin{aligned} \{2 && && 2\} && && \{2\} && && \{2\} \\ && \{2 && && 2 && && 2 && && 2 && && 1\} \end{aligned}.\end{aligned}$$ Also, the translate of $\tau - \rho$ in Equation is given by: $$\begin{aligned} \ &\begin{aligned} (9-8 && && 10-9) && && (8-7) && && (2-1) \\ && (4-3 && && 3-2 && && 5-4 && && 7-6 && && 5-5) \end{aligned}\\ =\ &\begin{aligned} (1 && && 1) && && (1) && && (1) \\ && (1 && && 1 && && 1 && && 1 && && 0) \end{aligned}\end{aligned}$$ Hence their coordinates differ by the same constant $1$. To see that $2\lambda - \rho$ and $\{\tau - \rho\}$ are equal, where $\rho = (4,3,2,1,0,-1,-2,-3,-4)$, one can look at the [*true*]{} values of them for the rightmost entry of the bottom chain: $$2\lambda - \rho:\ 1 - \rho_1 = 1 - (-4) = 5;\ \ \ \ \ \tau - \rho:\ 5 - \rho_5 = 5 - 0 = 5.$$ Hence $2\lambda - \rho = \{\tau - \rho\} = (6,6,6,6,6,6,6,6,5)$, and the unique $\widetilde{K}$-type in the Dirac cohomology of the corresponding unitary module is $V_{(6,6,6,6,6,6,6,6,5)}$. Scattered Representations in $SL(n)$ ==================================== It is easy to parametrize the irreducible unitary representations of $SL(n)$ using those of $GL(n)$. In such cases, we impose the condition that the sum of the coordinates of $\lambda$ is equal to $0$. In other words, for each possible regular, half-integral infinitesimal character $\lambda$ for $SL(n)$, one can shift the coordinates by a suitable scalar, so that it corresponds to an infinitesimal character $\lambda'$ of $GL(n)$ whose smallest coordinate is equal to $1/2$. Therefore, the irreducible unitary representations of $SL(n)$ are parametrized by chains with $n$ coordinates whose smallest coordinate is equal to $1$.
The following proposition characterizes which of these representations are scattered in the sense of Section \[scattered\]: \[prop:scattered\] Let $\pi := J(\lambda,-s\lambda)$ be an irreducible unitary representation of $SL(n)$ such that $\lambda$ is dominant and half-integral. Then $\pi$ is a scattered representation if and only if the translated Zhelobenko parameter $(\lambda',-s\lambda')$ can be expressed as a union of interlaced chains with smallest coordinate equal to $1$. By the arguments in Section \[scattered\], one only needs to check that $s \in W$ involves all simple reflections in its reduced expression if and only if $(\lambda',-s\lambda') = \bigcup_{i=0}^m \mathcal{C}_i$ are interlaced. Indeed, $s \in W$ can be read from $\bigcup_{i=0}^m \mathcal{C}_i$ as follows: label the entries of $\bigcup_{i=0}^m \mathcal{C}_i$ in descending order, e.g. $$\bigcup_{i=0}^m \mathcal{C}_i = \begin{aligned} &\ \ \{p_{k+1},\ \ \dots \}\ \ \cdots \\ \{p_1,\ \ p_2, \ \ \dots,\ \ &p_k, \ \ \ p_{k+2},\ \ \dots \}\ \ \cdots \end{aligned}$$ with $p_1 > p_2 > \dots > p_n$, then we ‘flip’ the entries of each chain $\mathcal{C}_i$ by $\{C_{i,1},\dots,C_{i,d_i}\}$ $\rightarrow$ $\{C_{i,d_i},\dots,C_{i,1}\}$. Suppose we have $$\begin{aligned} \begin{aligned} &\ \ \{p_{s_{k+1}},\ \ \dots \}\ \ \cdots \\ \{p_{s_1},\ \ p_{s_2}, \ \ \dots,\ \ &p_{s_k}, \ \ \ p_{s_{k+2}},\ \ \dots \}\ \ \cdots \end{aligned} \end{aligned}$$ after flipping each chain, then $s \in S_n$ is obtained by $s = \begin{pmatrix}1 & 2 & \dots & n \\ s_1 & s_2 & \dots & s_n \end{pmatrix}$ (see Example \[eg:interlaced\]). Define the equivalence class of interlaced chains by letting $\mathcal{C}_i \sim \mathcal{C}_j$ iff $i = j$, or $\mathcal{C}_i, \mathcal{C}_j$ are interlaced. So we have a partition of $\{p_1, \dots, p_n\}$ by the entries of chains in the same equivalence class. It is not hard to check that the entries on each partition have consecutive indices, i.e. $$\mathcal{E}_i = \{p_{a_i}, p_{a_i + 1}, \dots, p_{b_i -1}, p_{b_i}\}$$ and $\bigcup_{i=0}^m \mathcal{C}_i$ are interlaced iff there is only one equivalence class. We now prove the proposition. Suppose there exists more than one equivalence class, i.e. we have $$\mathcal{E}_1 = \{p_1, \dots, p_a\};\ \ \ \mathcal{E}_2 = \{p_{a+1}, \dots, p_b\}$$ for some $1 \leq a < n$. Since the smallest element in any equivalence class must be the smallest element of a chain, and the largest element in a class must be the largest element of a chain, we have $$\mathcal{C}_i = \{\ \dots,\ p_{a}\}\ \ \{p_{a+1},\ \dots \} = \mathcal{C}_j.$$ By the above description of $s \in S_n$, it is obvious that $s \in S_a \times S_{n-a} \subset S_n$, which does not involve the simple reflection $s_a$. Conversely, if there is only one equivalence class, we suppose on the contrary that there exists some $1\leq a < n$ such that $s \in S_a \times S_{n-a}$. Since $p_a, p_{a+1}$ are in the same equivalence class, then at least one of the following $$\{p_a, p_{a+1}\},\ \ \ \ \ \{p_a, p_{a+2}\}, \ \ \ \ \ \{p_{a-1}, p_{a+1}\}$$ is in the same chain $\mathcal{C}_i$ for some $0 \leq i \leq m$. By ‘flipping’ $\mathcal{C}_i$ in either case, there must be some $u \leq a < a+1 \leq v$ such $s = \begin{pmatrix} \dots & u & \dots & v & \dots \\ \dots & v & \dots & u & \dots \end{pmatrix}$. The reduced expression of such $s$ must involve the simple reflection $s_a$, hence we obtain a contradiction. Therefore, $s$ must involve all simple reflections in its reduced expression. 
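The recipe used in this proof is easy to mechanize. The following Python sketch (an illustration of the bookkeeping only, not code taken from this paper or from `atlas`) labels the entries of a union of chains in descending order, flips each chain, reads off $s$, and tests whether $s$ involves all simple reflections by checking that $\{s_1,\dots,s_a\} \neq \{1,\dots,a\}$ for every $1 \leq a < n$; the chains are assumed to be entered with their entries in decreasing order. For the chains of Example \[eg:interlaced\] below it returns the permutation $(3,9,1,8,5,6,7,4,2)$ quoted there.

```python
def permutation_from_chains(chains):
    """Read off s in S_n from a union of chains, as in the proof above:
    label all entries p_1 > p_2 > ... > p_n, flip each chain in place, and
    record which label now sits where each p_j used to be."""
    entries = sorted((v for c in chains for v in c), reverse=True)
    rank = {v: i + 1 for i, v in enumerate(entries)}   # value -> its label j in p_j
    s = [0] * len(entries)
    for chain in chains:
        for old, new in zip(chain, reversed(chain)):   # flip {C_i1,...,C_id} -> {C_id,...,C_i1}
            s[rank[old] - 1] = rank[new]               # slot of p_rank(old) now holds p_rank(new)
    return s

def involves_all_simple_reflections(s):
    """True iff s does not lie in any S_a x S_{n-a}, i.e. its reduced
    expression uses every simple reflection."""
    n = len(s)
    return all(set(s[:a]) != set(range(1, a + 1)) for a in range(1, n))

chains = [[10, 8], [6], [4], [9, 7, 5, 3, 1]]          # Example [eg:interlaced]
print(permutation_from_chains(chains))                 # [3, 9, 1, 8, 5, 6, 7, 4, 2]
print(involves_all_simple_reflections(permutation_from_chains(chains)))  # True
```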
\[eg:interlaced\] Consider the interlaced chain with smallest coordinate $1$ given in Example \[eg:spinlowest\]: $$\begin{aligned} \{10 && && 8\} && && \{6\} && && \{4\} \\ && \{9 && && 7 && && 5 && && 3 && && 1\}\end{aligned}$$ Its corresponding irreducible representation in $SL(9)$ has Langlands parameter $(\lambda',-s\lambda')$, where $s = \begin{pmatrix}1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ 3 & 9 & 1 & 8 & 5 & 6 & 7 & 4 & 2 \end{pmatrix}$, and $\lambda'$ $=$ $[1/2,1/2,1/2,1/2,1/2,1/2,1/2,1]$, where $[a_1,\dots,a_m]$ is defined by $$[a_1,\dots,a_m] := a_1\varpi_1 + \dots + a_m\varpi_m.$$ In fact, the coordinates of $\lambda'$ are simply obtained by taking the differences of the neighboring coordinates of $\lambda = \frac{1}{2}(10,9,8,7,6,5,4,3,1)$. The calculation in Example \[eg:spinlowest\] implies that the spin-lowest $K$-type for $J(\lambda',-s\lambda')$ in $SL(9)$ is $V_{[1,1,1,2,0,1,1,1]}$. \[spherical\] We explore the possibilities of chains $\bigcup_{i=0}^m \mathcal{C}_i$ whose corresponding Zhelobenko parameter $(\lambda',-s\lambda')$ gives a spherical representation. In order for the lowest $K$-type to be trivial, we need the $\mathcal{T}_i$’s in Algorithm \[alg:spinlkt\] to have the same average value $k_i$ for all $i$, that is, the mid-point of all $\mathcal{C}_i$’s (if there is more than one) must be the same. This leaves the possibility of $\bigcup_{i=0}^m \mathcal{C}_i$ consisting of a single chain, which corresponds to the trivial representation, or of two chains of lengths $a > b > 0$ whose entries are of different parity. Hence it must be of the form $$\{2a-1, 2a-3, \dots, 3, 1\} \cup \{a+(b-1), a+(b-3), \dots, a-(b-3), a-(b-1)\},$$ where $a$, $b$ are of different parity. In other words, such representations can only occur for $SL(n)$ with $n = a+b$ odd, and are equal to $Ind_{S(GL(a) \times GL(b))}^{SL(n)}(\mathrm{triv} \otimes \mathrm{triv})$, which are the unipotent representations corresponding to the nilpotent orbit with Jordan block $(2^b1^{a-b})$ (see [@BP Section 5.3]). Its Langlands parameter $(\lambda',-s\lambda')$ has $2\lambda' = [\underbrace{2,\dots,2}_{(a-b-1)/2},\underbrace{1,\dots,1}_{2b},\underbrace{2,\dots,2}_{(a-b-1)/2}]$ and $s = w_0$ (see [@DD Conjecture 5.6]). Moreover, its spin-lowest $K$-type is given by Equation (5.5) of [@BP], which matches with our calculations in Algorithm \[alg:spinlkt\]. For the rest of this section, we give two applications of Proposition \[prop:scattered\]: The spin-lowest $K$-type is unitarily small ------------------------------------------- To offer a unified conjectural description of the unitary dual, Salamanca-Riba and Vogan formulated the notion of unitarily small (*u-small* for short) $K$-type in [@SV]. Here we only quote it for a complex connected simple Lie group $G$ – using the setting in the introduction, a $K$-type $V_{\delta}$ is u-small if and only if $\langle \delta-2\rho, \varpi_i\rangle\leq 0$ for $1\leq i\leq \mathrm{rank}({{\mathfrak g}}_0)$ (see Theorem 6.7 of [@SV]). \[lemma-u-small\] Let $\lambda=\sum_{i=1}^{\mathrm{rank}({{\mathfrak g}}_0)}\lambda_i\varpi_i \in{{\mathfrak h}}_0^*$ be a dominant weight such that $\lambda_i=\frac{1}{2}$ or $1$ for each $1\leq i\leq n$, and $V_{\delta}$ be the $K$-type with highest weight $\delta$ such that $$\{\delta-\rho\}=2\lambda-\rho.$$ Then $\langle\delta-2\rho, \varpi_i\rangle\leq 0$, $1\leq i\leq \mathrm{rank}({{\mathfrak g}}_0)$. Therefore, the $K$-type $V_{\delta}$ is u-small.
By assumption, there exists $w\in W$ such that $\delta=w^{-1}(2\lambda-\rho)+\rho$. Thus $$\begin{aligned} \langle \delta-2\rho, \varpi_i\rangle &=\langle w^{-1}(2\lambda-\rho)-\rho, \varpi_i\rangle\\ &=\langle w^{-1}(2\lambda-\rho), \varpi_i\rangle-1\\ &=\langle 2\lambda-\rho, w(\varpi_i)\rangle -1.\end{aligned}$$ On the other hand, let $w=s_{\beta_1}s_{\beta_2}\cdots s_{\beta_p}$ be a reduced decomposition of $w$ into simple root reflections. Then by Lemma 5.5 of [@DH], $$\varpi_i-w(\varpi_i)=\sum_{k=1}^p \langle \varpi_i, \beta_k \rangle s_{\beta_1}s_{\beta_2}\cdots s_{\beta_{k-1}}(\beta_k).$$ Note that $s_{\beta_1}s_{\beta_2}\cdots s_{\beta_{k-1}}(\beta_k)$ is a positive root for each $k$. Now we have that $$\begin{aligned} \langle \delta-2\rho, \varpi_i\rangle &=\Big\langle 2\lambda-\rho,\varpi_i- \sum_{k=1}^p \langle \varpi_i, \beta_k \rangle s_{\beta_1}s_{\beta_2}\cdots s_{\beta_{k-1}}(\beta_k)\Big\rangle -1\\ &=2\lambda_i-2-\sum_{k=1}^p \langle \varpi_i, \beta_k \rangle \langle 2\lambda-\rho, s_{\beta_1}s_{\beta_2}\cdots s_{\beta_{k-1}}(\beta_k) \rangle \\ &\leq 2 \lambda_i-2\\ &\leq 0.\end{aligned}$$ \[cor-u-small\] The unique spin-lowest $K$-type $V_{\tau}$ of any scattered representation of $SL(n)$ is u-small. Consequently, Conjecture C of [@DD] holds for $SL(n)$. Let $(\lambda, -s\lambda)$ be the Zhelobenko parameter for a scattered representation of $SL(n)$. Write $\lambda=\sum_{i=1}^{n-1} \lambda_i \varpi_i$ in terms of the fundamental weights. Then it is direct from our definition of the interlaced chains that each $\lambda_i$ is either $\frac{1}{2}$ or $1$ (recall Proposition \[prop:scattered\] and Example \[eg:interlaced\]). Let $V_{\tau}$ be the unique spin-lowest $K$-type of the scattered representation. Then $\{\tau-\rho\}=2\lambda-\rho$ (see Proposition \[prop-spin-lowest\]). Thus the result follows from Lemma \[lemma-u-small\]. Number of scattered representations ----------------------------------- As another application of Proposition \[prop:scattered\], we compute the number of scattered representations of $SL(n)$. By the proposition, it is equal to the number of interlaced chains with $n$ entries with the smallest entry equal to $1$. We now give an algorithm of constructing new interlaced chains with smallest coordinate equal to $1$ from those with one less coordinate: \[interlaced\] Let $\displaystyle \bigcup_{i=1}^p \{2A_i-1,\dots, 2a_i-1\} \cup \bigcup_{j=1}^q \{2B_j, \dots, 2b_j\}$ be a union of interlaced chains with such that - $A_{i'} > A_i$ if $i' > i$, and $B_{j'} > B_j$ if $j' > j$; and\ - $2a_p - 1 = 1$ We construct two new interlaced chains with one extra coordinate as follows. 
(When $q=0$, we adopt CASE I only.)\ [**CASE I:**]{} If $2A_p -1 > 2B_q + 1$, then the two new interlaced chains are $$\begin{aligned} && && && && \{2B_q&& && \dots && && 2b_q\} && && \dots \\ \{{\bf 2A_p+1} && 2A_p-1&& && \dots && && 2a_p-1\} && && \dots\end{aligned}$$ and $$\begin{aligned} && \{{\bf 2A_p-2}\} && && \{2B_q&& && \dots && && 2b_q\} && && \dots \\ \{2A_p-1&& \dots && \dots && && 2a_p-1\} && && \dots\end{aligned}$$ [**CASE II:**]{} If $2A_p-1 = 2B_q+1$, then the two new interlaced chains are $$\begin{aligned} && && \{2B_q&& && \dots && && 2b_q\} && && \dots \\ \{{\bf 2A_p+1} && 2A_p-1&& && \dots && && 2a_p-1\} && && \dots\end{aligned}$$ and $$\begin{aligned} \{{\bf 2B_q+2} && 2B_q&& && \dots && && 2b_q\} && && \dots \\ & \{2A_p-1 && && \dots && && 2a_p-1\} && && \dots\end{aligned}$$ [**CASE III:**]{} If $2A_p-1 = 2B_q - 1$, then the two new interlaced chains are $$\begin{aligned} & \{2B_q && && \dots && && 2b_q\} && && \dots \\ \{{\bf 2A_p+1} && 2A_p-1&& && \dots && && 2a_p-1\} && && \dots\end{aligned}$$ and $$\begin{aligned} \{{\bf 2B_q + 2} && 2B_q&& && \dots && && 2b_q\} && && \dots \\ && && \{2A_p-1&& && \dots && && 2a_p-1\} && && \dots\end{aligned}$$ [**CASE IV:**]{} If $2A_p-1 < 2B_q - 1$, then the two new interlaced chains are $$\begin{aligned} \{2B_q&& \dots && \dots && && 2B_q\} && && \dots \\ && \{{\bf 2B_q-1}\} && && \{2A_p-1&& && \dots && && 2a_p-1\} && && \dots\end{aligned}$$ and $$\begin{aligned} \{{\bf 2B_q+2} && 2B_q&& && \dots && && 2b_p\} && && \dots \\ && && && && \{2A_p-1&& && \dots && && 2a_p-1\} && && \dots\end{aligned}$$ Suppose we begin with an interlaced chain $\{9,7,5,3,1\} \cup \{4,2\}$. Then the new interlaced chains with one extra coordinate are $$\{11,9,7,5,3,1\} \cup \{4,2\} \ \ \text{and}\ \ \{9,7,5,3,1\} \cup \{8\} \cup \{4,2\}.$$ All interlaced chains with $n \geq 2$ entries with smallest coordinate equal to $1$ can be obtained uniquely from the chain $\{3\ 1\}$ by inductively applying the above algorithm. Suppose $\bigcup_{i=0}^m \mathcal{C}_i$ be interlaced chains with largest coordinate equal to $M \in \mathcal{C}_0$. We remove a coordinate from it by the following rule: If $\mathcal{C}_i \neq \{M-1\}$ for all $i$, remove the entry $M$ from $\mathcal{C}_0$. Otherwise, remove the whole chain $\{M-1\}$ from the original interlaced chains. One can easily check from the definition of interlaced chain that the reduced chains are still interlaced, and one can recover the original chain by applying Algorithm \[interlaced\] on the reduced chain. Therefore, for all interlaced chains with smallest entry $1$, we can use the reduction mentioned in the first paragraph repeatedly to get an interlaced chain with only $2$ entries, which must be of the form $\{3\ 1\}$, and repeated applications of Algorithm \[interlaced\] on $\{3\ 1\}$ will retrieve the original interlaced chains (along with other chains). In other words, all interlaced chains with smallest entry $1$ can be obtained by Algorithm \[interlaced\] inductively on $\{3\ 1\}$. We are left to show that all interlaced chains are uniquely constructed using the algorithm – Suppose on the contrary that there are two different interlaced chains that give rise to the same $\bigcup_{i=0}^m \mathcal{C}_i$ after applying Algorithm \[interlaced\]. By the algorithm, these two chains must be obtained from $\bigcup_{i=0}^m \mathcal{C}_i$ by removing its largest odd entry $M_o \in \mathcal{C}_p$ or largest even entry $M_e \in \mathcal{C}_q$. 
So they must be equal to $$\bigcup_{i \neq p} \mathcal{C}_i \cup (\mathcal{C}_p \backslash \{M_o\})\ \ \ \text{and}\ \ \ \bigcup_{i \neq q} \mathcal{C}_i \cup (\mathcal{C}_q \backslash \{M_e\})$$ respectively. Assume $M_o > M_e$ for now (and the proof for $M_e > M_o$ is similar). By applying Algorithm \[interlaced\] to $\bigcup_{i \neq q} \mathcal{C}_i \cup (\mathcal{C}_q \backslash \{M_e\})$, we obtain two interlaced chains $$\bigcup_{i \neq p,q} \mathcal{C}_i \cup \mathcal{C}_p' \cup (\mathcal{C}_q \backslash \{M_e\})\ \ \ \text{and}\ \ \ \bigcup_{i \neq q} \mathcal{C}_i \cup (\mathcal{C}_q \backslash \{M_e\}) \cup \{M_o -1\},$$ where $\mathcal{C}_p' := \{M_o +2, \overbrace{M_o, \dots, m_o}^{\mathcal{C}_p}\}$. Note that none of the above gives rise to the interlaced chains $\bigcup_{i=0}^m \mathcal{C}_i$: Even in the case when $M_o -1 = M_e$, $(\mathcal{C}_q \backslash \{M_e\}) \cup \{M_o -1\}$ and $\mathcal{C}_q$ are different – although they have the same coordinates, the first consists of two chains while the second consists of one chain only. So we have a contradiction, and the result follows. \[cor-number\] The number of interlaced chains with $n$ coordinates and the smallest coordinate equal to $1$ is equal to $2^{n-2}$. Since the scattered representations of $SL(n+1)$ are in one-to-one correspondence with interlaced chains with $n+1$ coordinates having smallest coordinate $1$, this corollary implies that the number of scattered representations of Type $A_n$ is equal to $2^{n-1}$. This verifies Conjecture 5.2 of [@D2]. Moreover, by using `atlas`, the spin-lowest $K$-types for all scattered representations of $SL(n)$ with $n \leq 6$ are given in Tables 1–3 of [@D2]. One can easily check that the results there match with our $V_{\tau}$ in Algorithm \[alg:spinlkt\]. \[exam-scattered-small-rank\] Let us start from $SL(2, {{\mathbb C}})$ and the chain $\{3\quad 1\}$. This chain corresponds to the trivial representation. Now we consider $SL(3, {{\mathbb C}})$. By Algorithm \[interlaced\], the chain $\{3\quad 1\}$ for $SL(2)$ produces two chains $$\{5\quad 3 \quad 1\} \qquad\qquad\qquad\qquad \begin{aligned} \{ 3 \ \ \ &\ \ \ \ 1\} \\ \{ &2 \} \end{aligned}.$$ The first corresponds to the trivial representation, while the second gives the representation with $\lambda=[\frac{1}{2}, \frac{1}{2}]$ and $s = \begin{pmatrix}1 & 2 & 3 \\ 3 & 2 & 1 \end{pmatrix}$. One computes by Algorithm \[alg:spinlkt\] that the spin-lowest $K$-type $\tau=[1, 1]$. Now let us consider $SL(4)$. By Algorithm \[interlaced\], the chain $\{5\quad 3\quad 1\}$ for $SL(3)$ produces two chains $$\{7\quad 5\quad 3 \quad 1\} \ \qquad\qquad\qquad\qquad \begin{aligned} \{5 \ \ \ &\ \ \ \ 3 \ \ \ \ \ \ \ 1\} \\ \{ &4 \} \end{aligned}.$$ The first chain corresponds to the trivial representation, while the second one gives the representation with $\lambda=[\frac{1}{2}, \frac{1}{2}, 1]$ and $s = \begin{pmatrix}1 & 2 & 3 & 4 \\ 4 & 2 & 3 & 1 \end{pmatrix}$. One computes by Algorithm \[alg:spinlkt\] that the spin-lowest $K$-type $\tau=[2, 0, 1]$. The other chain of $SL(3)$ produces $$\begin{aligned} \{5 \ \ \ \ \ \ \ 3 \ \ \ &\ \ \ \ 1\} &\ \qquad \{&3 \ \ \ \ \ \ \ 1\}\\ \{ &2 \} &\ \qquad \{4 \ \ \ \ &\ \ \ 2\}\end{aligned}$$ One computes that $\lambda=[1, \frac{1}{2}, \frac{1}{2}]$, $s = \begin{pmatrix}1 & 2 & 3 & 4 \\ 4 & 2 & 3 & 1 \end{pmatrix}$, $\tau=[1, 0, 2]$; and that $\lambda=[\frac{1}{2}, \frac{1}{2}, \frac{1}{2}]$, $s = \begin{pmatrix}1 & 2 & 3 & 4 \\ 3 & 4 & 1 & 2 \end{pmatrix}$, $\tau=[1, 1, 1]$, respectively.
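Algorithm \[interlaced\] and Corollary \[cor-number\] can also be checked mechanically for small $n$. The Python sketch below is only an illustration under our own encoding (a chain is a decreasing list of consecutive same-parity integers; the helper names are ours): it applies the four cases above starting from $\{3\ 1\}$, verifies that the $2^{n-2}$ configurations obtained for each $n$ are pairwise distinct, and for $n=4$ reproduces the four chains of Example \[exam-scattered-small-rank\].

```python
def children(chains):
    """The two unions of interlaced chains produced by Algorithm [interlaced]."""
    odd  = [c for c in chains if c[0] % 2 == 1]
    even = [c for c in chains if c[0] % 2 == 0]
    Mo = max(c[0] for c in odd)                        # largest odd entry, 2A_p - 1
    Me = max(c[0] for c in even) if even else None     # largest even entry, 2B_q
    co = next(c for c in chains if c[0] == Mo)
    ce = next((c for c in chains if c[0] == Me), None)

    def grow(target, new_top):                         # prepend new_top to one chain
        return [([new_top] + c) if c is target else list(c) for c in chains]

    def add_singleton(v):                              # adjoin a new chain {v}
        return [list(c) for c in chains] + [[v]]

    if Me is None or Mo > Me + 1:                      # CASE I (also covers q = 0)
        return [grow(co, Mo + 2), add_singleton(Mo - 2)]
    if abs(Mo - Me) == 1:                              # CASES II and III
        return [grow(co, Mo + 2), grow(ce, Me + 2)]
    return [add_singleton(Me - 1), grow(ce, Me + 2)]   # CASE IV: Mo < Me - 1

def interlaced_chains(n):
    """All interlaced chains with n coordinates and smallest coordinate 1."""
    level = [[[3, 1]]]                                 # the unique configuration for n = 2
    for _ in range(n - 2):
        level = [kid for cfg in level for kid in children(cfg)]
    return level

for n in range(2, 10):
    cfgs = interlaced_chains(n)
    distinct = {frozenset(tuple(c) for c in cfg) for cfg in cfgs}
    assert len(distinct) == 2 ** (n - 2)               # Corollary [cor-number]
print(interlaced_chains(4))    # the four SL(4) chains of the example above
```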
These four representations (and their spin-lowest $K$-types) match precisely with Table 1 of [@D2]. <span style="font-variant:small-caps;">Funding</span> Dong is supported by the National Natural Science Foundation of China (grant 11571097) and Shanghai Gaofeng Project for University Academic Development Program. Wong is supported by the National Natural Science Foundation of China (grant 11901491) and the Presidential Fund of CUHK(SZ). [PRV]{} D. Barbasch, C.-P. Dong, K.D. Wong, [*A multiplicity one theorem for Dirac cohomology of unitary $(\mathfrak{g},K)$-modules*]{}, preprint. D. Barbasch, P. Pandžić, [*Dirac cohomology and unipotent representations of complex groups*]{}, Noncommutative geometry and global analysis, Contemp. Math. **546**, Amer. Math. Soc., Providence, RI, 2011, pp. 1–22. J. Ding, C.-P. Dong, *Unitary representations with Dirac cohomology: a finiteness result*, preprint 2017 (arXiv:1702.01876). C.-P. Dong, *On the Dirac cohomology of complex Lie group representations*, Transform. Groups **18** (1) (2013), 61–79. Erratum: Transform. Groups **18** (2) (2013), 595–597. C.-P. Dong, *Unitary representations with non-zero Dirac cohomology for complex $E_6$*, Forum Math. [**31**]{} (1) (2019), 69–82. C.-P. Dong, J.-S. Huang, *Jacquet modules and Dirac cohomology*, Adv. Math. **226** (4) (2011), 2911–2934. R. Goodman, N. Wallach, *Symmetry, representations, and invariants*, Graduate Texts in Mathematics, vol. **255**. Springer, Dordrecht, 2009. J.-S. Huang, Y.-F. Kang, P. Pandžić, *Dirac cohomology of some Harish-Chandra modules*, Transform. Groups. **14** (2009), no.1, 163–173. J.-S. Huang, P. Pandžić, [*Dirac cohomology, unitary representations and a proof of a conjecture of Vogan*]{}, J. Amer. Math. Soc. **15** (2002), 185–202. J.-S. Huang, P. Pandžić, [*Dirac Operators in Representation Theory*]{}, Mathematics: Theory and Applications, Birkhauser, 2006. B. Kostant, *Dirac cohomology for the cubic Dirac operator*, Studies in Memory of Issai Schur, Progress in Mathematics, Vol. **210**, Birkhäuser, Boston, 2003, pp. 69–93. R. Parthasarathy, *Dirac operators and the discrete series*, Ann. of Math. **96** (1972), 1–30. R. Parthasarathy, *Criteria for the unitarizability of some highest weight modules*, Proc. Indian Acad. Sci. **89** (1) (1980), 1–24. R. Parthasarathy, R. Ranga Rao, S. Varadarajan, *Representations of complex semi-simple Lie groups and Lie algebras*, Ann. of Math. **85** (1967), 383–429. S. Salamanca-Riba, D. Vogan, *On the classification of unitary representations of reductive Lie groups*, Ann. of Math. **148** (3) (1998), 1067–1133. D. Vogan, [*The unitary dual of $GL(n)$ over an archimedean field*]{}, Invent. Math. **83** (1986), 449–505. D. Vogan, [*Dirac operator and unitary representations*]{}, 3 talks at MIT Lie groups seminar, Fall 1997. D. P. Zhelobenko, *Harmonic analysis on complex semisimple Lie groups*, Mir, Moscow, 1974.
--- abstract: 'We present a numerical study of a continuum plasticity field coupled to a Ginzburg-Landau model for superfluidity. The results suggest that a supersolid fraction may appear as a long-lived transient during the time evolution of the plasticity field at higher temperatures where both dislocation climb and glide are allowed. Supersolidity, however, vanishes with annealing. As the temperature is decreased, dislocation climb is arrested and any residual supersolidity due to incomplete annealing remains frozen. Our results provide a resolution of many perplexing issues concerning a variety of experiments on solid $^4$He.' author: - Debabrata Sinha - Surajit Sengupta - Chandan Dasgupta - 'Oriol T. Valls' title: Annealing of supersolidity in plastically deformed solid $^4$He --- The first reports of non-classical rotational inertia (NCRI) in solid ${\rm ^4He}$, [@sup-kim-1; @sup-kim] interpreted as evidence for a “supersolid” phase, triggered an avalanche of additional work [@sup-others] which largely confirmed the existence of an anomalous NCRI signal in the form of a sharp drop in the period of a torsional oscillator (TO) filled with solid $^4$He. However, the nature and origin of the TO anomalies have been very much in dispute, especially after the experimental observation [@elastic] of an increase in the shear modulus of solid $^4$He at the onset of the NCRI. The period anomalies can be due to the appearance of a new supersolid phase (producing an NCRI fraction, NCRIF, which decouples from the TO), to changes in the elastic properties, or to a combination of both. It is not at all easy to disentangle these two effects. At one extreme it has been suggested that the phenomena involved are largely or exclusively elastic anomalies [@no-ncrif]. On the other hand, experimental evidence for the mass flow equivalent of a fountain effect [@mass-flux], which is in principle the gold standard for the existence of a superfluid field, indicates the presence of superfluidity. Recent experiments [@rotation] on the effects of dc rotation on the observed NCRIF also suggest the occurrence of superfluidity in solid $^4$He. In our view, the above alternatives may be a false dichotomy. The apparent close connection between the elastic properties of the solid and those of the putative superfluid field may be understood if one assumes that extended crystal defects, such as dislocations and grain boundaries, play a crucial role in the observed phenomena. A prominent role for crystal defects [@sample; @anneal] is indicated by several experimental and theoretical results. The NCRIF reported varies dramatically from sample to sample [@sample], and can decrease substantially on annealing[@anneal]. Also, quantum Monte Carlo [@gb-super-qmc; @core] calculations have shown that superfluidity can occur along dislocation cores and near grain boundaries. As posited in Ref. , the apparent contradictions may be explained by assuming that the motion of dislocations, known to affect the elastic properties of the solid, also has a strong effect on the occurrence of superfluidity along their cores. The reported [@mass-flux] mass flow through solid $^4$He has been attributed to quantum superclimb arising from the flow of atoms though superfluid dislocation cores [@superclimb]. 
In this Letter we present the results of a calculation that points to a reasonable resolution of these dilemmas by modeling the dynamics of a large number of dislocations using a recent formulation of continuum plasticity theory [@acharya; @Lim-1; @Lim-2; @Lim-GB; @Lim-CW; @Lim-thesis] coupled to a complex scalar field $\psi$ [@Dorsey; @Toner; @Dasbiswas] that describes superfluid order. Our argument is as follows: It is known [@Lim-2] that an initial smooth distribution of dislocations spontaneously coarsens into defect-free regions interspersed with shock-like structures of high dislocation density or “cell walls”. Internal stresses at such cell walls, which may be large initially, eventually anneal out producing stress-free grain boundaries (representing discontinuities only in crystal orientation) at late times [@Lim-GB; @Lim-CW]. If $\psi$ couples only to volumetric stress $\sigma_{ii}$ (the trace of the stress tensor), as in [@Toner], transient superfluidity may exist at cell walls with large $\sigma_{ii}$ as long as dislocation climb is allowed. Once cell walls evolve into symmetric grain boundaries, this superfluidity vanishes, in agreement with quantum Monte Carlo results [@gb-super-qmc]. If climb is arrested, the plastic current is volume preserving [@Lim-GB; @Lim-CW] and $\psi$, if initially absent, cannot form. Similarly, annealing of an initially non-zero $\psi$ is severely constrained without dislocation climb. This suggests the following scenario for the occurrence of NCRI: At high temperatures, dislocation climb, ensured through mass transport through dislocation cores [@superclimb], results in the formation of symmetric grain boundaries and vanishing of the supersolid fraction at long times. As the temperature is reduced and/or pinning of dislocations by impurities becomes effective, climb is suppressed and the resulting long-lived residual supersolidity at cell walls contributes to the NCRI. The Ginzburg-Landau free-energy for the complex superfluid order-parameter $\psi({\bf r})$ coupled to the elastic displacement ${\bf u}_i$ of the solid is [@Dorsey; @Toner], $$\begin{aligned} \mathcal{H} = \int d {\bf r} \left[ c_0\, \vert \nabla \psi \vert^2 + \frac{a_{0}}{2}|\psi|^{2}+\frac{d_{0}}{4}|\psi|^{4} \right] + \mathcal{H}_{int} \label{LG}\end{aligned}$$ The parameters $c_{0}$, $a_{0}$ and $d_{0}$ are all temperature dependent and the interaction energy is $$\begin{aligned} \mathcal{H}_{int}=\frac{1}{2}g_{ij}\int d {\bf r} u_{ij}|\psi|^{2}, \label{int}\end{aligned}$$ where $u_{ij}=\frac{1}{2} (\partial_{i} u_{j}+\partial_{j} u_{i})$ and $g_{ij}$ couples the elastic degrees of freedom to the magnitude of $\psi$. To the extent that NCRI is caused by supersolidity of $^4$He, the norm $|\psi|^2$ of the order parameter is expected to be related to the NCRIF measured in experiments. As a consequence of the coupling in (\[int\]), the solid experiences an “external” stress due to $\psi$, $$\begin{aligned} \sigma^{ex}_{ij}=\frac{\delta \mathcal{H}_{int}}{\delta u_{ij}}=\frac{1}{2}g_{ij}\vert \psi\vert ^{2} \label{super-sig}\end{aligned}$$ Dislocations within the solid respond to both $\sigma^{ex}_{ij}$ and to the stress due to the presence of other dislocations. To address the problem of defect dynamics in a solid, we follow the recent work of Acharya [@acharya] and that of Limkumnerd and Sethna [@Lim-1; @Lim-2; @Lim-GB; @Lim-CW; @Lim-thesis].
It is convenient to introduce the plastic deformation field [@acharya] $\beta_{ij}^p$, such that the total deformation gradient $\partial_j u_i = \beta_{ij}^e + \beta_{ij}^p$ is a sum of elastic and plastic parts [@Lim-2]. The dislocation density tensor is $\rho_{ij} ({\bf r}) = \sum_{\gamma} t_i^{\gamma} b_j^{\gamma} \delta({\bf r} - {\bf r}^{\gamma}) = -\varepsilon_{ilm} \partial \beta^p_{mj}/\partial r_l$, where the vectors ${\bf t}^{\gamma}$ and ${\bf b}^{\gamma}$ are the tangent and Burgers vectors respectively of a dislocation line at ${\bf r}^{\gamma}$ and $\varepsilon_{ilm}$ is the antisymmetric tensor. Dynamical equations for $\beta_{ij}^p$ can now be derived after writing the current $\dot \beta_{ij}^p = \sum_{\gamma} J_{ij}^{\gamma} = \sum_{\gamma} {\varepsilon}_{ilm} t_l^{\gamma} b_j^{\gamma} v_m^{\gamma} \delta({\bf r} - {\bf r}^{\gamma}) $ as a sum over single dislocation contributions, with the velocity of a single dislocation line $v_i = D f^{PK}_i = - D {\varepsilon}_{ijk} t_j b_l \sigma_{kl}$ proportional to the [*total*]{} stress $\sigma_{ij}= \sigma^d_{ij} + \sigma^{ex}_{ij}$ consisting of separate contributions from other dislocations ($\sigma^d_{ij}$) and from $\psi$; $D^{-1}$ is a material dependent time scale over which plasticity anneals. Apart from being driven in the direction of ${\bf b}$ by the local stress (glide), dislocations may also [*climb*]{}, i.e. move in the perpendicular direction in response to the local flux of point defects. Total volume is preserved by glide, but inclusion of climb removes this constraint. The difference between glide and climb motion was incorporated phenomenologically in Ref.  by writing the total flux as a sum of two terms, $$\begin{aligned} J_{ij}^{\gamma} &=& D [\epsilon_{ilm}t_{l}^{\gamma}b_{j}^{\gamma}\epsilon_{mpq}\sigma_{pr}t_{q}^{\gamma}b_{r}^{\gamma}\delta({\bf r} - {\bf r}^{\gamma}) \nonumber\\ & & -\frac{\lambda}{3}\delta_{ij}\epsilon_{klm}t_{l}^{\gamma}b_{k}^{\gamma}\epsilon_{mpq}\sigma_{pr}t_{q}^{\gamma}b_{r}^{\gamma}\delta ({\bf r} - {\bf r}^{\gamma})] \label{current}\end{aligned}$$ For $\lambda=1$, $J_{ij}$ is traceless, i.e. only glide motion is possible, and for $\lambda=0$ glide and climb are equally probable [@acharya]. Next, coarse graining the current over all dislocations, within a mean-field approximation, one obtains $$\begin{aligned} \partial_{t}\beta^{p}_{ij} & = & \frac{D}{2} \Big[(\sigma_{ic}\rho_{ac}-\sigma_{ac}\rho_{ic})\rho_{aj} - \nonumber \\ & & \frac{\lambda}{3}\delta_{ij}(\sigma_{kc}\rho_{ac}-\sigma_{ac}\rho_{kc})\rho_{ak}\Big] \label{plas-dyn}\end{aligned}$$ which is identical to that obtained in [@Lim-2] except for a redefinition of $\sigma_{ij}$. The stress due to dislocations is $\sigma^{d}_{ij}=-\bar{C}_{ijkm}\beta^{p}_{km}$ where, for an isotropic solid, $$\begin{aligned} \bar{C}_{ijkm} & = & \mu (\delta_{ik} \delta_{jm}+ \delta_{im} \delta_{jk}+\frac{2\nu}{1-\nu} \delta_{ij} \delta_{km}) \end{aligned}$$ and $\mu$ and $\nu$ are the shear modulus and Poisson’s ratio respectively. In Ref.  it was found that an equation similar to (\[plas-dyn\]) spontaneously leads to dislocation pile-ups with associated stress jumps [@Lim-CW], known as cell walls, at finite time. At later times, cell walls evolved to symmetric stress-free grain boundaries [@Lim-GB] by attracting more and more dislocations to themselves, as in the formation of finite time shocks in Burgers turbulence [@burgulence]. In principle, Eq.
(\[plas-dyn\]) together with a dynamical equation for $\psi$ such as the time dependent Ginzburg-Landau (TDGL) equation $\partial_{t}\psi=-\Gamma \delta\mathcal{H}/\delta \psi$, is sufficient to describe the dynamics of a supersolid with plastic deformation. The parameter $\Gamma$ sets the time scale for the evolution of $\psi$. To make explicit calculations, we now introduce two simplifications. Firstly, since stress $\sigma_{ij} = \partial {\mathcal H}/\partial u_{ij}$ relaxes much faster than the dislocation configuration, we may assume that for all times, the divergence of the total stress vanishes, i.e., $\partial_{j}\sigma_{ij}=0$ and the solid is in [*mechanical equilibrium*]{}. Secondly, we report calculations only for situations where all the fields, $\beta^p_{ij}, \psi$ etc. are functions only of [*one*]{} dimension, $z$, describing [*flat*]{} cell walls and grain boundaries [@Lim-CW; @Lim-GB]. Using $\sigma^{ex}_{ij}$ and $\sigma^d_{ij}$ in the mechanical equilibrium condition, we obtain, in one dimension, $$\begin{aligned} \frac{\partial u_z}{\partial z} & = &-\frac{\nu}{1-\nu}\beta^{p}_{kk} - \frac{(1-2\nu)}{4\mu(1-\nu)}g|\psi|^{2}, \label{diver}\end{aligned}$$ where $\beta^p_{kk} \equiv \beta^p_{11}+\beta^p_{22}$. We have simplified ${\mathcal H}_{int}$ by assuming that $\psi$ couples only to the volumetric stress $\sigma_{ii} = [2 \mu (1+\nu)/3 (1- 2 \nu)] \partial_z u_z$. A similar assumption was used by Toner [@Toner] with the sign of $g$ chosen such that compressive stresses lead to superfluidity. The Hamiltonian is now given by $$\begin{aligned} \mathcal{H} & = & \mathcal{H}_{LG}+\mathcal{H}_{int}=\int dz\,\, c_0 \left(\frac{\partial \psi}{\partial z}\right)^{2}+\frac{a}{2}|\psi|^{2}+\frac{d}{4}|\psi|^{4}, \nonumber \\ \label{total}\end{aligned}$$ with renormalized parameters as a consequence of Eq. (\[diver\]), viz., $a = a_{0}-g[\nu/(1-\nu)]\beta^{p}_{kk}$ and $d = d_{0}-g^{2}[(1-2\nu)/\mu(1-\nu)]$. The constants $a_0$ and $d_0$ are both $>0$, so that the equilibrium bulk solid does not show superfluidity at any temperature. The time evolution equation for $\beta^{p}_{ij}$ ($i,j = 1,2$) and $\beta^p_{33}$ including superfluidity is given by, \[supersolid\] $$\begin{aligned} -\partial_{t}\beta^{p}_{ij} & = \frac{1}{2}(\partial_{z}\mathcal{E})\partial_{z}(\beta^{p}_{ij}-\frac{\lambda}{3}\beta^{p}_{kk}\delta_{ij}) -\frac{1}{4}g|\psi|^{2}\partial_{z}\beta^{p}_{ij}\partial_{z}\beta^{p}_{kk} \nonumber \\ &+ \frac{\lambda}{12}g|\psi|^{2}(\partial_{z}\beta^{p}_{kk})^{2}\delta_{ij}, \\ -\partial_t\beta^p_{33} & = - \frac{\lambda}{6}\partial_{z}\mathcal{E}\partial_{z}\beta^{p}_{kk} + \frac{\lambda}{12} g|\psi|^{2}(\partial_{z}\beta^{p}_{kk})^{2} \end{aligned}$$ with $\mathcal{E}=-\sigma^{d}_{ij}\beta^{p}_{ij}$ being the elastic energy density from dislocations. For $\lambda = 0$, $\beta^p_{33}$ does not evolve and for $\lambda = 1$, the evolution is constrained such that the trace $\beta^p_{11} + \beta^p_{22} + \beta^p_{33}$ vanishes at all times. We choose $c_0$ and $D^{-1}$ as our units for length and time respectively; the energy scale is set by $k_B T$. We shall first consider the case with $\lambda=0$. The components of $\beta^{p}_{ij}$ (see Eq. (\[supersolid\])) are all coupled through the Peach-Köhler force density (PKF) $\mathcal{F}^{PK}=\partial \mathcal{E}/\partial z$. It is instructive, therefore, to obtain the time evolution of $\mathcal{F}^{PK}$. For $\lambda=0$ we obtain from Eq. (\[supersolid\]).
$$\partial_{t}\mathcal{F}^{PK}+\mathcal{F}^{PK}\partial_{z}\mathcal{F}^{PK}- \frac{1}{4}g\partial_{z}(|\psi|^{2}\mathcal{F}^{PK}\partial_{z}\beta^{p}_{kk}) =0. \label{pk force}$$ For $g=0$ Eq. (\[pk force\]) becomes the Burgers equation [@burgulence]. We have solved the coupled Eqs. (\[supersolid\]) and (\[pk force\]) together with the TDGL equation for $\psi$, numerically using an accurate fourth-order Runge-Kutta scheme [@NR] with time step $\Delta t = 0.05$ and spatial discretization $\Delta z = 1$. The initial input for $\beta^{p}_{ij}$ is random, drawn from a Gaussian distribution with mean zero and width $0.15$. The equations are regularized by adding a diffusive term $\alpha\, \partial_z^2 \beta^p_{ij}$ with a small initial value of $\alpha$ which is subsequently reduced further at later times. The rest of the parameters, $\mu = 1$, $\nu = 0.49$, $a_0 = 0.01$, $g = 0.5$ and $d = 1$, are chosen to represent a generic solid above the bulk superfluid transition. Finally, for the dynamical parameters we use $\Gamma = D$, which represents a scenario where $|\psi|^2$ relaxes together with the plasticity and the associated stress. In Fig. \[fig:f-and-beta\](a) and (b) we plot, respectively, the PKF and the two diagonal components of $\beta^{p}_{ij}$. Both the PKF and the $\beta^p_{ij}$ have two discontinuities at $z = 180$ and $z \approx 300$ which are inherited from the Burgers-like terms in Eq. (\[supersolid\]) modified by the presence of $\psi$. Since the dislocation density $\rho_{ij}$ is given by $z$ derivatives of $\beta_{ij}^p$, these regions of $\beta^p_{ij}$ are also regions of large dislocation content signifying the presence of dislocation pile-ups or cell walls [@Lim-CW]. While the first of these cell walls is, coincidentally, almost free of compressive stresses, the second has a prominent stress jump which reduces $a$ in Eq. (\[total\]) locally. The location and nature of the discontinuities depend, of course, on the realization of the random initial condition used. In Fig. \[fig:time-evolution\](a) and (b) we show, respectively, the time evolution of $\beta_{kk}^p$ and of $|\psi|^2$ at these cell walls. Initially, there are small pockets of superfluidity at random values of $z$, where the local stress is suitably compressive so that $a < 0$. As a consequence of the plasticity dynamics, dislocations bunch together producing cell walls after some time $t \sim t_0$. Supersolidity is associated [*only with those cell walls which have a positive stress jump*]{}. While this condition is satisfied at the second cell wall, the first one remains free of $\psi$. Subsequently, the stress decays as $\sim 1/\sqrt{t - t_0}$ [@burgulence; @Lim-CW] driving the local $a > 0$, causing $\psi$ to vanish after some time. If $\psi$ couples only to the volumetric stress as in our case (and in Ref. ), then without dislocation climb, we do not find supersolid behaviour. This is shown in Fig. \[fig:couplings\](a) where we have plotted $|\psi|^2$ for different values of $\lambda$. For $\lambda = 1$, when climb motion is completely suppressed (a reasonable limit for solids at low temperatures far from melting), supersolidity never appears since $J_{ij}$ is traceless (see Eq. (\[current\])) and compressive stresses are not spontaneously produced. On the other hand, any pre-existing compressive stress relaxes extremely slowly due to the constraints in Eq. (\[supersolid\]). This suggests a strong link between suppression of dislocation climb and supersolidity [@superclimb].
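For readers who wish to experiment with this kind of calculation, the sketch below illustrates one possible Python implementation of the $\lambda = 0$ integration on a periodic one-dimensional grid. It is a simplified illustration rather than the code used for the figures: only the diagonal components $\beta^p_{11}$ and $\beta^p_{22}$ are evolved (with $\beta^p_{12} = \beta^p_{33} = 0$), the functional-derivative prefactors in the TDGL equation are our reading of Eq. (\[total\]), the regularization $\alpha$ is kept fixed, and the grid size, run length and seed amplitude of $\psi$ are arbitrary choices.

```python
import numpy as np

L, dz, dt = 512, 1.0, 0.05            # grid spacing and time step as quoted above
mu, nu = 1.0, 0.49                    # shear modulus and Poisson ratio
a0, dq, c0, g = 0.01, 1.0, 1.0, 0.5   # Ginzburg-Landau and coupling constants
Gamma, alpha = 1.0, 0.5               # Gamma = D in our units; alpha regularizes shocks

rng = np.random.default_rng(0)
b11 = rng.normal(0.0, 0.15, L)        # random initial plastic distortion
b22 = rng.normal(0.0, 0.15, L)
psi = 1e-3 * (rng.normal(size=L) + 1j * rng.normal(size=L))   # small seed for psi

def d1(f):                            # centred first derivative, periodic grid
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dz)

def d2(f):                            # second derivative, periodic grid
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dz**2

def rhs(b11, b22, psi):
    bkk = b11 + b22                                         # beta^p_kk (beta^p_33 = 0 here)
    s11 = -(2 * mu * b11 + 2 * mu * nu / (1 - nu) * bkk)    # sigma^d from the Cbar tensor
    s22 = -(2 * mu * b22 + 2 * mu * nu / (1 - nu) * bkk)
    F = d1(-(s11 * b11 + s22 * b22))                        # Peach-Koehler force density
    amp2 = np.abs(psi) ** 2
    db11 = -0.5 * F * d1(b11) + 0.25 * g * amp2 * d1(b11) * d1(bkk) + alpha * d2(b11)
    db22 = -0.5 * F * d1(b22) + 0.25 * g * amp2 * d1(b22) * d1(bkk) + alpha * d2(b22)
    a = a0 - g * nu / (1 - nu) * bkk                        # stress-renormalized coefficient
    dpsi = Gamma * (c0 * d2(psi) - 0.5 * a * psi - 0.5 * dq * amp2 * psi)
    return db11, db22, dpsi

def rk4(fields):                      # one classical fourth-order Runge-Kutta step
    k1 = rhs(*fields)
    k2 = rhs(*(f + 0.5 * dt * k for f, k in zip(fields, k1)))
    k3 = rhs(*(f + 0.5 * dt * k for f, k in zip(fields, k2)))
    k4 = rhs(*(f + dt * k for f, k in zip(fields, k3)))
    return tuple(f + dt / 6.0 * (p + 2 * q + 2 * r + s)
                 for f, p, q, r, s in zip(fields, k1, k2, k3, k4))

fields = (b11, b22, psi)
for step in range(2000):
    fields = rk4(fields)
print("spatially averaged |psi|^2:", np.mean(np.abs(fields[2]) ** 2))
```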
In Fig. \[fig:couplings\](b) we illustrate this link between climb and supersolidity. We show the evolution of the spatially averaged $\overline{|\psi|^2}$ as a function of time following two protocols, A and B. Protocol A is identical to that shown in Fig. \[fig:time-evolution\] with $\lambda = 0$. In protocol B, on the other hand, we increase $\lambda$ to unity after simulating with $\lambda = 0$ up to $t=12.5$, mimicking a decrease in the rate of climb. Fig. \[fig:couplings\](b) shows that, in this case, the annealing of $\psi$ is arrested and cell walls with large $|\psi|^2$ may persist at low temperatures up to experimentally observable times. Finally, the experimental time over which NCRIF would become unobservable in solid $^4$He does, of course, depend on the parameters $\Gamma$ and $D$. If $\Gamma \geq D$ and the plasticity relaxation time $D^{-1}$ is large, NCRI phenomena will be ubiquitous but may depend sensitively on sample preparation history [@sample; @anneal]. Further, $D$ is, in general, also dependent on the dislocation density $\rho_{ij}$ (work hardening) and pinning by $^3$He impurities [@pinning] at low temperatures, both of which will act to arrest the annealing of $\psi$ and strengthen our conclusions. However, inclusion of such effects may require going beyond the version of continuum plasticity used here [@Lim-2; @Lim-thesis]. On the other hand if $\Gamma \ll D$, NCRI phenomena may become altogether unobservable since the required stress relaxes before a non-vanishing $\psi$ can develop. The results obtained from our coarse-grained description of superfluidity and plasticity are consistent with those of earlier microscopic studies [@superclimb; @gb-super-qmc] using quantum Monte Carlo methods. In Ref. , it was found that grain boundaries separating crystallites that differ from each other by a simple rotation do not support superfluidity. Other, more generic grain boundaries were found to exhibit superfluidity in regions of a few lattice spacings in width. Such microscopic regions of superfluidity will not appear in our coarse-grained description. Supersolid behavior may, of course, arise from a system-spanning network of superfluid channels [@Dorsey; @Toner; @shev] of microscopic width (e.g. cores of dislocation lines [@core] and microscopic regions near grain boundaries). A description of this behavior is beyond the scope of our coarse-grained analysis. The importance of dislocation climb in supersolidity, found in the present work, has been emphasized earlier in the microscopic study of Ref. . Our work shows that supersolidity should be observable in solid $^4$He when certain thermodynamic, mechanical and dynamical conditions are satisfied. We find that the degree of supersolid behavior depends crucially on the details of the annealing process. This necessarily implies large sample-to-sample variation, as observed in experiments [@sample]. Our observation that supersolidity vanishes in the long-time limit if the plasticity field evolves according to its natural dynamics (including dislocation climb) provides a reasonable explanation of the experimental observation [@elastic] that the shear modulus of solid $^4$He increases as $T$ is lowered across the onset temperature of the NCRIF. The increase in the shear modulus is generally attributed [@elastic] to the pinning of dislocation lines at impurities such as $^3$He atoms [@pinning]. Pinning should suppress climb motion, thereby preventing the annealing out of the supersolid field.
So, the onset of supersolid behavior and pinning-induced enhancement of the shear modulus should coincide in this picture. Our results on the importance of climb motion in the initial development of the supersolid order parameter provide an explanation of experimental results indicating that mass flux, dislocation climb and NCRI are coupled together in solid $^4$He [@mass-flux; @core; @superclimb]. Extensions of this work to investigate the role of $^3$He impurities [@pinning] and to obtain the elasto-plastic response of a supersolid with dislocations to external stress and the glass transition [@super-glass] in this system are in progress. The authors acknowledge useful discussions with Sriram Ramaswamy. Financial assistance from a Indo-US Science and Technology Forum grant is gratefully acknowledged. [00]{} E.Kim and M.H.W. Chan, Nature (London) [**427**]{}, 225 (2004). E. Kim and M. H. W. Chan. Science [**305**]{}, 1941 (2004). A. S. C. Rittner and J. D. Reppy, [**97**]{}, 165301 (2006); Y. Aoki, J. C. Graves, and H. Kojima, [**99**]{}, 015301 (2007); M. Kondo [*et al.*]{}, J. Low Temp. Phys. [**148**]{}, 695 (2007); S. Sasaki [*et al.*]{}, Science [**313**]{}, 1098 (2006); S. Sasaki [*et al.*]{} J. Low Temp. Phys. [**148**]{}, 665 (2007). J. Day and J. Beamish, Nature (London) [**450**]{}, 853 (2007); J. T.West, O. Syshchenko, J. Beamish, and M. H.W. Chan, Nature Phys. [**5**]{}, 598 (2009). E. Varoquaux. , 064524 (2012); J. D. Reppy, [**104**]{}, 255301 (2010); H. Maris [**86**]{}, 020502 (2012); D. Kim and M. Chan, [**109**]{}, 155301 (2012). M. W. Ray and R. B. Hallock, , [**100**]{}, 23530 (2008); [*ibid.*]{} [**105**]{}, 1453018, (2010); M. W. Ray and R. B. Hallock, J. Low Temp. Phys. [**158**]{}, 560 (2010); [*ibid.*]{} [**162**]{}, 421 (2011). H. Choi [*et al.*]{}, Science [**330**]{}, 1512 (2010); H. Choi [*et al.*]{}, Phys. Rev. Lett. [**108**]{}, 105302 (2012). A. C. Clark, J. T. West, and M. H. W. Chan, [**99**]{}, 135302 (2007). A.S.C. Rittner and J. Reppy, Ref. . L. Pollet [*et al.*]{}, , [**98**]{}, 135301 (2007). M. Boninsegni [*et. al*]{}, Phys. Rev. Lett. [**99**]{}, 035301 (2007). A.D Fefferman [*el al.*]{}, [**85**]{}, 094103 (2012); S. Balibar, Nature [**464**]{}, 176 (2010). S. G. Söyler [*et al.*]{}, [**103**]{}, 175301 (2009). A. Acharya, Proc. R Soc. A, [**459**]{}, 1343 (2003); A. Roy and A. Acharya, J. Mech. Phys. Solids [**53**]{}, 143 (2005); A. Acharya and A. Roy, [*ibid*]{} [**54**]{}, 1687 (2006). Adrian Cho, Science [**311**]{}, 1361 (2006). S. Limkumnerd and J. P. Sethna, Phys. Rev. Lett. [**96**]{}, 095503 (2006). S. Limkumnerd and J. P. Sethna, [**75**]{}, 224121 (2007). S. Limkumnerd and J. P. Sethna. J. Mech. Phys. Solids, [**56**]{}, 1450 (2008). S. Limkumnerd, [*Ph. D. thesis*]{}, Cornell University (2007). Alan.T. Dorsey, P. M. Goldbart, J.Toner, [**96**]{}, 055301 (2006). J. Toner, Phys. Rev. Lett. [**100**]{}, 35302 (2008). D. Goswami, K. Dasbiswas, C.-D. Yoo, Alan T.Dorsey, Phys.Rev. B [**84**]{}, 054523 (2011). U. Frisch and J. Bec. [*Burgulence, Les Houches 2000: New Trends in Turbulence*]{}, M. Lesieur, A.Yaglom and F. David, eds., pp. 341-383, (Springer EDP-Sciences, 2001) W. H. Press [*et al.*]{} [*Numerical Recipes in Fortran 77*]{}, (Second Edition, Cambridge University Press, Cambridge, 1992) E. Kim [*et al.*]{} [**100**]{}, 065301 (2008). S. I. Shevchenko, Sov. J. Low Temp. Phys. [**13**]{}, 61 (1987); [*ibid.*]{} [**14**]{}, 553 (1987). B. Hunt [*et al.*]{}, Science [**324**]{}, 632 (2009).
[**CHIRAL EFFECTIVE\ THEORIES**]{}\ 205. WE-Heraeus-Seminar\ Physikzentrum Bad Honnef, Bad Honnef, Germany\ November 30 — December 4, 1998\ [**Johan Bijnens**]{}\ Department of Theoretical Physics 2, Lund University\ Sölvegatan 14A, S22362 Lund, Sweden\ [**Ulf-G. Meißner**]{}\ [FZ Jülich, IKP(Th), D-52425 Jülich, Germany]{}\ [ABSTRACT]{} These are the proceedings of the workshop on “Chiral Effective Theories” held at the Physikzentrum Bad Honnef of the Deutsche Physikalische Gesellschaft, Bad Honnef, Germany from November 30 to December 4, 1998. The workshop concentrated on Chiral Perturbation Theory in its various settings and its relations with lattice QCD and dispersion theory. Included are a short contribution per talk and a listing of some review papers on the subject. Introduction ============ The field of Chiral Perturbation Theory is a growing area in theoretical physics. We therefore decided to organize the next topical workshop. This meeting followed the series of workshops in Ringberg (Germany), 1988, Dobogókö (Hungary), 1991, Karrebæksminde (Denmark), 1993 and Trento (Italy), 1996. All these workshops shared the same features, about 50 participants, a fairly large amount of time devoted to discussions rather than presentations and an intimate environment with lots of discussion opportunities. This meeting took place in late fall 1998 in the Physikzentrum Bad Honnef in Bad Honnef, Germany and the funding provided by the Dr. Wilhelm Heinrich Heraeus und Else Heraeus–Stiftung allowed us to provide for the local expenses for all participants and to support the travel of some participants. The WE-Heraeus foundation also provided the administrative support for the workshop in the person of the able secretary Jutta Lang. We extend our sincere gratitude to the WE-Heraeus Stiftung for this support. We would also like to thank the staff of the Physikzentrum for the excellent service given us during the workshop and last but not least the participants for making this an exciting and lively meeting. The meeting had 53 participants whose names, institutes and email addresses are listed below. 43 of them presented results in presentations of various lengths. A short description of their contents and a list of the most relevant references can be found below. As in the previous two of these workshops we felt that this was more appropriate a framework than full-fledged proceedings. Most results are or will soon be published and available on the archives so this way we can achieve speedy publication and avoid duplication of results in the archives. Below follows first the program, then the list of participants and a subjective list of review papers, lectures and other proceedings relevant to the subject of this workshop.\ Johan Bijnens and Ulf-G. Meißner The Program =========== [**Monday**]{}\ 12.30 Lunch\ 14.00 U.-G. Meißner\ E. Dreisigacker Opening and Welcome\ 14.20 G. Ecker Introduction to CHPT\ 14.35 G. Ecker The generating functional at next-to-next-to-leading\ order in CHPT\ 15.25 P. Talavera Pion form factors at $p^6$\ 15.50 Coffee\ 16.20 M. Golterman Quenched and Partially Quenched Chiral Logarithms\ 17.00 E. Pallante The Generating Functional for Hadronic Weak\ Interactions and its Quenched Approximation\ 17.40 H. Neufeld A Super-Heat-Kernel Representation for the One-Loop\ Functional of Boson-Fermion systems\ 18.10 End of Session\ 18.30 Dinner followed by a social evening\ [**Tuesday**]{}\ 9.00 S. Güsken Some Low-Energy constants from lattice QCD\ 9.40 A.
Vladikas Quark masses and the chiral condensate\ from the lattice\ 10.10 J. Donoghue Dispersive Calculation of Weak Matrix Elements\ 10.55 Coffee\ 11.25 J. Prades $\Delta S=1$ Transitions in the $1/N_c$ Expansion\ 12.05 H.-C. Kim Effective $\Delta S=1$ Chiral Lagrangian in the Chiral\ Quark Model\ 12.35 G. D’Ambrosio Vector Meson Dominance in the Delta S=1\ Weak Chiral Lagrangian\ 13.00 End of Session\ 14.20 M. Knecht Electromagnetic Corrections to Semi-Leptonic Decays\ of Pseudoscalar mesons\ 15.00 B. Moussalam An estimate of the $p^4$ EM contribution to $M_{\pi^+} - M_{\pi^0}$\ 15.30 S. Peris Matching Long and Short Distances in large $N_c$ QCD\ 16.00 Coffee\ 16.40 E. de Rafael QCD at large-$N_c$ and Weak matrix Elements\ 17.25 A. Rusetsky Bound States with Effective Lagrangians\ 17.55 End of session\ [**Wednesday**]{}\ 9.00 B. Borasoy Long Distance Regularization and Chiral Perturbation\ Theory\ 9.30 J. Kambor Generalized Heavy Baryon Chiral Perturbation Theory\ and the Nucleon Sigma term\ 10.10 T. Hemmert Baryon CHPT and Resonance Physics\ 10.50 Coffee\ 11.20 M. Mojzic Nucleon properties to O($p^4$) in HBCHPT\ 11.50 N. Fettes Pion-Nucleon scattering in Chiral Perturbation theory\ 12.20 G. Höhler Relations of Dispersion Theory to Chiral Effective\ Theories for $\pi N$ Scattering\ 13.00 End of Session\ 14.20 S. Steininger Isospin violation in the Pion Nucleon System\ 14.50 M. Sainio Goldberger-Miyazawa-Oehme Sum Rule Revisited\ 15.20 E. Epelbaoum Low-Momentum Effective Theory for Two Nucleons\ 15.50 Coffee\ 16.20 M. Savage Effective Field Theory in the Two-Nucleon Sector\ 17.00 N. Kaiser Chiral Dynamics of NN Interaction\ 17.35 S. Beane Compton scattering off the Deuteron in HBCHPT\ 18.10 End of session\ [**Thursday**]{}\ 9.00 H. Leutwyler $1/N_c$ Expansion in CHPT and Lorentzinvariant\ HBCHPT: Introduction\ 9.30 T. Becher HBCHPT in a Lorentzinvariant Form\ 10.10 R. Kaiser Chiral $SU(3)\times SU(3)$ at large $N_C$: The $\eta$-$\eta^\prime$ system\ 10.50 Coffee\ 11.30 D. Diakonov Derivation of the Chiral Lagrangian from the Instanton\ Vacuum\ 12.15 J. Stern Theoretical Study of Chiral Symmetry Breaking:\ Recent Developments\ 13.00 End of Session\ 14.20 G. Colangelo Numerical Solutions of Roy Equations\ 15.00 J. Gasser The One-Channel Roy Equation\ 15.35 Coffee\ 16.05 G. Wanders How do the Uncertainties on the Input Affect the\ Solution of the Roy Equations?\ 16.40 J. Schechter Low Energy $\pi K$ Scattering and a Possible Scalar Nonet\ 17.20 A. Pich CHPT Phenomenology in the Large-$N_C$ Limit\ 18.00 End of session\ [**Friday**]{}\ 9.00 A. Nyffeler Gauge Invariant Effective Field Theory for a Heavy\ Higgs Boson\ 9.35 J. Goity The Goldberger-Treiman Discrepancy in the\ Chiral Expansion\ 10.10 L. Girlanda Tau decays and Generalized CHPT\ 10.35 Coffee Break\ 11.15 F. Orellana Different Approaches to Loop Calculations in\ ChPT Exemplified with the Meson Form Factors\ 11.40 D. Toublan From Chiral Random Matrix Theory to Chiral\ Perturbation Theory\ 12.10 J. Bijnens Farewell Participants and their email ============================ S. Beane Maryland sbeane@physics.umd.edu\ T. Becher Bern becher@itp.unibe.ch\ V. Bernard Strasbourg bernard@lpt1.u-strasbg.fr\ J. Bijnens Lund bijnens@thep.lu.se\ B. Borasoy Amherst borasoy@het.phast.umass.edu\ P. Buettiker FZ Jülich p.buettiker@fz-juelich.de\ G. Colangelo Zürich gilberto@physik.unizh.ch\ G. D’Ambrosio Naples dambrosio@na.infn.it\ D. Diakonov Nordita diakonov@nordita.dk\ J.
Donoghue Amherst donoghue@phast.umass.edu\ G. Ecker Wien ecker@merlin.pap.univie.ac.at\ E. Epelbaoum FZ Jülich e.epelbaum@fz-juelich.de\ N. Fettes FZ Jülich n.fettes@fz-juelich.de\ M. Franz Bochum mario.franz@tp2.ruhr-uni-bochum.de\ J. Gasser Bern gasser@itp.unibe.ch\ L. Girlanda Orsay girlanda@ipno.in2p3.fr\ K. Goeke Bochum goeke@hadron.tp2.ruhr-uni-bochum.de\ J. Goity Jefferson Lab goity@cebaf.gov\ M. Golterman St. Louis maarten@aapje.wustl.edu\ S. Güsken Wuppertal guesken@theorie.physik.uni-wuppertal.de\ T. Hemmert FZ Jülich th.hemmert@fz-juelich.de\ G. Höhler Karlsruhe gerhard.hoehler@physik.uni-karlsruhe.de\ N. Kaiser TU München nkaiser@physik.tu-muenchen.de\ R. Kaiser Bern kaiser@itp.unibe.ch\ J. Kambor Zürich kambor@physik.unizh.ch\ H.-C. Kim Pusan hchkim@hyowon.pusan.ac.kr\ M. Knecht Marseille knecht@cpt.univ-mrs.fr\ H. Leutwyler Bern leutwyler@itp.unibe.ch\ U. Meißner FZ Jülich Ulf-G.Meissner@fz-juleich.de\ M. Mojžič Bratislava Martin.Mojzis@fmph.uniba.sk\ B. Moussallam Orsay moussall@ipno.in2p3.fr\ G. Müller Wien mueller@itkp.uni-bonn.de\ H. Neufeld Wien neufeld@merlin.pap.univie.ac.at\ A. Nyffeler DESY, Zeuthen nyffeler@ifh.de\ F. Orellana Zurich fjob@physik.unizh.ch\ E. Pallante Barcelona pallante@greta.ecm.ub.es\ S. Peris Barcelona peris@ifae.es\ H. Petry Bonn petry@pythia.itkp.uni-bonn.de\ A. Pich Valencia pich@chiral.ific.uv.es\ J. Prades Granada prades@ugr.es\ E. de Rafael Marseille EdeR@cptsu5.univ-mrs.fr\ A. Rusetsky Bern rusetsky@itp.unibe.ch\ M. Sainio Helsinki sainio@phcu.helsinki.fi\ M. Savage Seattle savage@phys.washington.edu\ J. Schechter Syracuse schechter@suhep.phy.syr.edu\ K. Schilling Wuppertal schillin@wpts0.physik.uni-wuppertal.de\ S. Steininger FZ Jülich s.steininger@fz-juelich.de\ J. Stern Orsay stern@ipno.in2p3.fr\ P. Talavera Lund pere@thep.lu.se\ D. Toublan Stony Brook toublan@nuclear.physics.sunysb.edu\ A. Vladikas Rome vladikas@roma2.infn.it\ G. Wanders Lausanne gerard.wanders@ipt.unil.ch\ A. Wirzba Darmstadt andreas.wirzba@physik.tu-darmstadt.de A short guide to review literature ================================== Chiral Perturbation Theory grew out of current algebra, and it soon was realized that certain terms beyond the lowest order were also uniquely defined. This early work and references to earlier review papers can be found in [@1]. Weinberg then proposed a systematic method in [@2], later systematized and extended to use the external field method in the classic papers by Gasser and Leutwyler [@3], [@4], which, according to Howard Georgi, everybody should put under his/her pillow before he/she goes to sleep. The field has since then grown considerably, and relatively recent review papers are: Ref. [@5] with an emphasis on the anomalous sector, Ref. [@6] giving a general overview over the vast field of applications in various areas of physics, Ref. [@7] on mesons and baryons, and Ref. [@8] on baryons and multibaryon processes. In addition there are books by Georgi [@9], which, however, does not cover the standard approach including the terms in the Lagrangian at higher order, and a more recent one by Donoghue, Golowich and Holstein [@10]. There are also the lectures available on the archives by H. Leutwyler [@10b], E. de Rafael [@11], A. Pich [@12b], G. Ecker [@12c] as well as numerous others (the single nucleon sector is covered in most detail in [@12d]). The references to the previous meetings are [@13], [@14], [@15], [@15b]. There are also the proceedings of the Chiral Dynamics meetings at MIT (1994) [@16] and in Mainz (1997) [@16b].
The DA$\Phi$NE handbook [@17] also contains useful overviews. The $NN$ sector is covered in the proceedings of the Caltech workshop[@18]. [99]{} [\[zyxy1\]]{} H. Pagels, Phys. Rep. 16 (1975) 219 [\[zyxy2\]]{} S. Weinberg, Physica 96A (1979) 327 [\[zyxy3\]]{} J. Gasser and H. Leutwyler, Ann. Phys. (NY) 158 (1984) 142 [\[zyxy4\]]{} J. Gasser and H. Leutwyler, Nucl. Phys. B250 (1985) 465 [\[zyxy5\]]{} J. Bijnens, Int. J. Mod. Phys. A8 (1993) 3045 [\[zyxy6\]]{} U.-G. Mei[ß]{}ner, Rep. Prog. Phys. 56 (1993) 903 [\[zyxy7\]]{} G. Ecker, Prog. Nucl. Part. Phys. 35 (1995) 1 [\[zyxy8\]]{} V. Bernard, N. Kaiser and U.-G. Mei[ß]{}ner, Int. J. Mod. Phys. E4 (1995) 193 [\[zyxy9\]]{} H. Georgi, [*Weak Interactions and Modern Particle Theory*]{}, 1984, Benjamin/Cummings. [\[zyxy10\]]{} J. Donoghue, E. Golowich and B. Holstein, [*Dynamics of the Standard Model*]{}, Cambridge University Press. [\[zyxy10b\]]{} H. Leutwyler, hep-ph/9406283, lectures at Hadron Physics 94, Gramado, Brazil [\[zyxy11\]]{} E. de Rafael, hep-ph/9502254, lectures at the TASI-94 Summer School, ed. J.F. Donoghue, World Scientific, Singapore, 1995. [\[zyxy12b\]]{} A. Pich, lectures at Les Houches Summer School in Theoretical Physics, Session 68: Probing the Standard Model of Particle Interactions, Les Houches, France, 28 Jul - 5 Sep 1997, hep-ph/9806303 [\[zyxy12c\]]{} G. Ecker, lectures at 37th Internationale Universitatswochen für Kernphysik und Teilchenphysik (International University School of Nuclear and Particle Physics): Broken Symmetries (37 IUKT), Schladming, Austria, 28 Feb - 7 Mar 1998, hep-ph/9805500 [\[zyxy12d\]]{} Ulf-G. Mei[ß]{}ner, hep-ph/9711365, lectures at the 12th Annual Hampton University Graduate Studies (HUGS) at CEBAF, June 1997. [\[zyxy13\]]{} A. Buras, J.-M. Gérard and W. Huber (eds.), Nucl. Phys. B (Proc. Suppl) 7A (1989) [\[zyxy14\]]{} U.-G. Mei[ß]{}ner (ed.), [*Effective Field Theories of the Standard Model*]{}, World Scientific, Singapore, 1992 [\[zyxy15\]]{} J. Bijnens, Workshop on Chiral Perturbation Theory and Other Effective Theories, Karrebæksminde, Denmark, 1993, Miniproceedings, NORDITA-93/73. [\[zyxy15b\]]{} J. Bijnens and U.-G. Meißner, Workshop on the Standard Model at low Energies, ECT$^*$, Trento, 1996, Miniproceedings, hep-ph/9606301 [\[zyxy16\]]{} A. Bernstein and B. Holstein (eds.), [*Chiral Dynamics : Theory and Experiment*]{}, Springer Verlag, 1995. [\[zyxy16b\]]{} A. Bernstein, B. Holstein and T. Walcher (eds.), [*Chiral Dynamics : Theory and Experiment*]{}, Springer Verlag, 1998. [\[zyxy17\]]{} L. Maiani, G. Pancheri and N. Paver (eds.), [*The Second DA$\Phi$NE Physics Handbook*]{}, 1995, SIS-Pubblicazione dei Laboratori Nazionali di Frascati, P.O.Box 13, I-00044 Frascati, Italy. [\[zyxy18\]]{} M. Savage, R. Seki and U. van Kolck (eds), Proceedings of the Workshop on Nuclear Physics with Effective Field Theories, Pasadena, CA, 26-27 Feb 1998, to be published. [**The Generating Functional\ at Next-to-Next-to-Leading Order**]{}\ Gerhard Ecker\ Institut für Theoretische Physik, Universität Wien\ Boltzmanng. 5, A-1090 Wien, Austria\ The generating functional for Green functions of quark currents receives contributions from tree, one-loop and two-loop diagrams at $O(p^6)$. After a classification of these diagrams, I describe the construction of the effective chiral Lagrangian of $O(p^6)$ for a general number $N_f$ of light flavours [@BCE2]. Special matrix relations reduce the number of independent terms substantially for $N_f=$ 3 and 2. 
The $SU(3)$ Lagrangian is compared with the Lagrangian of Fearing and Scherer [@FS]. The one- and two-loop contributions to the generating functional are divergent. In a mass independent regularization scheme like dimensional regularization, the divergent parts are polynomials in masses and external momenta. The cancellation of nonlocal divergences is an important consistency check for the renormalization procedure [@BCE2]. The double and single poles in $d-4$ in the coefficients of the local divergences are then cancelled by the divergent parts of the coupling constants of $O(p^6)$. The dependence of the renormalization procedure on the choice of the chiral Lagrangian of $O(p^4)$ is discussed. The divergence structure of the generating functional can be used to check specific two-loop calculations in chiral perturbation theory. Moreover, it provides the leading infrared singular pieces of Green functions, the chiral double logs. These double logs ($L^2$) come together with terms of the form $L\times L_i^r$ and products $L_i^r\times L_j^r$, involving also the low-energy constants $L_i^r(\mu)$ of $O(p^4)$. It is natural to include all such terms in the analysis especially because they are often comparable to or even bigger than the proper double-log terms. All contributions of this type to the generating functional can be given in closed form [@BCE1]. This generalized double-log approximation is applied to several quantities of interest such as mesonic decay constants and form factors where complete calculations to $O(p^6)$ are not yet available. The results indicate where large $p^6$ corrections are to be expected. [12]{} [\[zyxyBCE2\]]{} J. Bijnens, G. Colangelo and G. Ecker, in preparation. [\[zyxyFS\]]{} H.W. Fearing and S. Scherer, Phys. Rev. D53 (1996) 315. [\[zyxyBCE1\]]{} J. Bijnens, G. Colangelo and G. Ecker, Double chiral logs, hep-ph/9808421, Phys.Lett. B441 (1998) 437. [**Pion Form Factors at $p^6$**]{}\ J. Bijnens$^1$, G. Colangelo$^2$ and [**P. Talavera**]{}$^1$\ $^1$Dep. Theor. Phys. 2, Lund University,\ Sölvegatan 14A, S22362 Lund, Sweden\ $^2$Inst. Theor. Phys., Univ. Zürich,\ Winterthurerstr. 190, CH–8057 Zürich–Irchel, Switzerland.\ We compute the vector and scalar form factors of the pion to two loops in CHPT and compare carefully with the existing data. For the scalar form factor this involves a comparison with the form factor derived using dispersion theory and chiral constraints from the $\pi\pi$ phase shifts[@1]. The CHPT formula fits well over the entire range of validity. Moreover, we show that using the “modified Omnès representation” which exponentiates the unitarity correction, the chiral representation improves and follows the exact form factor up to about 700 MeV. For the vector form factor we collected all available data and performed the standard simple fits. We fit the CHPT formula at two loops together with a phenomenological higher order term to obtain the pion charge radius and $c_V^\pi$: $${\langle r^2\rangle^\pi_V} = (0.437\pm0.016)\mbox{ fm}^2 \; \; , \quad c_V^\pi = (3.85\pm0.60)\mbox{ GeV}^{-4} \; \; .$$ The error is a combination of theoretical and experimental errors. By comparing to the Taylor expansions of the measured form factors, we have been able to better determine some of the LEC’s: $\bar{l}_4$ and $\bar{l}_6$. $\bar{l}_6$ together with results from $\pi \to l\nu\gamma$[@3] lead then to $\bar{l}_5$. 
$$\bar{l}_4 = 4.4\pm 0.3\;,\quad\bar{l}_6 = 16.0\pm0.5 \pm 0.7 \; \quad\mbox{and}\quad \bar{l}_5 = 13.0 \pm 0.9 \; \; .$$ The other two LEC’s we determined are ${\cal O}(p^6)$ constants, that contribute to the quadratic term in the polynomial of the scalar and vector form factors. We found $$r^r_{S3}(M_\rho) \simeq 1.5 \cdot 10^{-4}\; , \; \; \; \; r^r_{V2}(M_\rho) \simeq 1.6 \cdot 10^{-4} \; \; ,$$ with a substantial uncertainty. These values are rather close to those obtained with the resonance saturation hypothesis, supporting the idea that this hypothesis works also at order $p^6$.The full discussion can be found in [@2]. [99]{} [\[zyxy1\]]{} J. Donoghue, J. Gasser and H. Leutwyler, Nucl. Phys. B343(1990)341 [\[zyxy3\]]{} J. Bijnens and P. Talavera, Nucl. Phys. B489(1997)389. [\[zyxy2\]]{} J. Bijnens, G. Colangelo and P. Talavera, JHEP05(1998)014 [**Quenched and Partially Quenched Chiral Logarithms**]{}\ [**Maarten Golterman**]{}\ Department of Physics, Washington University,\ St. Louis, MO 63130, USA\ In this talk, I reported on the recent quenched hadron spectrum results of the CP-PACS collaboration [@cppacs]. I discussed in particular the evidence for quenched chiral logarithms in the pion mass [@bergol1],[@sharpe1]. In the second part, I gave an introduction to partially quenched chiral perturbation theory [@bergol2],[@shazha],[@sharpe2],[@golleu], and discussed its relevance for the analysis of future results from lattice QCD. [12]{} [\[zyxycppacs\]]{} Contributions of R. Burkhalter and T. Yoshié (for CP-PACS) to the 16th International Symposium on Lattice Field Theory (LATTICE 98), to appear in the proceedings, hep-lat/9810043 and hep-lat/9809146; T. Yoshié, private communication. [\[zyxybergol1\]]{} C. Bernard and M. Golterman, Phys. Rev. D46 (1992) 853. [\[zyxysharpe1\]]{} S. Sharpe, Phys. Rev. D46 (1992) 3146. [\[zyxybergol2\]]{} C. Bernard and M. Golterman, Phys. Rev. D49 (1994) 486. [\[zyxyshazha\]]{} S. Sharpe and Y. Zhang, Phys. Rev. D53 (1996) 5125. [\[zyxysharpe2\]]{} S. Sharpe, Phys. Rev. D56 (1997) 7052. [\[zyxygolleu\]]{} M. Golterman and K.-C. Leung, Phys. Rev. D58 (1998) 097503. [**The Generating Functional for Hadronic Weak Interactions and its Quenched Approximation** ]{}\ [**Elisabetta Pallante**]{}$^1$\ $^1$Facultat de Física, Universitat de Barcelona,\ Av. Diagonal 647, 08028 Barcelona, Spain.\ Chiral Perturbation Theory (ChPT) combined with lattice QCD is a promising tool for computing hadronic weak matrix elements at long distances. Here I discuss the derivation of the one loop generating functional of ChPT in the presence of weak interactions with $\vert\Delta S\vert =1,\, 2$ and the modifications induced by the [*quenched*]{} approximation adopted on the lattice. The framework I use is known as quenched ChPT [@qCHPT], while its extension to hadronic weak interactions can be found in [@EP1] and refs. therein. The advantage of deriving the generating functional is twofold: first, it allows for a systematic control on the quenched modifications (i.e. how much the coefficients of the chiral logs are modified by quenching) and second, it gives in one step the coefficients of all chiral logarithms for any Green’s function or S-matrix element, full and quenched. The main relevant modification induced by quenching is the appearance of the so called [*quenched chiral logs*]{} both in the strong and weak sector. They can be accounted for via a redefinition of the leading order parameters associated to the mass-like terms (the usual mass term and the weak mass term). 
As an immediate application, the full and quenched behaviours of the chiral logarithms which appear in $K\to\pi\pi$ matrix elements can be studied both for $\Delta I=1/2$ and $3/2$. The numerical analysis shows that the modification induced by quenching follows a pattern that tends to suppress the $\Delta I=1/2$ dominance. [12]{} [\[zyxyqCHPT\]]{} C.W. Bernard, and M.F.L. Golterman, Phys. Rev. D 46 (1992) 853;\ G. Colangelo and E. Pallante, Nucl. Phys. B520 (1998) 433. [\[zyxyEP1\]]{} E. Pallante, hep-lat/9808018, to appear in JHEP. [**A Super-Heat-Kernel Representation for the One-Loop Functional of Boson–Fermion Systems**]{}\ [**Helmut Neufeld**]{}\ Institut für Theoretische Physik der Universität Wien,\ Boltzmanngasse 5, A-1090 Wien, Austria.\ The one-loop functional of a general quantum field theory with bosons and fermions can be written in terms of the super-determinant of a super-matrix differential-operator. This super-determinant is then further evaluated by using heat-kernel methods. This approach [@Berezinian] corresponds to a simultaneous treatment of bosonic, fermionic and mixed loop-diagrams. The determination of the one-loop divergences is reduced to simple matrix manipulations, in complete analogy to the familiar heat-kernel expansion technique for bosonic or fermionic loops. Applications to the renormalization of the pion–nucleon interaction [@SHK] and to chiral perturbation theory with virtual photons and leptons [@Knecht] demonstrate the efficiency of the new method. The cumbersome and tedious calculations of the conventional approach are now reduced to a few simple algebraic manipulations. The presented computational scheme is, of course, not restricted to chiral perturbation theory, but can easily be applied or extended to any (in general non-renormalizable) theory with boson–fermion interactions. [12]{} [\[zyxyBerezinian\]]{} H. Neufeld, J. Gasser, G. Ecker, Phys. Lett. B438 (1998) 106 [\[zyxySHK\]]{} H. Neufeld, [*The Super-Heat-Kernel Expansion and the Renormalization of the Pion–Nucleon Interaction*]{}, CERN-TH/98-231, 1998, hep-ph/9807425, to be published in Eur. Phys. Jour. C [\[zyxyKnecht\]]{} M. Knecht, H. Neufeld, H. Rupertsberger, P. Talavera, [*Chiral Perturbation Theory with Photons and Leptons*]{}, UWThPh-1998-62, 1999, in preparation [**Some Low Energy Constants from Lattice QCD: Recent Results**]{}\ Stephan Güsken\ Dep. Phys. , University of Wuppertal,\ 42097 Wuppertal, Germany\ Hadronic properties at low energies are sensitive to non-perturbative contributions from quantum fluctuations. In particular, flavour singlet quantities like the pion-nucleon-sigma term $\sigma_{\pi N}$ and the flavour singlet axial coupling of the proton $G_A^1$ might be determined largely by so called disconnected insertions. These are given by the correlation of the nucleon propagator with a quark-antiquark vacuum loop. Recently the SESAM collaboration has performed a full QCD lattice simulation with $n_f=2$ dynamical fermions on 4 different values of the sea quark mass, and with a statistics of 200 independent gauge configurations per sea quark [@sesam_light_spectrum]. In this talk we present the results of the analysis of these gauge configurations with respect to $\sigma_{\pi N}$ [@sesam_nsigma] and $G_A^1$ [@sesam_ga]. SESAM finds a quite low value for the pion-nucleon-sigma term, $\sigma_{\pi N} = 18(5)$MeV. Its smallness is directly related to the apparent decrease of light quark masses when unquenching lattice QCD simulations [@sesam_quarkmass],[@cp_pacs_burkhalter]. 
For the flavor singlet axial coupling of the proton, SESAM estimates $G_A^1 = 0.20(12)$, consistent with the experimental result and with previous findings from quenched simulations [@ga1_quenched]. [12]{} [\[zyxysesam\_light\_spectrum\]]{} SESAM Collaboration, N. Eicker et al., hep-lat/9896027, Phys. Rev. D.59, 014509 (1999). [\[zyxysesam\_nsigma\]]{} SESAM Collaboration, S. Güsken et al., hep-lat/9809066, Phys. Rev. D59, in print. [\[zyxysesam\_ga\]]{} SESAM Collab., S. Güsken et al., preprint WUB 98-44, HLRZ 1998-85, hep-lat/9801009, submitted to Phys. Rev. D. . [\[zyxysesam\_quarkmass\]]{} SESAM Collaboration, N. Eicker et al., Phys. Lett. B407(1997)290. [\[zyxycp\_pacs\_burkhalter\]]{} CP-PACS Collaboration, R. Burkhalter et al., hep-lat/9810043, Nucl. Phys. B (Proc. Suppl.) (1999), to appear. [\[zyxyga1\_quenched\]]{} M. Fukugita, Y. Kuramashi, M. Okawa, A. Ukawa, Phys. Lett. 75 (1995)2092; S.J. Dong, J.F. Laga$\ddot{e}$, K.F. Liu, Phys. Lett. 75 (1995)2096. [**Quark Masses and Chiral Condensate from the Lattice**]{}\ Anastassios Vladikas\ INFN c/o Department of Physics, Universitá di Roma “Tor Vergata”,\ Via della Ricerca Scientifica 1, I-00133 Rome, Italy\ Ward Identities can be used in order to measure, from first principles, the light quark masses and the chiral condensate in lattice QCD. Particular attention is paid to the problem of chiral symmetry breaking by the Wilson action and its restoration in the continuum limit [@KSBOC]. The main sources of systematic errors in computations are: (1) quenching; (2) finite mass extrapolation to the chiral limit; (3) scaling violations; (4) renormalization of lattice operators in 1-loop perturbation theory. Scaling violations can be reduced by applying Symanzik-improvement to Wilson fermions, as reviewed in [@LUESCH]. Operator renormalization can be carried out non-perturbatively; see [@NP]. All results are in the ${\overline MS}$ scheme at renormalization scale $\mu = 2$GeV: $$\begin{aligned} \langle \bar \psi \psi \rangle &=& - [245 \pm 15 MeV ]^3 \qquad {[{\ref{zyx\arabic{zyxabstract}yAPE1}}]} \nonumber \\ \langle \bar \psi \psi \rangle &=& - [253 \pm 25 MeV ]^3 \qquad {[{\ref{zyx\arabic{zyxabstract}yAPE2}}]} \nonumber \\ m_{u,d} &=& 5.7 \pm 0.8 MeV \qquad {[{\ref{zyx\arabic{zyxabstract}yAPE1}}]} \nonumber \\ m_{u,d} &=& 4.5 \pm 0.4 MeV \qquad {[{\ref{zyx\arabic{zyxabstract}yAPE2}}]} \nonumber \\ m_{u,d} &=& 4.6 \pm 0.2 MeV \qquad {[{\ref{zyx\arabic{zyxabstract}yCPPACS}}]} \nonumber\end{aligned}$$ Important open questions remain: (1) the dependence of the strange quark mass on the bare mass calibration from the $\phi$- or the $K$-meson [@CPPACS]; (2) current unquenched quark mass results appear to be smaller by about $30\%$. [12]{} [\[zyxyKSBOC\]]{} L.H. Karsten and J. Smit, Nucl.Phys. B183 (1981) 103;\ M. Bochicchio et al., Nucl.Phys. B262 (1985) 331. [\[zyxyLUESCH\]]{} M. Lüscher, hep-lat/9808021. [\[zyxyNP\]]{} G.Martinelli et al, Nucl.Phys.B445(1995)81. [\[zyxyAPE1\]]{} V. Giménez, L. Giusti, F. Rapuano and M. Talevi, hep-lat/9801028;\ L. Giusti, F. Rapuano, M. Talevi and A. Vladikas, hep-lat/9807014. [\[zyxyAPE2\]]{} D. Becirevic et al., hep-lat/9807046. [\[zyxyCPPACS\]]{} R. Burkhalter, hep-lat/9810043;\ CP-PACS collaboration, S. Aoki et al., hep-lat/9809146. [**Dispersive Calculation of Weak Matrix Elements**]{}\ John F. 
Donoghue\ Department of Physics and Astronomy\ University of Massachusetts, Amherst, MA 01003 USA\ I described a project with Eugene Golowich in which we provide a rigorous calculational framework for certain weak non-leptonic matrix elements, valid in the chiral limit. This involves relating the weak operators to the vacuum polarization functions of vector and axial-vector currents. These functions obey dispersion relations; the inputs to these are largely known from experiment, and there are firm theoretical constraints. Certain aspects of this program were accomplished a few years ago [@dg1], and we explored the use of data for the Weinberg sum rules and for this weak calculation [@dg2]. What is new now is the understanding of how this fits into the operator product expansion, and the separate determination of two local operators (those related to $O_7$ and $O_8$ in the usual basis). We define these matrix elements at a scale $\mu$ in the $\overline{MS}$ scheme, and verify the renormalization group running and the OPE structure. An updated phenomenological analysis incorporating new data was described and will be given in the upcoming publication[@dg3]. [12]{} [\[zyxydg1\]]{} J. F. Donoghue and E. Golowich, Phys. Lett. [**B315**]{}, 406 (1993). [\[zyxydg2\]]{} J. F. Donoghue and E. Golowich, Phys. Rev. [**D49**]{}, 1513 (1994). [\[zyxydg3\]]{} J. F. Donoghue and E. Golowich, to appear (most likely in Feb. 1999). [**$\Delta S=1$ Transitions in the $1/N_c$ Expansion** ]{}\ Johan Bijnens$^1$ and [**Joaquim Prades**]{}$^2$\ $^1$Department of Theoretical Physics 2, Lund University,\ Sölvegatan 14A, S-22362 Lund, Sweden\ $^2$Departamento de Física Teórica y del Cosmos, Universidad de Granada, Campus de Fuente Nueva, E-18002 Granada, Spain\ In this talk we present the results obtained in a recent work on the $\Delta I=1/2$ rule in the chiral limit [@BP98]. In particular, we discuss the matching between long- and short-distance contributions at next-to-leading order in the $1/N_c$ expansion and show how the scheme-dependence from the two-loop renormalization group running can be treated. We then use this method to study the three $O(p^2)$ couplings modulating the terms contributing to non-leptonic kaon decays, namely the usual octet and 27-plet derivative terms as well as the weak mass term. We use the Extended Nambu–Jona-Lasinio model as the low energy approximation. The known unsatisfactory high-energy behaviour of this model at large $N_c$ is treated as explained in [@BP98]. For attempts to avoid this problem see the talks by Santi Peris and Eduardo de Rafael. At present, their method cannot be applied to the quantities considered here. Reasonable matching for the three $O(p^2)$ couplings introduced before is obtained. We predict them within ranges and obtain a huge enhancement of the $\Delta I=1/2$ amplitude with respect to the $\Delta I=3/2$ one. We identify this as coming from $Q_2$ and $Q_6$ Penguin-like diagrams. These predictions are [*parameter free*]{} and agree within the uncertainties with the experimental values. We also show how the factorizable contributions from the $Q_6$ operator are IR divergent. This divergence cancels only when the non-factorizable contributions are added consistently. This makes the $B_6$ parameter ill defined. We believe that this work presents some advances towards the mastering of non-leptonic kaon decays.
This will be pursued in forthcoming works determining the $\Delta S=1$ non-leptonic couplings at $O(p^4)$ and $\varepsilon'/\varepsilon$ within the Standard Model [@BPP99]. [12]{} [\[zyxyBP98\]]{} J. Bijnens and J. Prades, [*The $\Delta I=1/2$ Rule in the Chiral limit*]{}, Lund and Granada preprint LU TP 98–26, UG–FT–94/98, hep-ph/9811472 and references therein. [\[zyxyBPP99\]]{} J. Bijnens, E. Pallante, and J. Prades, work in progress. [**Effective $\Delta S = 1$ Weak Chiral Lagrangian in the Instanton-induce Chiral Quark Model**]{}\ Mario Franz$^1$, [**Hyun-Chul Kim**]{}$^2$, and Klaus Goeke$^1$\ $^1$Inst. f. Theo. Phys. II, Ruhr-Universität Bochum,\ D-44780 Bochum, Germany\ $^2$Dep. of Phys., Pusan National University,\ 609-735 Pusan, The Republic of Korea\ In this talk, we present the recent investigation of the effective $\Delta S = 1$ weak chiral Lagrangian within the framework of the instanton-induced chiral quark model. Starting from the effective four-quark operators, we derive the effective weak chiral action by integrating out the constituent quark fields. Employing the derivative expansion, we are able to obtain the effective $\Delta S = 1$ weak chiral Lagrangian to order ${\cal O}(p^4)$. The resulting ${\cal O}(p^4)$ low energy constants are derived as follows [@FKG]: $$\begin{aligned} N_{1}^{(\underline{8})} &=& \left( -{N_c^2 M^2 \over 128 \pi^4 f_\pi^2 } + {N_c \over 8 \pi^2} - {f_\pi^2 \over 2 M^2} \right) c_6 \;+\; \left( -{N_c M^2 \over 128 \pi^4 f_\pi^2 } + {1 \over 8 \pi^2} - {f_\pi^2 \over 2 N_c M^2} \right) c_5, \nonumber \\ N_{2}^{(\underline{8})} &=& {N_c \over 60 \pi^2} \left( \left( -2 +{1 \over N_c} 3 \right) c_1 + \left( 3 - {1 \over N_c} 2 \right) c_2 + {1 \over N_c} 5 c_3 + 5 c_4 \right. \nonumber \\ && + \; \left. \left( - 3 + {1 \over N_c} 2 \right) c_9 + \left( 2 - {1 \over N_c} 3 \right) c_{10} \right), \nonumber\\ N_{3}^{(\underline{8})} &=& 0, \nonumber\\ N_{4}^{(\underline{8})} &=& {N_c \over 60 \pi^2 } \left( \left( \frac{3}{2} - {1 \over N_c} \right) c_1 + \left( - 1 + {1 \over N_c} \frac{3}{2} \right) c_2 + \frac{5}{2} c_3 + {1 \over N_c} \frac{5}{2} c_4 \right. \nonumber \\ && \hspace{10mm} \left. - \frac{5}{2} c_5 - {1 \over N_c} \frac{5}{2} c_6 + \left( 1 - {1 \over N_c} \frac{3}{2} \right) c_9 + \left( - \frac{3}{2} + {1 \over N_c} \right) c_{10} \right), \nonumber\\ N_{28}^{(\underline{8})} &=& {N_c \over 60 \pi^2} \left( \left( -\frac{3}{2} + {1 \over N_c} \right) c_1 + \left( 1 - {1 \over N_c} \frac{3}{2} \right) c_2 - \frac{5}{2} c_3 - {1 \over N_c} \frac{5}{2} c_4 \right. \nonumber \\ && \hspace{10mm} \left. - \frac{5}{2} c_5 - {1 \over N_c} \frac{5}{2} c_6 + \left( - 1 + {1 \over N_c} \frac{3}{2} \right) c_9 + \left( \frac{3}{2} - {1 \over N_c} \right) c_{10} \right), \nonumber\\ N_{1}^{(\underline{27})} &=& N_{5}^{(\underline{27})} = N_{6}^{(\underline{27})} = N_{20}^{(\underline{27})} = 0, \nonumber\\ N_{2}^{(\underline{27})} &=& -N_{3}^{(\underline{27})} = -N_{4}^{(\underline{27})} = N_{21}^{(\underline{27})} = {N_c \over 60 \pi^2 } \left( 1 + {1 \over N_c} \right) \nonumber \\ &&\times \left( - 3 c_1 - 3 c_2 - \frac{9}{2} c_9 - \frac{9}{2} c_{10} \right) \nonumber.\end{aligned}$$ [12]{} [\[zyxyFKG\]]{} M. Franz, H.-C. Kim, and K. Goeke, in preparation. [**Rare kaon decays: $K^{+}\rightarrow \pi ^{+}\ell ^{+}\ell ^{-} $ ** and** $K_{L}\rightarrow \mu ^{+}\mu ^{-}$**]{}\ **G. 
D’Ambrosio**\ INFN, Sezione di Napoli I–80134 Napoli, Italy.\ Rare kaon decays are an important tool to test the chiral theory and establish the Standard Model and/or its extensions. $K^{+}\rightarrow \pi ^{+}\gamma ^{*}$ starts at $\mathcal{O}(p^{4})$ in $\chi $PT with loops (dominated by the $\pi\pi$-cut) and counter-term contributions [@r1]. Higher order contributions ($\mathcal{O}(p^{6})$) might be large, but not completely under control since new (and unknown) counter-term structures will appear. Experimentally the $K^{+}\rightarrow \pi ^{+}l^{+}l^{-}$ ($l=e,\mu$) widths are known while the slope is known only in the electron channel. An interesting question is the origin of the 2.2$\sigma$ discrepancy of the ratio of the widths $e/\mu$ from the $\mathcal{O}(p^{4})$ prediction. In Ref. [@r3] we have parameterized, very generally, the $K^{+}\rightarrow \pi ^{+}\gamma ^{*}(q)$ form factor as $$W_{+}(z)\,=\,G_{F}M_{K}^{2}\,(a_{+}\,+\,b_{+}z)\,+\,W_{+}^{\pi \pi }(z)\;,$$ with $z=q^{2}/M_{K}^{2}$, and where $W_{+}^{\pi \pi }(z)$ is the loop contribution given by the unitarity cut of $K^{+}\rightarrow \pi ^{+}\pi ^{+}\pi ^{-}$. The two unknown parameters $a_{+}$ and $b_{+}$ can be fixed from the rate and the slope in the electron channel to predict the muon rate and consequently the ratio of the widths $e/\mu$, which still comes out 2.2$\sigma$ away from the experimental result, pointing to either an experimental problem in the muon channel or a violation of lepton universality. We also speculate on the prediction of Vector Meson Dominance (VMD) for this channel. To fully exploit the potential of $K_{L}\rightarrow \mu ^{+}\mu ^{-}$ in probing short–distance dynamics it is necessary to have reliable control of its long–distance amplitude. The branching ratio can be generally decomposed as $B(K_{L}\rightarrow \mu ^{+}\mu ^{-})\,=\,|{\Re e\mathcal{A}}|^{2}\,+\,|{\Im m\mathcal{A}}|^{2}$ and ${\Re e\mathcal{A}}\,=\,{\Re e\mathcal{A}}_{long}\,+\,{\Re e\mathcal{A}}_{short}$. The recent measurement of $B(K_{L}\rightarrow \mu ^{+}\mu ^{-})$ is almost saturated by the absorptive amplitude, leaving very little room for the dispersive contribution: $|{\Re e\mathcal{A}}_{exp}|^{2}\,=\,(-1.0\pm 3.7)\times 10^{-10}$. Within the Standard Model the NLO short-distance amplitude makes it possible to extract a lower bound on $\overline{\rho}$, once the dispersive contribution generated by the two–photon intermediate state is under control. In order to saturate this lower bound we propose [@DIP] a low energy parameterization of the $K_{L}\rightarrow \gamma ^{*}\gamma ^{*}$ form factor that includes the poles of the lowest vector meson resonances. Using the experimental slope from $K_{L}\rightarrow \gamma \ell ^{+}\ell ^{-}$ and QCD constraints we predict $\overline{\rho }>-0.38\;(90\%\,{\rm C.L.})$. This bound could be much improved if the linear and quadratic slopes of the $K_{L}\rightarrow \gamma ^{*}\gamma ^{*}$ form factor were measured with good precision and a more stringent bound on $|{\Re e\mathcal{A}}_{exp}|$ were established.\ [9]{} [\[zyxyr1\]]{} G. Ecker, A. Pich and E. de Rafael, Nucl. Phys. **B291** (1987) 692. [\[zyxyr3\]]{} G. D’Ambrosio, G. Ecker, G. Isidori and J. Portolés, JHEP 08 (1998) 004, hep-ph/9808289. [\[zyxyDIP\]]{} [G. D’Ambrosio, G. Isidori and J. Portolés]{}, Phys. Lett. B423 (1998) 385, hep-ph/9708326. [**Electromagnetic Corrections to Semi-leptonic**]{}\ [**Decays of Pseudoscalar Mesons**]{}\ [**[M. Knecht]{}**]{}$^1$, H. Neufeld$^2$, H. Rupertsberger$^2$ and P.
Talavera$^3$\ $^1$Centre de Physique Théorique,\ CNRS-Luminy, Case 907, F-13288 Marseille Cedex 9, France\ $^2$Inst. f. Theor. Phys. der Universität Wien,\ Boltzmanngasse 5, A-1090 Wien, Austria.\ $^3$Dept. Theor. Phys., Lund University,\ Sölvegatan 14A, S-22362 Lund, Sweden. In order to extract the information on hadronic matrix elements of QCD currents made of light quarks from high-statistics semi-leptonic decays of pseudoscalar mesons, a quantitative understanding of electromagnetic corrections to these processes is necessary. Virtual photons spoil the factorization property of the effective Fermi theory, so that the description of radiative corrections must be done within an extended framework, which also includes the leptons. An effective theory for the interactions of the light pseudoscalar mesons with light leptons and with photons, and which respects all the properties required by chiral symmetry, has been constructed. It is based on a power counting consistent with the loop expansion. The renormalization of this effective theory has been studied at the one-loop order. The divergences arising from the loops involving a lepton require, besides the usual mass and wave function renormalizations, only three additional nontrivial counter-terms. Applications to the $\pi_{\ell 2}$ and pion beta decays are being completed. Further work will also consider the semi-leptonic decays of the kaons, in view of the forthcoming high-precision data from the KLOE experiment at the DA$\Phi$NE $\phi$–factory. The case of the $K_{\ell 4}$ decays of charged kaons with two charged pions in the final state is of particular importance, since they give access to the $\pi$–$\pi$ phase shifts at low energies. [**An estimate of the O(p$^4$) E.M. contribution to M$_{\pi^+}$-M$_{\pi^0}$** ]{}\ Bachir Moussallam\ Groupe de Physique Théorique, IPN,\ Université Paris-Sud, 91406 Orsay\ We reanalyze a sum rule due to Das et al.[@dgmly]. This sum rule is interesting not as an approximation to the physical $\pi^+-\pi^0$ mass difference but as an exact result for a chiral low-energy parameter. A sufficiently precise evaluation provides a model independent estimate for a combination of $O(p^4)$ electromagnetic chiral low-energy parameters recently introduced by Urech[@urech]. Three ingredients are necessary in order to reach the required level of accuracy: firstly, one must use a Euclidean space approach; secondly, one must use accurate experimental data such as those provided recently by the ALEPH collaboration on $\tau$ decays into hadrons[@aleph]. Finally, it is necessary to extrapolate to the chiral limit $m_u=m_d=0$. We show how a set of sum rules allows one to perform this extrapolation in a reliable way[@preprint]. [12]{} [\[zyxydgmly\]]{} T. Das, G.S. Guralnik, V.S. Mathur, F.E. Low and J.E. Young, Phys. Rev. Lett. [**18**]{}, 759 (1967). [\[zyxyurech\]]{} R. Urech, Nucl. Phys. [**B433**]{}, 234 (1995). [\[zyxyaleph\]]{} ALEPH coll., R. Barate et al., Zeit. Phys. [**C76**]{}, 15 (1997), Eur. Phys. Jour. [**C4**]{}, 409 (1998). [\[zyxypreprint\]]{} B. Moussallam, [hep-ph/9804271]{}.
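For orientation, a standard representation (not a formula quoted in the preceding abstract; the notation used here is assumed): for a correlator that needs no subtraction, such as the $V-A$ difference relevant in this kind of analysis, the spectral function $\rho(s)=\frac{1}{\pi}\,{\rm Im}\,\Pi(s)$ measured in hadronic $\tau$ decays determines the correlator at Euclidean momentum $q^2=-Q^2$ through $$\Pi(Q^2)\,=\,\int_0^\infty ds\,\frac{\rho(s)}{s+Q^2}\,,\qquad Q^2\geq 0\,,$$ which is the form in which the ALEPH spectral data enter a Euclidean-space sum-rule evaluation of this type.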
[**Matching Long and Short Distances\ in Large-$N_c$ QCD**]{}\ [**Santiago Peris**]{}$^1$ , Michel Perrottet$^2$ and Eduardo de Rafael$^2$\ $^1$Grup de Fisica Teorica and IFAE,\ Universitat Autonoma de Barcelona, 08193 Barcelona, Spain\ $^2$Centre de Physique Theorique, CNRS-Luminy, Case 907\ F-13288 Marseille Cedex 9, France\ It is shown, with the example of the experimentally known Adler function, that there is no matching in the intermediate region between the two asymptotic regimes described by perturbative QCD (for the very short–distances) and by chiral perturbation theory (for the very long–distances). We then propose to consider an approximation to large–$N_c$ QCD which consists in restricting the hadronic spectrum in the channels with $J^P$ quantum numbers $0^{-}$, $1^{-}$, $0^{+}$ and $1^{+}$ to the lightest state and in treating the rest of the narrow states as a perturbative QCD continuum; the onset of this continuum being fixed by consistency constraints from the operator product expansion. We show how to construct the low–energy effective Lagrangian which describes this approximation. A comparison of the corresponding predictions, to ${\cal O}(p^4)$ in the chiral expansion, with the phenomenologically known couplings $L_i$ is also made in terms of a [*single*]{} unknown, namely $f_{\pi}/M_V$ [@LMD]: $${\label{zyx\arabic{zyxabstract}els}} 6 L_1 = 3 L_2 = \frac{-8}{7} L_3 = 4 L_5 = 8 L_8 = \frac{3}{4} L_9 = - L_{10} = \frac{3}{8} \frac{f_{\pi}^2}{M_V^2}\, ,$$ where $f_{\pi}, M_V$ are the pion decay constant and the $1^-$ state’s mass, respectively. [12]{} [\[zyxyLMD\]]{} S. Peris, M. Perrottet and E. de Rafael, JHEP05 (1998) 011. [**Large $N_c$ QCD and Weak Matrix Elements**]{}\ [**Eduardo de Rafael**]{}\ Centre de Physique Th[é]{}orique\ CNRS-Luminy, Case 907\ F-13288 Marseille Cedex 9, France\ The first part of my talk was an overview of the progress made and the problems which remain in deriving an effective Lagrangian which describes the non–leptonic weak interactions of the Nambu–Goldstone degrees of freedom ($K$, $\pi$ and $\eta$) of the spontaneous $SU(3)_{L}\times SU(3)_{R}$ symmetry breaking in the Standard Model. I showed, with examples, how the coupling constants of the $\vert\Delta S\vert=1$ and $\vert\Delta S\vert=2$ effective low energy Lagrangian are given by [*integrals*]{} of Green’s functions of QCD currents and density currents, while those of the strong sector (QCD only) are more simply related to the coefficients of the Taylor expansions of QCD Green’s functions. The study of these issues within the framework of the $1/N_c$ expansion, and in the [*lowest meson dominance*]{} approximation described in ref. [@PPdeR98], seems a very promising path. The second part of my talk was dedicated to a review of the properties of the $\Pi_{LR}(Q^2)$ correlation function in the large–$N_c$ limit as recently discussed in ref. [@KdeR97]. This is the correlation function which governs the [*electroweak*]{} $\pi^{+}-\pi^{0}$ mass difference. Following the discussion of ref. [@KPdeR98], I showed how the calculation of this observable, which requires non–trivial contributions from next–to–leading terms in the $1/N_c$ expansion, provides an excellent theoretical laboratory for studying issues of long– and short– distance matching in calculations of weak matrix elements of four–quark operators. 
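For orientation, two standard chiral-limit constraints on $\Pi_{LR}$ (textbook relations, not results of [@KdeR97]; the normalization of the spectral functions is a convention assumed here): if $\rho_V(s)$ and $\rho_A(s)$ denote the isovector vector and axial-vector spectral functions, with the pion pole excluded from $\rho_A$, the two Weinberg sum rules read $$\int_0^\infty ds\,\big[\rho_V(s)-\rho_A(s)\big]\,=\,f_\pi^2\,,\qquad \int_0^\infty ds\,s\,\big[\rho_V(s)-\rho_A(s)\big]\,=\,0\,,$$ and these are the constraints that any large–$N_c$ representation of $\Pi_{LR}(Q^2)$ must respect.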
The third part of my talk was dedicated to recent work reported in ref. [@KPdeR98d], where it is shown that the $K\rightarrow\pi\pi$ matrix elements of the four–quark operator $Q_7$, generated by the electroweak penguin–like diagrams of the Standard Model, can be calculated to first non–trivial order in the chiral expansion and in the $1/N_c$ expansion. I compared the results to recent numerical evaluations from lattice QCD. [12]{} [\[zyxyPPdeR98\]]{} S. Peris, M. Perrottet and E. de Rafael, JHEP [**05**]{} (1998) 011. [\[zyxyKdeR97\]]{} M. Knecht and E. de Rafael, Phys. Lett. [**B424**]{} (1998) 335. [\[zyxyKPdeR98\]]{} M. Knecht, S. Peris and E. de Rafael, Phys. Lett. [**B443**]{} (1998) 255. [\[zyxyKPdeR98d\]]{} M. Knecht, S. Peris and E. de Rafael, hep-ph/9812471. [**Bound States with Effective Lagrangians:\ Energy Level Shift in the External Field**]{}\ Vito Antonelli$^1$, Alex Gall$^1$, Jürg Gasser$^1$ and [**Akaki Rusetsky**]{}$^{1,2,3}$\ $^1$Institute for Theoretical Physics, University of Bern,\ Sidlerstrasse 5, CH-3012, Bern, Switzerland\ $^2$Laboratory of Theoretical Physics, JINR, 141980 Dubna, Russia\ $^3$IHEP, Tbilisi State University, 380086 Tbilisi, Georgia\ The recent growth of interest in both the experimental and theoretical study of the properties of hadronic atoms is motivated by the possibility of a direct determination of the strong hadronic scattering lengths from the atomic data. The detailed analysis of the $\pi^+\pi^-$ atom decay within ChPT has been carried out under the assumption of locality of the strong interactions at the atomic length scale [@Atom]. Two conceptual difficulties arise in the theory beyond this approximation: $\bullet$ Relativistic approaches to the bound-state problem deal with off-shell Green’s functions. It is at present unclear whether the ambiguity of the off-shell extrapolation in the effective theory affects the bound-state observables. $\bullet$ The bound-state observables in nonrenormalizable theories contain additional UV divergences in the matrix elements of strong amplitudes between the bound-state wave functions. It is at present unclear whether these divergences can be canceled by the same LEC’s which render the amplitudes themselves finite. Addressing these problems, a simple model of a heavy scalar particle, bound in an external Coulomb field, is considered within the nonrelativistic effective Lagrangian approach [@Nonrel]. Radiative corrections to the bound state energy levels due to the interaction with a dynamical scalar “photon” are calculated. In the model studies it is demonstrated that [@New]: $\bullet$ The ambiguity in the off-shell extrapolation of the Green’s function in the relativistic theory does not affect the bound-state spectrum. $\bullet$ Bound-state observables are made finite by the same counter-terms which render the Green’s functions finite, even in effective nonrenormalizable theories. $\bullet$ UV divergences in the nonrelativistic bound-state matrix elements are correlated by matching and cancel. [12]{} [\[zyxyAtom\]]{} H. Jallouli and H. Sazdjian, Phys. Rev. [**D58**]{}, 014011 (1998);\ M.A. Ivanov, V.E. Lyubovitskij, E.Z. Lipartia and A.G. Rusetsky, Phys. Rev. [**D 58**]{}, 094024 (1998). [\[zyxyNonrel\]]{} W.E. Caswell and G.P. Lepage, Phys. Lett. [**B 167**]{}, 437 (1986). [\[zyxyNew\]]{} V. Antonelli, A. Gall, J. Gasser and A. Rusetsky, in preparation.
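A minimal point of reference for the model described in the preceding abstract (a textbook nonrelativistic result, not taken from the work in preparation cited there): the unperturbed levels of a scalar particle of mass $M$ bound in an external Coulomb field of strength $\alpha$ are $$E_n\,=\,-\,\frac{M\alpha^2}{2n^2}\,,\qquad n=1,2,\ldots\,,$$ and it is the radiative shifts of these levels, generated by the exchange of the dynamical scalar “photon”, that are computed and used to test the points listed above.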
[**SU(3) Baryon Chiral Perturbation Theory and Long Distance Regularization**]{}\ Buğra Borasoy\ Department of Physics and Astronomy\ University of Massachusetts\ Amherst, MA 01003, USA\ Baryon chiral perturbation theory as conventionally applied has a well-known problem with the SU(3) chiral expansion: loop diagrams generate very large SU(3) breaking corrections and greatly upset the subsequent phenomenology. This problem is due to the portions of loop integrals that correspond to propagation at short distances for which the effective theory is not valid. One can reformulate the theory just as rigorously by regulating the loop integrals using a momentum-space cutoff which removes the spurious short distance physics [@dh],[@dhb]. The chiral calculations can now provide a model independent realistic description of the very long distance physics. In [@b1] and [@b2] this scheme is applied to the sigma-terms and baryon axial currents. The results are promising and show that this development may finally allow realistic phenomenology to be accomplished in SU(3) baryon chiral perturbation theory. [12]{} [\[zyxydh\]]{} J. F. Donoghue, B. R. Holstein, hep-ph/9803312, to appear in Phys. Lett. B [\[zyxydhb\]]{} J. F. Donoghue, B. R. Holstein, B. Borasoy, hep-ph/9804281, to appear in Phys. Rev. D [\[zyxyb1\]]{} B. Borasoy, hep-ph/9807453, to appear in Eur. Phys. J. C [\[zyxyb2\]]{} B. Borasoy, hep-ph/9811411, to appear in Phys. Rev. D [**Generalized Heavy Baryon Chiral Perturbation Theory and the Nucleon Sigma Term**]{}\ Robert Baur and [**Joachim Kambor**]{}\ Institut für Theoretische Physik, Universität Zürich,\ Winterthurerstr. 190, 8057 Zürich, Switzerland\ The scenario of spontaneous chiral symmetry breakdown with a small or vanishing quark condensate [@FSS90] and its phenomenological consequences for the $\pi N$–system are investigated. [@BK98] Standard Heavy Baryon Chiral Perturbation Theory is extended in order to account for the modified chiral counting rules of quark mass insertions. The effective Lagrangian is given to ${\cal O}(p^2)$ in its most general form and to ${\cal O}(p^3)$ in the scalar sector. As a first application, mass- and wave-function renormalization as well as the scalar form factor of the nucleon is calculated to ${\cal O}(p^3)$. The result is compared to a dispersive analysis of the nucleon scalar form factor [@GLS91], adapted to the case of a small quark condensate. In this latter analysis, the shift of the scalar form factor between the Cheng-Dashen point and zero momentum transfer is found to be enhanced over the result assuming strong quark condensation by up to a factor of two, with substantial deviations starting to be visible for $r=m_s/\hat{m}\leq 12$.[@BK98] As a result, the nucleon sigma term as determined from $\pi N$-scattering data decreases with decreasing quark condensate. [@Kam99] On the other hand, the sigma term can also be determined from the baryon masses. To leading order in the quark masses, $\sigma_N$ is proportional to $(r-1)^{-1}(1-y)^{-1}$, [*i.e.*]{} strongly increasing with decreasing quark condensate. If the strange quark content of the nucleon, $y$, were known, strong bounds on the light quark condensate would follow. We also consider the so-called backward sum rule for the difference of electric and magnetic polarizabilities of the nucleon. Although the effect of a small quark condensate is less pronounced here, this observable has the advantage of being directly experimentally accessible.
Detailed numerical study of this sum rule is under way. [12]{} [\[zyxyFSS90\]]{} N.H. Fuchs, H. Sazdjian and J. Stern, Phys. Lett. B238 (1990) 380; Phys. Rev. D47 (1993) 3814; J. Stern, hep-ph/9801282. [\[zyxyBK98\]]{} R. Baur and J. Kambor, [*Generalized Heavy Baryon Chiral Perturbation Theory*]{}, hep-ph/9803311, to be published in Eur. Phys. J. C (1999). [\[zyxyGLS91\]]{} J. Gasser, H. Leutwyler and M.E. Sainio, Phys. Lett. B253 (1991) 252, 260. [\[zyxyKam99\]]{} J. Kambor, in preparation. [**Heavy Baryon ChPT and Nucleon Resonance Physics**]{}\ Thomas R. Hemmert\ FZ Jülich, IKP (Th), D-52425 Jülich, Germany\ Several calculations [@delta] have appeared since the “small scale expansion” (SSE) was presented to the chiral community at the Trento workshop in 1996 [@trento],[@letter]. The idea is to incorporate the effects of low-lying nucleon resonances via a phenomenologically motivated power counting in ${\cal O}(\epsilon^n)$ with $\epsilon=\{m_\pi,p,\Delta\}$, which supersedes the standard ${\cal O}(p^n)$ power counting of HBChPT. The new scale $\Delta$ corresponds to the energy difference between the mass of a resonance and the mass of the nucleon and in SSE is counted as a small parameter[^1] of ${\cal O}(\epsilon^1)$. SSE therefore not only allows for calculations with explicit resonance degrees of freedom but also resums the resonance effects into lower orders of the perturbative expansion; for example, see the discussion regarding the spin-polarizabilities of the nucleon in [@menu]. The complete ${\cal O}(\epsilon^2)$ SSE Lagrangians for $NN$, $N\Delta$ and $\Delta\Delta$ are now worked out and published [@HHK]. Progress has also been achieved at ${\cal O}(\epsilon^3)$, where the complete Lagrangian for single nucleon transitions has been worked out [@bfhm]. The ${\cal O}(\epsilon^3)$ divergence structure was found to be quite different from the corresponding ${\cal O}(p^3)$ one in HBChPT. In addition to modifications in the beta-functions of the 22 HBChPT counter terms (c.t.s) [@Ecker] one needs 10 extra c.t.s to account for additional divergences proportional to the new scale $\Delta^n,\, n\leq 3$. The finite parts of these 10 c.t.s are utilized to guarantee a smooth transition from ${\cal O}(\epsilon^n)$ SSE to ${\cal O}(p^n)$ HBChPT for any process in the decoupling limit $m_\pi /\Delta\rightarrow 0$. [12]{} [\[zyxydelta\]]{} [*e.g.*]{} see G.C. Gellas, T.R. Hemmert, C.N. Ktorides and G.I. Poulis, preprint no. and references therein. [\[zyxytrento\]]{} J. Bijnens and U.-G. Meißner, Proceedings of the meeting at ECT$^*$, April 29 – May 10, 1996, Trento, Italy. [\[zyxyletter\]]{} T.R. Hemmert, B.R. Holstein and J. Kambor, Phys. Lett. [**B395**]{}, 89 (1997). [\[zyxymenu\]]{} T.R. Hemmert, $\pi N$ Newslett. [**13**]{}, 63 (1997). [\[zyxyHHK\]]{} T.R. Hemmert, B.R. Holstein and J. Kambor, J. Phys. [**G24**]{}, 1831 (1998). [\[zyxybfhm\]]{} V. Bernard, H.W. Fearing, T.R. Hemmert and U.-G. Mei[ß]{}ner, Nucl. Phys. [**A635**]{}, 121 (1998). [\[zyxyEcker\]]{} G. Ecker, Phys. Lett. [**B336**]{}, 508 (1996). [**Nucleon Properties to $O(p^4)$ in HBChPT**]{}\ Martin Mojžiš$^1$\ $^1$Dep. Theor. Phys., Comenius University,\ Mlynská dolina F2, SK-84215 Bratislava, Slovakia\ Complete one-loop calculations of nucleon properties in CHPT require 1\. construction of the effective Lagrangian up to the 4th order 2\. renormalization of $m_N$, $Z_N$ and $g_A$ (and higher order LECs) 3\. calculation of nucleon form-factors, cross-sections, $\ldots $   A brief summary of progress made in these three points:   1\.
A [*Mathematica*]{} program for the construction of the effective Lagrangian was developed. It reproduces successfully the known results in the 2nd and 3rd orders, but does not eliminate all the dependent terms yet (e.g. in the 3rd order it gives two terms more than it should). The number of the 4th order terms produced by the program is $155$ so far, this number will be probably slightly decreased in the final version.   2\. Nucleon wave-function renormalization is known to be a tricky issue already at the 3rd order of HBCHPT. At the 4th order yet some new subtleties enter, but all of them have become well understood recently. The discussion of general aspects of this topic, as well as explicit calculation of $m_N$, $Z_N$ and $g_A$, is to be found in [@KM99] and references therein.   3\. Work in progress. [9]{} [\[zyxyKM99\]]{} J. Kambor and M. Mojžiš, [*Field Redefinitions and Wave Function Renormalization to* ]{}$O(p^4)$[* in Heavy Baryon Chiral Perturbation Theory*]{} , hep-ph/9901235. [**Pion–nucleon scattering in heavy baryon chiral perturbation theory**]{}\ [**Nadia Fettes**]{}$^1$, Ulf-G. Meißner$^1$ and Sven Steininger$^2$\ $^1$Forschungszentrum Jülich, Institut für Kernphysik (Theorie)\ D-52425 Jülich, Germany\ $^2$Universität Bonn, Institut für Theoretische Kernphysik\ Nussallee 14-16, D-53115 Bonn, Germany\ We discuss in detail pion–nucleon scattering in the framework of heavy baryon chiral perturbation theory to third order in small momenta. In particular we show that the $1/m$ expansion of the Born graphs calculated relativistically can be recovered exactly in the heavy baryon approach without any additional momentum-dependent wave function renormalization. Since the normalization factor of the nucleon spinors, appearing in the relativistic calculation, enters the heavy baryon amplitude via the wave function renormalization[@smf], we do not expand this factor. The pion–nucleon scattering amplitude is given in terms of four second order LECs and five combinations of LECs from ${\cal L}_{\pi N}^{(3)}$. In order to fix these constants, we fit various empirical phase shifts for pion laboratory momenta between 50 and 100 MeV. As input we use the phase shifts of the Karlsruhe group[@ka85] (KA85), from the analysis of Matsinos[@em98] (EM98) and the latest update of the VPI group[@sp98] (SP98). This leads to a satisfactory description of the phase shifts up to momenta of about 200 MeV. The two S-waves are reproduced very well, whereas in the P-waves the tail of the Delta is strongly underestimated and the bending from the Roper resonance cannot be accounted for. We also predict threshold parameters, which turn out to be in good agreement with analyses based on dispersion relations. Finally we consider sub-threshold parameters and give a short comparison to other calculations[@bkm],[@mm]. [12]{} [\[zyxysmf\]]{} S. Steininger, U. Meißner and N. Fettes, JHEP 09 (1998) 008 [\[zyxyka85\]]{} R. Koch, Nucl. Phys. A448 (1986) 707 [\[zyxyem98\]]{} E. Matsinos, Phys. Rev. C56 (1997) 3014 [\[zyxysp98\]]{} R.A. Arndt, R.L. Workman et al.,\ see website http://clsaid.phys.vt.edu/ CAPS/ [\[zyxybkm\]]{} V. Bernard, N. Kaiser and Ulf-G. Mei[ß]{}ner, Nucl. Phys. A615 (1997) 483 [\[zyxymm\]]{} M. Mojžiš, Eur. Phys. J. 
C2 (1998) 181 [**Relations of Dispersion Theory\ to Chiral Effective Theories for $\pi N$ Scattering**]{}\ Gerhard Höhler\ Institut für Theoretische Teilchenphysik der Universität\ D-76128 Karlsruhe, Germany\ Predictions for $\pi N$ scattering amplitudes within the Mandelstam triangle and near its boundaries were recently derived from chiral perturbation theory (CHPT)[@Bernard],[@Mojzis]. The results were compared with predictions from partial wave analyses or from analytic continuations using various dispersion relations. Mandelstam analyticity is assumed for two steps: i\) for constraints which lead to a [**unique**]{} result of the partial wave analysis. E. Pietarinen’s expansion method (references in Ref.[@LB]) made it possible to include all data available at that time in the Karlsruhe-Helsinki solution KH80. The solution SP98 of R.A. Arndt et al.[@Arndt] is based on an [*empirical parametrization*]{} which ignores well-known nearby left-hand cuts. It includes new data, but covers only the range up to 2.1 GeV. The attempt to apply approximately the much weaker constraint from fixed-t analyticity was successful for all invariant amplitudes only up to 0.6 GeV, so above this energy the solution is questionable. ii\) For the continuation into the unphysical region we have used many single variable dispersion relations, which follow from Mandelstam analyticity[@LB]. The compatibility of these relations with KH80 supports the assumptions on the analytic properties. The comparison of the predictions from CHPT and dispersion relations should be made not only for the numerical results but [*mainly for the theoretical expressions*]{}. Plots of the integrands of fixed-t dispersion relations give important information on sub-threshold coefficients. A paper with many details can be obtained from the author [^2]. [12]{} [\[zyxyBernard\]]{} V. Bernard et al., Nucl. Phys. A615 (1997) 483; N. Fettes et al., Nucl. Phys. A640 (1998) 199 [\[zyxyMojzis\]]{} M. Mojžiš, Eur. Phys. J. C2 (1998) 181 [\[zyxyLB\]]{} G. Höhler, [*Pion-Nucleon Scattering*]{}, Landolt-Börnstein Vol.I/9b2, ed. H. Schopper, Springer, 1983; Lecture Notes in Physics 452 (1994) 199 [\[zyxyArndt\]]{} R.A. Arndt et al., nucl-th/9807087, subm. to Phys. Rev. C [**Isospin Violation in Pion–Nucleon Scattering** ]{}\ Nadia Fettes$^1$, Ulf-G. Mei[ß]{}ner$^1$ and [**Sven Steininger**]{}$^1$,$^2$\ $^1$IKP(Theorie), Forschungszentrum Jülich,\ D-52425 Jülich, Germany.\ $^2$ITKP, Uni Bonn,\ Nussallee 14-16, D-53115 Bonn, Germany.\ We construct the complete effective chiral pion-nucleon Lagrangian in the presence of virtual photons to one loop. Taking only the charged to neutral pion and the proton to neutron mass differences into account, we calculate the scattering lengths of all physical $\pi N$–scattering processes. Furthermore, we construct six independent relations among the physical scattering amplitudes, which vanish in the case of perfect isospin symmetry.
If we now take these relations at threshold, we find large violations in the isoscalar ones: $$\begin{aligned} R_1 & = & 2 \, \frac{T_{\pi^+ p \to \pi^+ p} + T_{\pi^- p \to \pi^- p} + 2 \, T_{\pi^0 p \to \pi^0 p}} {T_{\pi^+ p \to \pi^+ p} + T_{\pi^- p \to \pi^- p} - 2 \, T_{\pi^0 p \to \pi^0 p}} = 36 \% \nonumber \\[0.3em] R_6 & = & 2 \, \frac{T_{\pi^0 p \to \pi^0 p} - T_{\pi^0 n \to \pi^0 n}} {T_{\pi^0 p \to \pi^0 p} + T_{\pi^0 n \to \pi^0 n}} = 19 \% \nonumber \end{aligned}$$ Weinberg already pointed out in 1977 the possibly large value of $R_6$, but there are no (and probably never will be any) experimental data to verify this relation. On the other hand, there is hope of measuring the $\pi^0 p$ scattering length, so that one could check the predicted large violation by comparing elastic scattering of charged and neutral pions. The remaining four relations involving the isoscalar amplitudes, not given here, are small (less than 2%). [12]{} [\[zyxyms\]]{} Ulf-G. Mei[ß]{}ner and S. Steininger, Phys. Lett. B419 (1998) 403. [\[zyxyWeinberg\]]{} S. Weinberg, Trans. N.Y. Acad. of Sci. 38 (1977) 185. [\[zyxyfms1\]]{} N. Fettes, Ulf-G. Mei[ß]{}ner and S. Steininger, Nucl. Phys. A640 (1998) 199. [\[zyxyfms2\]]{} N. Fettes, Ulf-G. Mei[ß]{}ner and S. Steininger, \[hep-ph/9811366\]. [**Goldberger-Miyazawa-Oehme Sum Rule Revisited**]{}\ M.E. Sainio\ Dept. of Physics, University of Helsinki,\ P.O. Box 9, FIN-00014 Helsinki, Finland\ Recently there has been quite a lot of activity attempting to determine the value of the pion-nucleon coupling constant with high precision. The results vary roughly in the range $f^2=0.075-0.081$, which covers the previous standard value, $f^2=0.079$, but the tendency is to move the central value downwards. The Goldberger-Miyazawa-Oehme sum rule [@GMO], the forward dispersion relation for the $C^-$ amplitude at the physical threshold, provides a relationship between the isovector $s$-wave pion-nucleon scattering length, the total cross section data and the pion pole term. The coupling constant can be extracted ($x=\mu/m$, where $\mu$ and $m$ are the masses of the charged pion and the proton) $$f^2=\frac{1}{2}[1-(\frac{x}{2})^2]\times [(1+x)\mu a^-_{0+}-\mu^2 J^-] \; \; {\rm with} \; \; J^-=\frac{1}{2\pi^2}\int_0^{\infty}\frac{\sigma^-(k)}{\omega}dk,$$ where $\sigma^-=\frac{1}{2}(\sigma_- - \sigma_+)$ in terms of the $\pi^-p$ and $\pi^+p$ total cross sections. The isovector scattering length is accessible in pionic hydrogen experiments. Also, there are plenty of cross section data up to about 350 GeV/c, which allows for a determination of the integral $J^-$. However, the data around the $\Delta$-resonance will have a significant influence on the $f^2$ value [@LS]. The precision of the input information is not, at present, high enough to be able to compete with other methods of determining the pion-nucleon coupling constant, but in principle the GMO approach can relate the uncertainty in $f^2$ more directly to the uncertainties in experimental quantities. [12]{} [\[zyxyGMO\]]{} M.L. Goldberger, H. Miyazawa and R. Oehme, Phys. Rev. 99 (1955) 986 [\[zyxyLS\]]{} M.P. Locher and M.E. Sainio, Proc. XIII Int. Conf. on Particles and Nuclei, Perugia, Italy, 1993, Ed. A. Pascolini (World Scientific, Singapore, 1994), p. 590 [**Low–momentum effective theory for two nucleons**]{}\ [**E. Epelbaoum**]{}$^{1, 2}$, W. Glöckle$^1$ and Ulf-G.
Meißner$^2$\ $^1$Ruhr–Universität Bochum, Institut für theoretische Physik II\ D–44780 Bochum, Germany\ $^2$Forschungszentrum Jülich, Institut für Kernphysik (Theorie)\ D–52425 Jülich, Germany\ For the case of a Malfliet–Tjon type model potential [@1] we show explicitly that it is possible to construct a low–momentum effective theory for two nucleons. To that end we decouple the low and high momentum components of this two–nucleon potential using the method of unitary transformation [@2]. We find the corresponding unitary operator for the $s$–wave numerically [@3]. The $S$–matrices in the full space and in the subspace of low momenta are shown to be identical. This is also demonstrated numerically by solving the corresponding Schrödinger equation in the small momentum space. Using our exact effective theory we address some issues related to the effective field theory approach to the two–nucleon system [@4]. In particular, we consider the $n p$ $^3 S_1$ and $^1 S_0$ channels. Expanding the heavy repulsive meson exchange of the effective potential in a series of local contact terms, we discuss the question of naturalness of the corresponding coupling constants. We demonstrate that the quantum averages of the local expansion of the effective potential converge. This indicates that our effective theory has a systematic power counting. However, terms of rather high order should be kept in the effective potential to obtain an accurate value of the binding energy (scattering length) in the $^3 S_1$ ($^1 S_0$) channel. We hope that this study might be useful for the real case of NN–forces derived from chiral Lagrangians in the low–momentum regime. [12]{} [\[zyxy1\]]{} R.A. Malfliet and J.A. Tjon, Nucl. Phys. A127 (1969) 161 [\[zyxy2\]]{} S. Okubo, Prog. Theor. Phys. 12 (1954) 603 [\[zyxy3\]]{} E. Epelbaoum, W. Glöckle and Ulf-G. Mei[ß]{}ner, Phys. Lett. B 439 (1998) 1 [\[zyxy4\]]{} E. Epelbaoum, W. Glöckle, A. Krüger and Ulf-G. Mei[ß]{}ner,\ [nucl-th/9809084]{}, Nucl. Phys. A in print [**Effective Field Theory in the Two-Nucleon Sector**]{}\ Martin J. Savage\ Dept. of Physics, University of Washington, Seattle, WA 98915, USA.\ The two-nucleon sector contains length scales that are much larger than one would naively expect from QCD. The s-wave scattering lengths $a^{(^1S_0)} = -23.7\ {\rm fm}$ and $a^{(^3S_1)} = 5.4\ {\rm fm}$ are much greater than both $1/\Lambda_\chi\sim 0.2\ {\rm fm}$ and $1/f_\pi\sim 1.5\ {\rm fm}$. The Lagrangian density describing such interactions consists of local four-nucleon operators and nucleon-pion interactions. Weinberg[@Weinberg1] suggested expanding the NN potential in powers of the quark mass matrix and external momenta. Several phenomenological applications of this counting have been explored (e.g. [@Bira]). However, one can show that leading order graphs require counter-terms that occur at higher orders in the expansion. A power counting was suggested[@KSW] that does not suffer from these problems. The leading interaction is the four-nucleon interaction, with pion exchange entering at sub-leading order. Recently, Mehen and Stewart[@MehStew] have suggested that the scale for the breakdown of the theory is $\sim 500\ {\rm MeV}$. The deuteron can be simply incorporated into the theory and its moments, form factors[@KSW2] and polarizabilities[@CGSSpol] have been computed. The cross section for $\gamma$-deuteron Compton scattering[@CGSScompt] has been computed in this theory and agrees well with the available data.
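A small numerical illustration (a standard effective-range-theory estimate, using the quoted $a^{(^3S_1)}$ and a nucleon mass $M\simeq 939$ MeV; it is not a number quoted in the talk): keeping only the scattering length in $p\cot\delta = -1/a + \ldots$, the $^3S_1$ amplitude has a pole at the binding momentum $\gamma = 1/a^{(^3S_1)}\simeq 36$ MeV, corresponding to a deuteron binding energy $$B\,\simeq\,\frac{\gamma^2}{M}\,\simeq\,1.4\ {\rm MeV}\,,$$ to be compared with the physical value of $2.22$ MeV. The shallowness of this pole relative to $\Lambda_\chi$ is what forces the leading four-nucleon interaction to be treated nonperturbatively.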
[12]{} [\[zyxyWeinberg1\]]{} S. Weinberg, ; ; . [\[zyxyBira\]]{} C. Ordonez and U. van Kolck, [*Phys. Lett.*]{} B [**291**]{}, 459 (1992); C. Ordonez, L. Ray and U. van Kolck, [*Phys. Rev. Lett.*]{} [**72**]{}, 1982 (1994); [*Phys. Rev.* ]{} C [**53**]{}, 2086 (1996); U. van Kolck, [*Phys. Rev.* ]{} C [**49**]{}, 2932 (1994). [\[zyxyKSW\]]{} D.B. Kaplan, M.J. Savage and M.B. Wise, ; . [\[zyxyMehStew\]]{} T. Mehen and I.W. Stewart, [nucl-th/9809071]{}; [nucl-th/9809095]{}. [\[zyxyKSW2\]]{} D.B. Kaplan, M.J. Savage and M.B. Wise, [nucl-th/9804032]{}. [\[zyxyCGSSpol\]]{} J.W. Chen, H. W. Griesshammer, M. J. Savage and R. P. Springer,\ [nucl-th/9806080]{}. [\[zyxyCGSScompt\]]{} J.W. Chen, H. W. Griesshammer, M. J. Savage and R. P. Springer,\ [nucl-th/9809023]{}. J.W. Chen, [nucl-th/9810021]{}. [**Peripheral NN-Scattering and Chiral Symmetry**]{}\ Norbert Kaiser\ Physik Department T39, TU München, D-85747 Garching, Germany.\ We evaluate in one-loop chiral perturbation theory all $1\pi$- and $2\pi$-exchange contributions to the elastic NN-interaction. We find that the diagrams with virtual $\Delta(1232)$-excitation produce the correct amount of isoscalar central attraction as needed in the peripheral partial waves ($L\geq 3$). Thus there is no need to introduce the fictitious scalar isoscalar $\sigma$-meson. We also compute the two-loop diagrams involving the $\pi\pi$-interaction (so-called correlated $2\pi$-exchange). Contrary to common belief, these graphs lead to a negligibly small and repulsive NN-potential. Vector meson ($\rho$ and $\omega$) exchange becomes important for the F-waves above $T_{\rm lab}=150$ MeV. Without adjustable parameters we reproduce the empirical NN-phase shifts with $L\geq 3$ and mixing angles with $J\geq 3$ up to $T_{\rm lab}=350$ MeV. Further details on the subject can be found in refs. [@kbw], [@kgw]. [12]{} [\[zyxykbw\]]{} N. Kaiser, R. Brockmann, W. Weise, Nucl. Phys. A626 (1997) 758 [\[zyxykgw\]]{} N. Kaiser, S. Gerstendörfer and W. Weise, Nucl. Phys. A637 (1998) 395 [**Compton Scattering on the Deuteron in HBChPT**]{}\ Silas R. Beane\ Department of Physics,\ University of Maryland,\ College Park, MD 20742 USA\ There exists a systematic procedure for computing nuclear processes involving an external pionic or electromagnetic probe at energies of order $M_\pi$ [@weinberg]. A perturbative kernel is calculated in baryon HBChPT and folded between phenomenological nuclear wavefunctions. A computation of $\pi^0$ photoproduction on the deuteron has been performed to $O(q^4)$ in HBChPT [@photo]. This process is of interest because an accurate measurement of the deuteron electric dipole amplitude (EDA) allows a model-independent extraction of the neutron EDA, to this order in HBChPT. Recent experimental results from SAL indicate a value for the deuteron EDA which is very close to the HBChPT prediction, which in turn implies a large neutron EDA. In a similar spirit, experimental information about the $\pi$-nucleus scattering lengths can be used to learn about $\pi$-$N$ scattering lengths [@weinberg][@bblm]. A recent accurate measurement of the $\pi$-deuteron scattering length constrains HBChPT counter-terms which contribute to the isoscalar S-wave $\pi$-$N$ scattering length, which is a particularly problematic observable. Compton scattering on the deuteron has been computed to order $O(q^3)$ in HBChPT [@compton]. Our predictions at low energies are in agreement with old Illinois data.
This process has been measured at SAL at higher photon energies ($95MeV$) and data is currently being analyzed. An ingredient of the deuteron process is the neutron polarizability, which cannot be directly measured. Thus our calculation provides a systematic means of learning about the neutron polarizability. [12]{} [\[zyxyweinberg\]]{} S. Weinberg, Phys. Lett. B [**295**]{} (1992) 114. [\[zyxyphoto\]]{} S.R. Beane, C.Y. Lee and U. van Kolck, [nucl-th/9506017]{}, Phys. Rev. C [**52**]{} (1995) 2914; S.R. Beane, V. Bernard, T.S.H. Lee, Ulf-G. Mei[ß]{}ner and U. van Kolck, [nucl-th/9702226]{}, Nucl. Phys. A [**618**]{} (1997) 381. [\[zyxybblm\]]{} S.R. Beane, V. Bernard, T.S.H. Lee, and Ulf-G. Mei[ß]{}ner,\ [nucl-th/9708035]{}, Phys. Rev. C [**57**]{} (1998) 424. [\[zyxycompton\]]{} S.R. Beane, M. Malheiro, D.R. Phillips and U. van Kolck, DOE/ER/40762-164. [**(a) Baryon $\chi$PT and (b) Mesons at Large $N_c$**]{}\ H. Leutwyler\ Institute for Theoretical Physics, University of Bern,\ Sidlerstr. 5, CH-3012 Bern, Switzerland\ I gave a brief introduction to some of the work being carried out at Bern:[^3]\ (a) Thomas Becher and I have shown that the approach of Tang and Ellis [@Tang; @Ellis] can be developed into a coherent, manifestly Lorentz invariant formulation of B$\chi$PT that preserves chiral power counting. The method avoids some of the shortcomings of HB$\chi$PT. In particular, it allows us to analyze the low energy structure also in cases where the straightforward nonrelativistic expansion of the amplitudes in powers of momenta and quark masses fails. As an illustration, I discussed the application of the method to the $\sigma$-term and to the corresponding form factor. A preliminary version of a paper on the subject is available [@Becher; @Leutwyler].\ (b) I then briefly described the work on the large $N_c$ limit in the mesonic sector, done in collaboration with Roland Kaiser. We use a simultaneous expansion of the effective Lagrangian in powers of derivatives, quark masses and $1/N_c$ and account for the terms of first non-leading order, as well as for the one loop graphs – although these only occur at next-to-next-to leading order, they are not irrelevant numerically on account of chiral logarithms. I drew attention to related work by Feldmann, Kroll and Stech [@Feldmann]. We are currently grinding out the numerical implications for the various observables of interest. Some of the work is described in ref. [@Kaiser; @Leutwyler]. A more extensive report is under way. [12]{} [\[zyxyTang Ellis\]]{} H. B. Tang, [*A new approach to $\chi$PT for matter fields*]{}, hep-ph/9607436\ P. J. Ellis and H. B. Tang, Phys. Rev. C57 (1998) 3356 [\[zyxyBecher Leutwyler\]]{} Thomas Becher and H. Leutwyler, [ *Baryon $\chi$PT in Manifestly Lorentz Invariant Form*]{}, available at becher@itp.unibe.ch [\[zyxyFeldmann\]]{} T. Feldmann, P. Kroll and B. Stech, Phys. Rev. D58 (1998) 114006 and [*Mixing and decay constants of the pseudoscalar mesons: the sequel*]{}, hep-ph/9812269 [\[zyxyKaiser Leutwyler\]]{} R. Kaiser, [*Diploma work*]{}, University of Bern (1997);\ R. Kaiser and H. Leutwyler, [*Pseudoscalar decay constants at large $N_c$*]{}, Proc. Workshop on Nonperturbative Methods in Quantum Field Theory, NITP/CSSM, University of Adelaide, Australia, Feb. 1998, hep-ph/9806336. [**Baryon $\chi$PT in Manifestly Lorentz Invariant Form[^4]**]{}\ [**T. Becher**]{} and H. Leutwyler\ Universität Bern, Institut für theoretische Physik,\ Sidlerstr. 
5, CH-3012 Bern, Switzerland\ The effective theory describing the interaction of pions with a single nucleon can be formulated in manifestly Lorentz invariant form [@GSS], but it is not a trivial matter to keep track of the chiral order of graphs containing loops within that framework: The chiral expansion of the loop graphs in general starts at the same order as the corresponding tree graphs, so that the renormalization of the divergences also requires a tuning of those effective coupling constants that occur at lower order. Most of the recent calculations avoid this complication by expanding the baryon kinematics around the nonrelativistic limit. This method is referred to as heavy baryon chiral perturbation theory [@JMBKKM]. It keeps track of the chiral power counting at every step of the calculation at the price of manifest Lorentz invariance, but suffers from a deficiency: The corresponding perturbation series fails to converge in part of the low energy region. The problem is generated by a set of higher order graphs involving insertions in nucleon lines – this sum diverges. The problem does not occur in the relativistic formulation of the effective theory. The purpose of this talk was to present a method [@BL] that exploits the advantages of the two techniques and avoids their disadvantages. We showed that the infrared singularities of the one-loop graphs occurring in B$\chi$PT can be extracted in a relativistically invariant fashion and that this result can be used to set up a renormalization scheme that preserves both Lorentz invariance and chiral power counting. The method we have presented follows the approach of Tang and Ellis [@ET], but we do not expand the infrared singular part in a chiral series, because that expansion does not always converge. [12]{} [\[zyxyGSS\]]{} J. Gasser, M. E. Sainio and A. Svarc, Nucl. Phys. B307 (1988) 779 [\[zyxyJMBKKM\]]{} E. Jenkins and A. V. Manohar, Phys. Lett. B255 (1992) 558\ V. Bernard, N. Kaiser, J. Kambor, U.-G. Meissner, Nucl. Phys. B388 (1992) 315 [\[zyxyBL\]]{} T. Becher, H. Leutwyler, Work in progress [\[zyxyET\]]{} H. B. Tang, hep-ph/9607436\ P. J. Ellis and H. B. Tang, Phys. Rev. C57 (1998) 3356 [**Chiral U(3)[$\times$]{}U(3) at large [$N_{\!c}$]{}: the [$\eta\!-\!\eta'$]{} system** ]{}\ [**R. Kaiser**]{} and H. Leutwyler\ Institute for Theoretical Physics, University of Bern,\ Sidlerstr. 5, CH-3012 Bern, Switzerland\ In the large $N_{\!c}$ limit the variables required to analyze the low energy structure of QCD in the framework of an effective field theory necessarily include the degrees of freedom of the $\eta'$. In a previous analysis [@KL1] we demonstrated that the effective Lagrangian prescription of the pseudoscalar nonet, pions, kaons, $\eta$ and $\eta'$, yields results consistent with nature. The calculation relies on a simultaneous expansion in powers of momenta, quark masses and $1/N_{\!c}$ which is truncated at first non-leading order. In particular we were able to calculate the decay constants of the $\eta$ and the $\eta'$ using experimental data on the decay rates of the neutral mesons into two photons. The main effect generated by the corrections to the well known leading order results concerns $\eta-\eta'$ mixing: at this order of the low energy expansion we need to distinguish two mixing angles. The purpose of the present talk is to report on the results obtained when the above analysis is carried further on to systematically include the non-leading corrections to the decay amplitudes [@KL2] (In ref. 
[@KL1] we disregarded the effect of SU(3) breaking in the electromagnetic decay rates). Furthermore we include the effects of the one-loop graphs: although algebraically these contributions are suppressed by one power of $1/N_{\!c}$, some of them are enhanced by logarithmic factors. The points of main interest are the following: (a) is the hypothesis that the $1/N_{\!c}$ corrections are small valid for the world we live in, and (b) what happens to the low energy theorem for the difference between the two mixing angles if we include the one-loop graphs. Unfortunately, the analysis now involves a larger number of unknown low energy constants, so that the numerical analysis yields ranges for the observables rather than definite values. [12]{} [\[zyxyKL1\]]{} R. Kaiser, [*Diploma work*]{}, University of Bern (1997);\ H. Leutwyler, Proc. QCD 97, Montpellier, France, July 1997, Ed. S. Narison, Nucl. Phys. B (Proc. Suppl.) 64 (1998) 223, hep-ph/9709408. [\[zyxyKL2\]]{} R. Kaiser and H. Leutwyler, [*Pseudoscalar decay constants at large $N_{\!c}$*]{}, Proc. Workshop on Nonperturbative Methods in Quantum Field Theory, NITP/CSSM, University of Adelaide, Australia, Feb. 1998, hep-ph/9806336; R. Kaiser and H. Leutwyler, work in progress. [**Derivation of the Chiral Lagrangian from the Instanton Vacuum**]{}\ \[0.5cm\] Dmitri Diakonov\ \[0.3cm\] NORDITA, Blegdamsvej 17, 2100 Copenhagen Ø, Denmark\ \[0.3cm\] The QCD vacuum is populated by strong fluctuations of the gluon field carrying topological charge; these fluctuations are called instantons. A theory of the QCD instanton vacuum was suggested in the 80’s, based on the Feynman variational principle [@DP1]. In recent years it has been strongly supported by direct lattice simulations [@Negele]. Instantons lead to a theoretically beautiful and phenomenologically realistic microscopic mechanism of chiral symmetry breaking (for a review see [@D1]), which has also recently been checked directly on the lattice [@Negele]. Chiral symmetry breaking manifests itself in quarks acquiring a momentum-dependent dynamical mass, sometimes called the constituent mass, and in the appearance of pseudo-Goldstone pions interacting with the dynamically massive quarks. The strength and the form factors of the pion-quark interactions are fixed unambiguously by the basic parameters of the instanton medium, and agree nicely with phenomenology [@D2]. Integrating out quarks, one gets the chiral Lagrangian, whose low-momentum expansion and asymptotic form have been investigated [@D2]. [12]{} [\[zyxyDP1\]]{} D.Diakonov and V.Petrov, Nucl. Phys. B245 (1984) 259; D.Diakonov, M.Polyakov and C.Weiss, Nucl. Phys. B461 (1996) 539 [\[zyxyNegele\]]{} T.L.Ivanenko and J.W.Negele, Nucl. Phys. Proc. Suppl. B63 (1998) 504, hep-lat/9709129; J.W.Negele, hep-lat/9810053 [\[zyxyD1\]]{} D.Diakonov, in: [*Selected Topics in Non-perturbative QCD*]{}, Proc. Enrico Fermi School, Course CXXX, A.Di Giacomo and D.Diakonov, eds., Bologna, (1996) p.397, hep-ph/9602375 [\[zyxyD2\]]{} D.Diakonov, hep-ph/9802298, to be published by World Scientific. [**Theoretical Study of Chiral Symmetry Breaking in QCD: Recent Development**]{}\ Jan Stern\ Groupe de Physique Theorique, IPN,\ 91406 Orsay, France\ If forthcoming experiments confirmed the actual value of the $\pi\pi$ scattering length $a^0_0=0.26 \pm 0.05$ with a smaller error bar, we would have to conclude that the condensate $<\bar q q>$ is considerably smaller than one usually believes.
I have presented few theoretical speculations inspired by this possibility: 1) Chiral symmetry breaking is due to the accumulation of small eigenvalues of the Euclidean Dirac operator in the limit of a large volume V. Since the fermion determinant reduces the weight of small eigenvalues, order parameters $F^2_\pi $ and $<\bar q q>$ are suppressed for large $N_f / N_c$ and the symmetry will be restored for $N_f/N_c >n_0 $ , (likely, $n_0=3.25$). The condensate could disappear more rapidly than $F^2_\pi$, since it is merely sensitive to the smallest eigenvalues behaving as $1/V$. It is even conceivable that for $n_1<N_f/N_c<n_0$, there is a phase in which $<\bar q q> =0$ but $F_\pi \ne 0$, implying symmetry breaking and the existence of Goldstone bosons[@Stern1]. Close to the critical point $n_1$, it would then be natural to expect a tiny quark condensate. 2) Kogan, Kovner and Shifman [@Kogan] have pointed out that due to Weingarten’s inequality, the bare, cutoff dependent condensate cannot vanish unless $F_\pi =0$. This, however, does not exclude vanishing of the renormalized condensate (assuming QCD-like sign of its anomalous dimension) and the existence of the critical point $n_1<n_0$ remains an open possibility. 3) Below but close to $n_1$ one should expect important Zweig rule violation and huge corrections to large $N_c$ predictions [@Stern2]. 4) The critical point $n_1$ could be seen through the volume dependence of Leutwyler-Smilga sum rules for Dirac eigenvalues [@Descotes]. The latter could be investigated numerically, diagonalizing the discretized square of the continuum massless Dirac operator, thereby avoiding the well known difficulties with massless fermions on the lattice . [12]{} [\[zyxyStern1\]]{} Jan Stern, hep-ph/9801282 ,(unpublished) [\[zyxyKogan\]]{} I.I.Kogan, A.Kovner and M.Shifman hep-ph/9807286 [\[zyxyStern2\]]{} Jan Stern, to be published [\[zyxyDescotes\]]{} Sebastien Descotes and Jan Stern, to be published [**NUMERICAL SOLUTIONS OF ROY EQUATIONS**]{}\ Gilberto Colangelo\ Institut für Theoretische Physik, Universität Zürich\ Winterthurerstr. 190, CH–8057 Zürich\ Roy equations [@Roy] are a system of coupled integral equations for the partial wave amplitudes of $\pi \pi$ scattering that incorporate the properties of analyticity, unitarity and crossing symmetry. Besides the dispersive integrals containing the Roy kernels and the partial wave amplitudes, the equations contain polynomial terms which depend on two constants only, i.e. the two $S$-wave scattering lengths. Solutions of Roy equations depend on the input values of these two constants. I have reported about recent work [@roy_coll] in solving numerically the Roy equations, describing both the method and the results. Our aim is to extract information on the low-energy behaviour of the $\pi \pi$ scattering partial wave amplitudes. For what concerns the high energy part, we use all the available information (coming both from experimental data and, where these are not available, from theoretical modelling) as input in the Roy equations, and solve them numerically only in the low-energy region. Since unitarity, analyticity and crossing do not constrain in any way the two $S$-wave scattering lengths on which the solution depends, we need additional information on those. This information comes from chiral symmetry, and to provide it we match the chiral representation of the amplitude (now available at the two loop level [@BCEGS]) to the dispersive one, at low energy. 
I have detailed how this matching is done, and have shown that the outcome of this combination of two different theoretical tools, together with the experimental information available on this process, is a very precise representation for the $\pi \pi$ scattering amplitude at low energy. New $K_{l4}$ data on the $\pi \pi$ phase shifts near threshold, expected in the near future from the E865 collaboration at BNL and from KLOE at DA$\Phi$NE (the Frascati $\Phi$ factory), will test the validity of this representation with high accuracy. A different experimental test will also come from the measurement of the pionic atoms lifetime made by the DIRAC collaboration: this measurement will give direct access to the difference of the two $S$–wave scattering lengths. [12]{} [\[zyxyRoy\]]{} S. M. Roy, Phys. Lett. 36B (1971) 353. [\[zyxyroy\_coll\]]{} B. Ananthanarayan, G. Colangelo, J. Gasser, H. Leutwyler and G. Wanders, work in progress. [\[zyxyBCEGS\]]{} J. Bijnens, G. Colangelo, G. Ecker, J. Gasser and M. Sainio, Phys. Lett. B374 (1996) 210, and Nucl. Phys. B508 (1997) 263. [**One-channel Roy equation revisited**]{}\ [**J. Gasser**]{}$^1$ and [G. Wanders]{}$^2$\ $^1$ Institut für Theoretische Physik, Universität Bern,\ Sidlerstrasse 5, CH–3012 Bern, Switzerland\ $^2$ Institut de Physique Théorique, Université de Lausanne,\ CH–1015 Lausanne, Switzerland\ The Roy equation [@roy] in the single channel case amounts to a nonlinear, singular integral equation for the phase shift $\delta$ in the low–energy region, $$\begin{aligned} \label{eqone} \frac{1}{2\sigma(s)}\sin{[2\delta(s)]}&=&a+\frac{(s-4)}{\pi} P\hspace{-.38cm}\int_4^\infty \frac{dx}{x-4}\frac{\omega(x)}{x-s}\;,\;\nonumber\\ \omega(x)&=&\left\{\begin{array}{cl} \sigma(x)^{-1}\sin^2[\delta(x)] &; \; 4 \leq x \leq s_0\\ A(x) & ; \;x \geq s_0 \;, \end{array}\right..\end{aligned}$$ Here, $\sigma(s)=(1-4/s)^{1/2}$ is the phase space factor, and $A(x)$ denotes the imaginary part of the partial wave above the matching point $s_0$. The integral is a principal value one. The problem consists in solving (\[eqone\]) in the interval $[4,s_0]$ at given scattering length $a$ and given imaginary part $A$. Investigating the infinitesimal neighborhood of a solution, the following proposition was established some time ago [@pw]: [ *Let $\delta$ be a solution of equation (\[eqone\]) with input $(a,A)$. It is an isolated solution if $-\frac{\pi}{2} < \delta(s_0)<\frac{\pi}{2}$. If $\delta(s_0)> \frac{\pi}{2}$, the infinitesimal neighborhood of $\delta$ is an $m$–parameter family of solutions $\delta'$ with $\delta'(s_0)=\delta(s_0)$, where $m$ is the integral part of $2\delta(s_0)/\pi$ for $\delta(s_0)>\pi/2$, and zero otherwise.*]{} In [@gw], we recall the derivation of this proposition – in particular, we detail its connection with the homogeneous Hilbert problem on a finite interval. In addition, we construct explicit expressions for amplitudes that solve the full, nonlinear Roy equation (\[eqone\]) exactly. These amplitudes contain free parameters that render the non–uniqueness of the solution manifest. The amplitudes develop, however, in general an unphysical singular behaviour at the matching point $s_0$. This singularity disappears and uniqueness is achieved if one uses analyticity properties of the amplitudes that are not encoded in the Roy equation. [12]{} [\[zyxyroy\]]{} S.M. Roy, Phys. Lett. B 36 (1971) 353. [\[zyxypw\]]{} C. Pomponiu and G. Wanders, Nucl. Phys. B 103 (1976) 172. [\[zyxygw\]]{} J. Gasser and G. Wanders, to appear. 
[**How do the uncertainties on the input affect the solution of the Roy equations?**]{}\ Gérard Wanders\ Institut de physique théorique, Université de Lausanne,\ CH-1015 Lausanne, Switzerland\ E-mail: Gerard.Wanders@ipt.unil.ch\ An update of previous work [@Pom1], [@Epe2] on the possible non-uniqueness of the solution of the $S$- and $P$-wave Roy equations [@Roy3] and its sensitivity to the uncertainties in the input is under way. The Roy equations determine the $S$- and $P$-wave phase shifts in a low-energy interval $[2M_\pi,E_0]$. The $S$-wave scattering lengths and absorptive parts above $E_0$ are part of the input. The Roy equations are used nowadays for the extrapolation of the available data down to threshold. Colangelo et al. [@Col5] take $E_0=800$ MeV, a value for which the solution is unique \[there is a continuous family of solutions if $E_0>860$ MeV\]. The result of a variation of the input has been derived in [@Epe2] for $E_0=1.1$ GeV and this is now being redone for $E_0=800$ MeV. The linear response to small variations of the scattering lengths has been determined using a model with $a_0^0=0.2$ and $a_0^2=-0.041$. The effect is strongest on the $P$-wave. The phase shift $\delta_1^1$ is changed into $\delta_1^1+\Delta_1^1$ and the ratio $\Delta_1^1/\delta_1^1$ is of the same order of magnitude as $\delta a_0^0/a_0^0$ or $\delta a_0^2/a_0^2$ over the whole interval $[2M_\pi,E_0]$ when $a_0^0$ and $a_0^2$ are varied separately. According to the previous talk [@Gas6], $\Delta_1^1/\delta_1^1$ exhibits a cusp at $E_0$. This cusp is extremely sharp and plays a role in the numerical analysis [@?7]. In conformity with a universal curve in the $(a_0^0,a_0^2)$-plane, $\Delta_1^1\sim(\delta a_0^0-4.9\delta a_0^2)$ over $[2M_\pi,E_0]$. The effects on the $S$-waves are less spectacular and the cusps are practically invisible. [12]{} [\[zyxyPom1\]]{} C. Pomponiu and G. Wanders, Nucl. Phys. B103 (1976) 172 [\[zyxyEpe2\]]{} L. Epele and G. Wanders, Nucl. Phys. B137 (1978) 521 [\[zyxyRoy3\]]{} S. M. Roy, Phys. Lett. 36B (1971) 353 [\[zyxyCol5\]]{} G. Colangelo, Contribution to this Conference [\[zyxyGas6\]]{} J. Gasser, Contribution to this Conference [\[zyxy?7\]]{} J. Gasser, private communication;\ B. Ananthanarayan et al., work in progress. [**Properties of a possible scalar nonet**]{}\ Deirdre Black$^1$, Amir Fariborz$^1$, Francesco Sannino$^2$\ and [**Joseph Schechter**]{}$^1$\ $^1$Physics Dept, Syracuse University,\ Syracuse NY 13244-1130, USA\ $^2$Physics Dept, Yale University,\ New Haven CT 06520-8120\ It was found that a light sigma-type meson and a light kappa-type meson are needed to preserve unitarity in, respectively, models of $\pi\pi$ [@1] and $\pi K$ [@2] scattering. These models are based on a chiral Lagrangian and yield simple approximate amplitudes which satisfy both crossing symmetry and the unitarity bounds. Together with the well established $f_0(980)$ and $a_0(980)$ mesons these would fill up a low-lying nonet. We investigate the “family” relationship of the members of this postulated nonet. We start by considering Okubo’s original formulation [@3] of “ideal mixing”. It is noted that the original mass sum rules possess another solution which has a natural interpretation describing a meson nonet made from a dual quark and a dual anti-quark. This has the same structure as a model for the scalars proposed [@4] by Jaffe in the context of the MIT bag model. However, our masses do not exactly satisfy the alternate ideal mixing sum rule.
For this reason, and also to obtain the experimental pattern of decay modes, we introduce additional terms which break ideal mixing. Then two different solutions giving correct masses but characterized by different scalar mixing angles emerge. The solution which yields decay widths agreeing with experiment has a mixing angle about $-17^o$ while the other has a mixing angle about $-90^o$. For comparison, ideal mixing for a quark anti-quark nonet is $\pm 90^o$ while ideal mixing for a dual quark, dual anti-quark nonet is $0^o$. Hence the dual picture is favored. A more detailed description of this work is given in the report[@5]. [12]{} [\[zyxy1\]]{} M. Harada, F. Sannino and J.Schechter, Phys. Rev.D[**54**]{}, 1991(1996). [\[zyxy2\]]{} D. Black, A. Fariborz, F. Sannino and J. Schechter, Phys. Rev.D[**58**]{}, 054012(1998). [\[zyxy3\]]{} S. Okubo, Phys. Lett.[**5**]{},165(1963). [\[zyxy4\]]{} R. Jaffe, Phs. Rev.D[**15**]{},267(1977). [\[zyxy5\]]{} D. Black, A. Fariborz, F. Sannino and J. Schechter hep-ph/9808415. [**ChPT Phenomenology in the Large–$N_C$ Limit**]{}\ [**A. Pich**]{}\ Department of Theoretical Physics, IFIC, Univ. Valencia – CSIC\ Dr. Moliner 50, 46100 Burjassot (Valencia), Spain.\ Chiral symmetry provides strong low–energy constraints on the dynamics of the Goldstone bosons. However, we need additional input to analyze physics at higher energy scales. The following two examples show that very useful information can be obtained from the large–$N_C$ limit of QCD: - Using our present knowledge on effective hadronic theories, short–distance QCD information, the $1/N_C$ expansion, analyticity and unitarity, we derive an expression for the pion form factor [@GP:97] in terms of $m_\pi$, $M_\rho$ and $f_\pi$. This parameter–free prediction provides a surprisingly good description of the experimental data up to energies of the order of 1 GeV. A similar analysis of the $K\pi$ scalar form factor, needed for the determination of the strange quark mass, is in progress. - The dispersive two–photon contribution to the $K_L\to\mu^+\mu^-$ decay amplitude is analyzed using chiral perturbation theory techniques and large–$N_C$ considerations. A consistent description [@GP:98] of the decays $\pi^0\to e^+e^-$, $\eta\to\mu^+\mu^-$ and $K_L\to\mu^+\mu^-$ is obtained. Moreover, the present data allow us to derive a useful constraint on the short–distance $K_L\to\mu^+\mu^-$ amplitude, which could be improved by more precise measurements of the $\eta\to\mu^+\mu^-$ and $K_L\to\mu^+\mu^-$ branching ratios. This offers a new possibility for testing the flavour–mixing structure of the Standard Model. As a by–product one predicts $B(\eta\to e^+e^-) = (5.8\pm 0.2) \times 10^{-9}$ and $B(K_L\to e^+e^-) = (9.0\pm 0.4) \times 10^{-12}$. [12]{} [\[zyxyGP:97\]]{} F. Guerrero and A. Pich, Phys. Lett. B412 (1997) 382. [\[zyxyGP:98\]]{} D. Gómez Dumm and A. Pich, Phys. Rev. Lett. 80 (1998) 4633. [**Gauge-invariant Effective Field Theory\ for a Heavy Higgs Boson**]{}\ [**Andreas Nyffeler**]{}$^1$ and Andreas Schenk$^2$\ $^1$DESY, Platanenallee 6, D-15738 Zeuthen, Germany; nyffeler@ifh.de\ $^2$Hambacher Straße 14, D-64625 Bensheim-Gronau, Germany.\ The method of effective field theory has repeatedly been used to analyze the symmetry breaking sector of the Standard Model and to parametrize effects of new physics. 
Comparing the theoretical predictions for the low energy constants in the corresponding effective Lagrangian for different models with experimental constraints might help to distinguish between different underlying theories before direct effects become visible. At low energies the Standard Model with a heavy Higgs boson in the spontaneously broken phase, which serves as a reference point for a strongly interacting symmetry breaking sector, can adequately be described by such an effective field theory. Moreover, the low energy constants can explicitly be evaluated using perturbative methods by matching the full and the effective theory at low energies. Several groups have performed such a calculation for a heavy Higgs boson in recent years [@HM_EM_DGK]. In gauge theories there are, however, some subtleties involved when this matching is performed with gauge-dependent Green’s functions. We therefore proposed a manifestly gauge-invariant approach in Ref. [@Abelian_Higgs] which deals only with gauge-invariant Green’s functions. In this talk we presented the extension of our method to the non-abelian case. Using a generating functional of gauge-invariant Green’s functions for the bosonic sector of the Standard Model which was discussed recently [@SM_gaugeinv], we evaluated the effective Lagrangian for a heavy Higgs boson at the one-loop level and at order $p^4$ in the low-energy expansion by matching corresponding gauge-invariant Green’s functions at low energies. A detailed description of our calculation which preserved gauge invariance throughout the matching procedure and a comparison of our results with those obtained by the other groups [@HM_EM_DGK] can be found in Ref. [@Heavy_Higgs_gaugeinv]. [12]{} [\[zyxyHM\_EM\_DGK\]]{} M. J. Herrero and E. R. Morales, Nucl. Phys. B437, 319 (1995); D. Espriu and J. Matias, Phys. Lett. B341, 332 (1995); S. Dittmaier and C. Grosse-Knetter, Nucl. Phys. B459, 497 (1996). [\[zyxyAbelian\_Higgs\]]{} A. Nyffeler and A. Schenk, Phys. Rev. D53, 1494 (1996). [\[zyxySM\_gaugeinv\]]{} A. Nyffeler and A. Schenk, DESY Preprint, DESY 98-202, December 1998, hep-ph/9812437. [\[zyxyHeavy\_Higgs\_gaugeinv\]]{} A. Nyffeler and A. Schenk, Effective field theory and gauge-invariant matching for a heavy Higgs boson, in preparation. [**The Goldberger-Treiman Discrepancy in SU(3)**]{}\ [**J. L. Goity**]{}$^1$, R. Lewis$^2$, M. Schvellinger$^3$, and L. Zhang$^1$\ [*$^1$Dept. Phys., Hampton University, Hampton, VA 23668, USA,\ and Jefferson Lab, Newport News, VA 23606, USA.\ $^2$Dept. of Phys., University of Regina, Regina, Canada.\ $^3$Dept. of Phys., Universidad Nac. de La Plata, La Plata, Argentina.\ *]{} We studied the Goldberger-Treiman discrepancy (GTD)[@GTD] in the baryon octet in the framework of HBChPT. We confirm the previous conclusion[@Gasser],[@HUGS] that at leading order (order $p^2$) the discrepancy is entirely given by contact terms in the baryon effective Lagrangian of order $p^3$, and demonstrate that the subleading corrections are of order $p^4$. At leading order and in the isospin symmetry limit there are only two terms that affect the GTD, namely, $$\begin{aligned} {\cal {L}}^{(3)}_{GTD} &=& F_{19} {\rm Tr} ({\bar{B}} S_v^\mu [\nabla_\mu\chi_{-}, \, B]) \nonumber \\ &+& D_{19} {\rm Tr} ({\bar{B}} S_v^\mu \{\nabla_\mu\chi_{-}, \, B\}), \nonumber \label{L3}\end{aligned}$$ where the notation is standard. 
We analyze the discrepancies that can be determined from the available meson-nucleon couplings, namely $\pi{N}N$, $KN\Lambda$ and $KN\Sigma$, and conclude that only the smaller values of the $g_{\pi NN}$ coupling lead to a consistent picture where the $p^4$ corrections to the discrepancies would have natural size. We also check that with the smaller values of $g_{\pi NN}$ the Dashen-Weinstein relation is well satisfied, while it is badly violated for the larger values. [12]{} [\[zyxyGTD\]]{} J. L. Goity, R. Lewis, M. Schvellinger and L. Zhang, Preprint JLAB-THY-98-51, hep-ph/9901374 [\[zyxyGasser\]]{} J. Gasser, M. E. Sainio and A Švarc, Nucl. Phys. [**B307**]{} (1988) 779. [\[zyxyHUGS\]]{} U-G. Meissner, in Themes in Strong Interactions", Proceedings of the $12^{th}$ HUGS at CEBAF, J. L. Goity Editor, World Scientific (1998) 139, and references therein. [**Tau decays and Generalized $\chi$PT**]{}\ Luca Girlanda\ Groupe de Physique Théorique, Institut de Physique Nucléaire,\ F-91406 Orsay, France.\ The hadronic matrix element of the decays $\tau \rightarrow 3 \pi \nu_{\tau}$ contains a large spin-1 part, dominated by the $a_1$ resonance, and a small spin-0 part, which is proportional to the divergence of the axial current and then to the light quark mass. The latter, involving the explicit chiral symmetry breaking sector of the theory, is expected to be very sensitive to the size of the quark anti-quark condensate $\langle \bar q q \rangle$. In order to investigate this sensitivity we use the generalized version of $\chi$PT [@gchpt]. The $SU(2) \times SU(2)$ generalized chiral lagrangian is constructed and renormalized at one-loop level. We then compute the nine structure functions (see Ref. [@km]) for both the charge modes $\tau \rightarrow \pi^- \pi^- \pi^+ \nu_{\tau}$ and $\tau \rightarrow \pi^0 \pi^0 \pi^- \nu_{\tau}$. In the limit of a large condensate we recover the results of standard $\chi$PT, found in Ref. [@gilberto]. The spin-1 contribution is kinematically suppressed, at threshold, leading to sizeable azimuthal asymmetries which depend strongly on the size of $\langle \bar q q \rangle$, up to rather high values of the hadronic invariant mass $Q^2$. The integrated left-right asymmetry for the all-charged mode, for instance, varies from $(28 \pm 4) \%$ up to $(60 \pm 6) \%$ at $Q^2 = 0.35\,\, \mbox{GeV}^2$ if the condensate decreases from its standard value down to zero[^5]. Unfortunately the branching ratio in this kinematical region is quite small: the largest statistics up to now has been collected by the CLEO Collaboration, with about $10^7$ $\tau$-pairs [@perl], corresponding to about 100 events from threshold to $Q^2 = 0.35\,\, \mbox{GeV}^2$. [100]{} [\[zyxygchpt\]]{} N.H. Fuchs, H. Sazdjian and J. Stern, Phys. Lett. B269 (1991) 183;\ J. Stern, H. Sazdjian and N.H. Fuchs, phys. Rev D47 (1993) 3814. [\[zyxykm\]]{} J.H. Kühn and E. Mirkes, Z. Phys. C56 (1992) 661; erratum [*ibidem*]{} C67 (1995) 364. [\[zyxygilberto\]]{} G. Colangelo, M. Finkemeier and R. Urech, Phys. Rev. D54 (1996) 4403. [\[zyxyperl\]]{} M.L. Perl, invited talk at WEIN 98, Santa FE, NM, 14-21 June 1998, hep-ph/9812400. [**Automatized ChPT Calculations**]{}\ Frederik Orellana$^{1}$\ $^{1}$Inst. Theor. Phys., University of Zürich,\ Winterthurerstr. 190, 8057 Zürich, Switzerland\ In recent years much effort has been put into automatizing Feynman diagram calculations in the Standard Model and in the Minimal Super Symmetric Standard Model (see [@AIHEP], [@TEFE], [@HAPE] and references therein). 
The reason is the increasing complexity of calculations due to more loops, more couplings and intermediate particles and more particles in the final state. These complications are motivated by the increasing energy and precision of the experiments. It is argued that despite the different nature of ChPT [@GL1], it is interesting and feasible also here to apply some automatization. Some of the different features of ChPT are due to the fact that it is a non-renormalizable effective theory with an expansion in the momentum, multi-leg vertices and new couplings at each order in the expansion, expressed in a rather compact notation. An overview of existing computer programs is given. A framework is proposed for automatizing ChPT calculations [@FA], [@FC] and a new program is presented [@PHI]. [9]{} [\[zyxyAIHEP\]]{} AIHEP, New Computing Techniques in Physics Research,\ http://lapphp0.in2p3.fr/aihep/aihep96/abstracts/a\_sm/p\_sm.html, 1996 [\[zyxyTEFE\]]{} M. Tentyukov (Dubna, JINR), J. Fleischer (Bielefeld U.), Automatization of Feynman diagram evaluation, hep-ph/9802243 [\[zyxyHAPE\]]{} T. Hahn (Karlsruhe U.), M. Perez-Victoria (Granada U.), Automatized one loop calculations in four-dimensions and D-dimensions, hep-ph/9807565 [\[zyxyGL1\]]{} J. Gasser and H. Leutwyler, Ann. Phys. (N.Y.) 158 (1984) 142 [\[zyxyFA\]]{} T. Hahn, FeynArts: A Mathematica package for the generation and visualization of Feynman diagrams and amplitudes, http://www-itp.physik.uni-karlsruhe.de/feynarts/ [\[zyxyFC\]]{} R. Mertig, FeynCalc: A Mathematica 3.0 program for Feynman diagram calculations in elementary particle physics, http://www.mertig.com/ [\[zyxyPHI\]]{} F. Orellana, Phi: A Mathematica package for ChPT calculations,\ http://phi.cabocomm.dk/ [**From Chiral Random Matrix Theory\ to Chiral Perturbation Theory**]{}\ J.C. Osborn, [**D. Toublan**]{} and J.J.M. Verbaarschot\ Department of Physics and Astronomy,\ SUNY, Stony Brook, New York 11794.\ We study the spectrum of the QCD Dirac operator by means of the valence quark mass dependence of the chiral condensate in partially quenched Chiral Perturbation Theory in the supersymmetric formulation of Bernard and Golterman [@pqChPT]. We consider valence quark masses both in the ergodic domain ($m_v \ll E_c$) and the diffusive domain ($m_v \gg E_c$). These domains are separated by a mass scale $E_c \sim F^2/\Sigma_0 L^2$ (with $F$ the pion decay constant, $\Sigma_0$ the chiral condensate and $L$ the size of the box) [@OV]. We perform a finite volume analysis of partially quenched Chiral Perturbation Theory along the same lines as the one done in Chiral Perturbation Theory [@GLS]. In the ergodic domain the effective super-Lagrangian reproduces the microscopic spectral density of chiral Random Matrix Theory [@Cambridge],[@OTV]. In the diffusive domain we extend the results for the slope of the Dirac spectrum first obtained by Smilga and Stern [@SS]. We find that the spectral density diverges logarithmically for nonzero (sea) quark masses. We study the transition between the ergodic and the diffusive domain and identify a range where chiral Random Matrix Theory and partially quenched Chiral Perturbation Theory coincide [@OTV]. We also point out some interesting analogies with mesoscopic disordered systems and with chaotic ones [@Bohigas].\ [12]{} [\[zyxypqChPT\]]{} C. Bernard and M. Golterman, Phys. Rev. D49 (1994) [\[zyxyOV\]]{} J.C. Osborn and J.J.M. Verbaarschot, Phys. Rev. Lett. 81 (1998) 248 [\[zyxyGLS\]]{} J. Gasser and H. Leutwyler, Phys. Lett. 188B (1987) 477; Nucl. 
Phys. B307(1988)763; H. Leutwyler and A. Smilga, Phys. Rev. D46(1992)5607 [\[zyxyCambridge\]]{} J.J.M. Verbaarschot, [*Lectures given at NATO Advanced Study Institute on Confinement, Duality and Nonperturbative Aspects of QCD*]{}, Cambridge, 1997, hep-th/9710114 [\[zyxyOTV\]]{} J.C. Osborn, D. Toublan and J.J.M. Verbaarschot, [*From Chiral Random Matrix Theory to Chiral Perturbation Theory*]{}, hep-th/9806110, accepted for publication in Nucl. Phys. B; P. H. Damgaard, J.C. Osborn, D. Toublan and J.J.M. Verbaarschot, [ *The Microscopic Spectral Density of the QCD Dirac Operator*]{}, hep-th/9811212 [\[zyxySS\]]{} A. Smilga and J. Stern, Phys. Lett. B318 (1993) 531 [\[zyxyBohigas\]]{} O. Bohigas, M. Giannoni, Lecture Notes in Physics 209 (1984) 1. [^1]: In ChPT the nucleon-delta mass splitting counts as ${\cal O}(p^0)={\cal O}(1)$, in contrast to SSE. [^2]: E-mail: gerhard.hoehler@physik.uni-karlsruhe.de [^3]: see also the reports of Thomas Becher and Roland Kaiser in these mini-proceedings. [^4]: see also H. Leutwyler’s report in these mini-proceedings. [^5]: These numbers correspond to taus produced at rest, and are thus relevant only for the $\tau$-charm factories. Minor modifications in the analysis are needed to account for ultrarelativistic taus, as currently produced in accelerators.
--- abstract: | The input to the [*stochastic orienteering*]{} problem [@GKNR12] consists of a budget $B$ and metric $(V,d)$ where each vertex $v\in V$ has a job with a deterministic reward and a *random* processing time (drawn from a known distribution). The processing times are independent across vertices. The goal is to obtain a non-anticipatory policy (originating from a given root vertex) to run jobs at different vertices, that maximizes expected reward, subject to the total distance traveled plus processing times being at most $B$. An [*adaptive*]{} policy is one that can choose the next vertex to visit based on observed random instantiations. Whereas, a [*non-adaptive*]{} policy is just given by a fixed ordering of vertices. The [*adaptivity gap*]{} is the worst-case ratio of the expected rewards of the optimal adaptive and non-adaptive policies. We prove an $\Omega\left((\log\log B)^{1/2}\right)$ lower bound on the adaptivity gap of stochastic orienteering. This provides a negative answer to the $O(1)$-adaptivity gap conjectured in [@GKNR12], and comes close to the $O(\log\log B)$ upper bound proved there. This result holds even on a line metric. We also show an $O(\log\log B)$ upper bound on the adaptivity gap for the [*correlated*]{} stochastic orienteering problem, where the reward of each job is random and possibly correlated to its processing time. Using this, we obtain an improved quasi-polynomial time $ \min\{\log n,\log B\}\cdot \tilde{O}(\log^2\log B)$-approximation algorithm for correlated stochastic orienteering. author: - 'Nikhil Bansal[^1]' - 'Viswanath Nagarajan[^2]' bibliography: - 'stoc-ks.bib' title: On the Adaptivity Gap of Stochastic Orienteering --- Introduction ============ In the [*orienteering*]{} problem [@GLV87], we are given a metric $(V,d)$ with a starting vertex $\rho\in V$ and a budget $B$ on length. The objective is to compute a path originating from $\rho$ having length at most $B$, that maximizes the number of vertices visited. This is a basic vehicle routing problem (VRP) that arises as a subroutine in algorithms for a number of more complex variants, such as VRP with time-windows, discounted reward TSP and distance constrained VRP. The stochastic variants of orienteering and related problems such as traveling salesperson and vehicle routing have also been extensively studied. In particular, several dozen variants have been considered depending on which parameters are stochastic, the choice of the objective function, the probability distributions, and optimization models such as [*a priori*]{} optimization, stochastic optimization with recourse, probabilistic settings and so on. For more details we refer to a recent survey [@Weyland] and references therein. Here, we consider the following stochastic version of the orienteering problem defined by [@GKNR12]. Each vertex contains a job with a deterministic reward and random processing time (also referred to as size); these processing times are independent across vertices. The processing times model the random delays encountered at the node, say due to long queues or activities such as filling out a form, before the reward can be collected. The distances in the metric correspond to travel times between vertices, which are deterministic. The goal is to compute a [*policy*]{}, which describes a path originating from the root $\rho$ that visits vertices and runs the respective jobs, so as to maximize the total expected reward subject to the total time (for travel plus processing) being at most $B$. 
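For concreteness, the following small Monte Carlo sketch evaluates a fixed visiting order under the semantics just described; it is purely illustrative, and the toy line metric, rewards and Bernoulli processing time below are assumptions rather than an instance taken from this paper.

```python
import random

# Minimal sketch (illustrative, not from the paper) of the stochastic
# orienteering model: a non-adaptive policy is a fixed vertex order,
# followed until travel time plus instantiated processing times exceed B.

def simulate_nonadaptive(order, dist, reward, sample_size, B, trials=100_000):
    """Estimate the expected reward of visiting `order` (starting at order[0])."""
    total = 0.0
    for _ in range(trials):
        budget, cur = B, order[0]
        for v in order:
            budget -= dist(cur, v)          # travel time to the next vertex
            if budget < 0:
                break
            s = sample_size(v)              # random processing time at v
            budget -= s
            if budget < 0:                  # job not finished within the horizon
                break
            total += reward[v]              # reward only for completed jobs
            cur = v
    return total / trials

# Toy line-metric instance: vertices 0..3 at positions pos[v], root = vertex 0.
pos = [0, 2, 3, 7]
reward = [0.0, 1.0, 1.0, 2.0]
dist = lambda u, v: abs(pos[u] - pos[v])
# Bernoulli sizes: size 6 with probability 0.5 at vertex 3, else size 0.
sample_size = lambda v: 6 if (v == 3 and random.random() < 0.5) else 0

print(simulate_nonadaptive([0, 1, 2, 3], dist, reward, sample_size, B=12))
```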
Stochastic orienteering also generalizes the well-studied stochastic knapsack problem [@DeanGV08; @BGK11; @Bhalgat11] (when all distances are zero). We also consider a further generalization, where the reward at each vertex is also random and possibly [*correlated*]{} with its processing time. A feasible solution (policy) for the stochastic orienteering problem is represented by a decision tree, where nodes encode the “state” of the solution (previously visited vertices and the residual budget), and branches denote random instantiations. Such solutions are called [*adaptive*]{} policies, to emphasize the fact that their actions may depend on previously observed random outcomes. Often, adaptive policies can be very complex and hard to reason about. For example, even for the stochastic knapsack problem an optimal adaptive strategy may have exponential size (and several related problems are PSPACE-hard) [@DeanGV08]. Thus a natural approach for designing algorithms in the stochastic setting is to: (i) restrict the solution space to the simpler class of [*non-adaptive*]{} policies (e.g., in our stochastic orienteering setting, such a policy is described by a fixed order in which to visit vertices, followed until the budget $B$ is exhausted), and (ii) design an efficient algorithm to find a (close to) optimum non-adaptive policy. While non-adaptive policies are often easier to optimize over, the drawback is that they could be much worse than the optimum adaptive policy. Thus, a key issue is to bound the [*adaptivity gap*]{}, introduced by [@DeanGV08] in their seminal paper, which is the worst-case ratio (over all problem instances) of the optimal adaptive value to the optimal non-adaptive value. In recent years, increasingly sophisticated techniques have been developed for designing good non-adaptive policies and for proving small adaptivity gaps [@DeanGV08; @GuhaM09; @chenetal; @BGLMNR10; @GKMR11; @GKNR12]. For stochastic orienteering, [@GKNR12] gave an $O(\log\log B)$ bound on the adaptivity gap, using an elegant probabilistic argument (previous approaches only gave a $\Theta(\log B)$ bound). More precisely, they considered certain $O(\log B)$ correlated probabilistic events and used martingale tail bounds on suitably defined stopping times to bound the probability that none of these events happens. In fact, [@GKNR12] conjectured that the adaptivity gap for stochastic orienteering is $O(1)$, suggesting that the $O(\log \log B)$ factor was an artifact of their analysis. Our Results and Techniques -------------------------- [**Adaptivity gap for stochastic orienteering:**]{} Our main result is the following lower bound. \[thm:ad-gap\] The adaptivity gap of stochastic orienteering is $\Omega\left((\log\log B)^{1/2}\right)$, even on a line metric. This answers negatively the $O(1)$-adaptivity gap conjectured in [@GKNR12], and comes close to the $O(\log\log B)$ upper bound proved there. To the best of our knowledge, this gives the first non-trivial $\omega(1)$ adaptivity gap for a natural problem. The lower bound proceeds in three steps and is based on a somewhat intricate construction. We begin with a basic instance described by a directed binary tree of height $\log\log B$ that essentially represents the optimal adaptive policy. Each processing time is a Bernoulli random variable: it is either zero, in which case the optimal policy goes to its left child, or a carefully set positive value, in which case the optimal policy goes to its right child.
The edge distances and processing times are chosen so that when a non-zero size instantiates, it is always possible to take a right edge, while the left edges can only be taken a few times. On the other hand, if the non-adaptive policy chooses a path with mostly right edges, then it cannot collect too much reward. In the first step of the proof, we show that this directed tree instance has an $\Omega((\log\log B)^{1/2})$ adaptivity gap. The main technical difficulty here is to show that every fixed path (which may possibly skip vertices, and gain advantage over the adaptive policy) either runs out of budget $B$ or collects low expected reward. In the second step, we drop the directions on the edges and show that the adaptivity gap continues to hold (up to constant factors). The optimum adaptive policy that we compare against remains the same as in the directed case, and the key issue here is to show that the non-adaptive policy cannot gain too much by backtracking along the edges. To this end, we use some properties of the distances on edges in our instance. In the final step, we embed the undirected tree onto a line at the expense of losing another $O(1)$ factor in the adaptivity gap. The problem here is that pairs of nodes that are far apart on the tree may be very close on the line. To get around this, we exploit the asymmetry of the tree distances and some other structural properties to show that this has limited effect. [**Correlated Stochastic Orienteering:**]{} Next, we consider the correlated stochastic orienteering problem, where the reward at each vertex is also random and possibly correlated with its processing time (the distributions are still independent across vertices). In this setting, we prove the following. \[thm:corr-loglog-UB\] The adaptivity gap of correlated stochastic orienteering is $O(\log\log B)$. This improves upon the $O(\log B)$-factor adaptivity gap that is implicit in [@GKNR12], and matches the adaptivity gap upper bound known for uncorrelated stochastic orienteering. The proof makes use of a martingale concentration inequality [@Zhang05] (as [@GKNR12] did for the uncorrelated problem), but dealing with the reward-size correlations requires a different definition of the stopping time. For the uncorrelated case, the stopping time [@GKNR12] used a single “truncation threshold” (equal to $B$ minus the travel time) to compare the instantiated sizes and their expectation. In the correlated setting, we use $\log B$ different truncation thresholds (all powers of $2$), irrespective of the travel time, to determine the stopping criteria. [**Algorithm for Correlated Stochastic Orienteering:**]{} Using some structural properties in the proof of the adaptivity gap upper bound above, we obtain an improved [*quasi-polynomial*]{}[^3] time algorithm for correlated stochastic orienteering. \[thm:corr-NA\] There is an $O\left( \alpha\cdot \log^2\log B/\log\log\log B\right)$-approximation algorithm for correlated stochastic orienteering, running in time $(n+\log B)^{O(\log B)}$. Here $\alpha\le \min\{O(\log n),\, O(\log B)\}$ denotes the best approximation ratio for the orienteering with deadlines problem. The [*orienteering with deadlines*]{} problem is defined formally in Section \[subsec:defn\]. Previously, [@GKNR12] gave a polynomial time $O(\alpha\cdot \log B)$-approximation algorithm for correlated stochastic orienteering. They also showed that this problem is at least as hard to approximate as the deadline orienteering problem, i.e. 
an $\Omega(\alpha)$-hardness of approximation (this result also holds for quasi-polynomial time algorithms). Our algorithm improves the approximation ratio to $O(\alpha\cdot \log^2\log B)$, but at the expense of quasi-polynomial running time. We note that the running time in Theorem \[thm:corr-NA\] is quasi-polynomial for general inputs where probability distributions are described [*explicitly*]{}, since the input size is $n\cdot B$. If probability distributions are specified implicitly, the runtime is quasi-polynomial only for $B\le 2^{poly(\log n)}$. The algorithm in Theorem \[thm:corr-NA\] is based on finding an approximate non-adaptive policy, and losing an $O(\log\log B)$-factor on top by Theorem \[thm:corr-loglog-UB\]. There are three main steps in the algorithm: (i) we enumerate over $\log B$ many “portal” vertices (suitably defined) on the optimal policy; (ii) using these portal vertices, we solve (approximately) a [*configuration LP relaxation*]{} for paths between portal vertices; (iii) we randomly round the LP solution. The quasi-polynomial running time is only due to the enumeration. In formulating and solving the configuration LP relaxation, we also use some ideas from the earlier $O(\alpha\cdot \log B)$-approximation algorithm [@GKNR12]. Solving the configuration LP requires an algorithm for deadline orienteering (as the dual separation oracle), and incurs an $\alpha$-factor loss in the approximation ratio. This configuration LP is a “packing linear program”, for which we can use fast combinatorial algorithms [@PST91; @GK07]. The final rounding step involves randomized rounding with alteration, and loses an extra $O(\frac{\log\log B}{\log\log\log B})$ factor. Related Work {#s:prev-work} ------------ The deterministic orienteering problem was introduced by Golden et al. [@GLV87]. It has several applications, and many exact approaches and heuristics have been applied to this problem, see eg. the survey [@VSO11]. The first constant-factor approximation algorithm was due to Blum et al. [@BCKLMM07]. The approximation ratio has been improved [@BBCM04; @CKP08] to the current best $2+\epsilon$. Dean et al. [@DeanGV08] were the first to consider stochastic packing problems in this adaptive optimization framework: they introduced the [*stochastic knapsack*]{} problem (where items have random sizes), and obtained a constant-factor approximation algorithm and adaptivity gap. The approximation ratio has subsequently been improved to $2+\epsilon$, due to [@BGK11; @Bhalgat11]. The stochastic orienteering problem [@GKNR12] is a common generalization of both deterministic orienteering and stochastic knapsack. Gupta et al. [@GKMR11] studied a generalization of the stochastic knapsack problem, to the setting where the reward and size of each item may be correlated, and gave an $O(1)$-approximation algorithm and adaptivity gap for this problem. Recently, Ma [@M14] improved the approximation ratio to $2+\epsilon$. The correlated stochastic orienteering problem was studied in [@GKNR12], where the authors obtained an $O(\log n\cdot \log B)$-approximation algorithm and an $O(\log B)$ adaptivity gap. They also showed the problem to be at least as hard to approximate as the deadline orienteering problem, for which the best approximation ratio known is $O(\log n)$ [@BBCM04]. A related problem to stochastic orienteering was considered by Guha and Munagala [@GuhaM09] in the context of the [*multi-armed bandit*]{} problem. 
As observed in [@GKNR12], the approach in [@GuhaM09] yields an $O(1)$-approximation algorithm (and adaptivity gap) for the variant of stochastic orienteering with two [*separate*]{} budgets for the travel and processing times. In contrast, our result shows that stochastic orienteering (with a single budget) has super-constant adaptivity gap. Problem Definition {#subsec:defn} ------------------ An instance of stochastic orienteering ([$\mathsf{StocOrient}$]{}) consists of a metric space $(V, d)$ with vertex-set $|V| = n$ and symmetric integer distances $d: V \times V \rightarrow {{\ensuremath{\mathbb{Z}}}}^+$ (satisfying the triangle inequality) that represent travel times. Each vertex $v \in V$ is associated with a stochastic job, with a deterministic reward $r_v\ge 0$ and a random processing time (also called size) ${{S}}_v \in {{\ensuremath{\mathbb{Z}}}}^+$ distributed according to a known probability distribution. The processing times are independent across vertices. We are also given a starting “root” vertex $\rho\in V$, and a budget $B\in {{\ensuremath{\mathbb{Z}}}}^+$ on the total time available. A solution (policy) must start from $\rho$, and visit a sequence of vertices (possibly adaptively). Each job is executed non-preemptively, and the solution knows the precise processing time only upon completion of the job. The objective is to maximize the expected reward from jobs that are completed before the horizon $B$; note that there is no reward for partially completing a job. The approximation ratio of an algorithm is the ratio of the expected reward of an optimal policy to that of the algorithm’s policy. We assume that all times (travel and processing) are integer valued and lie in $\{0,1,\cdots,B\}$. In the [*correlated*]{} stochastic orienteering problem ([$\mathsf{CorrOrient}$]{}), the job sizes and rewards are both random, and correlated with each other. The distributions across different vertices are still independent. For each vertex $v\in V$, we use $S_v$ and $R_v$ to denote its random size and reward, respectively. We assume an explicit representation of the distribution of each job $v\in V$: for each $s\in\{0,1,\cdots,B\}$, job $v$ has size $S_v=s$ and reward $r_v(s)$ with probability $\Pr[S_v=s]=\pi_v(s)$. Note that the input size is $nB$. An *adaptive policy* is a decision tree where each node is labeled by a job/vertex of $V$, with the outgoing arcs from a node labeled by $u$ corresponding to the possible sizes in the support of $S_u$. A *non-adaptive policy* is simply given by a path $P$ starting at $\rho$: we just traverse this path, processing the jobs that we encounter, until the total (random) size of the jobs plus the distance traveled reaches $B$. A randomized non-adaptive policy may pick a path $P$ at random from some distribution before it knows any of the size instantiations, and then follows this path as above. Note that in a non-adaptive policy, the order in which jobs are processed is independent of their processing time instantiations. In our algorithm for [$\mathsf{CorrOrient}$]{}, we use the [*deadline orienteering*]{} problem as a subroutine. The input to this problem is a metric $(U,d)$ denoting travel times, a reward and deadline at each vertex, start ($s$) and end ($t$) vertices, and length bound $D$. The objective is to compute an $s-t$ path of length at most $D$ that maximizes the reward from vertices visited before their deadlines. The best approximation ratio for this problem is $\alpha=\min\{O(\log n), O(\log B)\}$ due to [@BBCM04; @CKP08]. 
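As a side note, under the explicit representation above (probabilities $\pi_v(s)$ and rewards $r_v(s)$), the expected reward of any fixed non-adaptive order can be computed exactly by tracking the distribution of the residual budget. The sketch below is only meant to illustrate this representation on a toy instance; it is not the algorithm developed in Section \[sec:corr-alg\], and the instance data are invented for illustration.

```python
# Illustrative sketch (not from the paper): exact expected reward of a fixed
# non-adaptive visiting order in CorrOrient, where job v has size s with
# probability pi[v][s] and then pays reward r[v][s].

def expected_reward(order, dist, pi, r, B):
    # residual[b] = probability that b budget units remain on arrival at the current vertex
    residual = {B: 1.0}
    cur = order[0]
    total = 0.0
    for v in order:
        d = dist(cur, v)
        residual = {b - d: q for b, q in residual.items() if b - d >= 0}
        nxt = {}
        for b, q in residual.items():
            for s, prob in pi[v].items():
                if s <= b:                     # job finishes within the horizon
                    total += q * prob * r[v].get(s, 0.0)
                    nxt[b - s] = nxt.get(b - s, 0.0) + q * prob
                # if s > b the job cannot be completed: no reward, walk ends
        residual, cur = nxt, v
    return total

# Toy instance on a line: root 0, one job at vertex 1 whose reward is larger
# when its (random) size is larger, i.e. positively correlated size and reward.
dist = lambda u, v: abs(u - v)
pi = {0: {0: 1.0}, 1: {1: 0.5, 8: 0.5}}
r  = {0: {0: 0.0}, 1: {1: 1.0, 8: 5.0}}
print(expected_reward([0, 1], dist, pi, r, B=6))   # 0.5 * 1.0 = 0.5
```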
Organization ------------ The adaptivity gap lower bound appears in Section \[sec:ad-gap\], where we prove Theorem \[thm:ad-gap\]. In Section \[sec:corr-ad-gap\], we consider the correlated stochastic orienteering problem and prove the upper bound on its adaptivity gap (Theorem \[thm:corr-loglog-UB\]). Finally, the improved quasi-polynomial time algorithm (Theorem \[thm:corr-NA\]) for correlated stochastic orienteering appears in Section \[sec:corr-alg\]. Lower Bound on the Adaptivity Gap {#sec:ad-gap} ================================= Here we describe our lower bound instance which shows that the adaptivity gap is $\Omega(\sqrt{\log \log B})$ even for an undirected line metric. The proof and the description of the instance are divided into three steps. First we describe an instance where the underlying graph is a directed complete binary tree, and prove the lower bound for it. The directedness ensures that all policies follow a path from root to a leaf (possibly with some nodes skipped) without any backtracking. Second, we show that the directed assumption can be removed at the expense of an additional $O(1)$ factor in the adaptivity gap. In particular this means that the nodes on the tree can be visited in any order starting from the root. Finally, we “embed” the undirected tree into a line metric, and show that the adaptivity gap stays the same up to a constant factor. Directed Binary Tree -------------------- Let $L\ge 2$ be an integer and $p:=1/\sqrt{L}$. We define a complete binary tree [$\mathcal{T}$]{}of height $L$ with root $\rho$. All the edges are directed from the root towards the leaves. The [*level*]{} $\ell(v)$ of any node $v$ is the number of nodes on the shortest path from $v$ to any leaf. So all the leaves are at [*level*]{} one and the root $\rho$ is at level $L$. We refer to the two children of each internal node as the left and right child, respectively. Each node $v$ of the tree has a job with some deterministic reward $r_v$ and a random size $S_v$. Each random variable $S_v$ is Bernoulli, taking value zero with probability $1-p$ and some positive value $s_v$ with the remaining probability $p$. The budget for the instance is $B = 2^{2^{L+1}}$. To complete the description of the instance, we need to define the values of the rewards $r_v$, the job sizes $S_v$, and the distances $d(u,v)$ on edges $e=(u,v)$. [**Defining rewards.**]{} For any node $v$, let $\tau(v)$ denote the number of right-branches taken on the path from the root to $v$. We define the reward of each node $v$ to be $r_v:=(1-p)^{\tau(v)}$. [**Defining sizes.**]{} Let ${e\left(x\right)}:=2^{x}$ for any $x\in \mathbb{R}$. The size at the root is $s_\rho:={e\left(2^L\right)}=2^{2^L}$. The rest of the sizes are defined recursively. For any non-root node $v$ at level $\ell(v)$ with $u$ denoting its parent, the size is: $$s_v:=\left\{ \begin{array}{ll} s_u\cdot {e\left(2^{\ell(v)}\right)} & \mbox{ if $v$ is the right child of $u$}\\ s_u\cdot {e\left(-2^{\ell(v)}\right)} & \mbox{ if $v$ is the left child of $u$} \end{array}\right.$$ In other words, for a node $v$ at level $\ell$, consider the path $P=(\rho=u_L,u_{L-1},\ldots,u_{\ell+1},u_\ell=v)$ from $\rho$ to $v$. Let $k = \sum_{j=L}^{\ell} (-1)^{i(u_j)} 2^j$ where $i(u_j)=1$ if $u_j$ is the left child of its parent $u_{j+1}$, and $0$ otherwise (we assume $i(\rho)=0$). Then $s_v = {e\left(k\right)}$. Observe that for a node $v$, each node $u$ in its left (resp. right) subtree has $s_u < s_v$ (resp. $s_u > s_v$). It remains to define distances on the edges.
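Before the edge lengths are specified, the construction so far can be summarized in a short sketch. This is purely illustrative; the encoding of a node by its sequence of branches and the representation of $s_v$ through its base-$2$ exponent are choices made for the sketch (workable only for small $L$).

```python
def build_instance(L):
    """Rewards and size exponents of the lower-bound tree defined above.

    Nodes are encoded as tuples of 'L'/'R' branches taken from the root.
    Since the sizes are doubly exponential, we store the exponent k(v)
    with s_v = 2 ** k(v) rather than s_v itself.
    """
    p = 1.0 / L ** 0.5                          # p = 1/sqrt(L)
    nodes = {}

    def visit(path, level, k):
        nodes[path] = {
            "level": level,
            "size_exp": k,                             # s_v = 2 ** k
            "reward": (1 - p) ** path.count("R"),      # r_v = (1-p)^{tau(v)}
        }
        if level > 1:                                  # leaves are at level 1
            visit(path + ("R",), level - 1, k + 2 ** (level - 1))
            visit(path + ("L",), level - 1, k - 2 ** (level - 1))

    visit((), L, 2 ** L)                        # root: s_rho = 2 ** (2 ** L)
    return nodes
# The budget is B = 2 ** (2 ** (L + 1)); the edge lengths are defined next,
# indirectly, through the residual budgets of the adaptive policy.
```

Running this for a small value such as $L=3$ makes it easy to check, for instance, that every node in the left (resp. right) subtree of a node $v$ has a strictly smaller (resp. larger) size exponent, matching the observation above.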
The distances will be defined in an indirect way, and it is instructive to first consider the adaptive policy that we will work with. In particular, the distances will be defined in such a way that the adaptive policy can always continue till it reaches a leaf node. [**Adaptive policy [[$\mathcal{A}$]{}]{}.**]{} Consider the policy [[$\mathcal{A}$]{}]{}that goes left at node $u$ whenever it observes size zero at $u$, and goes right otherwise. Clearly, the [*residual budget*]{} $b(v)$ at node $v$ under ${{\ensuremath{\mathcal{A}}}\xspace}$ will satisfy the following: $b(\rho) = B = {e\left(2^{L+1}\right)}= 2^{2^{L+1}}$, and $$b(v) := \left\{ \begin{array}{ll} b(u)-s_u - d(u,v) & \mbox{ if $v$ is the right child of $u$}\\ b(u) - d(u,v) & \mbox{ if $v$ is the left child of $u$} \end{array}\right.$$ [**Defining distances.**]{} We will define the distances so that the residual budgets $b(\cdot)$ under ${{\ensuremath{\mathcal{A}}}\xspace}$ satisfy the following: $b(\rho)=B$, and for any node $v$ with parent $u$, $$b(v) := \left\{ \begin{array}{ll} b(u)-s_u & \mbox{ if $v$ is the right child of $u$}\\ s_u & \mbox{ if $v$ is the left child of $u$} \end{array}\right.$$ In particular, this implies the following lengths on edges. For any node $v$ with parent $u$, $$d(u,v) := \left\{ \begin{array}{ll} 0 & \mbox{ if $v$ is the right child of $u$}\\ b(u)-s_u = b(u)-b(v) & \mbox{ if $v$ is the left child of $u$} \end{array}\right.$$ In Claim \[cl:well-defn\] below we will show that the distances are non-negative, and hence well-defined. Figure \[fig:tree-def\] gives a pictorial view of the instance. ![The binary tree [$\mathcal{T}$]{}. \[fig:tree-def\]](tree-defn) #### Basic properties of the instance Let $A_d(v)$ denote the distance traveled by the adaptive strategy to reach $v$, and let $A_s(v)$ denote the total size instantiation before reaching $v$. By the definition of the budgets, and as ${{\ensuremath{\mathcal{A}}}\xspace}$ takes the right branch at $u$ iff the size at $u$ instantiates, we have the following. \[lem:bv\] For any node $v$, the budget $b(v)$ satisfies $b(v)=B - A_d(v) - A_s(v)$. \[size:val\] If a node $w$ is a left child of its parent, then $b(w) = s_w \cdot {e\left(2^{\ell(w)}\right)}$. Let $u$ be the parent of $w$. By definition of sizes, $s_w = s_u \cdot {e\left(-2^{\ell(w)}\right)}$. As $b(w) = s_u$ by the definition of residual budgets, the claim follows. \[cl:well-defn\] For any node $u$, we have $3\cdot s_u\le b(u)$. This implies that all the residual budgets and distances are non-negative. Let $w$ denote the lowest level node on the path from $\rho$ to $u$ that is the left child of its parent (if $u$ is the left child of its parent, then $w=u$); if there is no such node, set $w=\rho$. Note that by Claim \[size:val\] and the definition of $s_\rho$ and $b(\rho)$, in either case it holds that $b(w)=s_w\cdot {e\left(2^{\ell(w)}\right)}$. Let $\pi$ denote the path from $w$ to $u$ (including $w$ but not $u$; so $\pi=\emptyset$ if $w=u$). Since $\pi$ contains only right-branches, $b(u)=b(w)-\sum_{y\in \pi} s_y$ and hence $b(u)\ge b(w)- 3 \sum_{y\in \pi} s_y$. Thus to prove $3\cdot s_u\le b(u)$ it suffices to show $3(s_u+ \sum_{y\in \pi} s_y)\le b(w)$. For brevity, let $s:=s_w$ and $\ell=\ell(w)$.
Using the definition of sizes, $$\begin{aligned} s_u+\sum_{y\in \pi} s_y & \le & \sum_{i=1}^{\ell} s\cdot {e\left(2^{\ell-1} + 2^{\ell-2} +\cdots+ 2^i\right)} \quad = \quad s\cdot \sum_{i=1}^{\ell} {e\left(2^{\ell}-2^i\right)} \\ & = & s \cdot \sum_{i=1}^{\ell} {e\left(2^\ell\right)} \cdot {e\left(-2^i\right)} \quad \le \quad s\cdot {e\left(2^{\ell}\right)} \cdot\sum_{i\ge 1} 4^{-i} \\ & \le & \frac{1}{3}\cdot s\cdot {e\left(2^{\ell}\right)} \quad = \quad \frac{b(w)}{3},\end{aligned}$$ as desired. Here the right hand side of the first inequality is simply the total size of nodes in the $w$ to leaf path using all right branches. The inequality in the second line follows as ${e\left(-2^{i}\right)}= 2^{-2^{i}} \leq 2^{-2i} = 4^{-i}$ for all $i\geq 1$. Thus we always have $3\cdot s_u\le b(u)$. As $b(v)=b(u)-s_u$ if $v$ is the right child of $u$, or $b(v) = s_u$ otherwise, this implies that all the residual-budgets are non-negative. Similarly, as $d(u,v)$ is either $0$ or $b(u)-s_u$ (and hence at least $\tfrac{2}{3}\,b(u)$), this implies that all edge lengths are non-negative. This claim shows that the above instance is well defined, and that [[$\mathcal{A}$]{}]{}is a feasible adaptive policy that always continues for $L$ steps until it reaches a leaf. Next, we show that [[$\mathcal{A}$]{}]{}obtains large expected reward. \[lem:ad-profit\] The expected reward of policy [[$\mathcal{A}$]{}]{}is $\Omega(L)$. Notice that [[$\mathcal{A}$]{}]{}accrues reward as follows: it keeps getting reward $1$ (and going left) until the first positive size instantiation, then it goes right for a single step and keeps going left and getting reward $(1-p)$ till the next positive size instantiation and so on. This continues for a total of $L$ steps. In particular, at any time $t$ it collects reward $(1-p)^i$, if exactly $i$ nodes have positive sizes among the $t$ nodes seen. Let $X_i$ denote the Bernoulli random variable that is $1$ if the $i^{th}$ node in [[$\mathcal{A}$]{}]{}has a positive size instantiation, and $0$ otherwise. So $E[X_i]=p$, and $E[X_1 + \ldots + X_L] = L p = \sqrt{L}$. By Markov’s inequality, the probability that more than $2\sqrt{L}$ nodes in [[$\mathcal{A}$]{}]{}have positive sizes is at most half. Hence, with probability at least $\frac12$ the reward collected in the last node of [[$\mathcal{A}$]{}]{}is at least $(1-p)^{2 \sqrt{L}}$; since the collected rewards are non-increasing along the path, on this event every one of the $L$ visited nodes yields reward at least $(1-p)^{2 \sqrt{L}}$. That is, the total expected reward of [[$\mathcal{A}$]{}]{}is at least $\frac12\cdot L \cdot (1-p)^{2 \sqrt{L}} \approx L/2 \cdot e^{-2} = \Omega(L)$. Bounding Directed Non-adaptive Policies {#subsec:dir-gap} --------------------------------------- We will first show that any non-adaptive policy [[$\mathcal{N}$]{}]{}that is constrained to visit vertices according to the partial order given by the tree [$\mathcal{T}$]{}gets reward at most $O(\sqrt{L})$. Notice that these correspond precisely to non-adaptive policies on the directed tree [$\mathcal{T}$]{}. The key property we need from the size construction is the following. \[lem:prefix-size\] For any node $v$, the total size instantiation observed under the adaptive policy [[$\mathcal{A}$]{}]{}before $v$ is strictly less than $s_v$. Consider the path $\pi$ from the root to $v$, and let $k_1 < k_2<\cdots<k_t$ denote the levels at which $\pi$ “turns left”. That is, for each $i$, the node $u_i$ at level $k_i$ in path $\pi$ satisfies (a) $u_i$ is the right child of its parent, and (b) $\pi$ contains the left child of $u_i$ if it goes below level $k_i$. (If $v$ is the right child of its parent then $u_1=v$ and $k_1=\ell(v)$.)
Let $s_i$ denote the size of $u_i$, the level $k_i$ node in $\pi$. Also, set $k_{t+1}=L$ corresponding to the root. Below we use $[t]:=\{1,2,\cdots,t\}$. ![The path $\pi$ in the proof of Lemma \[lem:prefix-size\]. \[fig:??\] ](prefix-size) We first bound the size instantiation between levels $k_{i+1}$ and $k_{i}$ in terms of $s_i$. Observe that a positive size instantiation is seen in [[$\mathcal{A}$]{}]{}only along right branches. So for any $i\in [t]$, the total size instantiation seen in $\pi$ between levels $k_i$ and $k_{i+1}$ is at most: $$\begin{aligned} \label{eq:prefix-size1} & & s_i\cdot \left[ {e\left(-2^{k_i}\right)} + {e\left(-2^{k_i}-2^{k_i+1}\right)} + {e\left(-2^{k_i}-2^{k_i+1} - 2^{k_i+2} \right)}\cdots\right] \nonumber \\ & \le & s_i \cdot {e\left(-2^{k_i}\right)} \cdot \left(1+1/2+1/4+\cdots \right) \quad \leq \quad 2\,s_i\cdot {e\left(-2^{k_i}\right)}\end{aligned}$$ Now, note that for any $i\in[t]$, the sizes $s_{i+1}$ and $s_i$ are related as follows: $$\begin{aligned} s_{i+1} &\le & s_i \cdot {e\left(-2^{k_i} + 2^{k_i+1} + 2^{k_i+2} +\cdots +2^{k_{i+1}-1} \right)}\quad =\quad s_i \cdot {e\left(-2^{k_i} - 2^{k_i+1} + 2^{k_{i+1}}\right)} \notag \\ &\le &\frac{s_i}{4}\cdot {e\left(-2^{k_i} + 2^{k_{i+1}}\right)} \label{eq:prefix-size2}\end{aligned}$$ The first inequality uses the fact that the path from $u_{i+1}$ to $u_i$ is a sequence of (at least one) left-branches followed by a sequence of (at least one) right-branches. As the size decreases along left-branches and increases along right branches, it follows that conditional on the values of $k_{i}$ and $k_{i+1}$, the ratio $s_{i+1}/s_{i}$ is maximized for the path with a sequence of left branches followed by a single right branch (at level $k_i$). Using \[eq:prefix-size2\], we obtain inductively that: $$\label{eq:prefix-size3} s_{i+1}\cdot {e\left(-2^{k_{i+1}}\right)} \quad \le \quad \frac14\cdot s_i\cdot {e\left(-2^{k_i}\right)} \quad \le \quad \frac{1}{4^{i}} \cdot s_1\cdot {e\left(-2^{k_1}\right)},\qquad \forall i\in[t].$$ Using \[eq:prefix-size1\] and \[eq:prefix-size3\], the total size instantiation seen in $\pi$ (this does not include the size at $v$) is at most: $$\label{eq:prefix-size4} \sum_{i=1}^t 2\,s_i\cdot {e\left(-2^{k_i}\right)} \quad \le \quad 2\sum_{i=1}^t \frac{1}{4^{i-1}} \cdot s_1\cdot {e\left(-2^{k_1}\right)} \quad < \quad 4\,s_1\cdot {e\left(-2^{k_1}\right)}.$$ Finally, observe that the size at the level $k_1$ node satisfies $s_1\le s_v\cdot {e\left(2^{k_1-1}+2^{k_1-2}+\cdots+2^1\right)}=s_v\cdot {e\left(2^{k_1}-2\right)}$, since $k_1$ is the lowest level at which $\pi$ turns left (i.e. $\pi$ keeps going left below level $k_1$ until $v$). Together with \[eq:prefix-size4\], it follows that the total size instantiation seen before $v$ is strictly less than $$4 \,s_1\cdot {e\left(-2^{k_1}\right)} \leq 4 \, {e\left(-2^{k_1}\right)} \cdot s_v\cdot {e\left(2^{k_1}-2\right)} = 4 \, {e\left(-2\right)} \, s_v = s_v.$$ This completes the proof of Lemma \[lem:prefix-size\]. We now show that any non-adaptive policy on the directed tree [$\mathcal{T}$]{}achieves reward $O(\sqrt{L})$. Note that any such solution [[$\mathcal{N}$]{}]{}is just a root-leaf path in [$\mathcal{T}$]{}that skips some subset of vertices. A node $v$ in [[$\mathcal{N}$]{}]{}is an [*L-branching*]{} node if the path [[$\mathcal{N}$]{}]{}goes left after $v$. [*R-branching*]{} nodes are defined similarly. \[cl:na-Rbranch\] The total reward from R-branching nodes is at most $\sqrt{L}$.
As the reward of a node decreases by a factor of $(1-p)$ upon taking a right branch, the total reward of such nodes is at most $\sum_{i=0}^L (1-p)^i \le \frac{1}{p}=\sqrt{L}$. \[cl:na-Lbranch\] [[$\mathcal{N}$]{}]{}can not get any reward after two L-branching nodes instantiate to positive sizes. For any node $v$ in tree [$\mathcal{T}$]{}, let $A_d(v)$ (resp. $A_s(v)$) denote the distance traveled (resp. size instantiated) in the adaptive policy [[$\mathcal{A}$]{}]{}until $v$; here $A_s(v)$ does not include the size of $v$. Observe that Lemma \[lem:prefix-size\] implies that $A_s(v)<s_v$ for all nodes $v$. In the non-adaptive solution [[$\mathcal{N}$]{}]{}, let $u$ and $v$ be any two L-branching nodes that instantiate to positive sizes $s_u$ and $s_v$; say $u$ appears before $v$. Under this outcome, we will show that [[$\mathcal{N}$]{}]{}exhausts its budget after $v$. Note that the distance traveled to node $v$ in [[$\mathcal{N}$]{}]{}is exactly $A_d(v)$, the same as that under [[$\mathcal{A}$]{}]{}. So the total distance plus size instantiated in [[$\mathcal{N}$]{}]{}is at least $A_d(v)+s_v+s_u$, which (as we show next) is more than the budget $B$. By Claim \[lem:bv\], $b(v)=B-A_d(v)-A_s(v)$. Moreover, the residual budget $b(u')$ at the left child $u'$ of $u$ equals $s_u$. Since the residual budgets are non-increasing down the tree [$\mathcal{T}$]{}, we have $B-A_d(v)-A_s(v)=b(v) \le b(u') = s_u$, i.e. $A_d(v)\ge B-A_s(v)-s_u$. Hence, the total distance plus size in [[$\mathcal{N}$]{}]{}is at least $$A_d(v)+s_v+s_u \quad \geq \quad B-A_s(v)+s_v \quad > \quad B,$$ where the last inequality follows from Lemma \[lem:prefix-size\]. So [[$\mathcal{N}$]{}]{}can not obtain reward from any node after $v$. Combining the above two claims, we obtain: \[cl:mon-na\] The expected reward of any directed non-adaptive policy is at most $3\sqrt{L}$. Using Claim \[cl:na-Lbranch\], the expected reward from L-branching nodes is at most the expected number of L-branching nodes until two positive sizes instantiate, i.e. at most $\frac{2}{p}=2\sqrt{L}$. Claim \[cl:na-Rbranch\] implies that the expected reward from R-branching nodes is at most $\sqrt{L}$. Adding the two types of rewards, we obtain the claim. This proves an $\Omega(\sqrt{\log\log B})$ adaptivity gap for stochastic orienteering on directed metrics. We remark that the $O(\log\log B)$ upper bound in [@GKNR12] also holds for directed metrics. Adaptivity Gap for Undirected Tree ---------------------------------- We now show that the adaptivity gap does not change much even if we make the edges of the tree undirected. In particular, this has the effect of allowing the non-adaptive policy [[$\mathcal{N}$]{}]{}to backtrack along the (previously directed) edges, and visit any collection of nodes in the tree. Recall that in the directed instance of the previous subsection, the non-adaptive policy could not try too many $L$-branching nodes (Claim \[cl:na-Lbranch\]) and hence was forced to choose mostly $R$-branching nodes, in which case the rewards decreased rapidly. However, in the undirected case, the non-adaptive policy can move along some right edges to collect rewards and then backtrack to high-reward nodes. The main idea of the analysis below will be to show that non-adaptive policies cannot gain much more from backtracking (Claims \[cl:na-noLback\] and \[cl:na-back\]). The adaptive policy we compare against is the same as in the directed case. Let ${{\ensuremath{\mathcal{N}}}\xspace}'$ denote some fixed non-adaptive policy. 
Using the definition of edge-lengths, we first show the following. \[cl:na-noLback\] ${{\ensuremath{\mathcal{N}}}\xspace}'$ can not backtrack over any left-branching edge. As in the proof of Claim \[cl:na-Lbranch\], for any $v\in{\ensuremath{\mathcal{T}}\xspace}$, let $A_d(v)$ (resp. $A_s(v)$) denote the distance traveled (resp. size instantiated) in the adaptive policy [[$\mathcal{A}$]{}]{}until node $v$; recall $A_s(v)$ does not include the size of $v$. If ${{\ensuremath{\mathcal{N}}}\xspace}'$ backtracks over the left-edge $(u,v)$ out of some node $u$ then the distance traveled is at least: $$\begin{aligned} A_d(v) + d(u,v) & = & A_d(v)+b(u)-b(v) = B-2\cdot b(v)-A_s(v)+b(u) \\ & > & \quad B-2\cdot b(v)-s_{v}+b(u) = B-2\cdot s_u-s_{v}+b(u) \\ & \ge & \quad B-3\cdot s_u +b(u) \quad\ge \quad B\end{aligned}$$ The first equality follows as $d(u,v)=b(u)-b(v)$ and the second equality follows as $A_d(v)=B- A_s(v) - b(v)$ by Claim \[lem:bv\]. The first inequality follows as $A_s(v)<s_v$, by Lemma \[lem:prefix-size\]. The third equality uses $b(v)=s_u$, as $v$ is the left child of $u$. The second inequality follows as $s_v < s_u$ by the definition of sizes. Finally, the last inequality follows by Claim \[cl:well-defn\]. Since the distance traveled by ${{\ensuremath{\mathcal{N}}}\xspace}'$ is more than $B$, the claim follows. We now focus on bounding the contribution due to backtracking on the right edges. Let $\{e_i\}_{i=1}^k$ be the left-edges traversed by ${{\ensuremath{\mathcal{N}}}\xspace}'$; we denote $e_i=(u_i,v_i)$ where $v_i$ is the left child of $u_i$. We now partition the nodes visited in ${{\ensuremath{\mathcal{N}}}\xspace}'$ as follows. For each $i\in[k]:=\{1,2,\cdots,k\}$, group $G_i$ consists of nodes visited after traversing $e_{i-1}$ and before traversing $e_i$; and $G_{k+1}$ is the set of nodes visited after $e_k$. Note that the nodes in $G_i$ are visited contiguously using only right edges (they need not be visited in the order given by tree [$\mathcal{T}$]{}, as the algorithm may backtrack). See Figure \[fig:back-NA\] for a pictorial view. For each $i\in[k]$, let $X_i{\subseteq}G_i$ denote the nodes at level higher than that of $u_i$ (the parent end-point of left-edge $e_i$); and let $Y_i=G_i\setminus X_i\setminus \{u_i\}$. We also set $X_{k+1}=G_{k+1}$. By using exactly the argument in Claim \[cl:na-Rbranch\], the total reward in $\{X_i\}_{i=1}^{k+1}$ is at most $\sum_{j=0}^L (1-p)^j \le \sqrt{L}$. Let us modify ${{\ensuremath{\mathcal{N}}}\xspace}'$ by dropping all nodes in $\{X_i\}_{i=1}^{k+1}$. Each remaining node $w$ of ${{\ensuremath{\mathcal{N}}}\xspace}'$ is either (i) an L-branching node, where ${{\ensuremath{\mathcal{N}}}\xspace}'$ goes left after $w$ (these are the end-points $u_i$s of left-edges), or (ii) a “[*backtrack*]{} node” where ${{\ensuremath{\mathcal{N}}}\xspace}'$ backtracks on the edge from $w$ to its parent (these are nodes in $Y_i$s). By Claim \[cl:na-Lbranch\], the expected reward from L-branching nodes is at most $2\sqrt{L}$. In order to bound the total expected reward, it now suffices to bound the reward from the backtrack nodes. \[cl:na-back\] The expected reward of ${{\ensuremath{\mathcal{N}}}\xspace}'$ from the backtrack nodes is at most $2\sqrt{L}$. Consider the partition (defined above) of backtrack nodes of ${{\ensuremath{\mathcal{N}}}\xspace}'$ into groups $Y_1,Y_2,\cdots,Y_k$. Recall that ${{\ensuremath{\mathcal{N}}}\xspace}'$ visits each group $Y_i$ contiguously (perhaps not in the order given by [$\mathcal{T}$]{}) and then traverses left-edge $e_i$ to go to the next group $Y_{i+1}$.
Moreover, $u_i$ (the parent end-point of left-edge $e_i$) is an ancestor of all $Y_i$-nodes. See also Figure \[fig:back-NA\]. Note also that the walk visiting each group $Y_i$ consists only of right-edges: so the total reward in any single group $Y_i$ is at most $\min\{|Y_i|, \sqrt{L}\}$ (see Claim \[cl:na-Rbranch\]). Define $h_i:=\min\{|Y_i|, \sqrt{L}\}$ for each $i\in[k]$. ![The walk corresponding to non-adaptive policy ${{\ensuremath{\mathcal{N}}}\xspace}'$ in Claim \[cl:na-back\]. \[fig:back-NA\] ](back-NA) Let $I$ denote the (random) index of the first group where a positive size is instantiated. We now show that ${{\ensuremath{\mathcal{N}}}\xspace}'$ can not visit any group with index larger than $I$. Let $u=u_I$ and $v=v_I$ denote the end-points of the left-edge $e_I$. Note that ${{\ensuremath{\mathcal{N}}}\xspace}'$ must traverse the left edge $e_I=(u,v)$ out of $u$ to reach groups $Y_{I+1},\cdots,Y_k$. If $w\in Y_I$ is the node with positive size instantiation and $\ell$ its level, then $s_w\ge s_u\cdot {e\left(2^{\ell}\right)}\ge 4\,s_u$ (since $u$ is an ancestor of all $Y_I$-nodes). The distance traveled by ${{\ensuremath{\mathcal{N}}}\xspace}'$ till $v$ is $$A_d(v) = B-b(v)-A_s(v) = B - s_u - A_s(v) >B-2\cdot s_u$$ where the last inequality follows by Lemma \[lem:prefix-size\] (as $A_s(v)<s_v<s_u$). Thus the total distance plus size seen in ${{\ensuremath{\mathcal{N}}}\xspace}'$ (till $v$) is at least $B-2s_u+s_w$, which is at least $B+2s_u$ and hence more than $B$. Thus ${{\ensuremath{\mathcal{N}}}\xspace}'$ can not visit any higher indexed group. Using the above observation, the expected reward from backtrack nodes is at most: $$\begin{aligned} {\mathbb{E}}\left[\sum_{i=1}^I h_i\right] & =& \sum_{i=1}^k h_i\cdot \Pr[I\ge i] \quad \le \quad \max_{i=1}^k h_i \,+\, \sum_{i=1}^k h_i\cdot \Pr[I\ge i+1] \\ &\le & \sqrt{L} \,+\, \sum_{i=1}^k h_i\cdot (1-p)^{\sum_{j=1}^{i} |Y_j|} \quad \le \quad \sqrt{L} \,+\, \sum_{i=1}^k h_i\cdot (1-p)^{\sum_{j=1}^{i} h_j} \\ &\le & \sqrt{L} \,+\, \sum_{x\ge 0} (1-p)^x \quad = \quad 2\sqrt{L}.\end{aligned}$$ Above we used the fact that $\Pr[I\ge i+1] = (1-p)^{\sum_{j=1}^i|Y_j|} \le (1-p)^{\sum_{j=1}^i h_j}$. Altogether, it follows that any non-adaptive policy ${{\ensuremath{\mathcal{N}}}\xspace}'$ has expected reward at most $\sqrt{L} + 2\sqrt{L}+ 2\sqrt{L} =5\sqrt{L}$. Finally, using Lemma \[lem:ad-profit\], we obtain an $\Omega(\sqrt{L})=\Omega\left(\sqrt{\log\log B}\right)$ adaptivity gap. Adaptivity Gap on Line Metric ----------------------------- We now show that the previous instance on a tree metric can also be embedded into a line metric such that the adaptivity gap does not change much. This gives an $\Omega\left(\sqrt{\log\log B}\right)$ adaptivity gap for stochastic orienteering even on line metrics. The line metric [[$\mathcal{L}$]{}]{}is defined as follows. Each node $v$ of the tree [$\mathcal{T}$]{}is mapped (on the real line) to the coordinate $d(\rho,v)$ which is the distance in [$\mathcal{T}$]{}from the root $\rho$ to $v$. Since all distances in our construction are integers, each node lies at a non-negative integer coordinate. Note that multiple nodes may be at the same coordinate (for example, as all right-edges in [$\mathcal{T}$]{}have zero length). Below, $d(\cdot)$ will denote distances in the tree metric [$\mathcal{T}$]{}, and $d_{{\ensuremath{\mathcal{L}}}\xspace}(\cdot)$ denotes distances in the line metric [[$\mathcal{L}$]{}]{}. Note that $d_{{\ensuremath{\mathcal{L}}}\xspace}(\rho,v)=d(\rho,v)$ for all nodes $v\in{\ensuremath{\mathcal{T}}\xspace}$.
Moreover, the distance $d_{{\ensuremath{\mathcal{L}}}\xspace}(u,v)$ between two nodes $u$ and $v$ in the line metric is $|d(\rho,v)-d(\rho,u)|$, which is at most the distance $d_{{\ensuremath{\mathcal{T}}\xspace}}(u,v)$ in the tree metric. Thus the adaptive policy [[$\mathcal{A}$]{}]{}for the tree [$\mathcal{T}$]{}is also valid for the line, and (by Lemma \[lem:ad-profit\]) it has expected reward $\Omega(L)$. However, the distances $d_{{\ensuremath{\mathcal{L}}}\xspace}(u,v)$ on the line could be arbitrarily smaller than the tree distances $d_{{\ensuremath{\mathcal{T}}\xspace}}(u,v)$, and thus the key issue is to show that non-adaptive policies cannot do much better. To this end, we begin by observing some more properties of the distances $d(\rho,v)$ and the embedding on the line. \[lem:dist-lr\] For any internal node $u \in {\ensuremath{\mathcal{T}}\xspace}$, let $L_u$ (resp. $R_u$) denote the subtree rooted at the left (resp. right) child of $u$. Then, for any node $v \in L_u$, $ d(\rho,v) > B-2 s_u$ and for any node $v \in R_u$, $d(\rho,v) \leq B - 4 s_u.$ For any node $v$, recall that its residual budget $b(v) = B - d(\rho,v)-A_s(v)$, where $A_s(v)$ is the total size instantiated in the adaptive policy [[$\mathcal{A}$]{}]{}before node $v$. Suppose $v \in L_u$, and let $u'$ be the left child of $u$. Then $$d(\rho,v) \geq d(\rho,u') = B - b(u') - A_s(u') = B-s_u - A_s(u') = B - s_u - A_s(u) > B -2 s_u,$$ where we use that $b(u')=s_u$, $A_s(u')=A_s(u)$ and the last inequality follows from Lemma \[lem:prefix-size\]. Now consider $v \in R_u$. We have $A_s(v) \geq s_u$, as $v$ lies in the right subtree under $u$ and so $u$ must have instantiated to a positive size before reaching $v$. By Claim \[cl:well-defn\], $b(v) \geq 3 s_v$ which is at least $3 s_u$ since $s_v > s_u$ for each $v\in R_u$. Thus $d(\rho,v)= B - b(v) - A_s(v) \leq B - 4s_u$. This implies the following useful fact. \[cor:left-right\] In the line embedding, for any node $u\in {\ensuremath{\mathcal{T}}\xspace}$, all nodes in the left-subtree $L_u$ appear after all nodes in the right-subtree $R_u$. We will now show that any non-adaptive policy has reward at most $O(\sqrt{L})$. This requires more work than in the tree metric case, but the high level idea is quite similar: we restrict what any non-adaptive policy can look like using the properties of distances, and show that such policies cannot obtain too much profit. Observe that a non-adaptive policy ${{\ensuremath{\mathcal{N}}}\xspace}''$ is just a walk on [[$\mathcal{L}$]{}]{}, originating from $\rho$ and visiting a subset of vertices. \[lem:line-na-ordered\] Any non-adaptive policy on [[$\mathcal{L}$]{}]{}must visit vertices ordered by non-decreasing distance from $\rho$. We will show that if vertex $v$ is visited before $w$ and $d(\rho,v)>d(\rho,w)$ then the walk to $w$ has length more than $B$; this would prove the lemma. Let $u$ denote the least common ancestor of $v$ and $w$. There are two cases depending on whether $u=w$ or $u \notin \{v,w\}$; note that the ancestor $u$ cannot be $v$ as $d(\rho,v) > d(\rho,w)$. If $u\notin\{v,w\}$, since $d(\rho,v) > d(\rho,w)$, it must be that $v \in L_u$ and $w\in R_u$ by Corollary \[cor:left-right\]. Moreover, the total distance traveled by the path is at least $$d_{{\ensuremath{\mathcal{L}}}\xspace}(\rho,v) + d_{{\ensuremath{\mathcal{L}}}\xspace}(v,w) \ge d(\rho,v) + d(\rho,v) - d(\rho,w) = 2 d(\rho,v) - d(\rho,w) > 2(B - 2 s_u) - (B - 4 s_u) = B,$$ where the second inequality is by Lemma \[lem:dist-lr\].
If $u=w$, since $d(\rho,v)>d(\rho,w)$, there must be at least one left edge $e=(x,y)$ on the path from $w$ to $v$ in the tree (as the length of the right edges is 0). Then, the distance traveled by the path is at least $d_{{\ensuremath{\mathcal{L}}}\xspace}(\rho,v) + d_{{\ensuremath{\mathcal{L}}}\xspace}(v,w) \ge d(\rho,y) + d(x,y) = d(\rho,x) + 2d(x,y)$. As $d(\rho,x) = B-b(x) - A_s(x) > B-b(x) - s_x$ by Lemma \[lem:prefix-size\], and as $d(x,y) = b(x)-s_x$ (by definition of distances on left edges), we have $$d(\rho,x) + 2d(x,y) > (B - b(x) - s_x) + 2(b(x)-s_x) = B + b(x) - 3s_x \ge B,$$ where the strict inequality uses the above bound on $d(\rho,x)$, and the last inequality follows from Claim \[cl:well-defn\]. By Lemma \[lem:line-na-ordered\], any non-adaptive policy ${{\ensuremath{\mathcal{N}}}\xspace}''$ visits vertices in non-decreasing coordinate order. For vertices at the same coordinate, we can break ties and assume that these nodes are visited in decreasing order of their level in [$\mathcal{T}$]{}. This does not decrease the expected reward due to the following exchange argument. \[cl:line-na-ties\] If ${{\ensuremath{\mathcal{N}}}\xspace}''$ visits two vertices $\{u,v\}$ consecutively that have the same coordinate in [[$\mathcal{L}$]{}]{}and have levels $\ell(u)>\ell(v)$, then $u$ must be visited before $v$. Since $u$ and $v$ have the same coordinate in [[$\mathcal{L}$]{}]{}, by Lemma \[lem:dist-lr\] it must be that one is an ancestor of the other, and the $u-v$ path in [$\mathcal{T}$]{}consists only of right-edges. Since $\ell(u)>\ell(v)$, node $u$ is an ancestor of $v$ in [$\mathcal{T}$]{}. Suppose that ${{\ensuremath{\mathcal{N}}}\xspace}''$ chooses to visit $v$ before $u$. We will show that the alternate solution $\oN$ that visits $u$ before $v$ has at least as large expected reward. This is intuitively clear since $u$ stochastically dominates $v$ in our setting: the probabilities are identical, the size of $u$ is smaller than that of $v$, and the reward of $u$ is larger than that of $v$. The formal proof also requires independence of $u$ and $v$, and is by a case analysis. Let us [*condition*]{} on all instantiations other than $u$ and $v$: we will show that $\oN$ has at least as large conditional expected reward as ${{\ensuremath{\mathcal{N}}}\xspace}''$. This would also show that the (unconditional) expected reward of $\oN$ is at least that of ${{\ensuremath{\mathcal{N}}}\xspace}''$. Let $X$ denote the total distance plus size in ${{\ensuremath{\mathcal{N}}}\xspace}''$ (resp. $\oN$) when it reaches $v$ (resp. $u$). Irrespective of the outcomes at $u$ and $v$, the residual budgets in ${{\ensuremath{\mathcal{N}}}\xspace}''$ and $\oN$ before/after visiting $\{u,v\}$ will be identical. So the only difference in (conditional) expected reward is at $u$ and $v$. The following table lists the different possibilities for rewards from $u$ and $v$, as $X$ varies (recall that $B$ is the budget).

  Cases                    Reward $({{\ensuremath{\mathcal{N}}}\xspace}'')$   Reward $(\oN)$
  ------------------------ -------------------------------------------------- -----------------------
  $X+s_u+s_v\le B$         $r_u+r_v$                                          $r_u+r_v$
  $X+s_v\le B<X+s_u+s_v$   $(1-p^2)r_u+r_v$                                   $r_u+(1-p^2)r_v$
  $X+s_u\le B<X+s_v$       $(1-p)(r_u+r_v)$                                   $r_u+(1-p)r_v$
  $X\le B < X+s_u$         $(1-p)^2r_u+(1-p)r_v$                              $(1-p)r_u+(1-p)^2r_v$
  $B<X$                    $0$                                                $0$

In each case, $\oN$ gets at least as much reward as ${{\ensuremath{\mathcal{N}}}\xspace}''$ since $r_u>r_v$. This completes the proof.
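As a quick numerical check of this exchange argument, the following sketch (an illustration with our own encoding of the two co-located Bernoulli jobs) computes the conditional expected reward of the pair $\{u,v\}$ for both visiting orders, given the total distance plus size $X$ accumulated before the pair:

```python
def pair_rewards(X, B, s_u, s_v, r_u, r_v, p):
    """Conditional expected reward from the co-located pair {u, v} for the
    two visiting orders; each size is 0 with probability 1-p and the stated
    value with probability p, and travel between u and v costs nothing."""
    def order(first, second):
        (s1, r1), (s2, r2) = first, second
        exp = 0.0
        for a, pa in ((0, 1 - p), (s1, p)):        # size of the first job
            for b, pb in ((0, 1 - p), (s2, p)):    # size of the second job
                rew = r1 if X + a <= B else 0.0        # first job completes in time?
                rew += r2 if X + a + b <= B else 0.0   # second job completes in time?
                exp += pa * pb * rew
        return exp
    # (v before u, as in N'')  versus  (u before v, as in the modified policy)
    return order((s_v, r_v), (s_u, r_u)), order((s_u, r_u), (s_v, r_v))
```

Sweeping $X$ through the five cases of the table, with $s_u<s_v$ and $r_u>r_v$, confirms that the second order is never worse, which is the inequality used in the proof.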
For any node $v$ in ${{\ensuremath{\mathcal{N}}}\xspace}''$, let $E_v$ denote the set of nodes $u$ satisfying (i) $u$ appears before $v$ in ${{\ensuremath{\mathcal{N}}}\xspace}''$, and (ii) $u$ is [*not*]{} an ancestor of $v$ in tree [$\mathcal{T}$]{}. We refer to $E_v$ as the “blocking set” for node $v$. We first prove a useful property of the $\{E_v\}$ sets. \[cl:line-E-prop\] For any $v\in {{\ensuremath{\mathcal{N}}}\xspace}''$ and $u\in E_v$, we must have $u\in$ right-subtree$(a)$ and $v\in$ left-subtree$(a)$ at the lowest common ancestor $a$ of $v$ and $u$. Moreover ${{\ensuremath{\mathcal{N}}}\xspace}''$ can not get reward from $v$ if any vertex in its blocking set $E_v$ instantiates to a positive size. Observe that $u$ and $v$ are incomparable in [$\mathcal{T}$]{}because: - $u$ is not an ancestor of $v$ by definition of $E_v$. - $u$ is not a descendant of $v$. Suppose (for a contradiction) that $u$ is a descendant of $v$. Note that $u$ and $v$ are not co-located in [[$\mathcal{L}$]{}]{}: if it were, then by Claim \[cl:line-na-ties\] and the fact that ${{\ensuremath{\mathcal{N}}}\xspace}''$ visits $u$ before $v$, $u$ must be an ancestor of $v$, which contradicts the definition of $E_v$. So the only remaining case is that $u$ is located further from $\rho$ than $v$: but this contradicts Lemma \[lem:line-na-ordered\] as ${{\ensuremath{\mathcal{N}}}\xspace}''$ visits $u$ before $v$. So the lowest common ancestor $a$ of $v$ and $u$ is distinct from both $v,u$. Since $d(\rho,u)\le d(\rho,v)$, we must have $u\in R_a$ and $v\in L_a$. This proves the first part of the claim. Since $u\in R_a$, its size $s_u\ge s_a\cdot {e\left(2^{\ell(a)-1} - 2^{\ell(a)-2}-\cdots -2^1\right)} = 4\cdot s_a$. As $v \in L_a$, by Lemma \[lem:dist-lr\] $d(\rho,v)> B-2\cdot s_a$ and hence if $u$ has non-zero size, the total distance plus size until $v$ is more than $4s_a+B-2s_a>B$, i.e. ${{\ensuremath{\mathcal{N}}}\xspace}''$ can not get reward from $v$. The next key claim shows that the sets $E_v$ are increasing along ${{\ensuremath{\mathcal{N}}}\xspace}''$. \[cl:inc-E\] If node $v$ appears before $w$ in ${{\ensuremath{\mathcal{N}}}\xspace}''$ then $E_v{\subseteq}E_w$. Consider nodes $v$ and $w$ as in the claim, and suppose (for contradiction) that there is some $u\in E_v\setminus E_w$. Since $u\in E_v$, by Claim \[cl:line-E-prop\], $u\in$ right-subtree$(a)$ and $v\in$ left-subtree$(a)$, where $a$ is the lowest common ancestor of $u$ and $v$. Clearly $u$ appears before $w$ in ${{\ensuremath{\mathcal{N}}}\xspace}''$ ($u$ is before $v$ which is before $w$). And since $u \not\in E_w$, $u$ must be an ancestor of $w$. Hence $w$ is also in the right-subtree$(a)$, and $d(\rho,w)<d(\rho,v)$; recall that $v\in$ left-subtree$(a)$. This contradicts with Lemma \[lem:line-na-ordered\] since $v$ is visited before $w$. Thus $E_v{\subseteq}E_w$. Based on Claim \[cl:inc-E\], the blocking sets in ${{\ensuremath{\mathcal{N}}}\xspace}''$ form an increasing sequence. So we can partition ${{\ensuremath{\mathcal{N}}}\xspace}''$ into contiguous segments $\{N_i\}_{i=1}^k$ with $u_i$ (resp. $v_i$) denoting the first (resp. last) vertex of $N_i$, so that the following hold for each $i\in[k] := \{1,2,\cdots,k\}$. - The first vertex $u_i$ of $N_i$ has $|E_{u_i}|\ge (i-1)\sqrt{L}$, and - the increase in the blocking set $|E_{v_i}\setminus E_{u_i}| = |E_{v_i}|-|E_{u_i}| < \sqrt{L}$. 
[**Defining directed non-adaptive policies from ${{\ensuremath{\mathcal{N}}}\xspace}''$.**]{} For each $i\in[k]$ consider the non-adaptive policy ${{\ensuremath{\mathcal{N}}}\xspace}_i$ that traverses segment $N_i$ and visits only vertices in $N_i\setminus E_{v_i}$; note that $N_i\setminus E_{v_i} = N_i\setminus \left(E_{v_i} \setminus E_{u_i}\right)$ since $E_{u_i}\cap N_i=\emptyset$. Notice that the blocking set is always empty in ${{\ensuremath{\mathcal{N}}}\xspace}_i$: this means that nodes in ${{\ensuremath{\mathcal{N}}}\xspace}_i$ are visited in the order of some root-leaf path in tree [$\mathcal{T}$]{}, i.e. ${{\ensuremath{\mathcal{N}}}\xspace}_i$ is a directed non-adaptive policy (as considered in Section \[subsec:dir-gap\]). So by Claim \[cl:mon-na\], the expected reward in each ${{\ensuremath{\mathcal{N}}}\xspace}_i$ is at most $3\sqrt{L}$. That is, $$\label{eq:line-mon-na} \max_{i=1}^k \,\, {\mathbb{E}}\left[\mbox{reward }{{\ensuremath{\mathcal{N}}}\xspace}_i\right] \quad \le \quad 3\sqrt{L}$$ Now we can upper bound the reward in the original non-adaptive policy ${{\ensuremath{\mathcal{N}}}\xspace}''$. $$\begin{aligned} {\mathbb{E}}\left[\mbox{reward }{{\ensuremath{\mathcal{N}}}\xspace}''\right] & =& \sum_{i=1}^k \Pr\left[{{\ensuremath{\mathcal{N}}}\xspace}'' \mbox{ reaches }u_i\right] \cdot {\mathbb{E}}\left[\mbox{reward in }N_i \,|\, {{\ensuremath{\mathcal{N}}}\xspace}'' \mbox{ reaches }u_i \right] \label{eq:line-na-fin1}\\ &\le & \sum_{i=1}^k e^{-i+1} \cdot {\mathbb{E}}\left[\mbox{reward in }N_i \, |\, {{\ensuremath{\mathcal{N}}}\xspace}'' \mbox{ reaches }u_i \right] \label{eq:line-na-fin2}\\ &\le & \sum_{i=1}^k e^{-i+1} \cdot \left( {\mathbb{E}}\left[\mbox{reward }{{\ensuremath{\mathcal{N}}}\xspace}_i\right] +|E_{v_i}\setminus E_{u_i}|\right) \label{eq:line-na-fin3}\\ &\le & \sum_{i=1}^k e^{-i+1} \cdot 4\sqrt{L}\quad \le \quad \frac{4e}{e-1}\,\sqrt{L}.\label{eq:line-na-fin4}\end{aligned}$$ Inequality \[eq:line-na-fin2\] uses $|E_{u_i}|\ge (i-1)\sqrt{L}$ and the second part of Claim \[cl:line-E-prop\] which implies $\Pr[{{\ensuremath{\mathcal{N}}}\xspace}''\mbox{ reaches }u_i]\le (1-p)^{|E_{u_i}|}\le e^{-i+1}$. Inequality \[eq:line-na-fin3\] uses the definition of ${{\ensuremath{\mathcal{N}}}\xspace}_i$, which is obtained by skipping nodes $E_{v_i}\setminus E_{u_i}$ in $N_i$, and also the independence of the sizes (which allows us to drop the conditioning). Finally, \[eq:line-na-fin4\] uses the property $|E_{v_i}\setminus E_{u_i}|< \sqrt{L}$ and \[eq:line-mon-na\]. This completes the proof of Theorem \[thm:ad-gap\]. Adaptivity Gap Upper Bound for Correlated Orienteering {#sec:corr-ad-gap} ====================================================== Here we show that the adaptivity gap of stochastic orienteering is $O(\log\log B)$ even in the setting of [*correlated*]{} sizes and rewards. This is an extension of the result proved in [@GKNR12] for the uncorrelated case. The correlated stochastic orienteering problem again consists of a budget $B$ and metric $(V,d)$ with each vertex $v\in V$ denoting a random job. Each vertex has a joint distribution over its reward and processing time (size); so the reward and size at any vertex are correlated. These distributions are still independent across vertices. The basic stochastic orienteering problem is the special case when vertex rewards are deterministic. We prove Theorem \[thm:corr-loglog-UB\] in this section. [**Notation.**]{} Recall that for each vertex $v\in V$, $S_v$ and $R_v$ denote its (random) size and reward. As before, all sizes are integers in the range $[B]:=\{0,1,\cdots,B\}$.
We assume an explicit representation of each job’s distribution: the job at vertex $v\in V$ has size $s\in[B]$ and reward $r_v(s)$ with probability $\Pr[S_v=s]=\pi_v(s)$. We represent the optimal adaptive policy naturally as a decision tree [$\mathcal{T}$]{}. Nodes in [$\mathcal{T}$]{}are labeled by vertices in $V$ and branches correspond to size (and reward) instantiations. Note that the same vertex of $V$ may appear at multiple nodes of [$\mathcal{T}$]{}(note the distinction between nodes and vertices). However, any root-leaf path in [$\mathcal{T}$]{}contains each vertex at most once. For nodes $u,u'$, we use $u\prec u'$ to denote $u$ being an ancestor of $u'$ in [$\mathcal{T}$]{}, where $u\ne u'$. For any node $u\in {\ensuremath{\mathcal{T}}\xspace}$, let $d_u$ (resp. $i_u$) denote the total distance traveled (resp. size observed) in [$\mathcal{T}$]{}before $u$. For node $u\in {\ensuremath{\mathcal{T}}\xspace}$, we overload notation and use $S_u$, $R_u$ etc to denote the respective term for the vertex labeling $u$. Note that at any node $u\in{\ensuremath{\mathcal{T}}\xspace}$, only size instantiations $S_u\le B-d_u-i_u$ contribute reward (any larger size violates the budget). Define the expected reward at node $u$ as $\rr_u := \sum_{s=0}^{B-d_u-i_u} \Pr[S_u=s]\cdot r_u(s)$. Observe that the optimal adaptive reward is ${\ensuremath{\mathsf{Opt}}\xspace}=\sum_{u\in{\ensuremath{\mathcal{T}}\xspace}} \Pr[{\ensuremath{\mathcal{T}}\xspace}\mbox{ reaches }u]\cdot \rr_u$. For each integer $j=0,1,\cdots,\lceil \log_2B\rceil$ and node $u\in{\ensuremath{\mathcal{T}}\xspace}$, define $X^j_u:=\min\{S_u,2^j\}$. Let $K:=\Theta(\log\log B)$ be a parameter that will be fixed later. We are now ready to prove Theorem \[thm:corr-loglog-UB\]. It is along similar lines as the proof for uncorrelated stochastic orienteering in [@GKNR12], and also makes use of the following concentration inequality. \[thm:zhang\] Let $I_1,I_2,\ldots$ be a sequence of possibly dependent random variables; for each $k\ge 1$ variable $I_k$ depends only on $I_{k-1},\ldots,I_1$. Consider also a sequence of random functionals $\xi_k(I_1,\ldots,I_k)$ that lie in $[0,1]$. Let ${\mathbb{E}}_{I_k}[\xi_k(I_1,\ldots,I_k)]$ denote the expectation of $\xi_k$ with respect to $I_k$, conditional on $I_1,\ldots,I_{k-1}$. Furthermore, let $\tau$ denote any stopping time. Then, $$\Pr\left[ \,\,\sum_{k=1}^\tau {\mathbb{E}}_{I_k}[\xi_k(I_1,\ldots,I_k)] \,\, \ge \,\, \frac{e}{e-1}\cdot \left( \sum_{k=1}^\tau \xi_k +\delta\right) \right] \quad \le \quad \exp(-\delta), \qquad \forall \delta\ge 0.$$ This result is used in proving the following important property. \[lem:corr-prob\] Assume $K\ge 12$, and fix any $j\in\{0,1,\cdots,\lceil \log B\rceil\}$. Then, the probability of reaching a node $u\in {\ensuremath{\mathcal{T}}\xspace}$ with: - $\sum_{v\preceq u} X^j_v \le 2\cdot 2^j$, and - $\sum_{v\preceq u} {\mathbb{E}}_v\left[ X^j_v \right] > K\cdot 2^j$ is at most $e^{-K/3}$. For each $k=1,2,\cdots$, set $I_k$ to be the $k^{th}$ node seen in [$\mathcal{T}$]{}, and $$\xi_k(I_1,\ldots,I_k) \quad := \quad \frac{X^j_{I_k}}{2^j} \quad = \quad \min\left\{ \frac{S_{I_k}}{2^{j}}\, , \, 1\right\}.$$ Observe that this sequence satisfies the condition in Theorem \[thm:zhang\], since the identity of the $k^{th}$ node in [$\mathcal{T}$]{}depends only on the outcomes of the previous $k-1$ nodes. Moreover, each $\xi_k(\cdot)$ has a value in the range $[0,1]$. 
Note that the conditional expectation ${\mathbb{E}}_{I_k}[\xi_k(I_1,\ldots,I_k)] = {\mathbb{E}}_{I_k}\left[X^j_{I_k}\right]/2^j$. Define stopping time $\tau$ to be the first node $I_k=u$ (if one exists) at which the following two conditions hold: $$\sum_{h=1}^k \xi_h(I_1,\ldots,I_h) \le 2, \quad \mbox{ i.e. } \sum_{v\preceq u} X^j_v \le 2\cdot 2^j; \quad \mbox{ and }$$ $$\sum_{h=1}^k {\mathbb{E}}_{I_h}\left[\xi_h(I_1,\ldots,I_h)\right] > K, \quad \mbox{ i.e. } \sum_{v\preceq u} {\mathbb{E}}_v\left[ X^j_v \right] > K\cdot 2^j.$$ If there is no such node, then $\tau$ stops when [$\mathcal{T}$]{}ends (i.e. after a leaf-node of [$\mathcal{T}$]{}). Clearly, if $\tau$ stops before [$\mathcal{T}$]{}ends, then $$\sum_{k=1}^\tau {\mathbb{E}}_{I_k}[\xi_k(I_1,\ldots,I_k)] \,\, > \,\, K \,\, > \,\, \frac{e}{e-1}\cdot \left( \sum_{k=1}^\tau \xi_k(I_1,\ldots,I_k) +\frac{K}{2}-2\right)$$ Now, setting $\delta=\frac{K}{2}-2$ in Theorem \[thm:zhang\], the probability that $\tau$ stops before [$\mathcal{T}$]{}ends (i.e. we reach a node $u$ satisfying the two conditions stated in the lemma) is at most $e^{-K/2+2}\le e^{-K/3}$ using $K\ge 12$. \[lem:corr-NAfromA\] Assume $K\ge 3\cdot\log(6\log B)+12$. There is some node $s\in {\ensuremath{\mathcal{T}}\xspace}$ such that the path $\sigma$ from the root to $s$ satisfies: - Total reward: $\sum_{v\in \sigma} \rr_v = \sum_{v\preceq s}\rr_v\ge {\ensuremath{\mathsf{Opt}}\xspace}/2$. - Prefix size: For each $v\in\sigma$ and $j$, either $\sum_{w\preceq v} X^j_w > 2\cdot 2^j$ or $\sum_{w\preceq v} {\mathbb{E}}_w\left[X^j_w\right] \le K\cdot 2^j$. For each $j=0,1,\cdots,\lceil \log_2B\rceil$, define [*band $j$ “star nodes”*]{} to be those $u\in {\ensuremath{\mathcal{T}}\xspace}$ that satisfy the two conditions in Lemma \[lem:corr-prob\]. Using Lemma \[lem:corr-prob\] and a union bound over $1+\lceil\log_2B\rceil$ values of $j$, the probability of reaching any star node is at most $\frac{3\log_2B}{e^{K/3}}\le \frac12$ since $K\ge 3\cdot\log(6\log B)$. Observe that for any node $u\in {\ensuremath{\mathcal{T}}\xspace}$, the conditional expected reward from the subtree of [$\mathcal{T}$]{}under $u$ is at most ${\ensuremath{\mathsf{Opt}}\xspace}$: otherwise, the alternate policy that visits $u$ directly from the root and follows the subtree below $u$ would be a feasible policy of value more than ${\ensuremath{\mathsf{Opt}}\xspace}$, contradicting the optimality of [$\mathcal{T}$]{}. Consider tree [$\mathcal{T}$]{}truncated just before all the star nodes. By the above two properties, the expected reward lost is at most ${\ensuremath{\mathsf{Opt}}\xspace}\cdot \Pr[\mbox{reach a star node}]\le {\ensuremath{\mathsf{Opt}}\xspace}/2$. So the remaining reward is at least ${\ensuremath{\mathsf{Opt}}\xspace}/2$. By averaging, there is some leaf node $s$ in truncated [$\mathcal{T}$]{}, at which $\sum_{v\preceq s} \rr_v\ge {\ensuremath{\mathsf{Opt}}\xspace}/2$; this proves the “total reward” property. Let $\sigma$ denote the path from root to $s$ in [$\mathcal{T}$]{}. Since $\sigma$ does not contain a star node (of any band), every $v\in\sigma$ violates one of the conditions in Lemma \[lem:corr-prob\] for each value of $j$. This proves the “prefix size” property. [**The non-adaptive policy.**]{} We now complete the proof of Theorem \[thm:corr-loglog-UB\] by implementing the path $\sigma$ from Lemma \[lem:corr-NAfromA\] as a non-adaptive solution [[$\mathcal{N}$]{}]{}. 
The policy [[$\mathcal{N}$]{}]{}simply involves visiting the vertices on $\sigma$ (in that order) and attempting each job independently with probability $\frac{1}{4K}$. Observe that the distance traveled by [[$\mathcal{N}$]{}]{}until any $v\in \sigma$ is exactly $d_v$. We will show that the expected reward of policy [[$\mathcal{N}$]{}]{}is at least $\frac{1}{8K}\sum_{v\in\sigma} \rr_v$. \[cl:corr-NA\] For any $v\in\sigma$, $\Pr\left[{{\ensuremath{\mathcal{N}}}\xspace}\mbox{ has seen total size $\le i_v$ until }v\right] \ge \frac34$. Let $j\in\{0,1,\cdots,\lceil \log_2B\rceil\}$ be the value for which $2^j-1\le i_v<2^{j+1}-1$. So, $$\sum_{w\prec v} X^j_w \quad =\quad \sum_{w\prec v} \min\{S_w, 2^j\} \quad \le \quad \sum_{w\prec v} S_w \quad =\quad i_v \quad < \quad 2\cdot 2^j.$$ Note that the claim is trivially true for $v$ being the root. For any other $v\in \sigma$, let $v'$ denote the node in $\sigma$ (and [$\mathcal{T}$]{}) immediately preceding $v$. Using the “prefix size” property (with $v'$ and $j$) in Lemma \[lem:corr-NAfromA\], it follows that $\sum_{w\prec v} {\mathbb{E}}_w\left[X^j_w\right] \le K\cdot 2^j$. Since policy [[$\mathcal{N}$]{}]{}tries each job only with probability $\frac{1}{4K}$, by Markov’s inequality, $$\Pr\left[{{\ensuremath{\mathcal{N}}}\xspace}\mbox{ has seen total size $\ge 2^j$ until }v\right] \le \frac14.$$ Since $i_v\ge 2^j-1$ and sizes are integral, the claim follows. We can now bound the reward in [[$\mathcal{N}$]{}]{}. $$\begin{aligned} {\mathbb{E}}[\mbox{reward of }{{\ensuremath{\mathcal{N}}}\xspace}] & \ge & \sum_{v\in \sigma} \Pr[{{\ensuremath{\mathcal{N}}}\xspace}\mbox{ tries }v]\cdot \Pr[{{\ensuremath{\mathcal{N}}}\xspace}\mbox{ has seen total size $\le i_v$ until }v]\cdot \sum_{s=0}^{B-d_v-i_v} \Pr[S_v=s]\cdot r_v(s) \\ &\ge & \sum_{v\in \sigma} \frac{1}{4K}\cdot \frac{3}{4}\cdot \rr_v \quad >\quad \frac{1}{6K}\sum_{v\in \sigma} \rr_v\quad \ge \quad \frac{{\ensuremath{\mathsf{Opt}}\xspace}}{12K}.\end{aligned}$$ The first inequality uses independence across vertices. The second inequality uses the sampling probability of jobs in [[$\mathcal{N}$]{}]{}, Claim \[cl:corr-NA\] and the definition of $\rr$. The last inequality uses the “total reward” property from Lemma \[lem:corr-NAfromA\]. This completes the proof of Theorem \[thm:corr-loglog-UB\]. #### Defining portals on non-adaptive policy We now define some special nodes on the path $\sigma$ from Lemma \[lem:corr-NAfromA\], which will be useful in the approximation algorithm given in the next section. First, some notation. \[def:cap-size-rew\] For any vertex $v$ and integer $j\ge 0$, let $\mu^j_v:={\mathbb{E}}[X^j_v]={\mathbb{E}}\left[\min\{S_v,2^j\}\right]$ be the mean size of $v$ capped at $2^j$. For any vertex $v$ and integer $d\ge 0$, let $\eta_v(d):=\sum_{s=0}^d \pi_v(s)\cdot r_v(s)$ be the expected reward from $v$ under size instantiation at most $d$. Note that the expected reward of any $v\in\sigma$ in [$\mathcal{T}$]{}is $\rr_v=\eta_v(B-d_v-i_v)$. Also, capped sizes satisfy the following property which will play a crucial role in our algorithm: $$\label{eq:capped-mu} \frac{\mu^{j+1}_v}{2^{j+1}} \quad \le \quad \frac{\mu^{j}_v}{2^{j}},\qquad \forall v\in V \mbox{ and } j\ge 0.$$ This inequality follows from the fact that ${\mathbb{E}}\left[\min\{S_v,2^{j+1}\}\right]\le 2\cdot {\mathbb{E}}\left[\min\{S_v,2^j\}\right]$. Let $L:=\lceil \log_2B\rceil$, and $[L]:=\{0,1,\cdots,L\}$. 
$$\label{eq:NA-portals} \mbox{ Set {\em portal} $v_j$ to be the first vertex $u\in \sigma$ with $i_u\ge 2^{j+1}-1$}, \quad \forall \, j\in [L].$$ Recall that $\sigma$ starts at the root $\rho$; for notational convenience set $v_{-1}=\rho$. Clearly, $v_0\prec v_1\prec\cdots \prec v_{L}$. For any $j\in [L]$, define [*segment*]{} $O_j$ to consist of the vertices $v_{j-1}\preceq u\prec v_j$ in path $\sigma$. The following lemma shows that we can round size instantiations in $\sigma$ to powers of two, and still retain the two properties in Lemma \[lem:corr-NAfromA\]. \[lem:corr-portals\] Given any instance of correlated stochastic orienteering, there exist “portal vertices” $\{v_j\}_{j=0}^L$ and path $\sigma$ originating from $\rho$ and visiting the portals in that order, such that: - Reward: $\sum_{j\in[L]} \,\, \sum_{u\in O_j} \eta_u(B-d_u-2^j+1) \ge {\ensuremath{\mathsf{Opt}}\xspace}/2$. For each $j\in[L]$, $O_j$ consists of the vertices in $\sigma$ between $v_{j-1}$ and $v_{j}$. For any $u\in\sigma$, $d_u$ is the distance to vertex $u$ along $\sigma$. - Prefix mean size: $\sum_{\ell=0}^j \,\,\sum_{u\in O_\ell} \mu^j_u \le (K+1)\cdot 2^j$, for all $j\in[L]$. Here $K=\Theta(\log\log B)$. The path $\sigma$ is from Lemma \[lem:corr-NAfromA\], and portals $\{v_j\}_{j=0}^L$ are as in \[eq:NA-portals\]. For the first property, consider any segment $O_j$ and vertex $u\in O_j$. By the definition of portals, we have $i_u\ge i_{v_{j-1}} \ge 2^{j}-1$; so $\rr_u=\eta_u(B-d_u-i_u)\le \eta_u(B-d_u-2^j+1)$. Using the “total reward” property in Lemma \[lem:corr-NAfromA\], we have: $$\sum_{j\in[L]} \,\, \sum_{u\in O_j} \eta_u(B-d_u-2^j+1) \quad \ge \quad \sum_{v\in \sigma} \rr_v \quad \ge \quad {\ensuremath{\mathsf{Opt}}\xspace}/2.$$ We used the fact that for any vertex $w\in \sigma$ with $v_L\prec w$, $\rr_w=0$ since the total size observed before $w$ is at least $i_{v_L}>B$. To see the second property, consider any $j\in [L]$. Let $w\in \sigma$ be the vertex immediately preceding $v_j$, and let $w'$ be the immediate predecessor of $w$. By definition of portal $v_j$, we have $i_{w}<2^{j+1}$; i.e. $\sum_{u\preceq w'} X^j_u \le i_w < 2^{j+1}$. Using the “prefix size” property in Lemma \[lem:corr-NAfromA\] with $w'$ and $j$, we obtain that $\sum_{u\prec w}\mu^j_u\le K\cdot 2^j$. So $\sum_{\ell=0}^j \,\,\sum_{u\in O_\ell} \mu^j_u = \sum_{u\prec w}\mu^j_u + \mu^j_w \le (K+1)\cdot 2^j$. New Approximation Algorithm for Correlated Orienteering {#sec:corr-alg} ======================================================= In this section, we present an improved quasi-polynomial time approximation algorithm for correlated stochastic orienteering, and prove Theorem \[thm:corr-NA\]. An important subroutine in our algorithm is the [*deadline orienteering*]{} problem [@BBCM04]. The input to deadline-orienteering is a metric $(U,d)$ denoting travel times, rewards $\{r_v\}_{v\in U}$ and deadlines $\{\Delta_v\}_{v\in U}$ at all vertices, start ($s$) and end ($t$) vertices, and length bound $D$. The objective is to compute an $s-t$ path of length at most $D$ that maximizes the reward from vertices visited before their deadlines. The best approximation ratio for this problem is $\alpha=\min\{O(\log n), O(\log B)\}$ due to Bansal et al. [@BBCM04]; see also Chekuri et al. [@CKP08]. (Strictly speaking, this definition is slightly more general than the usual one where there is no end-vertex or length bound; but all known approximation algorithms also work for the version we use here.)
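For concreteness, the truncated expected rewards and capped mean sizes of Definition \[def:cap-size-rew\], which the algorithm below uses as rewards and knapsack sizes, can be computed directly from the explicit distributions. The following sketch is illustrative only; the (size, reward, probability) encoding of a job is an assumption of the sketch, not a prescribed input format.

```python
def eta(job, d):
    """eta_v(d): expected reward from v when its size instantiates to at most d.
    `job` is a list of (size, reward, probability) triples for vertex v."""
    return sum(prob * rew for size, rew, prob in job if size <= d)

def capped_mean(job, j):
    """mu^j_v = E[min(S_v, 2**j)]: the mean size of v capped at 2**j."""
    return sum(prob * min(size, 2 ** j) for size, rew, prob in job)

# A quick check of the capped-mean inequality (eq:capped-mu) on a toy job.
toy_job = [(0, 0.0, 0.5), (3, 1.0, 0.3), (9, 5.0, 0.2)]
assert all(capped_mean(toy_job, j + 1) / 2 ** (j + 1)
           <= capped_mean(toy_job, j) / 2 ** j for j in range(10))
```

These primitives are the only access to the job distributions that the algorithm needs; the combinatorial subroutine it is built on is the deadline orienteering problem just described.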
We actually need an algorithm for a generalization of this problem, in the presence of an additional knapsack constraint. The input to [*knapsack deadline orienteering*]{} ([$\mathsf{KDO}$]{}) is the same as deadline-orienteering, along with a knapsack constraint given by sizes $\{a_v:v\in U\}$ and capacity $A$. The objective is an $s-t$ path of length at most $D$, having total knapsack size at most $A$, that maximizes the reward from vertices visited before their deadlines. We will use the following known result for [$\mathsf{KDO}$]{}. \[thm:kdo\] There is an $O(\alpha)$-approximation algorithm for knapsack deadline orienteering, where $\alpha$ denotes the best approximation ratio for the deadline orienteering problem. Previously, a polynomial time $O(\alpha\cdot \log B)$-approximation algorithm and $\Omega(\alpha)$-hardness of approximation were known for correlated stochastic orienteering [@GKNR12], where $\alpha$ is the best approximation ratio for deadline orienteering. Our result improves the approximation ratio to $O(\alpha\cdot \log^2\log B)$, at the expense of quasi-polynomial running time. [**Outline:**]{} The algorithm involves three main steps. First, it guesses (by enumeration) $\log B$ many “portal vertices” corresponding to the near-optimal structure in Lemma \[lem:corr-portals\], as well as the distances traveled between consecutive portal vertices. (This is the only step that requires quasi-polynomial time.) Next, based on this information, the algorithm solves a configuration LP-relaxation for paths between the portal vertices. This step also makes use of some results/ideas from the previous algorithm [@GKNR12]. Finally, the algorithm uses randomized rounding with alterations to compute the non-adaptive policy from the LP solution. #### Portals. The simple enumeration algorithm guesses the $L+1$ portal vertices $\{v_j\}_{j=0}^L$ as in Lemma \[lem:corr-portals\]; recall that $L=\lceil \log_2B\rceil$. It also guesses the lengths $\{D_j\}_{j=0}^L$ of the segments $O_j$s in Lemma \[lem:corr-portals\]. This requires running time $(nB)^{L+1}$. We can reduce the enumeration of lengths using a somewhat stronger property than in Lemma \[lem:corr-portals\]. \[lem:corr-enum\] Given any instance of correlated stochastic orienteering, there exist portal vertices $\{v_j\}_{j=0}^L$, auxiliary vertices $\{m_j\}_{j=0}^L$, integers $\{e_j \in [L]\}_{j=0}^L$, and for each $j\in [L]$ a $v_{j-1}$-$v_j$ path $P_j$, such that: - Path length: for any $j\in[L]$, path $P_j$ has length at most $D_j := d(v_{j-1},m_j)+d(m_j,v_j)+2^{e_j}-1$. - Reward: $\sum_{j\in[L]} \,\, \sum_{u\in P_j} \eta_u(B-\sum_{i=0}^{j-1} D_i - t_u - 2^j+1) \ge {\ensuremath{\mathsf{Opt}}\xspace}/4$. Here, $t_u$ is the distance to vertex $u\in P_j$ from $v_{j-1}$ along path $P_j$. - Prefix mean size: $\sum_{\ell=0}^j \,\,\sum_{u\in P_\ell} \mu^j_u \le (K+1)\cdot 2^j$, for all $j\in[L]$. Here $K=\Theta(\log\log B)$. Consider the path $\sigma$ and portal vertices $\{v_j\}_{j=0}^L$ satisfying the properties in Lemma \[lem:corr-portals\]; recall that $v_{-1}=\rho$. For each $j\in[L]$, let $O_j$ denote the subpath of $\sigma$ from vertex $v_{j-1}$ to $v_j$. Recall that for any $u\in \sigma$, $d_u=$ distance from $\rho$ to $u$ along path $\sigma$. For each $j\in[L]$, let $D'_j$ denote the length of subpath $O_j$. Also define for each $j\in[L]$ and vertex $u\in O_j$, $t_u=$ distance to vertex $u$ from $v_{j-1}$ along path $O_j$; note that $d_u=\sum_{i=0}^{j-1} D'_i + t_u$. 
We now modify each subpath $O_j$ to obtain a new $v_{j-1}$-$v_j$ subpath $P_j$ as follows. For each vertex $u\in O_j$ define its profit $q_u:=\eta_u(B-d_u-2^j+1)$, and let $q(O_j)$ denote the total profit of vertices in $O_j\setminus \{v_{j-1}\}$. Let $m_j$ denote any vertex on $O_j$ such that: $$\label{eq:portal-mid} \sum_{v_{j-1}\prec u \preceq m_j} \, q_u \,\,\ge \,\,\frac{q(O_j)}{2}\quad \mbox{and} \quad \sum_{m_{j}\preceq u \preceq v_j} \, q_u \,\, \ge \,\,\frac{q(O_j)}{2} .$$ Here $\prec$ denotes the order in which vertices appear on $O_j$. So $m_j$ can be viewed as a “mid point” of $O_j$, containing half the total profit on either side of it. Let $\ell_1=$ length of $O_j$ from $v_{j-1}$ to $m_j$, and $b_1=d(v_{j-1},m_j)$. Similarly, let $\ell_2=$ length of $O_j$ from $m_{j}$ to $v_j$, and $b_2=d(m_{j},v_j)$. Also, let $\epsilon_1=\ell_1-b_1$, $\epsilon_2=\ell_2-b_2$ and $\epsilon=\epsilon_1+\epsilon_2$; clearly $\epsilon_1,\epsilon_2\ge 0$. Note that the length of $O_j$ is $D'_j=b_1+b_2+\epsilon$. Let $O'_j$ denote the subpath $v_{j-1} \rightsquigarrow m_j \rightarrow v_j$ of $O_j$ that shortcuts over vertices between $m_j$ and $v_j$. Similarly, $O''_j$ denotes the subpath $v_{j-1} \rightarrow m_j \rightsquigarrow v_j$ of $O_j$ that shortcuts over vertices between $v_{j-1}$ and $m_j$. By \[eq:portal-mid\], the total profit on each of $O'_j$ and $O''_j$ is at least $q(O_j)/2$. The length of subpath $O'_j$ (resp. $O''_j$) is $\ell_1+b_2$ (resp. $b_1+\ell_2$). We have: $$\min\{\ell_1+b_2, b_1+\ell_2\} \,\, = \,\, \min\{b_1+b_2+\epsilon_1, b_1+b_2 + \epsilon_2\} \,\, \le \,\, b_1+b_2 +\frac{\epsilon_1+\epsilon_2}{2} \,\, = \,\, b_1+b_2 +\frac{\epsilon}2.$$ Let $e_j$ denote the unique integer such that $2^{e_j}-1\le \epsilon< 2^{e_j+1}-1$; note that $e_j\in [L]$ since $\epsilon$ is an integer between $0$ and $B$. Based on the above inequality, the shorter of $O'_j$ and $O''_j$ has length at most $b_1+b_2+2^{e_j}-1$. We set subpath $P_j$ to be the shorter one among $O'_j$ and $O''_j$. This choice ensures that: $$\label{eq:enum-subpath} \mbox{ length of } P_j \,\le\, d(v_{j-1},m_j)+d(m_j,v_j)+2^{e_j}-1=:D_j,\,\,\mbox{and}\,\, \sum_{u\in P_j} \eta_u(B-d_u-2^j+1)\,\ge \, \frac{q(O_j)}{2}.$$ Note also that $D'_j=b_1+b_2+\epsilon\ge b_1+b_2+2^{e_j}-1=D_j$. We set the portal vertices $\{v_j\}_{j=0}^L$, auxiliary vertices $\{m_j\}_{j=0}^L$, integers $\{e_j \in [L]\}_{j=0}^L$, and paths $\{P_j\}_{j=0}^L$ as defined above. The path length property follows from the first condition in \[eq:enum-subpath\] and the definition of $D_j$. The prefix size property is immediate from that in Lemma \[lem:corr-portals\] since each $P_j{\subseteq}O_j$. We now show the reward property: $$\begin{aligned} \sum_{j=0}^L \,\, \sum_{u\in P_j} \eta_u(B-\sum_{i=0}^{j-1} D_i - t_u - 2^j+1) &\ge & \sum_{j=0}^L \,\, \sum_{u\in P_j} \eta_u(B-\sum_{i=0}^{j-1} D'_i - t_u - 2^j+1) \label{eq:enum-profit1}\\ &=& \sum_{j=0}^L \,\, \sum_{u\in P_j} \eta_u(B- d_u - 2^j+1) \label{eq:enum-profit2}\\ &=& \sum_{j=0}^L \,\, \sum_{u\in P_j} q_u \quad \ge \quad \frac{1}{2} \sum_{j=0}^L \,\, \sum_{u\in O_j} q_u \label{eq:enum-profit3}\\ &= & \frac{1}{2} \sum_{j=0}^L \sum_{u\in O_j} \eta_u(B- d_u - 2^j+1)\quad \ge \quad \frac{{\ensuremath{\mathsf{Opt}}\xspace}}{4}. \label{eq:enum-profit4}\end{aligned}$$ Inequality \[eq:enum-profit1\] uses the fact that $D_i\le D'_i$ for all $i\in[L]$. The equality \[eq:enum-profit2\] uses $d_u=\sum_{i=0}^{j-1} D'_i + t_u$ for any $u\in P_j{\subseteq}O_j$ and $j\in[L]$. Inequality \[eq:enum-profit3\] is by the definition of $q_u$ and the second condition from \[eq:enum-subpath\].
Inequality  uses the reward property in Lemma \[lem:corr-portals\]. Based on Lemma \[lem:corr-enum\], our enumeration algorithm guesses the $L+1$ portals $\{v_j\}_{j=0}^L$ and segment lengths $\{D_j\}_{j=0}^L$ in time $n^{2L+2}\cdot L^{L+1}=(n\log B)^{O(\log B)}$ (by there are at most $nL$ choices for $D_j$, as it is determined by $e_j \in [L]$ and $m_j \in V$). #### Knapsack Deadline Orienteering Instances. Our goal is to find a $v_{j-1}-v_j$ path $P_j$ for each $j\in[L]$, that have properties similar to the segments in Lemma \[lem:corr-enum\]. To this end, we define a knapsack-deadline-orienteering instance ${{\mathcal{I}}}_j$ for each $j\in[L]$. Instance ${{\mathcal{I}}}_j$ is defined on metric $(V,d)$ with start vertex $v_{j-1}$, end vertex $v_j$ and length bound $D_j$. The rewards, sizes and deadlines will be defined shortly. As in [@GKNR12], we will introduce a suitable set of [*copies*]{} of each vertex $u\in V$, with deadlines that correspond to visiting $u$ at different possible times. $$\label{eq:kdo-copies} \mbox{ For each $v\in V$ and distance $d\in [B]$, define $f_{v}^j(d):=\eta_v(B-\sum_{i=0}^{j-1}D_i-d-2^j+1)$, }$$ the expected reward from $v$ under instantiation at most $B-\sum_{i=0}^{j-1}D_i-d-2^j+1$. Intuitively, $f^j_v(d)$ is the expected reward obtained by visiting vertex $v$ in segment $j$ at distance $d$ along $P_j$ (so the total distance to $v$ is $\sum_{i=0}^{j-1}D_i+d$) and having observed total size $2^j-1$ until $v$. In the [$\mathsf{KDO}$]{}instance ${{\mathcal{I}}}_j$, we would like to introduce a copy of vertex $v$ for each distance $d\in [B]$, in order to permit all possibilities for visiting $v$ in segment $j$. However, solutions to such an instance may obtain a large reward just by visiting multiple copies of the same vertex. In order to control the total reward obtainable from multiple copies of a vertex, we introduce copies of each vertex corresponding only to a suitable subset of $[B]$. This relies on the following construction. \[cl:gknr-count\] For each $j\in[L]$ and $v\in V$, we can compute in polynomial time, a subset $I^j_v{\subseteq}[B]$ s.t. $$\frac{1}{3}\cdot \sum_{y\in I^j_v:y\ge d} f^j_v(y) \quad \le \quad f^j_v(d) \quad \le \quad 2\cdot \max_{y\in I^j_v:y\ge d} f^j_v(y),\qquad \forall d\in[B].$$ Roughly speaking, the copies $I^j_v$ of vertex $v$ are those times $t$ at which its expected reward $f^j_v(t)$ doubles. We now formally define the entire instance ${{\mathcal{I}}}_j$. The metric is $(V,d)$, start-vertex $v_{j-1}$, end-vertex $v_j$, length bound $D_j$ and knapsack capacity $(K+1)\cdot 2^j$. For each $v\in V$ and $t\in I^j_v$, a job $\langle v, j, t\rangle$ with reward $f^j_v(t)$, deadline $t$ and knapsack-size $\mu^j_v$ is located at vertex $v$. To reduce notation, when it is clear from context, we will use $u,v$ etc. to refer to jobs as well. A feasible solution $\tau$ to ${{\mathcal{I}}}_j$ is a $v_{j-1}-v_j$ path of length $d(\tau)\le D_j$ and total size $\sum_{u\in \tau} \mu^j_u \le (K+1)\cdot 2^j$; the objective value is the total reward of jobs visited by $\tau$ before their deadlines. We will also use ${{\mathcal{I}}}_j$ to denote the set of all feasible solutions to this [$\mathsf{KDO}$]{}instance. [**Remark:**]{} A solution $\tau$ to this instance ${{\mathcal{I}}}_j$ corresponds to the $j^{th}$ segment in the correlated orienteering solution (between portal $v_{j-1}$ and $v_j$). Note that ${{\mathcal{I}}}_j$ imposes no upper bound on $\tau$’s mean size capped at $2^h$ (i.e. 
$\mu^h(\tau)$) for any $h\ne j$, which however will be required in the the expected reward analysis (this corresponds to the “prefix mean size” property in Lemma \[lem:corr-enum\]). Although there is no explicit bound on $\mu^h(\tau)$ for $h\ne j$, we can infer this using the capped mean properties, as follows. - For any $h<j$, we do not need a bound on $\mu^h(\tau)$ since $\tau$ does not affect segment $h$. - For any $h>j$, we can use  to obtain $$\frac{\sum_{v\in \tau} \mu^{h}_v}{2^{h}} \quad \le \quad \frac{\sum_{v\in \tau} \mu^{j}_v}{2^{j}}\quad \le \quad K+1.$$ So the knapsack constraint in ${{\mathcal{I}}}_j$ corresponding to $\mu^j(\tau)$ also implicitly bounds $\mu^h(\tau)$. The fact that we introduce only a [*single*]{} knapsack constraint in ${{\mathcal{I}}}_j$ turns out to be important since the best algorithm (that we are aware of) for deadline-orienteering with multiple knapsack constraints has an approximation factor that grows linearly with the number of knapsacks. #### Configuration LP relaxation. We now give an LP relaxation to the near-optimal structure in Lemma \[lem:corr-enum\]. We make use of the guessed portals $\{v_j\}_{j=0}^L$ and lengths $\{D_j\}_{j=0}^L$. The segments $P_j$s in Lemma \[lem:corr-enum\] will correspond to solutions of [$\mathsf{KDO}$]{}instances ${{\mathcal{I}}}_j$s. The LP relaxation is given below: $$\begin{aligned} {2} \mbox{max } \,\, \sum_{j=0}^L \, \sum_{v \in V} \, \sum_{t \in I^j_v} & \, f^j_v(t)\cdot y^j_{v,t} & & \label{LP:0} \tag{{\ensuremath{\mathsf{LP}}\xspace}} \\ \mbox{s.t. } \,\, \sum_{\tau \in {{\mathcal{I}}}_j} x^j_{\tau} & \, \le \, 1 & \qquad & \forall \, j \in [L] \label{LP:1} \\ y^j_{v,t} & \, \leq \, \, \sum_{\tau\ni \langle v,j,t\rangle} x^j_\tau & \qquad & \forall \, j\in [L],\, v \in V, \, t\in I^j_v \label{LP:2} \\ \sum_{j=0}^L \, \sum_{t\in I^j_v} y^j_{v,t} & \, \leq \, 1 & \qquad & \forall \, v \in V \label{LP:3} \\ \sum_{v \in V} \mu^j_v\cdot \sum_{\ell=0}^j \, \sum_{t \in I^\ell_v} y^\ell_{v,t} & \, \leq \, (K+1)\cdot 2^j & \qquad & \forall \, j \in [L] \label{LP:4} \\ \mathbf{x},\, \mathbf{y} &\geq 0. & & \label{LP:5}\end{aligned}$$ Above, $\{x^j_\tau: \tau\in {{\mathcal{I}}}_j\}$ is intended to be the indicator variable that chooses a solution path for the [$\mathsf{KDO}$]{}instance ${{\mathcal{I}}}_j$. Variables $y^j_{v,t}$ indicate whether/not job $\langle v,j,t\rangle$ is selected in ${{\mathcal{I}}}_j$. These are enforced by constraints  and . Constraint  requires at most one copy of each vertex to be chosen, over all segments. And constraint  bounds the “prefix mean size” as in Lemma \[lem:corr-enum\]. The objective corresponds to the reward in Lemma \[lem:corr-enum\]. First, observe that we can ensure equality in . \[cl:xy-equality\] Any [$\mathsf{LP}$]{}solution $(x,y)$ can be modified to another solution that satisfies  with equality and has the same objective value. This follows immediately from the fact that each solution-set ${{\mathcal{I}}}_j$ is “down monotone”, i.e. $I\in{{\mathcal{I}}}_j$ and $I'{\subseteq}I$ implies $I'\in {{\mathcal{I}}}_j$. Formally, consider the constraints  in any order. For each $\langle v,j,t\rangle$, order arbitrarily the $x^j_\tau$ variables with $\tau \ni \langle v,j,t\rangle$ as $x^j_{\tau(1)}, x^j_{\tau(2)},\cdots,x^j_{\tau(a)}$. Let $b$ denote the unique index with $\sum_{k=1}^{b-1} x^j_{\tau(k)} < y^j_{v,t} \le \sum_{k=1}^{b} x^j_{\tau(k)}$. Now perform the following modification: - For each index $k>b$, drop $\langle v,j,t\rangle$ from the solution $\tau(k)$. 
- For index $b$, consider the solutions $\sigma=\tau(b)$ and $\sigma'=\tau(b)\setminus \langle v,j,t\rangle$. Set $x^j_{\sigma'}= \sum_{k=1}^{b} x^j_{\tau(k)} - y^j_{v,t}$ and $x^j_{\sigma} = y^j_{v,t} - \sum_{k=1}^{b-1} x^j_{\tau(k)}$. After this change, it is clear that we have equality for $\langle v,j,t\rangle$ in constraint . Also, constraint  remains feasible. Finally, since the $y$ variables remain unchanged, constraints - remain feasible and the objective value stays the same. \[cl:LP-obj\] The optimal value of [$\mathsf{LP}$]{}is at least ${\ensuremath{\mathsf{Opt}}\xspace}/8$. We will show that the subpaths $\{P_j\}_{j=0}^L$ in Lemma \[lem:corr-enum\] correspond to a feasible integral solution to [$\mathsf{LP}$]{}. Let $\sigma$ denote the concatenated path $P_0,P_1,\cdots,P_L$. Based on our guess of the portals and distances, we have $d(P_j)\le D_j$. For any vertex $u\in P_j$ let $t_u$ denote the distance to $u$ along $P_j$ (this also appears in Lemma \[lem:corr-enum\]); note that the distance to $u$ along $\sigma$ is then $d_u=\sum_{i=0}^{j-1}D_i + t_u$. For each $u\in P_j$ let $t'_u:=\min\{t\in I^j_u : t\ge t_u\}$ be the deadline of the earliest copy of $u$ that path $P_j$ can visit in the [$\mathsf{KDO}$]{}instance ${{\mathcal{I}}}_j$. Consider $P_j$ as a solution to ${{\mathcal{I}}}_j$ which visits jobs $\{\langle u,j,t'_u\rangle : u\in P_j\}$; in order to verify feasibility, we only need to check that $\sum_{u\in P_j} \mu^j_u\le (K+1)\cdot 2^j$, which follows from the third property in Lemma \[lem:corr-enum\]. Consider now the solution to [$\mathsf{LP}$]{}with $x^j_{P_j}=1$ for all $j\in [L]$; $y^j_{u,t'_u}=1$ for each $u\in P_j$, $j\in[L]$; and all other variables set to zero. Constraints , ,  and   are clearly satisfied. Observe that the left-hand-side of  is exactly $\sum_{\ell=0}^j \sum_{u\in P_\ell} \mu^j_u$ which is at most $(K+1)\cdot 2^j$ by the third property in Lemma \[lem:corr-enum\]. So $(\mathbf{x}, \mathbf{y})$ is a feasible (integral) solution to [$\mathsf{LP}$]{}. We now bound the objective value: $$\begin{aligned} \sum_{j=0}^L \sum_{u\in P_j} f^j_u(t'_u) &\ge &\frac12 \cdot \sum_{j=0}^L \sum_{u\in P_j} f^j_u(t_u) \\ &= & \frac12 \cdot \sum_{j=0}^L \sum_{u\in P_j} \eta_u\big(B-\sum_{i=0}^{j-1}D_i -t_u-2^j+1\big) \\ &= & \frac12 \cdot \sum_{j=0}^L \sum_{u\in P_j} \eta_u\left(B-d_u-2^j+1\right)\quad \ge \quad \frac{{\ensuremath{\mathsf{Opt}}\xspace}}{8}.\end{aligned}$$ The first inequality is by definition of $t'_u$ and Claim \[cl:gknr-count\]; the next two equalities use the definitions $f^j_u(\cdot)$ and $d_u$; and the final inequality is by the first property in Lemma \[lem:corr-enum\]. #### Solving the configuration LP. We show that [$\mathsf{LP}$]{}can be solved approximately using an approximation algorithm for [$\mathsf{KDO}$]{}. This is based on applying the Ellipsoid algorithm to the dual of [$\mathsf{LP}$]{}, which is given below: $$\begin{aligned} {2} \mbox{min } \,\, \sum_{j=0}^L \, \beta_j \,\,+\,\, \sum_{v \in V} \,\delta_v \,\, + & \,\, (K+1)\cdot \sum_{j=0}^L \, 2^j\cdot z_j & & \label{DLP:0} \tag{{\ensuremath{\mathsf{DLP}}\xspace}} \\ \mbox{s.t. } \,\, \gamma^j_{v,t} + \delta_v + \sum_{\ell\ge j} \mu^\ell_v\cdot z_\ell & \, \ge \, f^j_v(t) & \qquad & \forall \, j \in [L], \, v\in V,\, t\in I^j_v \label{DLP:1} \\ -\sum_{\langle v,j,t\rangle\in \tau} \gamma^j_{v,t} \,+\, \beta_j & \, \geq \, \, 0 & \qquad & \forall \, j\in [L],\, \tau \in {{\mathcal{I}}}_j \label{DLP:2} \\ \mathbf{\beta},\,\mathbf{\gamma},\, \mathbf{\delta},\, z &\geq 0. 
& & \label{DLP:3}\end{aligned}$$ In order to solve [$\mathsf{DLP}$]{}using the Ellipsoid method, we need to provide a separation oracle that tests feasibility. Observe that constraints  are polynomial in number, and can be checked explicitly. Constraint  for any $j\in[L]$ is equivalent to asking whether the optimal value of [$\mathsf{KDO}$]{}instance ${{\mathcal{I}}}_j$ with rewards $\{\gamma^j_{v,t} : v\in V,\, t\in I^j_v\}$ is at most $\beta_j$. Using an $O(\alpha)$-approximate separation oracle (the knapsack deadline orienteering algorithm from Theorem \[thm:kdo\]) within the Ellipsoid algorithm, we obtain an $O(\alpha)$-approximation algorithm for [$\mathsf{DLP}$]{}and hence [$\mathsf{LP}$]{}. [**Remark:**]{} Alternatively, we can solve [$\mathsf{LP}$]{}using a faster combinatorial algorithm that is based on multiplicative weight updates. Note that we can eliminate $y$-variables in [$\mathsf{LP}$]{}by setting constraints  to equality (Claim \[cl:xy-equality\]). This results in a [*packing LP*]{}, consisting of non-negative variables with each constraint of the form $\mathbf{a}^T x \le b$ where all entries in $\mathbf{a}$ and $b$ are non-negative. So we can solve [$\mathsf{LP}$]{}using faster approximation algorithms [@PST91; @GK07] for packing LPs, that also require only an approximate dual separation oracle (which is [$\mathsf{KDO}$]{}in our setting). #### Rounding the LP solution. Let $(x,y)$ denote the $O(\alpha)$-approximate solution to [$\mathsf{LP}$]{}. By Claim \[cl:LP-obj\] the objective value is $\Omega({\ensuremath{\mathsf{Opt}}\xspace}/\alpha)$. The rounding algorithm below describes a (randomized) non-adaptive policy. 1. \[step:corr1\] For each $j\in[L]$, independently select solution $\tau_j\in {{\mathcal{I}}}_j$ as: $$\tau_j \gets \left\{ \begin{array}{ll} T&\mbox{ with probability } \frac{x^j_T}{2}, \mbox{ for each } T\in {{\mathcal{I}}}_j \\ \langle v_{j-1},v_j\rangle & \mbox{ with the remaining probability } 1-\frac12 \sum_{T\in {{\mathcal{I}}}_j} x^j_T \end{array}\right.$$ 2. \[step:corr2\] If any vertex $v\in V$ appears in more than one solution $\{\tau_j\}_{j=0}^L$, then drop $v$ from all of them. 3. \[step:corr3\] If solution $\{\tau_j\}_{j=0}^L$ exceeds any constraint  by a factor more than $\frac{\log L}{\log\log L}$ then return an empty solution. 4. \[step:corr4\] For each $j\in[L]$, if $\tau_j$ contains multiple copies of any vertex $v\in V$ then retain only the copy with earliest deadline (i.e. highest reward). 5. \[step:corr5\] Return the non-adaptive policy that traverses the path $\tau_0\cdot \tau_1\cdots\tau_L$ and attempts each vertex independently with probability $\frac{\log\log L}{4K\cdot \log L}$. #### Analysis. We now show that the expected reward of this non-adaptive policy is $\Omega(\frac{\log\log L}{\alpha\, K\cdot \log L})\cdot {\ensuremath{\mathsf{Opt}}\xspace}$, which would prove Theorem \[thm:corr-NA\]; recall that $K=\Theta(\log\log B)$ and $L=\Theta(\log B)$. \[lem:RRalt\] For any $j\in[L]$, $v\in V$ and $t\in I^j_v$, $\Pr[\langle v,j,t\rangle \in \tau_j \mbox{ after Step~\ref{step:corr3}}]\,\ge\, y^j_{v,t}/8$. Let $\tau^1_j$, $\tau^2_j$ and $\tau^3_j$ denote the solution $\tau_j$ after Step 1, 2 and 3 respectively. Clearly, $$\label{eq:prob-tau1} \Pr\left[\langle v,j,t\rangle \in \tau_j^1\right] \quad = \quad\sum_{T\ni \langle v,j,t\rangle} \frac{x^j_T}{2} \quad = \quad \frac{y^j_{v,t}}{2}.$$ The last equality uses Claim \[cl:xy-equality\]. 
Note that $\langle v,j,t\rangle$ gets dropped in Step \[step:corr2\] exactly when there is some other solution $\{\tau^1_{\ell} : \ell\in[L]\setminus j\}$ that contains a copy of $v$. By union bound,  and , $$\label{eq:prob-tau2}\Pr\left[\langle v,j,t\rangle \not\in \tau_j^2 \, |\, \langle v,j,t\rangle \in \tau_j^1\right] \quad \le \quad \sum_{\ell\in [L]\setminus j}\,\, \sum_{t\in I^\ell_v} \frac{y^j_{v,t}}{2}\quad \le \quad \frac{1}{2}.$$ In Step \[step:corr3\], the entire solution is declared empty if any constraint  is violated by more than a factor of $\frac{\log L}{\log\log L}$; otherwise, $\tau^3_j=\tau^2_j$. Consider the constraint  with index $h\in[L]$. This reads $\sum_{\ell=0}^h \mu^h(\tau_\ell) \,\le\, (K+1)\cdot 2^h$, where $\mu^h(\tau_\ell):= \sum_{\langle v,\ell,t\rangle \in \tau_\ell} \mu^h_v$. The key observation is the following: $$\label{eq:main-RR} \mbox{For any $\ell\le h$ and $\tau_\ell\in {{\mathcal{I}}}_\ell$, we have }\mu^h(\tau_\ell)\le (K+1)\cdot 2^h.$$ This uses the fact that $\tau_\ell$ satisfies the knapsack constraint $\mu^\ell(\tau_\ell)\le (K+1)\cdot 2^\ell$ in [$\mathsf{KDO}$]{}instance ${{\mathcal{I}}}_\ell$; and by the observation  on capped sizes, $\frac{\mu^h(\tau_\ell)}{2^h}\le \frac{\mu^\ell(\tau_\ell)}{2^\ell}$ since $h\ge \ell$. Using  it follows that $Z_h:=\sum_{\ell=0}^h \frac{\mu^h(\tau_\ell)}{(K+1)\cdot 2^h}$ is the sum of independent $[0,1]$ bounded random variables. Using , and [$\mathsf{LP}$]{}constraint  we have: $${\mathbb{E}}[Z_h] \quad \le \quad \frac1{(K+1)\cdot 2^h} \,\,\sum_{\ell=0}^h \sum_{v\in V} \sum_{t\in I^\ell_v} \,\mu^h_v\cdot \frac{y^j_{v,t}}{2} \quad \le \quad \frac12.$$ Hence, by a Chernoff bound, [$\Pr\left[Z_h > \frac{\log L}{\log\log L}\right] \le \frac{1}{2L}$]{}. Taking a union bound over all $L$ constraints , it follows that with probability at least half, none of them is violated by a factor more than $\frac{\log L}{\log\log L}$. That is, $\Pr\left[\langle v,j,t\rangle \in \tau_j^3 \, |\, \langle v,j,t\rangle \in \tau_j^2\right] \ge \frac12$. Combined with  and , we obtain the lemma. \[cl:corr-exp\] The expected [$\mathsf{LP}$]{}objective value of solution $\{\tau_j\}_{j=0}^L$ after Step \[step:corr4\] is $\Omega({\ensuremath{\mathsf{Opt}}\xspace}/\alpha)$. By Lemma \[lem:RRalt\] it follows that expected [$\mathsf{LP}$]{}objective value of solution $\{\tau_j\}_{j=0}^L$ after Step \[step:corr3\] is at least: [$$\sum_{j=0}^L \, \sum_{v \in V} \, \sum_{t \in I^j_v} \, f^j_v(t)\cdot \frac{y^j_{v,t}}8 \quad \ge\quad \Omega\left(\frac{{\ensuremath{\mathsf{Opt}}\xspace}}{\alpha}\right).$$]{} The last inequality uses Claim \[cl:LP-obj\] and the fact that we have an $O(\alpha)$-approximately optimal [$\mathsf{LP}$]{}solution. In Step \[step:corr4\], we retain only one copy of each vertex in each $\tau_j$. Using Claim \[cl:gknr-count\], since we retain the most profitable copy of each vertex, this decreases the total reward of each $\tau_j$ by at most a factor of $3$. Consider now the non-adaptive policy in Step \[step:corr5\] and [*condition*]{} on any solution $\{\tau_j\}_{j=0}^L$. Fix any $j\in [L]$ and $\langle v, j, t\rangle\in \tau_j$. The distance traveled until $v$ is at most $\sum_{i=0}^{j-1}D_i + t$. By Step \[step:corr4\] and [$\mathsf{LP}$]{}constraint , the total $\mu^j$-size of vertices in $\{\tau_i\}_{i=0}^j$ is at most $\frac{(K+1)\log L}{\log\log L}\cdot 2^j$. 
Since each vertex is attempted only with probability $\frac{\log\log L}{4K\cdot \log L}$, we have $$\Pr\left[ \sum_{u\prec v} \min\{S_u,2^j\} \ge 2^j \right] \,\, \le \,\, \frac12,$$ where the summation ranges over all vertices visited before $v$. Thus with probability at least half, vertex $v$ is visited by time $\sum_{i=0}^{j-1}D_i + t+2^j-1$. So the expected reward from vertex $v$ is $\Omega\left(\frac{\log\log L}{K\cdot \log L}\right)\cdot \eta_v(B-\sum_{i=0}^{j-1}D_i - t- 2^j+1) = \Omega\left(\frac{\log\log L}{K\cdot \log L}\right)\cdot f^j_v(t)$. Adding this contribution over all vertices, the total reward is $\Omega\left(\frac{\log\log L}{K\cdot \log L}\right)$ times the [$\mathsf{LP}$]{}objective of solution $\{\tau_j\}_{j=0}^L$ after Step \[step:corr4\]. Taking expectations over Steps \[step:corr1\]-\[step:corr4\], and using Claim \[cl:corr-exp\], it follows that our non-adaptive policy has expected reward $\Omega\left(\frac{\log\log L}{\alpha\, K\cdot \log L}\right)\cdot {\ensuremath{\mathsf{Opt}}\xspace}$. This completes the proof of Theorem \[thm:corr-NA\]. Conclusion ========== In this paper, we proved an $\Omega(\sqrt{\log\log B})$ lower bound on the adaptivity gap of stochastic orienteering. The best known upper bound is $O(\log\log B)$ [@GKNR12]. Closing this gap is an interesting open question. For the [*correlated*]{} stochastic orienteering problem, we gave a quasi-polynomial time $O(\alpha\cdot \log^2\log B)$-approximation algorithm, where $\alpha$ denotes the best approximation ratio for the deadline-orienteering problem. It is known that correlated stochastic orienteering can not be approximated to a factor better than $\Omega(\alpha)$ [@GKNR12]. Finding an $O(\alpha)$ approximation algorithm for correlated stochastic orienteering is another interesting direction. [^1]: Eindhoven University of Technology. Supported by the Dutch NWO Grant 639.022.211. [^2]: IBM T.J. Watson Research Center [^3]: A quasi-polynomial time algorithm runs in $2^{\log^c N}$ time on inputs of size $N$, where $c$ is some constant.
---
bibliography:
- 'thesis.bib'
---

[**Dynamical fermions in lattice quantum chromodynamics**]{}\
Kálmán Szabó\
\
PhD thesis, WUB-DIS 2007-10\
advisor: Zoltán Fodor

The thesis will present results in Quantum Chromo Dynamics (QCD) with dynamical lattice fermions. The topological susceptibility in QCD is determined; the calculations are carried out with dynamical overlap fermions. The most important properties of the quark-gluon plasma phase of QCD are studied, for which dynamical staggered fermions are used.

Introduction
============

The theory of the strong interaction is known to be Quantum Chromo Dynamics (QCD). It has all the features necessary for a successful description of the strong interaction. There are very important reasons why it is necessary to invest a lot of effort into solving QCD:

- Validate or invalidate QCD by comparing its predictions with experiments. Results in the high energy regime show very good agreement with the experiments; however, there are still many white areas with no results at all, among them the missing connection between nuclear physics and QCD.

- Validate or invalidate the Standard Model of particle physics. Even if QCD is the proper theory of strong interactions, it can happen that the weak and electromagnetic interactions are not correctly described by the Standard Model. Examining weak decays can only be done by taking into account low-energy strong interaction effects. The success of the (in)validation now depends mostly on the precision of QCD calculations.

- Unfold the phase diagram and properties of QCD at finite temperature and baryon densities. In parallel with the theoretical developments, intensive experimental work is being done (and will be done) to produce and investigate the high temperature phase of QCD: the quark-gluon plasma. Among these investigations the major goal is to find signals of a first-order or second-order transition.

Solving the above problems is known to be extremely difficult. Currently available methods (e.g. weak coupling perturbation theory, the $1/N_c$ expansion[^1], string theory methods, the lattice) are not able to provide us with rigorous solutions. However, some of these methods are believed to give very good approximations of these solutions. Today the lattice technique is the only one which is (or will soon be) able to calculate masses of hadrons, properties of low energy scattering processes, bulk and spectral properties of the quark-gluon plasma and much more, based only on the Lagrangian of QCD. It has systematic errors, but these can be quantified and therefore kept under control. In the following short introduction to the lattice technique we will highlight the role of “dynamical fermions” in lattice QCD.

Dynamical fermions in lattice QCD
---------------------------------

Lattice QCD discretizes[^2] the path integral ($Z$)
$$\label{eq:z}
Z=\int \mathcal{D}U \prod_f \mathcal{D}\bar{\psi_f}\, \mathcal{D}\psi_f\, \exp\Big(-S_{\rm gauge} - \sum_f \bar{\psi_f}D[U]\psi_f\Big)$$
on a four dimensional Euclidean lattice. The Euclidean space formalism is useful for obtaining the spectrum of the theory or for doing finite temperature calculations. The Minkowski space approach, which is necessary to investigate real time processes, is not available (however see [@Berges:2006xc]). In Eq. \[eq:z\] we have an integral over the gauge ($U$) and flavored fermion fields[^3] ($\bar{\psi_f},\psi_f$). $S_{\rm gauge}$ is the gauge action; the fermion action is bilinear in the fermion fields, so the fermion integral can be easily carried out.
We end up with the determinant of the Dirac operator ($D[U]$) under the path integral:
$$Z=\int \mathcal{D}U\, \left(\det D[U]\right)^{n_f} \exp(-S_{\rm gauge}),$$
with $n_f$ being the number of fermion flavors. The minimal distance on the lattice is called the lattice spacing ($a$). The final results are obtained by sending the lattice spacing to zero, together with the necessary renormalization.

There are two important observations: firstly, the Euclidean path integral is equivalent to a Boltzmann sum of a statistical mechanical system, and secondly, the discretized path integral in a finite volume can be put on a computer. These properties made lattice QCD a multidisciplinary science: it is a mixture of quantum field theory, statistical physics, numerical analysis and computer science. The development of computer algorithms and the exponential rise of the available computational capacity turned lattice QCD from a toy model into a powerful predictive tool, giving us high precision pre- and postdictions in a huge number of areas. Nowadays the calculations are reaching percent-level precision thanks to the gradual elimination of the so-called quenching effects. Quenching means approximating the fermion determinant with a constant, $U$-independent value in Eq. \[eq:z\] and keeping fermions only in the correlation functions. It is used to decrease the computational requirements, since taking into account the fermion determinant in the path integral (in other words, dealing with the fermions dynamically) is a hard task. One can look at the system of Eq. \[eq:z\] as a gauge system, but with a highly nonlocal[^4] effective action: $$S_{\rm eff}=S_{\rm gauge} - n_f \log \det D.$$ Developing efficient algorithms for such systems is nontrivial; however, there has been considerable progress in recent years.

There is a huge arbitrariness in choosing the type of discretization; only a few requirements have to be fulfilled: e.g. it should have appropriate symmetries, or there should exist an equivalent local formulation. Universality, a well-known concept from statistical physics, ensures that in the zero lattice spacing limit the results will not depend on the choice of the discretization. Since the cost of algorithms usually grows with an enormous power of the inverse lattice spacing, in practice it is desirable to improve the lattice actions, that is, to reduce their lattice artefacts so that the continuum extrapolations become easier from the available lattice spacings. However, one should be careful with the improvement: overimproving can lead to several practical problems (loss of locality, unitarity, irregular continuum limit, slowing down of algorithms, etc.). There is even the possibility of using different discretizations for the fermion determinant and for the fermion correlation functions (the first are called sea fermions, the latter valence fermions). This is the so-called mixed approach. The expensive, improved fermion is used in the valence sector, whereas for the sea fermions a faster, less improved one is chosen. The correct continuum limit is again ensured by universality.

The design of lattice fermion actions is hindered by the fermion doubling problem. Naive discretization of the continuum Dirac action yields 16 fermions on the lattice. There are three different ways to cure this: staggered, overlap and Wilson fermions. A minimal numerical illustration of the doubling itself is sketched below; after that, let us take a brief look at all three.
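The following toy sketch is not part of the thesis; it assumes a free, one-component naive fermion in one dimension (the lattice size $N=16$ is an arbitrary choice). The naive central-difference derivative behaves as $\sin(p)$ in momentum space, so besides the physical zero at $p=0$ it has a second, unphysical zero at $p=\pi$: the doubler.

```python
# Toy illustration (not thesis code): free naive fermions in one dimension.
# The naive central-difference derivative is a circulant matrix with
# eigenvalues i*sin(p); these vanish at p = 0 and again at p = pi,
# the second zero being the unwanted "doubler" mode.
import numpy as np

N = 16                                    # number of lattice sites (arbitrary)
D = np.zeros((N, N))
for x in range(N):
    D[x, (x + 1) % N] = 0.5               # forward neighbour
    D[x, (x - 1) % N] = -0.5              # backward neighbour

p = 2 * np.pi * np.arange(N) / N          # lattice momenta
eig = np.sort(np.linalg.eigvals(D).imag)  # eigenvalues are i*sin(p)
assert np.allclose(eig, np.sort(np.sin(p)))

zero_modes = int(np.sum(np.isclose(np.sin(p), 0.0)))
print("zero modes of the naive derivative:", zero_modes)   # 2, not 1
```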
### Fast, but ugly[^5]: staggered fermions {#fast-but-ugly-staggered-fermions .unnumbered}

The naive fermion determinant containing 16 fermions has a $U(4)\times U(4)$ symmetry, which can be eliminated by the staggering transformation. This reduces the degeneracy from 16 to four. To get one fermion out of the remaining four one applies the fourth root trick:
$$\det D[U] \;\rightarrow\; \det D_{\rm st}[U] \;\rightarrow\; \left(\det D_{\rm st}[U]\right)^{1/4}.$$
The quark correlation functions are usually[^6] calculated with the four flavor operator $D_{\rm st}$.

The main advantage of staggered fermions is that the $D_{\rm st}$ operator has a $U(1)_{\epsilon}$ symmetry in the massless case at any finite lattice spacing. This symmetry corresponds to the flavor non-singlet axial symmetries in the continuum, an important organizing principle in low-energy QCD. Thanks to this symmetry the spectrum of $D_{\rm st}$ is bounded from below, making staggered algorithms well-conditioned. The bare quark mass is only multiplicatively renormalized. These properties make the simulations fast and convenient.

The major problem is that no local theory is known to correspond to the fourth-root trick, which can be a danger for the universality of the theory: it can happen that results of staggered lattice QCD differ from those of other discretizations. However, any attack against staggered QCD is in a hard position, since it has to account for the remarkable agreement between staggered lattice results and the real world. There is an explicit example [@Durr:2003xs] where the fourth-root trick gives an incorrect result: the case of one massless fermion. According to the anomaly, the chiral condensate should acquire a nonvanishing value in the continuum limit with a proper fermion discretization. However, due to the $U(1)_{\epsilon}$ symmetry of staggered fermions, the staggered chiral condensate is always exactly zero at any finite lattice spacing (in a finite volume), making staggered fermions fail in this setup. Usual calculations are done with unphysical, large pion masses, and then an extrapolation to the physical pion mass is carried out (staggered chiral perturbation theory). These extrapolations are controlled by several parameters ($O(40)$ at NLO for the kaon bag constant), which can make them very ill-conditioned.

### Glorious, but slow: chiral fermions {#glorious-but-slow-chiral-fermions .unnumbered}

According to the Nielsen-Ninomiya theorem, eliminating the doubling problem is equivalent to violating continuum chiral symmetry ($\{D,\gamma_5\}=0$) on the lattice. The idea of Ginsparg and Wilson was to find a Dirac-operator satisfying
$$\label{eq:gw}
\{D,\gamma_5\}=2D\gamma_5 D.$$
Only much later was it realized that this relation makes it possible to maintain chiral symmetry at finite lattice spacing, provided the chiral transformation is redefined by an $O(a)$ term. All continuum relations related to chiral symmetry (Ward identities, index theorem, low energy theorems, continuum chiral perturbation theory) apply one-to-one at finite $a$ for a fermion satisfying this relation. The overlap fermion and the fixed-point fermion are the known solutions of Eq. \[eq:gw\], whereas the domain-wall fermion provides an approximation to such operators. The price of these nice properties is quite high: one ends up with a non-ultralocal Dirac-operator (there are interactions between points at any distance), which results in algorithms that are $O(100)$ times or more slower than for other discretizations.
The determinant of a Ginsparg-Wilson Dirac-operator will have discontinuities in the space of gauge fields at the topological sector boundaries, just as the continuum Dirac-operator does. On the one hand this is a desirable feature; on the other hand, these jumps make the conventional dynamical fermion algorithms with chiral Dirac-operators slow down considerably.

### Robust: Wilson fermions {#robust-wilson-fermions .unnumbered}

Wilson fermions cure the doubling problem by violating chiral symmetry drastically. One gets several inconvenient features at first sight: additive quark mass renormalization, $O(a)$ lattice artefacts (the previous two discretizations have $O(a^2)$), and loss of a strict spectral bound on the Dirac-operator. The latter yields ill-conditioned, slow algorithms. The absence of chiral symmetry makes it necessary to evaluate renormalization constants in places where this is trivial for staggered or overlap fermions. Much theoretical and numerical work was done to improve these properties: using smeared gauge links in the fermion action the additive renormalization can be decreased by two orders of magnitude, the $O(a)$ lattice artefacts can be reduced to $O(a^2)$ via the Symanzik-improvement program, and the spectral bound is reported to be recovered in the infinite volume limit [@DelDebbio:2005qa]. These advances made it possible to be competitive with the staggered discretization in speed, and since the Wilson discretization is in much better theoretical shape, it might be the choice for the near future.

Overview of the thesis
----------------------

The use of dynamical fermions is nowadays obligatory in lattice QCD. Developing new algorithms for dynamical fermions is still an active area of research. In this thesis I will present two dynamical fermion projects in which I have participated.

The first topic, discussed in Chapter 2, deals with the dynamical overlap fermion project in detail. This chapter is partially based on the articles:

- [@Fodor:2003bh] Z. Fodor, S.D. Katz, K.K. Szabo, JHEP 0408:003, 2004

- [@Egri:2005cx] G.I. Egri, Z. Fodor, S.D. Katz, K.K. Szabo, JHEP 0601:049, 2006

First I will present our dynamical overlap algorithm, then I will show how the naive algorithm fails to change topological sectors. A new algorithm is proposed and tested, which solves the problem. Finally the topological sector changing behavior of the new algorithm is examined, and attempts to improve it are proposed. I am trying to give a comprehensive review of the field, which also means that only a part of the results belongs to me. My contributions are the following:

- Writing and developing a 5000 line C program for generating overlap fermion configurations.

- Modifying the conventional HMC algorithm to circumvent its failure at topological sector boundaries.

- Improving the stepsize dependence of this algorithm.

- Examining the tunneling behavior of this algorithm.

- Performing simulations and measuring the topological susceptibility in two flavor QCD.

The second topic is the dynamical staggered project (Chapter 3). This is a large scale computation of thermodynamical properties of the quark-gluon plasma. I will first describe our choice of action and the algorithmic improvements. Then I will show our determination of the order of the finite temperature QCD transition in the continuum limit and with physical quark masses. Next the transition temperature in physical units is calculated, again in the continuum limit and with physical quark masses.
These results can be considered as final ones modulo the uncertainty in the staggered discretization. Finally I will present the equation of state, but there the calculations were only done with two lattice spacings, the continuum limit is missing. The chapter is partially contained in these articles: - [@Aoki:2006we] Y. Aoki, G. Endrodi, Z. Fodor, S.D. Katz, K.K. Szabo Nature 443:675-678,2006 - [@Aoki:2006br] Y. Aoki, Z. Fodor, S.D. Katz, K.K. Szabo Phys. Lett. B643:46-54,2006 - [@Aoki:2005vt] Y. Aoki, Z. Fodor, S.D. Katz, K.K. Szabo JHEP 0601:089,2006 Again not all results belong to me, my contributions are: - Writing and developing a 5000 line C program for generating staggered fermion configurations. - Performing large scale zero and finite temperature simulations. - Analyzing and renormalizing the data. Dynamical overlap fermions ========================== Chiral symmetry is one of the most important feature of the strong interaction. Lattice regularization and chiral symmetry were contradictious concepts for many years. Fermionic operators satisfying the Ginsparg-Wilson relation [@Ginsparg:1981bj] \[eq:gw2\] {D,\_5}=D\_5D. made possible to solve the chirality problem of four-dimensional QCD at finite lattice spacing [@Hasenfratz:1998jp; @Neuberger:1997bg; @Niedermayer:1998bi; @Luscher:1998pq]. Several numerical studies with exact chirality operators were done in the quenched approximation [@DeGrand:2000tf; @Gattringer:2003qx; @Babich:2005ay]. The results were really compelling, but people were forced to work with nonlocal Dirac-operators. This made the algorithms more complicated and slowed them down by large factors. At the same time one could reach rather small quark masses, which was unimaginable with Wilson-type discretizations before. The life becomes even more complicated when introducing dynamical fermions with exact chirality. This chapter is devoted to this problem. We will going to work with overlap fermions [@Neuberger:1997fp; @Neuberger:1998wv], it is an explicit solution of the Ginsparg-Wilson relation. The other type of solutions, the fixed-point Dirac operators [@Hasenfratz:1998ri] are defined via a recursive equation, they are considerably harder to implement[^7]. Overlap Dirac operator ---------------------- First we fix our notations. The massless Neuberger-Dirac operator (or overlap operator) $D$ can be written as $$\label{eq:overlap} D=m_0 [1+\gamma_5 {\rm sgn}(H_W)],$$ This $D$ operator satisfies Eq. (\[eq:gw2\]). $H_W$ is the hermitian Dirac operator, $H_W=\gamma_5 D_W$, which is built from the massive Wilson-Dirac operator, $D_W$, defined by $$\begin{aligned} \label{eq:wilson} [D_W]_{xy}=(4-m_0)\delta_{xy}-\frac{1}{2} \sum_{\mu} \left\{ U_{\mu}(x)(1+\gamma_\mu)\delta_{x,y-\mu}+ U_{\mu}^{\dagger}(y)(1-\gamma_\mu)\delta_{y,x+\mu} \right\}. \nonumber\end{aligned}$$ One fermion is obtained in the continuum limit, if $m_0$ takes any value between $0$ and $2$. In a finite volume one should be careful, that the physical branch of the spectrum of the $D_W$ operator is to be projected to the physical part of the overlap circle. The mass is introduced in the overlap operator by $$D(m)=(1-\frac{m}{2m_0})D+m. \nonumber$$ Sometimes it is useful to consider the hermitian version of the overlap operator. Let us review its properties. The massless hermitian overlap operator is $H=\gamma_5 D =m_0(\gamma_5 + {\rm sgn}(H_W))$. The eigenvalues are real, the eigenvectors ($|\lambda \rangle$) are orthogonal and span the whole space. 
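As a quick numerical check of these definitions, the following sketch (not thesis code) builds a toy overlap operator from a random Hermitian matrix standing in for $H_W$ and a diagonal $\pm 1$ matrix standing in for $\gamma_5$, takes the matrix sign function through an explicit eigendecomposition, and verifies the Ginsparg-Wilson-type identity $\{D,\gamma_5\}=\frac{1}{m_0}D\gamma_5 D$ that follows from the normalization of Eq. \[eq:overlap\], together with the hermiticity of $H=\gamma_5 D$.

```python
# Toy sketch (not thesis code): D = m0*(1 + gamma5*sgn(H_W)) for a random
# Hermitian stand-in for H_W, with the sign function taken exactly through
# an eigendecomposition.  The checks below only use gamma5^2 = 1 and
# sgn(H_W)^2 = 1, so they hold for this toy just as for the real operator.
import numpy as np

rng = np.random.default_rng(0)
n, m0 = 8, 1.0                                    # toy dimension, projection point

A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H_W = (A + A.conj().T) / 2                        # stand-in for gamma5*D_W
gamma5 = np.diag([1.0] * (n // 2) + [-1.0] * (n // 2))

w, V = np.linalg.eigh(H_W)
sign_HW = V @ np.diag(np.sign(w)) @ V.conj().T    # matrix sign function

D = m0 * (np.eye(n) + gamma5 @ sign_HW)           # massless overlap operator
H = gamma5 @ D                                    # hermitian overlap operator

lhs = D @ gamma5 + gamma5 @ D                     # {D, gamma5}
rhs = D @ gamma5 @ D / m0
assert np.allclose(lhs, rhs)                      # Ginsparg-Wilson-type relation
assert np.allclose(H, H.conj().T)                 # H = gamma5*D is hermitian

def D_massive(m):
    """Massive overlap operator D(m) = (1 - m/(2*m0)) D + m."""
    return (1 - m / (2 * m0)) * D + m * np.eye(n)

assert np.allclose(D_massive(0.0), D)             # massless limit reproduces D
```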
Due to the Ginsparg-Wilson relation {H,\_5}=H\^2, the matrix elements of $\gamma_5$ satisfy: | \_5 | (+ )= \_. From this equation it follows, that the zeromodes can be chosen to be chiral, furthermore eigenvectors with eigenvalues $\pm2m_0$ have positive/negative chirality. The difference in the number of left and right handed zeromodes is proportional to the trace of $H$: H = \_ = \_| \_5 | - (n(+0)-n(-0)) = n(-0)-n(+0). This difference is called the index of $H$, it can be considered as the definition of the topological charge on the lattice. This is supported by the fact, that in the continuum limit the density ${\rm tr} H$ converges to $\sim F\tilde{F}$. Since the right hand side can take only integer values, we can immediately conclude that the overlap operator cannot be continuous function of the gauge fields. It should be nonanalytic on the boundaries of topological sectors. Further eigenvalues are always coming in pairs, the vector |-= ( 1-)\^[-1/2]{} (\_5 - )| is an eigenvector with eigenvalue $-\lambda$. The $\gamma_5$ matrix leaves the subspace $\{|\lambda \rangle, |-\lambda \rangle\}$ invariant, it can be written as \_5= & ( 1-)\^[1/2]{}\ ( 1-)\^[1/2]{} & - . Since $\gamma_5$ is traceless on the subspace $\{|\lambda \rangle, |-\lambda \rangle\}$ with $\lambda\neq \{0,\pm2m_0\}$, it should be also traceless for the $\lambda= \{0,\pm2m_0\}$ space. This requires that the numbers of $\pm2m_0$ eigenmodes satisfy \[eq:nzero\] n(+2m\_0)-n(-2m\_0)=n(-0)-n(+0). The overlap operator squared $H^2(m)$ commutes with $\gamma_5$. This is a trivial fact in the $\lambda=\{0,2m_0\}$ subspaces, where even $[H(m),\gamma_5]=0$ is true. In $\{|\lambda \rangle, |-\lambda \rangle\}$ subspace it is proportional to the identity matrix H\^2(m)= (1-)\^2 + m\^2& 0\ 0 & (1-)\^2 +m\^2 . ### Numerical implementation In the sign function of Eq. (\[eq:overlap\]) one uses ${\rm sgn} (H_W)=H_W/\sqrt{H_W^2}$. We usually need the action of the ${\rm sgn} (H_W)$ operator on a given vector, which was studied many times in the literature [@Edwards:1998yw; @vandenEshof:2002ms]. The common in all algorithms is that their speed is proportional to the inverse condition number of the matrix $H_W$. To make the algorithms better conditioned, one can project out the few low-lying eigenmodes of the matrix $H_W$ and calculate the ${\rm sgn}$ operator in this space exactly: \[eq:sgn\] [sgn]{}(H\_W)=(s) P\_s + [sgn]{} (QH\_W), where $P_s$ is the projector to the $s$ eigenspace of $H_W$ and $Q=1-\sum_s P_s$. The projections were done by the ARPACK code. To speed up the projections we preconditioned the problem with a Chebyshev-polynomial transformation [@Struckmann:2000bt]. We have taken the $n$-th order approximation of the $\tanh (80(x+1))-1$ function in $[-1,1]$ interval: T\_n(x) (80(x+1))-1. This blows up very fast around $-1$. If we concentrate the interesting part of the spectrum of $H_W$ there, then we make the job of the eigenvector projecting algorithm considerably easier (its speed usually depends on the distance between consecutive eigenmodes). That is we were calculating the eigenvectors of $T_n(H_W^2/s_{\rm max}^2-1)$ instead of those of $H_W$. Since this is only a polynomial transformation, only the eigenvalues are different, the eigenvectors should be the same. The result is that the problem is much better conditioned using Chebyshev-polynomials. The speed gain was almost an order of magnitude. For the rest of the sign function (${\rm sgn}(QH_W)$) one can take her/his favorite approximation: $\sigma(QH_W)$. 
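A toy sketch of this deflation structure follows (not thesis code): the few lowest modes of a small random symmetric stand-in for $H_W$ are treated exactly, and the sign of the deflated remainder is computed with a simple Newton-Schulz iteration, used here only as a stand-in for the Zolotarev or Chebyshev approximations employed in practice.

```python
# Toy sketch (not thesis code): low-mode deflated matrix sign function,
#   sgn(H_W) = sum_s sgn(s) P_s + sgn(Q H_W Q) on the deflated remainder,
# with a Newton-Schulz iteration standing in for Zolotarev / Chebyshev.
import numpy as np

def sign_newton_schulz(X, iters=50):
    """X <- X (3 I - X^2) / 2 converges to sgn(X) for Hermitian X with
    spectrum in [-1, 1] away from zero (slowly if eigenvalues are tiny,
    which is exactly why the lowest modes are deflated)."""
    I = np.eye(X.shape[0])
    for _ in range(iters):
        X = X @ (3 * I - X @ X) / 2
    return X

rng = np.random.default_rng(1)
n, n_low = 12, 2
A = rng.normal(size=(n, n))
H_W = (A + A.T) / 2
H_W /= np.abs(np.linalg.eigvalsh(H_W)).max()        # put the spectrum into [-1, 1]

w, V = np.linalg.eigh(H_W)
exact_sign = V @ np.diag(np.sign(w)) @ V.T          # reference answer

low = np.argsort(np.abs(w))[:n_low]                 # modes closest to zero
P_low = V[:, low] @ V[:, low].T                     # projector onto the low modes
Q = np.eye(n) - P_low

deflated = sign_newton_schulz(Q @ H_W @ Q + P_low)  # deflated block filled with +1
approx_sign = V[:, low] @ np.diag(np.sign(w[low])) @ V[:, low].T + Q @ deflated @ Q

assert np.allclose(approx_sign, exact_sign)
```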
We have considered two of them: Zolotarev rational function and Chebyshev polynomial. The $n^{\rm th}$ order Zolotarev optimal rational approximation for $1/\sqrt{x}$ in some interval $[x_{\rm min},x_{\rm max}]$ can be expressed by elliptic functions (see e.g. [@Chiu:2002eh]), the coefficients can be also determined by a Remes-algorithm. A particularly useful form of the approximation for the sign function is given by the sum of partial fractions $$\label{eq:parcfrac} \sigma_{\rm Zol}(x) =x\left(a_0 +\sum_{i=1}^n {a_i \over x^2+b_i}\right),$$ in usual cases $n\sim O(10)$. To get the approximation of the sgn of a matrix $A$ one has to plug the $A$ into Eq. \[eq:parcfrac\]: ${\rm sgn} (A) \approx \sigma_{\rm Zol}(A)$. The inversions appearing in this approximation all contain the same matrix but with different shifts ($b_i$). There are so-called multishift Krylov-space methods [@Frommer:1995ik; @Jegerlehner:1996pm], which can solve the system of $n$ inversions with the same number of matrix multiplications as needed to solve only one system. Since matrix multiplications dominate such algorithms, this means that practically for the cost of one inversion one obtains the solutions of $n$ systems. When inverting the multishift system it might be desirable to project out even more eigenvectors than before in Eq. \[eq:sgn\] and calculate the inverse in this subspace exactly: $(s^2+b_i)^{-1}P_s$. The Chebyshev-polynomial approximation ($\sigma_{\rm Cseb}$) in principle performs similarly as the Zolotarev rational function. Here one has to make multiplication with the Wilson matrix many times ($\sim O(100)$), there is no need for global summations as in Zolotarev case. There are architectures (eg. Graphical Processing Units [@Egri:2006zm]), where global summation is a bottleneck, it should be avoided everywhere if possible. In these cases the Chebyshev-approximation is a good choice. Hybrid Monte-Carlo ------------------ Hybrid Monte-Carlo (HMC, [@Duane:1987de]) is the most popular method for simulating dynamical fermions. There are many other choices possible, but HMC outperforms from all of these. We will base our work on the HMC algorithm. One would like to generate gauge configurations via a Monte-Carlo update with the following weight (see \[eq:z\]): \[eq:w\] w\[U\]=(-S\_[gauge]{})(D\^D), where we have $D^\dagger D$ instead of $D^2$ for the weight of two fermion flavors. This substitution is legal, since the one flavor fermion determinant is real. The standard procedure to implement the fermion determinant is to rewrite it using bosonic fields, so-called pseudofermions ($\phi(x)$): w\[U\]=(-S\_[gauge]{}) (-\^(D\^D)\^[-1]{} ) Now let us consider the following steps: 1. Choose Gaussian distributed momenta $P_\mu(x)$. 2. Choose a $\phi$ field according to the distribution $\exp (-\phi^\dagger(D^\dagger D)^{-1}\phi)$. 3. At a fixed $\phi$ background evolve the $U_\mu(x)$ gauge fields and $P_\mu(x)$ momenta using the equations of motion derived from the Hamiltonian H=P\^2+S\_[gauge]{}+\^(D\^D)\^[-1]{}, and from the structure of the manifold. The evolution from $(U,P)$ fields to some $(U',P')$ via the equations of motion is usually called a trajectory. Iterating steps from 1. to 3. one obtains a chain of gauge configurations with the distribution Eq. \[eq:w\]. Along a trajectory the energy and the area are conserved, moreover these trajectories are exactly reversible. To determine the trajectories is an infinitely hard problem, it requires an exact solution of the equations of motion. 
However an approximate solution of these equations can be also used to build a chain of gauge configurations with the exact $w[U]$ distribution. One just has to find an area preserving and reversible integrator and integrate the equations approximately with it. With such an integrator at hand one trajectory will be generated via the following procedure: 1. As before. 2. As before. 3. Integrate the equations of motion with an approximate integrator. 4. Calculate the energy difference: $\Delta H=H(U',P')-H(U,P)$ and accept/reject the configuration with the probability ${\rm min}(1,\exp(-\Delta H))$. This is one iteration of the HMC algorithm. This iteration also produces configurations with the same equilibrium distribution (Eq. \[eq:w\]) as the previous one. The easiest area conserving and reversible integrator is the leapfrog. The leapfrog integration consist of making $1/\epsilon$ of the following steps[^8]: \[eq:lf\] (/2) () (/2), where $\hmcu$ operator evolves the gauge fields with the actual (fixed) momenta, wheres $\hmcp$ operator evolves the momenta using the force calculated at the actual (fixed) gauge field. The $\epsilon$ is called the stepsize, obviously in the $\epsilon \to 0$ limit one arrives to the exact solution of the equations of motion. The energy is violated by $O(\epsilon ^3)$ by making one leapfrog step, during $1/\epsilon$ steps of a unit length trajectory this error grows up to $O(\epsilon ^2)$. The average energy conservation violation can be shown to be $\langle \Delta H \rangle= C\epsilon^4 + \dots$. Since $C$ is most generally proportional to the volume, the stepsize should be decreased as $\epsilon \sim V^{-1/4}$ to keep constant acceptance. Using area conservation and reversibility one can show that the condition (-H) =1 should be satisfied, this relation provides an easy check of the consistency of the algorithm. This condition also shows us that if one has large energy conservation violations ($|\Delta H|\gtrsim 1$), then the acceptance should be small. In this case one should decrease the stepsize to obtain a good acceptance. The optimal acceptance rate depends on the type of the integrator, in case of the leapfrog and its variants the optimum is around $80\%$. Even one can drop away the area preservation property for the evolution of the gauge fields and momentum, in this case one has to include the Jacobian of the mapping into the accept/reject step. ### HMC for two flavors of overlap fermions In case of the overlap fermion the gauge field and momentum evolution is the following: \[eq:update\] (): U (P) U, && (): P P - , where in the force term the $\mathcal{A}$ operator projects onto traceless, antihermitian matrices (in color indices). The complication arises in the derivative of $S_{\rm pf}$, which schematically can be written as S\_[pf]{} =-\^(D\^D) =(D\^D)\^[-1]{} . The inversion of the fermion operator $\psi=(D^\dagger D)^{-1}\phi$ is done by $n_o$ conjugate gradient[^9] steps (”outer inversion”). Note, however, that each step in this procedure needs the calculation of $(D^\dagger D)\phi$. The operator $D$ contains $\sigma(H_W)$, which is given for example by the partial fraction expansion (see Eq. \[eq:parcfrac\]). Thus, at each ”outer” conjugate gradient step one needs the inversion of the $H_W^2$ matrix (”inner inversion”). This nested type of the inversions is the price one has to pay for an exactly chiral Dirac-operator, in other formulations one only has one matrix inversion per force calculation. 
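The nesting can be made concrete with a small sketch (not thesis code; the inner sign function is evaluated exactly through an eigendecomposition of a toy $H_W$, and the $\gamma_5$-hermiticity $D^\dagger=\gamma_5 D\gamma_5$ of the overlap operator is used for the adjoint). Every iteration of the outer conjugate gradient triggers fresh applications of ${\rm sgn}(H_W)$, which is where the inner inversions occur in a real simulation.

```python
# Toy sketch (not thesis code) of the nested inversion: an "outer" conjugate
# gradient for (Ddag D) psi = phi, where every matrix-vector product needs an
# "inner" evaluation of sgn(H_W) (counted below; done here by exact
# diagonalization instead of Zolotarev multishift or Chebyshev recursions).
import numpy as np

rng = np.random.default_rng(2)
n, m0, m = 8, 1.0, 0.1
inner_calls = {"sign": 0}

A = rng.normal(size=(n, n))
H_W = (A + A.T) / 2                              # toy hermitian Wilson operator
gamma5 = np.diag([1.0] * (n // 2) + [-1.0] * (n // 2))

def apply_sign_HW(v):
    """Inner step: sgn(H_W) v, the expensive part in a real simulation."""
    inner_calls["sign"] += 1
    w, V = np.linalg.eigh(H_W)
    return V @ (np.sign(w) * (V.T @ v))

def apply_D(v):                                  # massive overlap D(m) v
    Dv = m0 * (v + gamma5 @ apply_sign_HW(v))
    return (1 - m / (2 * m0)) * Dv + m * v

def apply_DdagD(v):                              # D^dag D v, using D^dag = gamma5 D gamma5
    return gamma5 @ apply_D(gamma5 @ apply_D(v))

def cg(apply_A, b, tol=1e-10, maxiter=200):
    """Plain conjugate gradient for a positive definite operator."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

phi = rng.normal(size=n)
psi = cg(apply_DdagD, phi)
print("outer solve done,", inner_calls["sign"], "inner sgn(H_W) applications")
```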
No method is known to avoid the nested inversions. The $\delta (D^\dagger D(m))=\delta (H(m)^2)$ derivative has a complicated form. Let us assume, that we treat the sgn function in the overlap operator as in Eq. \[eq:sgn\]. Then the derivative of $H(m)$ will have two parts H(m)= \_1 H(m) + \_2 H(m). The first part will contain the derivative of the projectors ($P_s=|s\rangle \langle s |$), this term comes not only as a derivative of $\sum {\rm sgn}(s) P_s$, but the projectors are also there in the $Q{\rm sgn}(H_W)$ term. All together one obtains \_1H(m)=( m\_0 -m/2 )\_[s]{} P\_s ( [sgn]{}(s) - (H\_W) ). The derivative of the projector can be derived with the tools of quantum mechanical perturbation theory [@Narayanan:2000qx]: P\_s =-( H\_W P\_s + P\_s H\_W ). As we can see each projected mode brings an extra inversion of the (shifted) Wilson matrix. It might be safe to treat the few lowest lying modes of $H_W$ this way, but it is meaningless to calculate exactly the force of modes from the bulk of the spectrum. The second term in $\delta H(m)$ will be the derivative of the sgn function approximation ($\delta_2 H(m)$). In case of the Zolotarev approximation one has \_[Zol]{}(A)= A ( a\_0+\_[i=1]{}\^n )- \_[i=1]{}\^n (A\^2A + AA A), therefore the contribution to the derivative of $H(m)$ is \_2H(m)=(m\_0-m/2)Q \_[Zol]{}(H\_W). In case of the polynomial approximation the formula is more complicated. The [sgn]{} function is needed as three different places during the HMC trajectory: in the force calculation one needs $H(m)^{-2}$ and $\delta H(m)$ and in the action calculation one needs $H(m)^{-2}$ again. The presence of the accept/reject step allows one to use different approximations at different places. One can speed up the algorithm with large factors by carefully choosing and tuning the approximations. We were using the Chebyshev polynomial approximation in the inversions with O(20) projected modes whereas for the $\delta H(m)$ we chose the Zolotarev rational approximation with 2 projected modes. The relative precision was always set to $10^{-6}$ everywhere. For somewhat different implementations of the standard HMC and for various improvement techniques we refer here to the work of two other groups [@Cundy:2004pza; @DeGrand:2004nq; @Cundy:2005pi; @Schaefer:2005qg]. ### Criticism of the HMC for one fermion flavor The HMC described above works only for a positive definite fermion matrix. This is suitable for two flavors. For one flavor one can take the square root of the squared operator (RHMC algorithm). A different way to get the square root of the fermion determinant is to exploit the exact chiral symmetry of the Dirac-operator [@Bode:1999dd; @DeGrand:2006ws]. As we have seen before $H(m)^2$ has $n(+0)+n(-0)$ chiral modes with eigenvalue $m^2$, $n(2m_0)+n(-2m_0)$ chiral modes with eigenvalue $(2m_0)^2$ and all the other modes are doublets with eigenvalue $\left(1-\frac{m^2}{(2m_0)^2}\right)\lambda^2 +m^2$. It is a conventional wisdom that only those configurations contribute to the path integral where either $n(+0)=0$ or $n(-0)=0$. If this is true then we can write the two flavor fermion determinant as $$\begin{aligned} n(\pm0)=0: \quad \quad \det H(m)^2={\rm det} _\pm H(m)^2 {\rm det} _\mp H(m)^2 = [{\rm det} _\pm H(m)^2 ]^2 \left(\frac{m^2}{(2m_0)^2}\right)^{n(\mp0)}, \end{aligned}$$ where the ${\rm det} _\pm$ determinants have to be restricted to positive/negative chirality subspaces. 
The numerical factor at the end of the formula takes into account that the zeromodes and $\pm 2m_0$ modes are not coming in chirality pairs (see Eq. \[eq:nzero\]). At this point it is easy to perform the square root H(m) = ()\^[n(0)]{} [det]{}\_H(m)\^2, where the sign depends on the chirality of the zero modes of $H$. Since $H(m)^2$ is positive definite even on the definite chirality subspaces, there is no obstacle to introduce pseudofermions for its determinant. The contribution of the zeromodes and $\pm2m_0$ modes can be taken into account by reweighting the observables with them. One has to face with the following problem. Let us consider a trajectory which starts with a gauge configuration where $n(+0)=0$ and ends where $n(-0)=0$. The pseudofermion is generated at the beginning of the trajectory according to the distribution: (- \^ H(m)\^[-2]{} P\_[+]{} ), where $P_{+}$ projects on positive chirality. If one consider the reversed trajectory (starting from the sector, where $n(-0)=0$) the pseudofermion distribution is (- \^ H(m)\^[-2]{} P\_[-]{}), with $P_{-}$ negative chirality projector. This means that the reversed trajectory uses a different pseudofermion distribution than the original one. For the proof of the detailed balance one needs to have the same pseudofermion distribution on both ends. This algorithm yields the violation of the detailed balance for trajectories where the ends are in topological sectors with different signs. The usual way out is that the trajectories are constrained to that part of the phase space where e.g. $n(+0)=0$ is always satisfied. However this choice opens the way of a possible ergodicity breaking, which is also hard to keep under control. Reflection/refraction {#sec:rr} --------------------- In the previous section we have shown how to set up the traditional HMC for overlap fermions. Performing simulations on rather small ($6^4$) lattices the acceptance rate was almost zero. The strange thing was that decreasing the stepsize of the integrator did not help at all. Tracing down the problem, one finds that there are sudden jumps in the microcanonical energy during the trajectories. These jumps are usually in the order of O(10) or larger, the trajectories where the energy violations are of this size are practically never accepted in the final accept/reject step. These jumps occur at the discontinuity of the overlap operator, that is at the topological sector boundaries. The phenomena can be nicely observed in the spectrum of the hermitian Wilson-Dirac operator. The topological charge can be written as Q= H = \_s s| \_5 + [sgn]{} (H\_W) | s = \_s [sgn]{} (s), where $s$’s are the real eigenvalues of $H_W$. The charge changes when an eigenvalue of $H_W$ crosses zero. At this point most presumably the overlap operator itself is discontinuous, since its trace is discontinuous. This means that the pseudofermion action also has a discontinuity, which means a Dirac-delta in the fermion force. Obviously a finite-stepsize integrator will never notice the presence of a Dirac-delta in the force. Without any correction one will end up with an energy violation which is roughly the discontinuity in the fermion action. One can improve on this situation. This feature is already present in a classical one-dimensional motion of a point-particle in a step function potential. During the integration one should check whether the particle moved from one side to the other one of the step function. If it is necessary, one corrects its momentum and position. 
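A toy sketch of this one-dimensional correction follows (not thesis code; the step height, stepsize and starting conditions are made-up values). The position update is split exactly at the discontinuity, and the momentum is refracted when $p^2 > 2\Delta S$ and reflected otherwise, so the energy $\frac{1}{2}p^2+V(x)$ is conserved through the crossing.

```python
# Toy sketch (not thesis code): integrator for a 1D particle in a step
# potential V(x) = 0 for x < 0 and V(x) = dS for x >= 0.  Away from the step
# the force vanishes, so the update is free streaming; at the step the
# momentum is refracted or reflected so that p^2/2 + V(x) stays conserved.
import numpy as np

def V(x, dS):
    return dS if x >= 0.0 else 0.0

def position_update(x, p, dt, dS):
    """Move for time dt with constant p, handling at most one crossing of x = 0."""
    x_new = x + p * dt
    if (x < 0.0) == (x_new < 0.0):
        return x_new, p                     # no crossing, nothing to correct
    dt_c = (0.0 - x) / p                    # time needed to reach the step exactly
    dV = V(x_new, dS) - V(x, dS)            # +dS going right, -dS going left
    if 0.5 * p * p > dV:                    # refraction: enough kinetic energy
        p_new = np.sign(p) * np.sqrt(p * p - 2.0 * dV)
    else:                                   # reflection off the step
        p_new = -p
    return p_new * (dt - dt_c), p_new       # continue from x = 0 with the new momentum

dS, dt = 1.0, 0.1
x, p = -1.0, 1.2                            # p^2/2 < dS, so this trajectory reflects
E0 = 0.5 * p * p + V(x, dS)
for _ in range(100):
    x, p = position_update(x, p, dt, dS)
E1 = 0.5 * p * p + V(x, dS)
assert abs(E0 - E1) < 1e-12                 # energy conserved through the crossing
```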
This correction has to be done also in the case of the overlap fermion. The microcanonical energy, $$H= \frac{1}{2}P^2+S_{\rm gauge}[U]+S_{\rm pf}[U,\phi]=\frac{1}{2}P^2 +S[U,\phi] \label{eq:ener}$$ has a step-function type non-analyticity on the zero eigenvalue surfaces of the $H_W$ operator in the space of link variables, coming from the pseudofermion action. When the microcanonical trajectory reaches one of these surfaces, we expect either reflection or refraction. If the momentum component orthogonal to the zero eigenvalue surface is large enough to compensate the change of the action between the two sides of the singularity ($\Delta S$), then refraction should happen; otherwise the trajectory should reflect off the singularity surface. The other components of the momenta are unaffected. The anti-hermitian normal vector ($N$) of the zero eigenvalue surface can be expressed with the help of the gauge derivative as
$$N \sim \mathcal{A}\left( \langle s |\, U\, {\ensuremath{\frac{\partial H_W}{\partial U^T}}}\, | s \rangle \right),$$
where $\mathcal{A}$ projects to the antihermitian, traceless matrices in color space. Table \[tab:frref\] summarizes the conditions of refraction and reflection and the new momenta.

               When                       New momenta
  ------------ -------------------------- ------------------------------------------------------
  Refraction   $(N,P)^2 > 2\Delta S$      $P-N (N,P) + N (N,P)\sqrt{1-2\Delta S/ (N,P)^2}$
  Reflection   $(N,P)^2 < 2\Delta S$      $P-2N (N,P)$

  : \[tab:frref\] Refraction and reflection can happen to the system when it approaches a zero eigenvalue surface of $H_W$. The conditions and the new momenta are indicated. $P$ is the momentum before the refraction/reflection.

### Modified leapfrog

We have to modify the standard leap-frog integration of the equations of motion in order to take into account reflection and refraction. This can be done in the following way. The standard leap-frog consists of three steps: an update of the links with stepsize $\epsilon/2$, an update of the momenta with $\epsilon$, and finally another update of the links, using the new momenta, again with $\epsilon/2$, where $\epsilon$ is the stepsize of the integration. The system can only reach the zero eigenvalue surface during the update of the links, so we have to identify the step in which this happens. After identifying the step in which the zero eigenvalue surface is reached, we replace it with the following three steps:

1. Update the links with $\epsilon_c$, so that we reach exactly the zero eigenvalue surface. $\epsilon_c$ can be determined with the help of $N$.

2. Modify the momenta according to Table \[tab:frref\].

3. Update the links using the new momenta, with stepsize $\epsilon/2-\epsilon_c$.

This means that in the leapfrog step of Eq. \[eq:lf\] we have to substitute the appropriate $\mathcal{U}(\epsilon/2)$ operator with
$$\mathcal{U}_{\rm mod}(\epsilon/2)= \mathcal{U}(\epsilon_c)\,\mathcal{R}\,\mathcal{U}(\epsilon/2-\epsilon_c), \label{eq:lfmod}$$
where $\mathcal{R}$ is the refraction/reflection operator, which changes the momenta according to Tab. \[tab:frref\]. This procedure is trivially reversible and it also preserves the integration measure, as shown in the appendix of this chapter.

There is one bottleneck, however. If we correct only for the discontinuity in the potential as above, then the microcanonical energy violation will be proportional to the stepsize. That is, for a $\mathcal{U}_{\rm mod}(\epsilon/2)\mathcal{P}(\epsilon/2)$ step the energy violation is
$$\Delta H = \epsilon_c \left( (P',F')- (P,F)\right) + O(\epsilon^2), \label{eq:deltah}$$
where $P'=\mathcal{R} P$ is the reflected/refracted momentum. $F=\mathcal{A}\left( U {\ensuremath{\frac{\partial S}{\partial U^T}}}\right)$ is the force evaluated at the topological sector boundary on the starting side, which means that the undefined ${\rm sgn}(0)$ in $F$ is interpreted as ${\rm sgn}(s-)$. $F'$ is also evaluated on the boundary, but with ${\rm sgn}(0)={\rm sgn}(s+)$. Obviously $F'=F$ in case of a reflection.

The energy violation of Eq. \[eq:deltah\] can be a serious problem, since a $(P,F)$ type quantity is in general proportional to the volume. This means that one has to decrease the stepsize as $\epsilon \sim V^{-1}$ to keep the acceptance constant, which is much worse than the original $\epsilon \sim V^{-1/4}$ scaling of the leapfrog. Fig. \[fig:ref\] compares the evolution of the energy and of the lowest $\lambda$ for the usual and for the modified leapfrog. In the unmodified case there is a huge energy jump at the crossing, and the trajectory is most probably rejected. In the modified case a reflection happens, and $\mathcal{H}$ is much better conserved.

![Comparison of the unmodified (triangles) and modified (boxes) leapfrogs. Upper part is the energy, lower part is the lowest mode of $H_W$.[]{data-label="fig:ref"}](ref.eps){width="8cm"}

### Improving the modified leapfrog

There are several ways out of the problem. We will consider two variants in this section:

- A reflection/refraction step with $C\epsilon$ energy violation, the $C$ coefficient being only an $O(1)$ size number instead of $O(V)$, which means that the bad scaling is eliminated.

- A reflection procedure with $O(\epsilon^2)$ energy violation.

In the literature there exists a reflection/refraction procedure with $O(\epsilon^2)$ energy violation [@Cundy:2005pi]; however, it is somewhat more complicated than these two improvements.

The large energy conservation violation of $\mathcal{U}_{\rm mod}$ arises because the momentum that is reflected/refracted is not the appropriate one. If one inserts two extra momentum updates into Eq. \[eq:lfmod\],
$$\mathcal{U}(\epsilon_c)\,\mathcal{P}(\epsilon_c)\,\mathcal{R}\,\mathcal{P}(\epsilon/2-\epsilon_c)\,\mathcal{U}(\epsilon/2-\epsilon_c),$$
then the energy is conserved up to $O(\epsilon^2)$. This is due to the fact that both $\mathcal{U}(\epsilon)\mathcal{P}(\epsilon)$ and $\mathcal{P}(\epsilon)\mathcal{U}(\epsilon)$ conserve the energy up to $O(\epsilon^2)$, and $\mathcal{R}$ conserves the energy exactly. This step, however, violates the area conservation up to $O(\epsilon)$; there is no free lunch.

The first idea is based on the fact that if one makes the extra momentum updates only in the space which is orthogonal to the normalvector $N$,
$$\mathcal{U}(\epsilon_c)\,\mathcal{P}_{\perp}(\epsilon_c)\,\mathcal{R}\,\mathcal{P}_{\perp}(\epsilon/2-\epsilon_c)\,\mathcal{U}(\epsilon/2-\epsilon_c),$$
then the area is exactly conserved again (as shown in the appendix). The energy is still violated by $O(\epsilon)$ terms, but since the update in the orthogonal space is done $O(\epsilon^2)$ correctly, their coefficients are only $O(1)$ numbers. The stepsize should be decreased only as $\epsilon \sim V^{-1/2}$.

The second idea applies only to the reflection. It uses the following observation. In the one dimensional case, if the time required to reach the boundary ($\epsilon_c$) and the time required to step away from the boundary ($\epsilon/2 -\epsilon_c$) are the same, then as a result of the reflection step the trajectory has been exactly reversed. Area conservation and reversibility are obviously preserved, and the energy is conserved exactly. We can check whether these properties remain true in arbitrary dimensions (the area conservation is proven in the appendix; the exact reversibility and energy conservation up to $O(\epsilon ^2)$ are obvious).
Then we should insert a
$$\label{eq:corr} \mathcal{U}(\epsilon_c)\mathcal{P}(\epsilon_c) \mathcal{R} \mathcal{P}(\epsilon_c)\mathcal{U}(\epsilon_c)$$
step into the chain of leapfrogs when the boundary is hit. In Eq. \[eq:lf\] we have written the elementary leapfrog step in the $UPU$ form; now let us write it in the $PUP$ order:
$$\mathcal{P}(\epsilon/2)\mathcal{U}(\epsilon) \mathcal{P}(\epsilon/2).$$
Now we split the evolution of the links into two parts:
$$\label{eq:lf2} \mathcal{P}(\epsilon/2)\mathcal{U}(\epsilon/2) \cdot \mathcal{U}(\epsilon /2)\mathcal{P}(\epsilon/2).$$
Suppose that the boundary would be crossed during one of the evolutions of the links in Eq. \[eq:lf2\]. Then we replace the original leapfrog with the following:
$$\mathcal{P}(\epsilon/2)\mathcal{U}(\epsilon/2) \cdot \mathcal{U}(\epsilon_c) \mathcal{P}(\epsilon_c) \mathcal{R} \mathcal{P}(\epsilon_c) \mathcal{U}(\epsilon_c) \cdot \mathcal{U}(\epsilon/2) \mathcal{P}(\epsilon/2).$$
Here $\epsilon_c$ is the time needed to reach the boundary surface, measured from the midpoint of the leapfrog. Thus if the crossing would happen in the first evolution then $\epsilon_c<0$, if in the second, then $\epsilon_c>0$.

### Tracing the evolution of low lying eigenmodes

We have to trace the evolution of the low lying eigenmodes, since we are looking for the moment when one eigenvalue crosses zero. The eigenvectors and eigenvalues are available at discrete times only (once or twice per time step); therefore one has to pair the eigenvectors at time $t$ and time $t+\epsilon$. We calculated the scalar products $\langle s'(t+\epsilon) | s(t) \rangle$ after each link update, and our recipe was the following: $s(t)$ has evolved into that $s'(t+\epsilon)$ with which the scalar product is maximal. Of course this can break down if the time step is too large. It is easy to show that, to make this naive method work, one has to decrease the stepsize as $\sim V^{-1}$ when the volume is increased. Expanding $\langle s'(t+\epsilon) | s(t) \rangle$ one obtains
$$\langle s'(t+\epsilon) | s(t) \rangle = \delta_{s's}-\frac{\epsilon}{\lambda_s-\lambda_{s'}}\,\langle s'(t) |\, \frac{dH_W}{dt}\, | s(t) \rangle + O(\epsilon^2),$$
where the derivative of $H_W$ is simply
$$\frac{dH_W}{dt}=\left(P,\mathcal{A}\left(U\,{\ensuremath{\frac{\partial H_W}{\partial U^T}}}\right)\right).$$
Clearly the $O(\epsilon)$ term is proportional to the volume, therefore the $\epsilon \sim V^{-1}$ relation should hold to be able to keep track of the evolution of the eigenvectors. If we use the derivatives of the eigenvectors, then we can get a considerably better scaling. That is, instead of $\langle s'(t+\epsilon) | s(t) \rangle$ we calculate the scalar products at time $t+\epsilon/2$ using the eigenvectors and their derivatives at $t$ and $t+\epsilon$: $$\begin{aligned} \langle s'(t+\epsilon/2) | s(t+\epsilon/2) \rangle = \\ \left(\langle s'(t+\epsilon)| -\epsilon/2 \frac{d}{dt} \langle s'(t+\epsilon)| \right) \left(|s(t)\rangle +\epsilon/2 \frac{d}{dt} |s(t)\rangle \right) + O(\epsilon^2).\end{aligned}$$ We have not used this formula in practice, since it is expensive (to monitor $N$ low lying eigenmodes one has to make $N(N-1)$ Wilson-matrix inversions to apply the above formula). Fortunately, in all cases our stepsizes were small enough that the eigenvector identification with the naive procedure posed no problem. The stepsize should also be small enough to avoid the crossing of two or more eigenvalues within a single microcanonical time step. This happened very rarely, and since the energy violations were usually very large in these cases, these configurations were simply rejected.
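The naive pairing recipe described above amounts to a few lines of linear algebra. In the sketch below the kernel operator is replaced by a small random hermitian matrix and a slightly deformed copy, which only stand in for $H_W$ at times $t$ and $t+\epsilon$; the point is the bookkeeping of the maximal-overlap matching, not any physics.

```python
import numpy as np

# Pair the low-lying eigenvectors at time t with those at time t+eps by the
# largest |<v'_j | v_i>|, as described in the text.

def low_modes(H, n):
    w, v = np.linalg.eigh(H)
    idx = np.argsort(np.abs(w))[:n]          # n eigenvalues closest to zero
    return w[idx], v[:, idx]

def pair_modes(v_old, v_new):
    overlap = np.abs(v_new.conj().T @ v_old) # matrix of |<v'_j | v_i>|
    return np.argmax(overlap, axis=0)        # partner index for each old mode

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 60)) + 1j * rng.normal(size=(60, 60))
H0 = (A + A.conj().T) / 2                    # stand-in for "H_W(t)"
B = rng.normal(size=(60, 60)) + 1j * rng.normal(size=(60, 60))
H1 = H0 + 0.01 * (B + B.conj().T) / 2        # stand-in for "H_W(t+eps)"

w0, v0 = low_modes(H0, 4)
w1, v1 = low_modes(H1, 4)
for i, j in enumerate(pair_modes(v0, v1)):
    print(f"mode {i}: lambda {w0[i]:+.4f} -> {w1[j]:+.4f}")
```

As noted in the text, this identification fails when the deformation between the two times is too large, which is what forces the stepsize restriction.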
Numerical simulations
---------------------

In this section we detail the particular implementation of our overlap HMC variant. Afterwards we show results on the topological susceptibility as obtained from these simulations.

#### Gauge action {#gauge-action .unnumbered}

For testing purposes the standard Wilson action was chosen as gauge action; later on we moved to the tree-level Symanzik-improved action. Apart from decreasing the scaling violations in the gauge sector, the improvement is beneficial from the overlap operator point of view, too. Experience in the quenched case shows that improved gauge actions can drastically reduce the eigenvalue density of the negative mass hermitian Wilson-Dirac operator [@DeGrand:2002vu]. Since this operator is the kernel of the overlap operator, gauge action improvement speeds up the overlap inversion algorithms. On the other hand, since a topological sector change happens when an eigenvalue of the Wilson-Dirac operator crosses zero, the improvement also reduces the number of tunneling events. Therefore one has to be careful not to overimprove the gauge action (as is done for the DBW2 action). The Symanzik tree-level improved action is the simplest improved gauge action.

#### Fermion action {#fermion-action .unnumbered}

The fermion action describes two overlap fermions with the standard Wilson operator as kernel. After test runs with thin links, we started smearing the links in the kernel operator via the stout smearing procedure (a schematic implementation of one smearing step is sketched below). The smearing reduces the fluctuations of the gauge configuration, which again helps to reduce the density of zeromodes of the Wilson operator [@Kovacs:2002nz]. It is an $O(a^2)$ redefinition of the gauge fields, so keeping the smearing recipe fixed as the lattice spacing goes to zero will not change the continuum limit of the theory. Stout smearing has the particular advantage, compared to other smearing techniques, that it is an analytic function of the thin gauge field [@Morningstar:2003gk]. Therefore its derivative (which is needed to obtain the HMC force) can be calculated exactly. In our simulations we used two levels of stout smearing with smearing parameter $\rho=0.15$. The speedup was almost an order of magnitude compared to the unsmeared case.

The negative mass of the Wilson kernel was chosen to be $-m_0=-1.3$ in the smeared link case. For smaller $m_0$ values there would have been no small eigenvalues of the overlap operator, and the topology of the gauge fields would have always been trivial. The value $-m_0=-1.3$ was chosen on the basis of a small $8^4$ run, in which the topological susceptibility started to increase from its zero value at $m_0=0$ and reached its plateau value around $-m_0 \sim -1.3$.

#### Algorithm {#algorithm .unnumbered}

We have tried several variants of the HMC algorithm, which were discussed in the previous section. For all of them we used the reflection/refraction modification in some form. In addition to the standard consistency tests (reversibility of the trajectories, $\epsilon^2$ scaling of the action and $\langle \exp (-\Delta H)\rangle=1$) we performed a brute force check on $2^2$ and $4^4$ lattices. We generated quenched configurations and then explicitly calculated the determinants of $H(m)$. These determinants were used in an additional Metropolis accept/reject step. The hybrid Monte-Carlo results agree completely with those of the brute force approach.
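For orientation, the stout-smearing step referred to above can be written down in a few lines. The sketch below follows the standard Morningstar–Peardon construction for a single link; the staple sum `C` is taken as a given input (its assembly from the neighbouring links, and the iteration over two smearing levels, are omitted), and the random matrices only serve to check that the smeared link stays in $SU(3)$.

```python
import numpy as np
from scipy.linalg import expm

RHO = 0.15   # smearing parameter used in the text (applied in two levels)

def stout_link(U, C, rho=RHO):
    """One stout-smearing step for a single link.

    U : 3x3 SU(3) link matrix
    C : unweighted sum of the perpendicular staples attached to this link
    """
    omega = rho * C @ U.conj().T                     # Omega_mu(x)
    X = omega - omega.conj().T
    Q = 0.5 * X - np.trace(X) / 6.0 * np.eye(3)      # traceless antihermitian projection
    return expm(Q) @ U                               # analytic in the thin links

# sanity check on random input: the smeared link stays (numerically) in SU(3)
rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U = np.linalg.qr(A)[0]
U = U / np.linalg.det(U) ** (1.0 / 3.0)              # a random SU(3) matrix
C = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))   # stand-in for the staple sum
V = stout_link(U, C)
print(np.allclose(V.conj().T @ V, np.eye(3)), np.isclose(np.linalg.det(V), 1.0))
```

Because the exponent is an analytic function of the thin links, the chain rule through this step gives the exact contribution of the smearing to the HMC force, which is the property exploited in the text.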
#### Results {#results .unnumbered}

Now let us take a closer look at the results obtained with the standard Wilson gauge action and the standard Wilson fermion kernel in the overlap operator (results with the improved action will be discussed later). On $4\cdot 6^3$ lattices there is a sharp increase in the Polyakov loop at $\beta=5.7$ (see Fig. \[fig:ovl\]), which gives a hint about the lattice spacing, since the finite temperature transition is usually around a temperature of $200$ MeV ($T=1/(4a)$). This value of the coupling was used for measuring the topology on $6^4$ lattices, which were considered as zero temperature lattices. The negative quark mass was set to $m_0=1.6$, the bare fermion mass was in the range $m=0.1\dots 1.15$, and the stepsize was $\epsilon=0.025$ on average. Using the conventional HMC one would have no acceptance at all on these lattices, but with the leapfrog step modified according to the previous section the acceptance becomes $>70\%$ at these stepsizes. At each bare mass roughly 800 trajectories were generated.

The results are plotted in Fig. \[fig:ts\]. The left panel shows the charge history. The average topological charge is consistent with zero over the whole mass range (middle panel). $12\cdot 6^3$ lattices were used to fix the scale using $r_0$ from Wilson loops. The result is $a \sim 0.25$ fm for small masses. Pion masses were also measured, and $m_\pi^2=Am$ with $A\approx 1$ is found in lattice units. Using the scale and the pion mass, it is possible to get the topological susceptibility in physical units (right panel of Fig. \[fig:ts\]). $\chi(m)$ tends to zero for small quark masses. One can compare these results with the continuum expectation in the chiral limit (solid line of the figure):
$$\lim_{m \to 0} \chi(m) = \lim_{m\to 0}\frac{m\,\Sigma}{N_f}=\frac{f_\pi^2 m_\pi^2}{2N_f}.$$

![ The $\beta$ dependence (right panel) of the Polyakov-loop on $4\cdot 6^3$ lattices at $m=0.1$.[]{data-label="fig:ovl"}](L.eps){width="10cm"}

![Topology on $N_s=6$ lattices. See text.[]{data-label="fig:ts"}](lat04.eps){width="16cm"}

Topological sector changing
---------------------------

In the previous two sections we have described an HMC algorithm for overlap fermions. The nonanalytic behavior of the overlap operator at topological sector boundaries requires a non-trivial modification of the original HMC. Our modification is able to handle the discontinuity problem, as shown in the numerical results section. As we started simulating even larger volumes ($8^4$) with the modified algorithm, we had to face a new problem. In the majority of the reflection/refraction steps a reflection happened, which means that the trajectories were confined to a given topological sector for long times. This dramatic increase of the autocorrelation time of the topological charge makes the measurement of the topological susceptibility very hard and effectively also amounts to a violation of ergodicity.

One can come up with the solution of letting the trajectories continue their evolution as if the discontinuity in the action were not present. This algorithm would obviously allow the system to tunnel between topological sectors. The price is that at the end of each trajectory one has to keep the $\exp(-\Delta H)$ factors and finally reweight the configurations with them. The violation of energy conservation comes dominantly from the sum of the discontinuities in the action, $\Delta H= \sum_a \Delta S_a + O(\epsilon)$, along a trajectory.
If the system moves in a fixed potential, then $\Delta H$ takes positive and negative values equally often, since the system sometimes goes up and sometimes goes down the same discontinuity. The reweighting would have the following form:
$$r[U_n]=\exp\Big(-\sum_{i}^{n}\Delta H_i\Big),\qquad \langle A \rangle = \frac{\sum_n A[U_n]\, r[U_n]}{\sum_n r[U_n]}. \label{eq:rew}$$
If the $\Delta H_i$'s are positive and negative roughly equally often, then the reweighting works well: the configurations have nearly the same weight. Unfortunately this turned out not to be true for the overlap HMC case: almost all $\Delta H$'s were positive, and the configurations were becoming unimportant very fast in the sum of Eq. \[eq:rew\]. How could this happen? The answer is that the evolution of the trajectories is not done in a fixed fermion potential $S_{\rm exact}=-\log \det H(m)^2$, but in a pseudofermionic one, $S_{\rm pf}=\phi^{\dagger} H(m)^{-2} \phi$. The pseudofermion is not fixed, it is regenerated at the beginning of each trajectory.

Let us take a closer look at how the pseudofermions approximate the fermion determinant. This will help us to understand the slowing down of the tunneling between topological sectors. In particular, we show that the jump in the pseudofermionic action overestimates $\Delta S_{\textrm{exact}}$. Let us assume that the trajectory crosses the boundary. Let $H_{-}$ and $H_{+}$ be the overlap operator evaluated on the two sides of the boundary, right before and after the crossing, respectively. Clearly $H_{-}$ and $H_{+}$ contain the same gauge configuration, but they differ, since one eigenvalue of $H_W$ changes sign on the boundary. In the HMC algorithm one chooses the pseudofermion field as $$\phi=H_{-} \eta,\ \ \ \ \ \phi^\dagger=\eta ^\dagger H_{-},$$ where $\eta, \eta ^\dagger$ are random vectors with Gaussian distribution, in order to generate $\phi, \phi^\dagger$ with the correct distribution. (In a real simulation one chooses new pseudofermion configurations only at the beginning of each trajectory, but for simplicity let us assume that $\phi$ and $\phi ^ \dagger$ are refreshed when the boundary is hit.) The jump of the pseudofermionic action now reads: $$\Delta S_{\textrm{pf}} =S_{\textrm{pf} +} - S_{\textrm{pf} -}= \eta^ \dagger (H_{-} H^{-2}_{+} H_{-} -1) \eta .$$ The relation between $\Delta S_{\textrm{exact}}$ and $ \Delta S_{\textrm{pf}}$ can be obtained by the following straightforward calculation: $$e^{- \Delta S_{\textrm{exact}}} = \frac{\det H^2_{+}}{\det H^2_{-}} = \frac{\int [d \eta ^ \dagger] [d \eta] e^{- \eta^ \dagger \eta} e^{- \eta^ \dagger (H_{-} H^{-2}_{+} H_{-} -1) \eta}}{\int [d \eta ^ \dagger] [d \eta] e^{- \eta^ \dagger \eta}} =$$ $$= \langle e^{-\eta^ \dagger (H_{-} H^{-2}_{+} H_{-} -1) \eta} \rangle _{\eta^{\dagger} \eta} \geq e^{- \langle \eta^ \dagger (H_{-} H^{-2}_{+} H_{-} -1) \eta \rangle _{ \eta^{\dagger} \eta}} = e^{- \langle \Delta S_{\textrm{pf}} \rangle }.$$ The inequality in the second line is a consequence of the convexity of the $e^{-x}$ function. So we conclude that $$\langle \Delta S_{\textrm{pf}} \rangle \geq \Delta S_{\textrm{exact}}.$$ We can examine this relation in realistic simulations if we take into account that there is a simple relation between $H_{+}$ and $H_{-}$. Let us denote by $\lambda_0$ the eigenvalue of $H_W$ which crosses zero at the boundary, and by $|0 \rangle$ the eigenvector belonging to $\lambda_0$.
With this notation: $$\label{eq:Hpm} H_{+}=H_{-} + c |0 \rangle \langle 0 |,$$ where $$c=\Delta \textrm{sgn} \lambda_0 \ m_0 (1-\frac{m}{2 m_0}),$$ with $\Delta \textrm{sgn} \lambda_0 = \pm 2$ being the jump of ${\rm sgn} \lambda_0$ on the boundary. The expectation value of the discontinuity in the pseudofermionic action is: $$\langle \Delta S_{\textrm{pf}} \rangle = \langle \eta^ \dagger (H_{-} H^{-2}_{+} H_{-} -1) \eta \rangle _{ \eta^{\dagger} \eta} = \textrm{Tr}(H_{-} H^{-2}_{+} H_{-} -1) =$$ $$\label{eq:Spf} = \textrm{Tr} \big( (1-c |0 \rangle \langle 0 | H_{+}^{-1})(1-c\ H_{+}^{-1} | 0 \rangle \langle 0 | )-1 \big) = -2 c \langle 0 | H_{+}^{-1} | 0 \rangle + c^2 \langle 0 | H_{+}^{-2} | 0 \rangle.$$ In a similar way one can get a simple formula for the exact value of the jump on the boundary: $$\label{eq:Sex} e^{-\Delta S_{\textrm{exact}}}=\frac{\det H_{+}^{2} }{\det H_{-}^{2}}= \frac{1} {\det(H_{+}^{-1} H_{-})^2} = \frac{1}{\det(1- c H_{+}^{-1} |0 \rangle \langle 0 |)^2} = \frac{1}{(1-c \langle 0 | H_{+}^{-1} | 0 \rangle)^2}.$$ Eq. (\[eq:Spf\]) and Eq. (\[eq:Sex\]) offer a numerically fast way to determine both action jumps, since one needs only one inversion of the overlap operator to obtain both of them.

![\[fi:sc\] The jump in the exact vs. pseudofermionic action at $\beta=4.05$ and $m=0.1, 0.2$. Since the average of $\langle n,p \rangle^2$ is around $\approx 1$, topological sector changes would happen considerably more frequently using $S_{\rm exact}$ than with $S_{\rm pf}$. We also indicate the probability of a topological sector change with the pseudofermionic action, and an estimate of the probability using the exact action (assuming that the two algorithms would behave the same way except at the boundaries). ](scatter.eps){width="15.0cm"}

For illustration we made a scatter plot (Fig. \[fi:sc\]) from a $6^4$ lattice at two different masses. From the joint distribution of ${\Delta S_{\rm exact},\Delta S_{\rm pf}}$ we can understand why the tunneling events are so rare. A topological sector change occurs when the HMC momentum of the system in the direction of the topological sector boundary surface is large enough to “climb” the discontinuity (see Tab. \[tab:frref\]). The momentum squared is usually an $O(1)$ number. As we can see in Fig. \[fi:sc\], the $\Delta S_{\rm pf}$ distribution overestimates the real discontinuity $\Delta S_{\rm exact}$ by orders of magnitude. Therefore a crossing which would be possible with $\Delta S_{\rm exact}$ becomes impossible with $\Delta S_{\rm pf}$. The HMC which uses $\Delta S_{\rm pf}$ gets stuck in a given topological sector. The overestimation becomes worse with decreasing quark mass.

One way to cure this is to use several pseudofermion estimators instead of one [@DeGrand:2004nq]. More pseudofermions mean a smaller spread of the pseudofermionic action distribution, and therefore the overestimation is smaller, too. However, the computational time also increases with the number of extra fields. Obviously the best would be to use the exact action in the simulations, but only its discontinuity on the boundary can be calculated easily (the calculation of the exact fermion determinant is an $O(V^3)$ operation in general). In the following two subsections we show two different ways to use the exact action jump instead of its pseudofermion estimator in the simulations. Both of them are inexact; the errors present in the measured quantities are of $O(\epsilon^2)$.
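The two formulas above can be exercised on a small dense test operator. The sketch below builds an $H_-$, adds the rank-one term of Eq. (\[eq:Hpm\]) with $\Delta{\rm sgn}\lambda_0=+2$, obtains $\langle\Delta S_{\rm pf}\rangle$ and $\Delta S_{\rm exact}$ from a single solve with $H_+$ as in Eqs. (\[eq:Spf\]) and (\[eq:Sex\]), cross-checks them against the defining trace/determinant expressions, and illustrates the inequality derived above. All matrix sizes and parameter values are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 40
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H_minus = (A + A.conj().T) / (2 * np.sqrt(n)) + 3.0 * np.eye(n)   # test "H_-"
v = rng.normal(size=n) + 1j * rng.normal(size=n)
v /= np.linalg.norm(v)                                            # crossing mode |0>
m, m0 = 0.1, 1.3
c = 2.0 * m0 * (1.0 - m / (2.0 * m0))                             # Delta sgn(lambda_0) = +2
H_plus = H_minus + c * np.outer(v, v.conj())                      # Eq. (eq:Hpm)

# both jumps from a single application of H_+^{-1} to |0>
x = np.linalg.solve(H_plus, v)                                    # H_+^{-1}|0>
g1 = np.vdot(v, x).real                                           # <0|H_+^{-1}|0>
g2 = np.vdot(x, x).real                                           # <0|H_+^{-2}|0>
dS_pf_mean = -2.0 * c * g1 + c**2 * g2                            # Eq. (eq:Spf)
dS_exact = 2.0 * np.log(np.abs(1.0 - c * g1))                     # from Eq. (eq:Sex)

# cross-checks against the defining expressions, and the inequality
M = H_minus @ np.linalg.inv(H_plus @ H_plus) @ H_minus
print(np.isclose(dS_pf_mean, np.trace(M - np.eye(n)).real))
print(np.isclose(dS_exact, 2.0 * (np.linalg.slogdet(H_minus)[1]
                                  - np.linalg.slogdet(H_plus)[1])))
print(dS_pf_mean >= dS_exact)                                     # <dS_pf> >= dS_exact
```

In a production code the dense solve would of course be replaced by an iterative inversion of the overlap operator on the crossing mode, which is exactly the "one inversion" mentioned in the text.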
### Using $\Delta S_{\rm exact}$ to sew together simulations with fixed topology {#ssec:sew} Let us write the partition function in the form (assuming a vanishing $\theta$ parameter): $$Z=\sum_{Q=-\infty}^{\infty} Z_Q,$$ where $Z_Q$ is the partition function of the topological sector $Q$. The expectation value of an observable: $$\langle O \rangle = \frac{\sum_{Q} Z_Q \langle O \rangle_Q}{\sum_{Q} Z_Q} = \frac{\sum_{Q} \frac{Z_Q}{Z_0} \langle O \rangle_Q}{\sum_{Q} \frac{Z_Q}{Z_0}},$$ where the restricted expectation value $\langle O \rangle_{Q}$ is $$\langle O \rangle_{Q}=\frac{1}{Z_Q}\int [dU]_Q O[U] \det H^2_Q \exp (-S_g).$$ For reasons which will be clear later the integration goes not only over the configurations with $Q$ charge, but also over the boundary of the topological sector as well (though the boundary has only zero measure in this case). When calculating the partition function in a given topological sector the following boundary prescription is used: we define the determinant on the boundary as the limit of determinants approaching the wall from the $Q$ side ($\det H^2_Q$). If the measurement of the quantities $Z_{Q+1}/Z_{Q}$ would be possible, then we could recover $Z_{Q}/Z_{0}$ for any $Q$. With these in hand, we would need only the restricted expectation values $\langle O \rangle_{Q}$, whose measurement doesn’t require topological sector changings. #### Measuring $Z_{Q+1}/Z_{Q}$ using $\Delta S_{\textrm{exact}} $ {#measuring-z_q1z_q-using-delta-s_textrmexact .unnumbered} Now we will show a way to measure $Z_{Q+1}/Z_{Q}$. It will make use of the fact, that we can calculate easily $\Delta S_{\textrm{exact}} $ on the boundary of topological sectors (see Eq. (\[eq:Sex\])). The pseudofermionic action is only used to generate configurations in fixed topological sectors, so its bad distribution for the jump of the action will not effect us. (In the following formulae $\Delta S$ will automatically mean $\Delta S_{\textrm{exact}}$.) The main idea is the following: an observable measured in sector $Q$ is inversely proportional to $Z_Q$ and an observable in $Q+1$ is to $Z_{Q+1}$. If the observables in the two sectors are concentrated only to the common wall separating the two sectors, then from the ratio of the two expectation values one can recover the ratio of the two sectors. First let us measure in the $Q$ sector an operator, which is concentrated to the boundary: $$\label{eq:fd} \langle \delta_{Q,Q+1} F\rangle_Q = \frac{1}{Z_Q}\int [dU]_{Q} \delta_{Q,Q+1}F[U] \det H^2_Q \exp (-S_g) ,$$ where we introduced the distribution $\delta_{Q,Q+1}$, a Dirac-$\delta$, which is equal to zero everywhere but on the $Q, Q+1$ boundary. Then let us measure another operator $G$ on the same wall (thus on the boundary separating sectors $Q$ and $Q+1$), but now from the $Q+1$ sector: $$\label{eq:gd} \langle \delta_{Q,Q+1} G\rangle_{Q+1} = \frac{1}{Z_{Q+1}}\int [dU]_{Q+1} \delta_{Q,Q+1}G[U] \det H^2_{Q+1} \exp (-S_g) .$$ The wall is the same (i.e. $[dU]_{Q}\delta_{Q,Q+1}=[dU]_{Q+1}\delta_{Q,Q+1}$) in both cases, however due to our boundary prescription the determinants are different on it. Therefore if $F$ and $G$ satisfies $$\label{eq:db1} F[U]\det H^2_{Q}[U]=G[U]\det H^2_{Q+1}[U]$$ for configurations on the boundary, then the ratio of Eq. (\[eq:fd\]) and Eq. 
(\[eq:gd\]) gives us $$\label{eq:db2} \frac{\langle \delta_{Q,Q+1} F\rangle_Q}{\langle \delta_{Q,Q+1} G\rangle_{Q+1}}=\frac{Z_{Q+1}}{Z_Q}.$$ #### Choosing $F[U]$ and $G[U]$ functions {#choosing-fu-and-gu-functions .unnumbered} The easiest choice is $G(U)=1$ and $F(U)=\det H^2_{Q+1}/\det H^2_Q = \exp (-\Delta S)$, the ratio of sectors becomes: $$\label{eq:zq} Z_{Q+1}/Z_Q=\frac{\langle \delta_{Q,Q+1} \exp (-\Delta S) \rangle_Q}{\langle \delta_{Q,Q+1}\rangle_{Q+1}}.$$ This choice is still not optimal, since the measurement of the numerator is problematic, if the distribution of $\Delta S$ extends to negative values. The exponential function amplifies the small fluctuations in the negative $\Delta S$ region, which can destroy the whole measurement: a very small fraction of the configurations will dominate the result. As a consequence one ends up with relatively large statistical uncertainties. With a slightly different choice of $F$ and $G$ we can improve on the situation. With $F(U)=\Theta (\Delta S - x) \exp (-\Delta S)$ and $G(U)=\Theta (\Delta S - x)$ we can omit the problematic part of the $\Delta S$ distribution (the values smaller than $x$) from the measurement, and we get: $$\label{eq:zq1} Z_{Q+1}/Z_Q=\frac{\langle \delta_{Q,Q+1} \exp (-\Delta S) \rangle_Q^{\Delta S >x}}{\langle \delta_{Q,Q+1}\rangle_{Q+1}^{\Delta S >x}}.$$ The price of this choice of $F,G$ is that we do not make use of the $\Delta S<x$ part of our data set. The value of $x$ can be tuned to minimize the statistical error. Let us note that Eq. (\[eq:db1\]) can be viewed as a detailed balance condition on a given $U$ configuration between $Q$ and $Q+1$ sector ($F$ and $G$ are just the “transition probabilities”). This can give us a hint, that the Metropolis-step is a good a solution for $F,G$: $F=\min(1,\exp(-\Delta S))$ and $G=\min(1,\exp(\Delta S))$. The ratio of sectors is simply: $$\label{eq:zq2} Z_{Q+1}/Z_Q=\frac{\langle \delta_{Q,Q+1} \min(1,\exp (-\Delta S)) \rangle_Q}{\langle \delta_{Q,Q+1} \min(1,\exp(\Delta S))\rangle_{Q+1}}.$$ The inconvenient part of the distribution ($\Delta S<0$) is cut off, however in contrast to Eq. \[eq:zq1\] all configurations are used to get the expectation values. #### Expectation value of a Dirac-delta type operator {#expectation-value-of-a-dirac-delta-type-operator .unnumbered} Let us discuss briefly that in the framework of HMC, how to measure an expectation value, which contains a Dirac-delta on the surface. The important observation is that one can use the pseudofermionic action in the HMC to get the fixed topology expectation values in Eq. \[eq:zq\], \[eq:zq1\], \[eq:zq2\]. Inside a topological sector the behavior of the pseudofermionic estimator is not an issue, we can use it instead of $S_{\rm exact}$ as usual. In practice it is not possible to measure an operator containing a Dirac-delta on the boundary surface on configurations generated by the pseudofermionic HMC, because none of them will be exactly located on it. If we would be able to exactly integrate the equations of motion, then all inner points of the trajectories could have been taken into the ensemble. Those ones also, which are located exactly on the surface. Here one would pick up a contribution from the Dirac-delta to the above expectation values, at the inner points the contribution would be zero. In the real case the trajectories differ by $O(\epsilon^2)$ from the exact ones[^10]. 
Here, using the above procedure (measuring the $F[U]$ and $G[U]$ operators on the boundary and summing them up along the trajectories) one makes $O(\epsilon^2)$ errors in the expectation values.

#### Summarizing the new technique {#summarizing-the-new-technique .unnumbered}

We have achieved our main goal: without making expensive topological sector changes we can obtain the ratio of sectors (see Eq. \[eq:zq\], \[eq:zq1\], \[eq:zq2\]). The key point is to run simulations constrained to a fixed topological charge, and to match the results on the common boundaries of the sectors. Since no sector changing is required, the inconvenient distribution of the pseudofermionic action jump on the boundary will not affect the measurement of the ratios of sectors. The exact action is needed only on the boundary: the formulas \[eq:zq\], \[eq:zq1\], \[eq:zq2\] require $\Delta S$.

Obviously an important issue for this new method is whether the topological sectors defined by the overlap charge are path-connected or not. In [@Luscher:1998du] it has been proven that Abelian lattice gauge fields satisfying the admissibility condition can be classified into connected topological sectors. No result is known for non-Abelian groups or non-admissible gauge fields. (Though there are some concerns about the structure of the space of non-Abelian lattice gauge fields [@Adams:2002ms].) If configurations with the same $Q$ were not continuously connectable within sector $Q$, then our assumption that we make measurements on the common boundary of sectors could be violated. It could happen that the wall sampled from sector $Q$ does not coincide with the wall sampled from $Q+1$. Moreover, the fixed sector simulations would also violate ergodicity in this case. Let us note here that the large autocorrelation time of the topological charge in the conventional pseudofermionic HMC effectively also causes a breakdown of ergodicity. In case of non-connected sectors one can cure these problems by releasing the system from a sector after a certain amount of time and confining it to another one.

### Using $\Delta S_{\rm exact}$ in the R-algorithm

In the following we describe another technique which uses $\Delta S_{\rm exact}$ and can circumvent the critical slowing down of the topological sector changes. If one does not insist on an exact algorithm, then an R-algorithm [@Gottlieb:1987mq] in which the $\Delta S_{\rm exact}$'s are taken into account can be a particularly good choice. Let us describe it briefly. Instead of evolving the trajectory in a pseudofermion potential (see Eq. \[eq:update\]), one can try to estimate the exact force by a random vector:
$$\frac{\partial}{\partial U}\,{\rm Tr}\ln H(m)^2 \sim R^{\dagger}\, H(m)^{-2}\, {\ensuremath{\frac{\partial H(m)^2}{\partial U}}}\, R.$$
Usually one estimator ($R$) per integrator step is used, so the approximation might be poor. If the stepsize goes to zero, then on a fixed time interval the number of estimators diverges, making the approximation exact. Since there is no recipe for making the R-algorithm exact at finite stepsize (unlike the accept/reject step of the HMC algorithm), a stepsize extrapolation is a necessary ingredient. The stepsize error scales with $O(\epsilon^2)$. When a trajectory hits the topological boundary surface, one just has to modify the trajectory according to the reflection/refraction rules, but now one can use the $\Delta S_{\rm exact}$ discontinuity instead of a badly behaving estimator (e.g. $\Delta S_{\rm pf}$).
The modified leapfrog step is not necessarily to be an exactly area conserving one (since stepsize errors are already present). But still it is required, that the errors caused either in the energy or in the area conservation are minimal (a good candidate is the leapfrog-in leapfrog-out, which conserves the energy upto $O(\epsilon^3)$ and the area upto $O(\epsilon^2)$). Numerical simulations 2. ------------------------ In the previous section we described two methods, to solve the topological sector changing problem of pseudofermionic HMC simulation. We were extensively using the first one (see subsection \[ssec:sew\]). Here we describe the details of these simulations, and finally give the topological susceptibility in physical units measured on $8^4$ and $8^3\times 16$ lattices. Simulations were done using unit length trajectories, separated by momentum and pseudofermion refreshments. The system was confined to a fixed topological sector in each run, we reflected the trajectories whenever they reached a sector boundary. The end points of the trajectories obviously follow the exact distribution in a given sector, usual quantities can be measured on them. We compared a few observables (plaquette, size of the potential wall) in a given topological sector, but in different runs. We have not found any sign indicating that the sectors were disconnected. When calculating the ratio of sectors using Eq. \[eq:zq\] or Eq. \[eq:zq1\] or Eq. \[eq:zq2\] we integrated along the trajectories, this quantity will be burdened by a step size error. We carried out simulations at one stepsize. ![ \[fi:test\] [*Left panel:*]{} a typical optimization procedure of the lower limit ($x$) on $\Delta S$ in the formula (\[eq:zq1\]). The statistical error of the ratio $Z_1/Z_0$ shows a minimum as the function of $x$, which is considered as the optimal value. [*Right panel:*]{} Bare mass dependence of topological susceptibility using three different methods on $6^4$ lattices. The points corresponding to the same mass were slightly shifted vertically for clarity. Result based on our new technique and Eq. \[eq:zq1\] is on the left, based on Eq. \[eq:zq2\] is in the middle, the standard pseudofermionic HMC is on the right. The simulation parameters are from [@Fodor:2004wx].](test.eps){width="15.0cm"} In case of large enough statistics the value of $Z_{Q+1}/Z_Q$ should be the same, independently which of the three formula \[eq:zq\], \[eq:zq1\] and \[eq:zq2\] was used to calculate it. We omit Eq. \[eq:zq\] in the following, since it is hard to give a reliable error estimate on the expectation value of $\exp (-\Delta S)$, if $\Delta S$ can be arbitrary negative number. Eq. \[eq:zq1\] still measures $\exp (-\Delta S)$, but with a lower limit ($x$) on $\Delta S$. Smaller limit yields a smaller and more reliable error, however the statistics is decreased at the same time. One can tune the value of $x$, so that the statistical error takes its minimum. A result of a typical optimum search can be seen on the left panel of Fig. \[fi:test\]. The optimal value can be compared to the one obtained from Eq. \[eq:zq2\]. On the right panel of Fig. \[fi:test\] the two new topological susceptibilities and the one calculated by using traditional pseudofermionic HMC [@Fodor:2004wx] are shown. The agreement is perfect. Comparing these results with those of the HMC, we conclude that the stepsize effect is negligible (at least at our present statistics). 
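The tuning of the lower limit $x$ in Eq. (\[eq:zq1\]) can be automated along the lines below. This is only a schematic sketch: the arrays stand for the exact action jumps recorded at boundary hits in the fixed-$Q$ and fixed-$(Q+1)$ runs (here filled with made-up Gaussian numbers), the per-trajectory normalization and the delta-function weight are absorbed into plain averages, and the statistical error is a naive jackknife over blocks of hits.

```python
import numpy as np

rng = np.random.default_rng(8)
dS_Q  = rng.normal(3.0, 2.5, size=1500)    # stand-in: Delta S at boundary hits, sector Q
dS_Q1 = rng.normal(3.0, 2.5, size=1500)    # stand-in: the same quantity, sector Q+1 run

def zratio_cut(dQ, dQ1, x):
    """Eq. (eq:zq1): F = theta(dS-x) exp(-dS), G = theta(dS-x)."""
    num = np.mean(np.where(dQ > x, np.exp(-dQ), 0.0))
    den = np.mean(dQ1 > x)
    return num / den

def jackknife_error(dQ, dQ1, x, nblocks=50):
    vals = []
    for k in range(nblocks):
        keep_q  = np.arange(len(dQ))  % nblocks != k
        keep_q1 = np.arange(len(dQ1)) % nblocks != k
        vals.append(zratio_cut(dQ[keep_q], dQ1[keep_q1], x))
    vals = np.array(vals)
    return np.sqrt((nblocks - 1) * np.var(vals))

# scan the lower limit; in the analysis one picks the x with the smallest error
for x in np.linspace(-2.0, 4.0, 7):
    r, e = zratio_cut(dS_Q, dS_Q1, x), jackknife_error(dS_Q, dS_Q1, x)
    print(f"x = {x:+.1f}:  Z_(Q+1)/Z_Q estimate {r:.4f} +/- {e:.4f}")
```

With real data the scan reproduces the behaviour shown in the left panel of Fig. \[fi:test\]: pushing $x$ down admits more statistics but lets the exponential amplify the negative-$\Delta S$ tail, so the error has a minimum at some intermediate cut.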
Let us compare the amount of CPU time of the two different methods for roughly the same statistical errors (see Fig. \[fi:test\]): the conventional HMC consisted 500-1000 trajectories (500 for the smallest, 1000 for the largest mass), whereas we generated less than 200 at each mass for the new method. Moreover it is important to emphasize in this context that the new method can be efficiently parallelized. m $\langle Q^2 \rangle$ $\langle Q^2\rangle r_0^4/V$ $r_0$ $m_\pi$ $L m_\pi$ \#traj -------- ----------------------- ------------------------------ ------------ ------------ ----------- -------- $0.03$ $0.13(2)$ $0.0047(9)$ $3.52(13)$ $0.29(11)$ $2.4$ $39$ $0.1$ $0.41(6)$ $0.010(1)$ $3.17(5)$ $0.53(4)$ $4.3$ $51$ $0.2$ $0.97(19)$ $0.017(3)$ $2.89(2)$ $0.74(6)$ $5.9$ $63$ $0.3$ $1.59(18)$ $0.027(3)$ $2.88(6)$ $0.99(8)$ $7.9$ $54$ : \[tb:spect\] Topological susceptibility measured on $8^4$ lattices in the second and third column. The further columns contain the Sommer-scale, pion mass, pion mass times box size and number of trajectories on $8^3\times 16$ lattices. To measure the topological susceptibility on $8^4$ lattices we generated configurations with tree-level Symanzik improved gauge action ($\beta=4.15$ gauge coupling) and 2 step stout smeared overlap kernel ($\rho=0.15$ smearing parameter, the kernel was the standard Wilson matrix with $m_0=1.3$). We performed runs in sectors $Q=0\dots 3$ (based on the measured $Z_3/Z_2$ we can conclude, that the contribution of $Q \ge 4$ sectors are small compared to statistical uncertainties). For the negatively charged sectors we used the $Q\to -Q$ symmetry of the partition function. The bare masses were $m=0.03, 0.1, 0.2$ and $0.3$, at each mass approximately 1000 trajectories were collected. The average number of the topological sector boundary hits was around $1.5$ per trajectory. We calculated the ratio of sectors using Eq. \[eq:zq1\] and Eq. \[eq:zq2\]. The result for the topological susceptibility can be seen on Fig. \[fi:ts\] (see also Table \[tb:spect\]). It is nicely suppressed for the smallest mass. To convert it into physical units, we made simulations on $8^3\times 16$ lattices. We measured the static potential by fitting the large time behavior of on and off-axis Wilson-loops. Then fitting it at intermediate distances we extracted the value of Sommer-parameter. We also measured the pion mass (see Table \[tb:spect\]). Since our statistics was quite small on these asymmetric lattices, the errors are large. Note, that in order to get the mass-dimension 4 topological susceptibility in physical units, one has to make very precise scale measurements. When interpreting the results, one should keep in mind, that the volume is small, and the lattice spacing is large. Note however, that smeared kernel overlap actions show nice scaling behavior and good locality properties already at moderate lattice spacings [@Kovacs:2002nz; @Durr:2005an]. Discussion ---------- In this chapter we have given a summary of the work to implement a dynamical overlap fermion algorithm. The lattice index theorem of the overlap Dirac-operator is a very nice feature, however it has its bottleneck. The operator is nonanalytic at the topological sector boundaries, which makes the conventional dynamical fermion algorithm (HMC) break down. We have proposed, implemented and tested a modification which is able to handle this nonanaliticity. Examining the properties of the modified algorithm carefully, we have made a few improvements on it. 
One of them was an improvement of the acceptance ratio; the other was connected to the slow topological sector changing of the algorithm. Even with these improvements, simulations with dynamical overlap fermions are still in an exploratory phase. Other fermion formulations are considerably faster than the overlap. There are two major problems at the moment.

1. The first bottleneck is that the construction of the overlap operator is a very expensive procedure: it scales with $V^2$ (as one can see in Ref. [@Chiu:2002ak]; the extra $V$ factor is to be expected, since the number of zeromodes increases with the volume). Therefore it is very hard to imagine a dynamical fermion algorithm with better scaling behavior. As mentioned in the introduction, the algorithms for conventional fermion formulations usually scale with $V^{5/4}$.

2. The second bottleneck is handling the nonanalyticity of the overlap operator. The simplest modification of the conventional HMC (as described in this chapter) can easily bring extra $V$ factors into the scaling. There exist modifications improving the situation, as we have seen, but they are rather cumbersome. The problem is that the more sophisticated an improvement is, the more ways there are for it to go wrong. The nice feature of the HMC, its robustness, is lost.

Without a solution of the first issue at hand (which would mean getting rid of the nested inversion), one can simply accept that the overlap dynamical fermion algorithm will scale at least with $V^2$. At this point new algorithms might come into play, where presumably different problems have to be solved. If the only gain is that one can forget about the discontinuities in the overlap operator (the second issue), it might be worth changing.

Appendix: area conservation proof
---------------------------------

The leapfrog is trivially an area conserving mapping in phase space, since the increase of the momentum depends only on the actual coordinates, and the change in the coordinates depends only on the momentum. In the case of the modified leapfrog the difficulty arises because, e.g., in the first step the update of the link variables depends on the actual links through $\epsilon_c$. Similarly, the momentum update also depends on the momentum through the normal vector. In order to keep the discussion brief, let us first start with a Hamiltonian system in an $N$-dimensional Euclidean coordinate space. This shows the basic idea of the proof in a transparent way. We solve the equations of motion with a finite stepsize integration of the following Hamiltonian: $${\cal H} = \frac{1}{2}p_a p_a +S\left({\rm sgn}M(q) \right),$$ where $q_a, p_a$ $(a=1\dots N)$ are the coordinates and the momenta. $M$ depends only on the coordinates and the action $S$ is a smooth function (note that $q_a$, $M$ and $S$ are analogous to the links, the fermion matrix and the fermionic action, respectively). The standard leap-frog algorithm can be effectively applied to this system as long as the trajectories do not cross the zero eigenvalue surface of $M$ ($\lambda(q)=0$, where $\lambda(q)$ is the eigenvalue with smallest magnitude[^11]). We have to modify the leap-frog algorithm when the coordinates reach the zero eigenvalue surface. Instead of the original leap-frog update of the coordinates, where the constant $p_a$ momenta are used for the time $\epsilon/2$, we first update the coordinates with $p_a$ until the surface, then we change the momentum to $p'_a$, which is used to evolve $q_a$ for the remaining time.
In case of [***refraction***]{} one has the following phase space transformation: $$\begin{aligned} \label{eq:transform} q'=q+\epsilon_c p+(\epsilon/2-\epsilon_c)p' \\ p'=p-n(np)+n(np') + h, \nonumber\end{aligned}$$ where $n$ is the normalvector of the surface, $\Delta S$ is the potential jump along the surface, and $(np')^2=(np)^2-2\Delta S$. $\epsilon_c$ is the time required to reach the surface with the incoming momenta $p$. $h$ is a vector orthogonal to $n$ and depending on $q,p$ only through $\epsilon_c$ or quantities which measured on the eigenvalue surface. The $h$ might be needed to improve the energy conservation of the leapfrog (see Sec. \[sec:rr\]), eg. one can use \[eq:lfh\] h=-\_c QF\_[-]{} - (/2-\_c) QF\_[+]{}, where the $F_{\pm}$ forces are measured on the eigenvalue surface with setting ${\rm sgn}(\lambda(\epsilon_c))={\rm sgn}(\lambda(\epsilon_c\pm0))$. $Q_{ab}=\delta_{ab}-n_an_b$ is simply the orthogonal projector to the surface. First let us concentrate on the $q,p$ dependence of $\epsilon_c$. $\epsilon_c(q,p)$ is determined from the condition $ \lambda(q+\epsilon_c(q,p)p)=0 $. One obtains the partial derivatives of $\epsilon_c$ with respect to $q,p$ by expanding this zero eigenvalue condition to first order in $\delta q$ or $\delta p$. First take the $\delta q$ variation: (q\_a+\_cp\_a+q\_a+[$\frac{\partial \epsilon_c}{\partial q_b}$]{}q\_b p\_a)=(q+\_cp)+.[$\frac{\partial \lambda}{\partial q_a}$]{}|\_[q+\_cp]{} (\_[ab]{}+[$\frac{\partial \epsilon_c}{\partial q_b}$]{}p\_a)q\_b=0 Since the normalvector is just $$n_a=\left.{\ensuremath{\frac{\partial \lambda}{\partial q_a}}}\right|_{q+\epsilon_cp}/||{\ensuremath{\frac{\partial \lambda}{\partial q}}}||,$$ we have for the partial derivative of $\epsilon_c$ with respect to $q$: $${\ensuremath{\frac{\partial \epsilon_c}{\partial q_a}}}=-\frac{n_a}{(np)}.$$ Similarly one gets for the partial derivative with respect to $p$: $${\ensuremath{\frac{\partial \epsilon_c}{\partial p_a}}}=-\epsilon_c\frac{n_a}{(np)}.$$ There is an important identity between the $q$ and $p$ derivatives of a function, which depends only on $q+\epsilon_c(q,p)p$. (Two examples are $n$ and $\Delta S$.) Let us evaluate $p$ and $q$ derivatives of an arbitrary $g(q+\epsilon_c(q,p)p)$ function: [$\frac{\partial g}{\partial q_a}$]{}=.[$\frac{\partial g}{\partial q_b}$]{}|\_[q+\_cp]{}(\_[ab]{}+[$\frac{\partial \epsilon_c}{\partial q_a}$]{}p\_b)=.[$\frac{\partial g}{\partial q_b}$]{}|\_[q+\_cp]{} (\_[ab]{}-),\ [$\frac{\partial g}{\partial p_a}$]{}=.[$\frac{\partial g}{\partial q_b}$]{}|\_[q+e\_1p]{}(\_c\_[ab]{}+[$\frac{\partial \epsilon_c}{\partial p_a}$]{}p\_b)=.[$\frac{\partial g}{\partial q_b}$]{}|\_[q+\_cp]{} (\_[ab]{}-)\_c, which gives $$\begin{aligned} {\ensuremath{\frac{\partial g}{\partial p_a}}}=\epsilon_c{\ensuremath{\frac{\partial g}{\partial q_a}}}. \label{eq:pq}\end{aligned}$$ Now we can consider the four different partial derivatives required for the Jacobian: $$J= \begin{pmatrix} {\ensuremath{\frac{\partial q'}{\partial q}}} & {\ensuremath{\frac{\partial q'}{\partial p}}} \\ {\ensuremath{\frac{\partial p'}{\partial q}}} & {\ensuremath{\frac{\partial p'}{\partial p}}} \\ \end{pmatrix},$$ whose determinant gives the change in the Euclidean measure $d^Nqd^Np$ due to the given phase space transformation. Introducing $$Z_{ab}\equiv {\ensuremath{\frac{\partial p'_a}{\partial q_b}}}.$$ one incorporates all terms which arise from the $q$ dependence of the normalvector and $\Delta S$. 
In case of a straight wall with constant potential jump and $h=0$ this matrix vanishes. (Clearly, for QCD with overlap fermions this object is very hard to calculate; they usually require the diagonalization of the whole $H_W$ matrix ). Using Eq. \[eq:pq\] one can recognize the $Z$ matrix in the other three components of $J$. Denoting X\_[ab]{}=Q\_[ab]{}+(1-)\^[1/2]{} n\_an\_b +\ Y\_[ab]{}=Q\_[ab]{}+(1-)\^[-1/2]{} n\_an\_b The useful property of $X$ and $Y$ that the determinant of their product is (XY)== 1+ (1-)\^[-1/2]{} which means that it is trivial for the $(hn)=0$ case. In terms of the $X$, $Y$ and $Z$ matrices the Jacobian is very simple. We can split it into 2 parts: the first term contains all $X$ and $Y$ factors and has determinant one and all $Z$ factors are in the second term: \[eq:jac\] J= X & \_cX+(/2-\_c)Y\ 0 & Y + (/2-\_c)Z & (/2-\_c)\_cZ\ Z & \_c Z . Let us introduce $J'$ as the product of $J$ and the inverse of its first term. Simple algebra gives: J’ = 1 & 0\ 0 & 1 +E\_cY\^[-1]{}Z, where $E$ is defined as $$E=\begin{pmatrix} -1 & -\epsilon_c \\ 1/\epsilon_c & 1 \end{pmatrix}.$$ $E$ has an eigenvector $v_1 \propto (\epsilon_c,-1)$ with zero eigenvalue. The $v_2\propto (1,\epsilon_c)$ vector is orthogonal to $v_1$ and has the property to give zero in the product $v_2^TEv_2=0$. In the orthonormal basis given by $v_1$ and $v_2$ $J'$ has the form: J’ = 1 & 0\ 0 & 1 + 0 & v\_1\^TEv\_2\ 0 & 0 \_cY\^[-1]{}Z, thus $\det J'=1$. Since $J$ and $J'$ differs only in a matrix with determinant one, we arrive $$\det J=1,$$ thus the transformation Eq. \[eq:transform\] preserves the integration measure. The transformation for [***reflection***]{} is given by $$\begin{aligned} \label{eq:transform1} q'=q+\epsilon_c p+(\epsilon/2-\epsilon_c)p' \\ p'=p-2n(np) \nonumber + h.\end{aligned}$$ $h$ can be chosen as in Eq. \[eq:lfh\], but now we have $F_-=F_+$, since at reflection the ${\rm sgn}$ function does not change sign. One can obtain the Jacobian of reflection by simply making the (1-)\^[1/2]{} -1 substitution in the Jacobian of the refraction (Eq. \[eq:jac\]). Then it is easy to see that the $\det J=1$ holds for the reflection case, too. Finally let us consider a [***modified reflection***]{}, which makes only $O(\epsilon^2)$ error in the energy conservation (see Sec. \[sec:rr\]). The phase space transformation can be written as: q’=q+\_c p + \_c p’\ p’=p-2n(np)+h, The $h$ which is needed to ensure energy conservation upto $O(\epsilon^2)$ is the following h=-2\_c QF\_-. This comes from Eq. \[eq:lfh\] and using that the inward and outward updates now take the same time ($\epsilon_c$). $h$ automatically satisfies $(hn)=0$. The Jacobian is very similar to the Jacobian of the reflection procedure above (ie. the one obtained from Eq. \[eq:jac\] with the $(~~~)^{1/2} \to -1$ substitution): J= X\_1 & \_cX\_1+\_cY\ 0 & Y + \_c Z& \_c\^2 Z\ Z & \_c Z . Instead of $\epsilon/2-\epsilon_c$ we have $\epsilon_c$ everywhere and the $X$ matrix is substituted by $X_1$: \_[ab]{}=\_[ab]{}-. $X_1$ has a trivial determinant $\det X_1=1$, since $(nh)=(n,Qp)=0$. From here the proof goes in the same way as above. One concludes to $\det J=-1$, where the minus sign[^12] comes from $\det Y =-1$. The proofs for the $SU(3)$ cases were carried out in a completely analogous way. The only difference was the appearance of factors associated with the group structure of $SU(3)$ which all canceled in the final result. Thus, we conclude that the suggested modifications of the leap-frog conserve the integration measure. 
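The area-conservation statement for the refraction step can also be checked numerically in the Euclidean toy model of this appendix. The sketch below uses a curved wall (the unit circle) with a constant action jump and $h=0$, chooses a starting point for which the trajectory refracts, builds the Jacobian of the modified coordinate half-step by finite differences, and finds $\det J$ equal to one up to discretization error; all numerical values are arbitrary.

```python
import numpy as np

# Finite-difference check of det J = 1 for the refraction update in the flat toy model:
# free motion plus a constant potential jump dS across the surface lambda(q) = |q|^2 - 1.

dS, eps_half = 0.3, 0.4

def normal(q):                          # normalized gradient of lambda on the wall
    return q / np.linalg.norm(q)

def crossing_time(q, p):                # smallest positive root of |q + t p|^2 = 1
    a, b, c = p @ p, 2.0 * q @ p, q @ q - 1.0
    roots = np.roots([a, b, c])
    return min(t.real for t in roots if t.real > 0 and abs(t.imag) < 1e-12)

def refract_step(z):
    q, p = z[:2], z[2:]
    tc = crossing_time(q, p)
    qw = q + tc * p                     # point where the wall is hit
    n = normal(qw)
    npr = n @ p
    p_new = p - n * npr + n * npr * np.sqrt(1.0 - 2.0 * dS / npr**2)   # refraction
    q_new = qw + (eps_half - tc) * p_new
    return np.concatenate([q_new, p_new])

z0 = np.array([-1.4, 0.1, 2.0, 0.3])    # starts outside the circle, heading for the wall
h = 1e-6
J = np.empty((4, 4))
for i in range(4):
    dz = np.zeros(4); dz[i] = h
    J[:, i] = (refract_step(z0 + dz) - refract_step(z0 - dz)) / (2 * h)
print(np.linalg.det(J))                 # equal to 1 up to finite-difference errors
```

The same scaffold can be reused for the reflection and for the modified reflection by swapping in the corresponding momentum update, which is a convenient regression test when implementing the $SU(3)$ version.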
Appendix: Classical motion on an $SU(3)$ manifold ------------------------------------------------- In this appendix we briefly discuss the Hamiltonian formulation of a system, which coordinates are elements of a $G=SU(3)$ group. In particular we will provide formulas to calculate the Jacobian of some map in the phase space. Some parts of the appendix closely follow Ref. [@Kennedy:1989ae]. ### Differential geometry on a Lie-group If the coordinates of a system are elements of a Lie-group manifold ($g_a \in G$), then $T_gG$ is the space of tangent vectors at point $g$, this is the vector space of velocities (with local coordinates $\dot{g}_a$). Let us consider a few relevant mappings which arise due to the Lie-group structure of $G$. There is a natural mapping called the right translation R\_g:GG h hg, the corresponding derivative mapping $R_{g*}(h):T_hG\to T_{hg}G$ is a linear transformation which has the following matrix in local coordinates: (R\_[g\*]{}(h))\_[ab]{}=[$\frac{\partial (hg)_a}{\partial h_b}$]{}. The pullback of $R_g$ is in certain sense going in backward direction as in the case of the derivative mapping, since R\_g\^\*(h):T\^\*\_[hg]{}GT\^\*\_[h]{}G (hg) (h) : (h)v =(hg)R\_[g\*]{}(h)v , for all $v$ vectors in the tangent space $T_hG$. Here $\alpha$ and $\beta$ are 1-forms, linear functionals acting on vectors. The group element dependence is indicated in the $(\dots)$ parentheses, whereas the vector, which they act on, is in the $\langle \dots \rangle$ bracket. A [vector field]{} is right invariant, if $v(hg)=R_{g*}(h) v(h)$ is fulfilled. There is a one to one correspondence between right invariant vector fields and the elements of the Lie-algebra of the group ($v(g) \leftrightarrow v(1) \in LA(G)$), thus they are elements of a linear space. The Lie-bracket of two vector fields ($v$ and $w$) measures the noncommutativity of two flows (one parameter $G\to G$ maps, whose derivatives are the vector fields themselves). It is again a vector field: $[v,w]=u$, or in local coordinates it is $u_a=w_b\partial_b v_a - v_b \partial_b w_a$. For right invariant vector fields the bracket is also right invariant, thus if ${r_A}$ is a basis in the linear space of right invariant vector fields, then =c\_[AB]{}\^C r\_C. A [1-form field]{} is right invariant, if $R_g^*\alpha=\alpha$, that is $\alpha(hg)\langle R_{g*}(h) v \rangle=\alpha(h)\langle v \rangle$ for all $v$ vectors in tangent space $T_hG$. There is a one to one correspondence between right invariant 1-form fields and 1-forms over the tangent space at the unit element ($\alpha(g) \leftrightarrow \alpha(1)$, so that $\alpha(1)\langle v \rangle=\alpha(g)\langle R_{g*}v \rangle$). In order to prove an important identity for right invariant 1-forms, we need a little preparation. If $\alpha = \alpha_a(g) dg_a$ is a 1-form field, then its derivative is $d\alpha = \partial_b \alpha_a dg_b \wedge dg_a$. Its pullback corresponding to a mapping $R$ is $R^*\alpha = \alpha_b(k) \partial_a k_b dg_a$ with the $k=R(g)$ notation. Then d(R\^\*)=(\_d\_b(k) \_c k\_d \_a k\_b + \_b(k) \_c\_a k\_b ) dg\_c dg\_a =\ = \_d \_b(k) R\^\* ( dg\_d dg\_b ) = R\^\* d, where we have used the antisymmetric property of the wedge product. Using the above equation it is easy to see that the derivative of a right invariant 1-form is also right invariant: R\_g\^\* d= d R\_g\^\* = d . This means that if we take $\varrho_A$ as a basis in the space of right invariant 1-forms [^13], then $d\varrho_A$ should be expressible in terms of $\varrho_B \wedge \varrho_C$. 
So let us calculate the 2-form $d\varrho_A$ on two basis vectors in local coordinates: d\^Ar\^B,r\^C =\_b \_[a]{}\^A (dg\_b dg\_a)r\^B,r\^C = (\_b \_[a]{}\^A -\_a\_[b]{}\^A)r\_[b]{}\^Br\_[a]{}\^C=\ r\_[b]{}\^B\_b(\_[a]{}\^Ar\_[a]{}\^C)-r\_[a]{}\^C\_a(\_[b]{}\^Ar\_[b]{}\^B)- r\_[b]{}\^B\_[a]{}\^A\_br\_[a]{}\^C + r\_[a]{}\^C\_[b]{}\^A\_ar\_[b]{}\^B. In parentheses we have $\delta_{AC}$ and $\delta_{AB}$ due to the normalization, therefore only the last two term remains. These two gives $-\varrho^A\langle [r^B,r^C] \rangle$, which yields the following result (Maurer-Cartan structure equation): d\_A=-c\_[BC]{}\^A\_B \_C. ### Hamiltonian dynamics The Lagrangian of the system is a real valued function on the tangent bundle ($L: TG \to \mathcal{R}$). The derivative of the Lagrangian in the direction of the velocities is a differential form, which maps from $T_gT_gG \sim T_gG$ to the real numbers (ie. it is an element of the cotangent bundle $T^*G$). Its local coordinates are ${\ensuremath{\frac{\partial L}{\partial \dot{g}_a}}}$, which are identified as the canonical momenta ($p_a$). Since the momenta are coordinates of linear forms on $TG$, the Hamiltonian phase space is the manifold $T^*G$ with local coordinates $\{g_a,p_a\}$. The $T^*G$ manifold is symplectic, ie. we have a 2-form $\omega$ on $T^*G$ which has vanishing derivative: d( \_a \_a p\_a ) d=0. According to the Maurer-Cartan equation, the = \_a \_a dp\_a + p\_a c\_[bc]{}\^a \_b \_c relation holds. From the symplectic structure follows, that there is a one to one correspondence between vector fields ($v$) and 1-form fields ($\alpha$): v w =v,w . for all $w$ vectors. The equations of motion arise through a Hamiltonian function ($H$) [*and*]{} the symplectic structure. The change in the Hamiltonian is described by the derivative 1-form $dH$. Along the vector field $h$, which corresponds to the 1-form $dH$ through the symplectic structure, the Hamiltonian is conserved: dHh =h,h =0. In order to determine $h$ we use the right invariant vector and 1-form basis on the group. In this basis the Hamiltonian vector field $h$ and an arbitrary vector field $v$ has the following form: \[eq:basis\] h= h\_a r\_a + |[h]{}\_a [$\frac{\partial }{\partial p_a}$]{}, v= v\_a r\_a + |[v]{}\_a [$\frac{\partial }{\partial p_a}$]{}. The derivative 1-form of the Hamiltonian $dH$ can be written as dH= dHr\_a \_a + [$\frac{\partial H}{\partial p_a}$]{} dp\_a, where $dH\langle r_a \rangle$ is just the $r_a$ directional derivative of $H$. Now it is easy to see that h,v =h\_a|[v]{}\_a-|[h]{}\_av\_a+c\^[a]{}\_[bc]{}p\_ah\_bv\_c dHv=v\_adHr\_a +|[v]{}\_a[$\frac{\partial H}{\partial p_a}$]{} holds. Equating coefficients of $v_a$ and $\bar{v}_a$ we get the result for $h$: h=[$\frac{\partial H}{\partial p_a}$]{}r\_a+(c\^[c]{}\_[ba]{}p\_c[$\frac{\partial H}{\partial p_b}$]{}-dHr\_a )[$\frac{\partial }{\partial p_a}$]{}. The integral curve corresponding to the vector field $h$ describes the motion of the system in the phase space as the time ($t$) goes on. The equations of motion are the differential equations for $\{g_a,p_a\}$ coordinates which is solved by the integral curve: \[eq:eom\] \_a(t)= dp\_ah=-dHr\_a +c\^[c]{}\_[ba]{}p\_c[$\frac{\partial H}{\partial p_b}$]{} \_a(t)= dg\_ah=[$\frac{\partial H}{\partial p_b}$]{}dg\_ar\_b . 
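To make the abstract formulas above a bit more tangible, here is a small numerical toy: one leapfrog step for a single $SU(3)$-valued coordinate, with the coordinate update written as $U \to \exp(\epsilon \Pi)\,U$ and the momentum kick built from finite-difference directional derivatives of a toy single-"link" action $S(U) = -{\rm Re\,tr}(U V^{\dagger})$ along a set of $su(3)$ generators. The fixed matrix $V$, the basis normalization and all numbers are illustrative choices; the printed check is only the exact reversibility of the step.

```python
import numpy as np
from scipy.linalg import expm

def su3_basis():                      # an orthogonal antihermitian traceless basis
    T = []
    for i in range(3):
        for j in range(i + 1, 3):
            M = np.zeros((3, 3), complex); M[i, j], M[j, i] = 1, -1; T.append(M)
            M = np.zeros((3, 3), complex); M[i, j] = M[j, i] = 1j;   T.append(M)
    T.append(np.diag([1j, -1j, 0]))
    T.append(np.diag([1j, 1j, -2j]) / np.sqrt(3))
    return T

T_BASIS = su3_basis()
rng = np.random.default_rng(0)
V = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))[0]  # fixed "staple"

def action(U):
    return -np.trace(U @ V.conj().T).real

def force(U, h=1e-6):
    # directional derivatives d/ds S(exp(s T_a) U)|_{s=0}, packed into a Lie-algebra matrix
    F = np.zeros((3, 3), complex)
    for T in T_BASIS:
        dS = (action(expm(h * T) @ U) - action(expm(-h * T) @ U)) / (2 * h)
        F += dS * T / np.trace(T @ T.conj().T).real
    return F

def leapfrog(U, P, eps):
    P = P - 0.5 * eps * force(U)      # kick
    U = expm(eps * P) @ U             # drift on the group manifold
    P = P - 0.5 * eps * force(U)      # kick
    return U, P

U0 = np.eye(3, dtype=complex)
P0 = sum(0.3 * (k + 1) * T for k, T in enumerate(T_BASIS))
U1, P1 = leapfrog(U0, P0, 0.1)
U2, P2 = leapfrog(U1, -P1, 0.1)       # flip the momentum and step again
print(np.allclose(U2, U0), np.allclose(-P2, P0))   # the step retraces itself
```

Both the link and the momentum stay in the group and in the algebra by construction, which is the practical payoff of formulating the dynamics directly on the group manifold as done in this appendix.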
### Volume element and phase space maps The integral curve[^14] corresponding to the Hamiltonian vector field preserves the symplectic structure $g(t)^*\omega=\omega$, which means $\omega(g(t))\langle g(t)_*v,g(t)_*w \rangle=\omega(g(0))\langle v,w \rangle$. Moreover higher “wedge” powers are also preserved, notably the largest one =\^d= …= \_1 \_2 …\_d dp\_1 dp\_2 …dp\_d, with $d$ being the dimension of the group. $\Omega$ is the volume element of the phase space, it is the wedge product of the Haar-measure of the group and a Euclidean volume element ($d^dp$). The $g(t)^*\Omega=\Omega$ property is usually called area conservation. Let us consider a phase space map $f:T^*G \to T^*G$ with $\{g,p\}$ coordinates mapped to $\{G,P\}$. Since $\Omega$ is the only $2d$-form on $T^*G$, $f^*\Omega$ is proportional to $\Omega$. The proportionality constant describes the change in an infinitesimal phase space volume under the map $f$. By definition \[eq:fomega\] (f\^\*)(gp)v\_[(1)]{},…,v\_[(2d)]{} = (GP)f\_\*v\_[(1)]{},…,f\_\*v\_[(2d)]{} , with $f_*$ being the derivative mapping of $f$, $v_{(a)}$’s are arbitrary vectors in $T_{gp}(T^*G)$ tangent space. In the usual basis (see Eq. \[eq:basis\]) an $f_*v$ vector can be written as f\_\*v=v\_a f\_\*r\_a + |[v]{}\_a f\_\*[$\frac{\partial }{\partial p_a}$]{}. Eq. \[eq:fomega\] is actually a $2d$ dimensional determinant, in which we have to deal with the following types of objects: \_af\_\*v= v\_b \_a f\_\*r\_b+ |[v]{}\_b \_af\_\*[$\frac{\partial }{\partial p_b}$]{} dP\_af\_\*v= v\_b dP\_a f\_\*r\_b+ |[v]{}\_b dP\_af\_\*[$\frac{\partial }{\partial p_b}$]{}. Based on these relations the determinant of Eq. \[eq:fomega\] is (GP)f\_\*v\_[(1)]{},…,f\_\*v\_[(2d)]{}=(Jv\_[(1)]{}, Jv\_[(2)]{}, …, Jv\_[(2d)]{} )= (J) (gp)v\_[(1)]{},…,v\_[(2d)]{} , where $J$ hypermatrix was introduced as: \[eq:jacobian\] J= \_af\_\*r\_b& \_af\_\*[$\frac{\partial }{\partial p_b}$]{}\ dP\_af\_\*r\_b& dP\_af\_\*[$\frac{\partial }{\partial p_b}$]{}\ . $\det J$ is the proportionality constant that we were looking for. ### Formulas in matrix representation In practice the dynamics is treated in terms of matrices instead of independent real parameters. The group variables are represented by unitary matrices. A possible parametrization is $U(g)=\exp(g_aT_a)$ with $T_a$ traceless, antihermitian matrix basis. The momentum becomes a traceless, antihermitian matrix $\Pi(p)=p_aT_a$. Let us consider the $r^A$ directional derivative of $U(g)$: \[eq:dur\] dUr\^A =[$\frac{\partial U}{\partial g_b}$]{} r\^A\_b(g) = [$\frac{\partial U}{\partial g_b}$]{}dg\_bR\_[g\*]{}(1)r\^A(1) =\ =[$\frac{\partial U}{\partial g_b}$]{}. [$\frac{\partial (hg)_b}{\partial h_c}$]{}|\_[h=0]{}r\^A\_c(1)=. [$\frac{\partial U(hg)}{\partial h_c}$]{}|\_[h=0]{} r\^A\_c(1)=T\_cU r\^A\_c(1)=T\_AU. We have used the right invariance, the form of $R_{g*}$ in local coordinates, the explicit form of $U(g)$ and finally we have fixed the local coordinates of the $r^A$ basis at the identity ($r^A_c(1)=\delta_{Ac}$). We will also need an equation similar to the above \[eq:durho\] [$\frac{\partial g^b}{\partial U^T}$]{}\^A\_b(g)=.[$\frac{\partial (hg)^b}{\partial U^T(hg)}$]{} [$\frac{\partial h^A}{\partial (hg)_b}$]{}|\_[h=0]{}= .[$\frac{\partial h^A}{\partial U^T(hg)}$]{}|\_[h=0]{}=\ =.[$\frac{\partial U(h)}{\partial U^T(hg)}$]{}[$\frac{\partial h^A}{\partial U(h)}$]{}|\_[h=0]{}=-U\^T\^A. 
We have used the local coordinate version of the right invariant 1-forms and the orthogonality property of the $T_A$ matrix basis (${\rm tr}(T_AT_B)=-\delta_{AB}$). For simplicity we will assume the following Hamiltonian when deriving the equations of motion: $H=\frac{1}{2}\sum_a p_a^2 + S(g)$. Using this Hamiltonian and Eq. \[eq:dur\] the equations of motion of Eq. \[eq:eom\] can be transformed into the simple, well-known form:
$$\begin{aligned}
\dot{U}&=\frac{\partial U}{\partial g_a}\,\dot{g}_a=\frac{\partial H}{\partial p_a}\, dU\langle r_a \rangle =\Pi U,\\
\dot{\Pi}&=-T^a\,{\rm tr}\Big(\frac{\partial S}{\partial U^T}\, dU\langle r_a \rangle \Big) =-T^a\, {\rm tr} \Big(T^a U \frac{\partial S}{\partial U^T}\Big) = -\mathcal{A}\Big(U\frac{\partial S}{\partial U^T}\Big),\end{aligned}$$
with $\mathcal{A}$ being the projector onto traceless, antihermitian matrices. Finally let us calculate, in the matrix representation, the Jacobian of the phase space map $f$, which maps the $\{U(g),\Pi(p)\}$ variables to $\{\bar{U}(G),\bar{\Pi}(P)\}$. Let us take the first element of the $J$ hypermatrix in Eq. \[eq:jacobian\] and use Eq. \[eq:dur\] and Eq. \[eq:durho\] to eliminate the components of the right invariant fields:
$$\begin{aligned}
\varrho^A\langle f_*r^B \rangle&=\varrho^A_c(G)\, \frac{\partial G_c}{\partial g_d}\, r^B_d(g) = \varrho^A_c(G)\, \frac{\partial G_c}{\partial \bar{U}_{\alpha\beta}}\, \frac{\partial \bar{U}_{\alpha\beta}}{\partial U_{\gamma\delta}}\, r^B_d(g)\,\frac{\partial U_{\gamma\delta}}{\partial g_d}=\\
&=-(\bar{U}^T T_A)_{\beta\alpha}\, \frac{\partial \bar{U}_{\alpha\beta}}{\partial U_{\gamma\delta}}\,(T_BU)_{\gamma\delta}.\end{aligned}$$
A similar calculation yields the other matrix elements of $J$:
$$\begin{aligned}
\varrho^A\Big\langle f_*\frac{\partial }{\partial p^B} \Big\rangle&=-(\bar{U}^T T_A)_{\beta\alpha}\, \frac{\partial \bar{U}_{\alpha\beta}}{\partial \Pi_{\gamma\delta}}\,(T_B)_{\gamma\delta},\\
dP^A\langle f_*r^B \rangle&=-(T_A)_{\beta\alpha}\, \frac{\partial \bar{\Pi}_{\alpha\beta}}{\partial U_{\gamma\delta}}\,(T_BU)_{\gamma\delta},\\
dP^A\Big\langle f_*\frac{\partial }{\partial p^B} \Big\rangle&=-(T_A)_{\beta\alpha}\, \frac{\partial \bar{\Pi}_{\alpha\beta}}{\partial \Pi_{\gamma\delta}}\,(T_B)_{\gamma\delta}.\end{aligned}$$

Dynamical staggered fermions
============================

It has been known for a long time that for high enough temperatures and/or densities the quarks and gluons are liberated from confinement and the chiral symmetry is restored: the so-called quark-gluon plasma phase of matter is created. There is a huge literature on this transition: theoretical works based on the symmetries of QCD, analytical and numerical calculations in QCD-like models and lattice QCD. It is worth emphasizing that the only known way to obtain the properties of the quark-gluon plasma from first principles of the theory is lattice QCD. However, until recently lattice results were usually burdened by large systematical errors: extrapolation to the physical quark mass, finite volume effects, missing continuum extrapolations. Lattice QCD has recently entered a new era, where we are facing a huge reduction of these systematics. In this chapter we describe the details and results of a large scale simulation, where we attempted to eliminate (almost) all systematics of previous lattice calculations. Thus these results can be considered as final ones, where the only remaining task is to crosscheck them against the work of other groups or against different lattice discretizations. The amount of computer work is tremendous: we used $O(10^{19})$ floating point operations on the fastest supercomputers of the world. The algorithmic and theoretical improvements still continue, as does the increase in the speed of the computers. We hope that one day these results will be just as easy to obtain as getting the value of e.g.
$\sin(1.0)$ using a pocket calculator today. We emphasize that extensive experimental work is currently being done with heavy ion collisions to study the QCD transition (most recently at the Relativistic Heavy Ion Collider, RHIC). Moreover there is rich perspective for the future: the heavy ion program is expected to start in 2009 at the Large Hadron Collider (LHC) in Geneva and in 2011 at the Facility for Ion and Antiproton Research (FAIR) in Darmstadt. Both for the cosmological transition and for RHIC, the net baryon densities are quite small, and so the baryonic chemical potentials ($\mu$) are much less than the typical hadron masses ($\approx$45 MeV at RHIC and negligible in the early Universe). A calculation at $\mu$=0 is directly applicable for the cosmological transition and most probably also relevant for the transition at RHIC. Let us remark here, that even if the finite temperature equilibrium state of QCD is soon going to be solved, there are still important areas with only moderate or no progress. Most notably there is the equilibrium state at finite $\mu$, the well-known sign problem prohibited calculations for many years. The breakthrough of [@Fodor:2001au; @Fodor:2001pe] has opened new possibilities (for a recent review of this subfield see [@Schmidt:2006us]), still many questions remain unanswered. In Sec. 1 we present the definition of our lattice action, the numerical details of the algorithm used for the simulations and finally the concept of line of constant physics (LCP). In Sec. 2 we give a detailed list of our simulation points, whereas Sec. 3 is for the physics results. Setting up the simulations -------------------------- ### The lattice action First we give our definition for the Symanzik improved gauge and for the stout-link improved fermionic action. We demonstrate that our choice of stout-link improved staggered fermionic action has small taste violation, when compared to other staggered actions used in the literature to determine the equation of state (EoS) of QCD. Isotropic lattice couplings are used, thus the lattice spacings are identical in all directions. The lattice action we used has the following form: $$\begin{aligned} \label{action} S & = & S_g + S_f,\\ S_g & = & \sum_x \frac{\beta}{3} ( c_0 \sum_{\mu>\nu} W_{\mu,\nu}^{1\times 1}(x) + c_1 \sum_{\mu\ne\nu} W_{\mu,\nu}^{1\times 2}(x) ),\\ S_f & = & \sum_{x,y} \{ \overline{\eta}_{ud}(x) [{{D\hspace{-7pt}{/}}}(U^{stout})_{xy}+m_{ud}\delta_{x,y}]^{-1/2} \eta_{ud}(y) \nonumber\\ & & \mbox{\hspace{2pt}} +\mbox{\hspace{2pt}} \overline{\eta}_{s}(x) [{{D\hspace{-7pt}{/}}}(U^{stout})_{xy}+m_{s}\delta_{x,y}]^{-1/4} \eta_{s}(y)\}, \end{aligned}$$ where $W_{\mu,\nu}^{1\times 1}$, $W_{\mu,\nu}^{1\times 2}$ are real parts of the traces of the ordered products of link matrices along the $1\times 1$, $1\times 2$ rectangles in the $\mu$, $\nu$ plane. The coefficients satisfy $c_0+8c_1=1$ and $c_1=-1/12$ for the tree-level Symanzik improved action. $\eta_{ud}$ and $\eta_s$ are the pseudofermion fields for $u$, $d$ and $s$ quarks. ${{D\hspace{-7pt}{/}}}(U^{stout})$ is the four-flavor staggered Dirac matrix with stout-link improvement [@Morningstar:2003gk]. Let us also note here, that we use the 4th root trick in Eq. (\[action\]), which might lead to problems of locality. Our staggered action at a given $N_t$ yields the same limit for the pressure at infinite temperatures as the standard unimproved action. There are various techniques improving the high temperature scaling. 
However one also has to take into account that with highly improved actions (which contain far neighbor interactions) smaller ($N_t \ge 8$) lattice spacings will not be available. In this case one risks having large lattice artefacts coming from the scale setting procedure. Staggered fermions have an inconvenient property: they violate taste symmetry at finite lattice spacing. Among other things this violation results in a splitting in the pion spectrum, which should vanish in the continuum limit. The stout-link improvement makes the staggered fermion taste symmetry violation small already at moderate lattice spacings. We found that a stout-smearing level of $N_{smr}$=2 and a smearing parameter of $\rho$=0.15 are the optimal values of the smearing procedure.

### The algorithm {#ssec:ralg}

The equation of state calculation is an extremely high precision measurement. There are many things which can spoil it; one of them is the systematic error coming from the algorithm. Before our work the R-algorithm was used exclusively for staggered thermodynamical calculations, where in principle one has to make an extrapolation in the intrinsic parameter of the algorithm (the stepsize). These extrapolations were never carried out; in the best cases there were attempts to estimate the systematic errors. The $N_t=6$ equation of state has forced us to change. Here the measured quantity (the action density difference between zero and finite temperature lattices) has the same magnitude as the systematic error (see Fig. \[fi:sub\]), which is clearly an unsafe situation. Fortunately the development of exact staggered algorithms was in a promising phase at that time (rational hybrid Monte-Carlo [@Clark:2003na] and polynomial hybrid Monte-Carlo [@Frezzotti:1997ym]). We decided to use the RHMC, the algorithm which is nowadays obligatory in lattice thermodynamics. It was worth changing: the exact RHMC algorithm is significantly faster than the R-algorithm, and one can get rid of one systematic error. The RHMC technique approximates the fractional powers of the Dirac operator by rational functions. Since the condition number of the Dirac operator changes as we change the mass, one should determine the optimal rational approximation for each quark mass. Note however, that this has to be done only once, and the obtained parameters of these functions can be used for the entire configuration production. Our choices for the rational approximation were as good as a few times the machine precision for the whole range of the eigenvalues of the Dirac operator. We have also introduced multiple time scales in the algorithm [@Clark:2006fx]. The time consuming parts of the computations were carried out in single precision. This might affect the algorithm in a negative way at two places: firstly, the reversibility of the trajectory is lost; secondly, precision problems in the accept/reject step might result in a bad distribution. The reversibility violation is considerably larger than single precision accuracy, even if every step was carried out up to single precision throughout the trajectory. This is due to the chaotic nature of the QCD equations of motion. The usual way out is to use double precision arithmetic everywhere. However it turns out that at several places single precision accuracy is tolerable, if at some critical places high enough precision is chosen.
We make the force calculation (which is the most time consuming part) in single precision; however, the link and the momentum are updated in a higher precision scheme (it turned out that we need at least 80-bit precision on larger volumes). In this case the reversibility will be exact in single precision. The link and the momentum might differ after going forth and back along a trajectory, but only in the higher precision digits. The forces will be bit by bit the same in the forward and backward directions, which ensures that the two cannot deviate from each other. For the other problem (loss of precision in the accept/reject step) one can use mixed precision inverters, which work in single precision most of the time. Here one adds intermediate double precision steps, with which one can achieve even double precision accuracy. To be on the safe side, on one of our largest lattices we cross-checked the results with a fully double precision calculation; the results were the same within the errorbars. We were also constantly monitoring the $\langle \exp(-\Delta H) \rangle$ expectation value, and found no statistically significant deviation from $1$. We based our code on the publicly available MILC lattice gauge theory code; however, several parts were (re)written by ourselves. Most of the code was written in two independent copies; the two versions agree up to machine precision. These (among other things) include the staggered matrix multiplication, solvers, smearing and measurement routines. We have developed the code for four different architectures (Intel P4, AMD Opteron, Nvidia Graphics Card, IBM Blue Gene L); each version required careful optimization. For some details of the implementations see e.g. [@Fodor:2002zi; @Egri:2006zm].

### Line of constant physics (LCP) {#ssec:lcp}

Let us discuss the determination of the LCP. The LCP is defined as relationships between the bare lattice parameters ($\beta$ and the bare lattice quark masses $m_{ud}$ and $m_s$). These relationships express that the physics (e.g. mass ratios) remains constant while changing any of the parameters. It is important to emphasize that the LCP is unambiguous (independent of the physical quantities which are used to define the above relationships) only in the continuum limit ($\beta\rightarrow\infty$). For our lattice spacings fixing some relationships to their physical values means that some other relationships will slightly deviate from their physical values. In thermodynamics the relevance of the LCP comes into play when the temperature is changed via the $\beta$ parameter. Then adjusting the mass parameters ($m_{ud}$ and $m_s$) is an important issue; neglecting this in simulations can lead to several % error in the EoS [@Csikor:2004ik]. A particularly efficient (however only approximate, see later) way to obtain an LCP is by using simulations with three degenerate flavors with lattice quark mass $m_q$. The leading order chiral perturbation theory implies the mass relation for $s\bar{s}$ mesons. The strange quark mass is tuned accordingly, as
$$m_{PS}^2/m_{V}^2|_{m_q=m_s} = (2m_K^2-m_\pi^2)/m_\phi^2,
\label{eq:tl chpt}$$
where $m_{PS}$ and $m_{V}$ are the pseudoscalar and vector meson masses in the simulations with three degenerate quarks. The light quark mass is calculated using the ratio $m_{ud}=m_s/25$, obtained from experimental mass input in chiral perturbation theory. We obtain $m_s(\beta)$ as shown in the left panel of Fig. \[fig:LCP\] (a schematic illustration of this tuning is sketched below).
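Purely as a schematic illustration of this tuning (not the production workflow), the strange quark mass at a fixed $\beta$ can be viewed as the root of Eq. (\[eq:tl chpt\]); the spectrum "measurement" below is a hypothetical stand-in for the actual simulations with three degenerate quarks, and the bracketing masses are arbitrary placeholders:

```python
# Schematic sketch of the LCP1 tuning at fixed beta (illustration only).
from scipy.optimize import brentq

# approximate physical meson masses in MeV (kaon, pion, phi)
m_K, m_pi, m_phi = 498.0, 135.0, 1019.5
target = (2.0 * m_K**2 - m_pi**2) / m_phi**2      # right hand side of Eq. (eq:tl chpt)

def mps2_over_mv2(beta, m_q):
    """Hypothetical placeholder: return the measured (m_PS/m_V)^2 from a
    simulation with three degenerate quarks of bare mass m_q at this beta."""
    raise NotImplementedError

def tune_strange_mass(beta, m_lo, m_hi):
    """Adjust m_s in the bracket [m_lo, m_hi] until (m_PS/m_V)^2 hits the target."""
    m_s = brentq(lambda m_q: mps2_over_mv2(beta, m_q) - target, m_lo, m_hi)
    return m_s, m_s / 25.0                        # (m_s, m_ud) along LCP1
```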
This (approximate) line of constant physics is called LCP1 later, the equation of state calculations were carried out along this line. Our approach using Eq. (\[eq:tl chpt\]) is appropriate if in the $n_f$=2+1 theory the vector meson mass depends only weakly on the light quark masses and the chiral perturbation theory for meson masses works upto the strange quark mass. After applying the LCP1 we cross-checked the obtained spectrum of the $n_f$=2+1 simulations. These simulations showed, however, that the hadron mass ratios slightly differ from their physical values on the 5–10% level. In order to eliminate all uncertainties related to an unphysical spectrum, we determined a new line of constant physics. The new LCP (which is called LCP2 afterwards) was defined by fixing $m_K/f_K$ and $m_K/m_\pi$ to their experimental values (right panel of Fig. \[fig:LCP\]). The more precise LCP2 was used for simulations to determine the order of the QCD transition, and to measure the transition temperature in physical units. We have also carried out $n_f=2+1$ flavor $T=0$ simulations on LCP2. Chiral extrapolation to the physical pion mass led to $m_K/f_K$ and $m_K/m_\pi$ values, which agree with the experimental numbers on the 2% level. (Differences resulting from various fitting forms and finite volume corrections were included in the systematics.) This is the accuracy of LCP2. ![\[fig:spect\] Scaling of the mass of the $K^*(892)$ meson, the pion decay constant and $r_0$ towards the continuum limit. As a continuum value (filled boxes) we took the average of the continuum extrapolations obtained using our 2 and our 3 finest lattice spacings. The difference was taken as a systematic uncertainty, which is included in the shown errors. The quantities are plotted in units of the kaon decay constant. In case of the upper two panels the bands indicate the physical values of the ratios and their experimental uncertainties. For $r_0$ (lowest panel) in the absence of direct experimental results we compare our value with the $r_0f_K$ obtained by the MILC, HPQCD and UKQCD collaborations [@Aubin:2004fs; @Gray:2005ur]. ](spect.eps){width="12cm" height="15cm"} In order to be sure that our results are safe from ambiguous determination of the overall scale, and to prove that we are really in the $a^2$ scaling region, we carried out a continuum extrapolation for three additional quantities which could be similarly good to set the scale (we normalized them by $f_K$, for $f_K$ determination in staggered QCD see [@Aubin:2004fs]). Fig. \[fig:spect\] shows the measured values of $m_{K^*}/f_K$, $f_\pi/f_K$ and $r_0f_K$, at different lattice spacings and their continuum extrapolation. Our three continuum predictions are in complete agreement with the experimental results (note, that $r_0$ can not be measured directly in experiments; in this case the original experimental input is the bottonium spectrum which was used by the MILC, HPQCD and UKQCD collaborations to calculate $r_0$ on the lattice [@Aubin:2004fs; @Gray:2005ur]). It is important to emphasize that at lattice spacings given by $N_t$=4 and 6 the overall scales determined by $f_K$ and $r_0$ are differing by $\sim$20-30%, which is most probably true for any other staggered formulation used for thermodynamical calculations. Since the determination of the overall scale has a $\sim$20-30% ambiguity, the value of $T_c$ can not be determined with the required accuracy. Simulation points ----------------- For our thermodynamical calculations we have used two LCPs: LCP1 and LCP2. 
The LCP1 can be considered as an approximate LCP, this was used for the equation of state calculation. The LCP2 was determined using the $n_f=2+1$ simulations carried out along LCP1, it can be considered as a refinement of LCP1. We used it to determine the order of phase transition and the transition temperature. In this section we list the simulation points along the two LCPs. ### Along LCP1 The determination of the EoS needs quite a few simulation points. Results are needed on finite temperature lattices ($N_t$=4 or 6) and on zero temperature lattices ($N_t \gg$ 4 or 6) at several $\beta$ values (we used 16 different $\beta$ values for $N_t$=4 and 14 values for $N_t$=6). Since our goal is to determine the EoS for physical quark masses we have to determine quantities in this small physical quark mass limit (we call these $\beta$ dependent bare light quark masses $m_{ud}(phys)$). $\beta$ $m_s$ T=0 \# T$\neq$0 \# $\beta$ $m_s$ T=0 \# T$\neq$0 \# --------- -------- ----------------- ---- ---------------- ---- --------- -------- ----------------- ---- ---------------- ----- 3.000 0.1938 16$^3$$\cdot$16 4 12$^3$$\cdot$4 9 3.450 0.1507 16$^3$$\cdot$32 29 18$^3$$\cdot$6 120 3.150 0.1848 16$^3$$\cdot$16 4 12$^3$$\cdot$4 9 3.500 0.1396 16$^3$$\cdot$32 33 18$^3$$\cdot$6 156 3.250 0.1768 16$^3$$\cdot$16 4 12$^3$$\cdot$4 9 3.550 0.1235 16$^3$$\cdot$32 30 18$^3$$\cdot$6 133 3.275 0.1742 16$^3$$\cdot$16 4 12$^3$$\cdot$4 9 3.575 0.1144 16$^3$$\cdot$32 28 18$^3$$\cdot$6 151 3.300 0.1713 16$^3$$\cdot$16 4 12$^3$$\cdot$4 9 3.600 0.1055 16$^3$$\cdot$32 31 18$^3$$\cdot$6 158 3.325 0.1683 16$^3$$\cdot$16 4 12$^3$$\cdot$4 9 3.625 0.0972 16$^3$$\cdot$32 33 18$^3$$\cdot$6 144 3.350 0.1651 16$^3$$\cdot$16 4 12$^3$$\cdot$4 9 3.650 0.0895 16$^3$$\cdot$32 30 18$^3$$\cdot$6 160 3.400 0.1583 16$^3$$\cdot$16 3 12$^3$$\cdot$4 9 3.675 0.0827 16$^3$$\cdot$32 32 18$^3$$\cdot$6 178 3.450 0.1507 16$^3$$\cdot$32 29 12$^3$$\cdot$4 9 3.700 0.0766 16$^3$$\cdot$32 33 18$^3$$\cdot$6 174 3.500 0.1396 16$^3$$\cdot$32 33 12$^3$$\cdot$4 9 3.750 0.0666 16$^3$$\cdot$32 35 18$^3$$\cdot$6 140 3.550 0.1235 16$^3$$\cdot$32 30 12$^3$$\cdot$4 9 3.800 0.0589 20$^3$$\cdot$40 26 18$^3$$\cdot$6 158 3.600 0.1055 16$^3$$\cdot$32 31 12$^3$$\cdot$4 9 3.850 0.0525 20$^3$$\cdot$40 23 18$^3$$\cdot$6 157 3.650 0.0895 16$^3$$\cdot$32 30 12$^3$$\cdot$4 9 3.930 0.0446 24$^3$$\cdot$48 6 18$^3$$\cdot$6 171 3.700 0.0766 16$^3$$\cdot$32 33 12$^3$$\cdot$4 9 4.000 0.0401 28$^3$$\cdot$56 4 18$^3$$\cdot$6 166 3.850 0.0525 20$^3$$\cdot$40 23 12$^3$$\cdot$4 9 4.000 0.0401 28$^3$$\cdot$56 4 12$^3$$\cdot$4 9 : \[ta:points\] Summary of our simulation points along LCP1. For the physical light quark masses (we call them $m_{ud}(phys)$) 25 times smaller values were taken than for the strange mass. T$\neq$0 simulations were performed with the above $m_s$ and $\beta$ pairs, and at 5 different $m_{ud}$ values: {1,3,5,7,9}$\cdot m_{ud}(phys)$. T=0 simulations were performed with the above $m_s$ and $\beta$ pairs, but at 4 different $m_{ud}$ values: {3,5,7,9}$\cdot m_{ud}(phys)$. The total number of trajectories divided by 100 are collected in the \# columns. The left column shows the $N_t$=4, whereas the right column shows the $N_t$=6 data. (For an explanation of our labeling see the text.) For our finite temperature simulations ($N_t$=4,6) we used physical quark masses. The spatial sizes were always at least 3 times the temporal sizes. 
For the whole $\beta$ range on $N_t=4$ we checked that by increasing the $N_s/N_t$ ratio from 3 to 4 the results remained the same within our statistical uncertainties. In the chirally broken phase (our zero temperature simulations, thus lattices for which $N_t \gg$ 4 or 6, belong always to this class) chiral perturbation theory can be used to extrapolate by a controlled manner to the physical light quark masses. Therefore for most of our simulation points[^15] we used four pion masses ($m_\pi\approx$250, 320, 380 and 430 MeV), which were somewhat larger than the physical one. (To simplify our notation in the rest of this section we label these points as 3,5,7 and 9 times $m_{ud}(phys)$.) It turns out that the chiral condensates at all the four points can be fitted by linear function of pion mass squared with good $\chi^2$. (Later we will show, that only the chiral condensate is to be extrapolated to get the EoS at the physical quark mass.) The volumes were chosen in a way, that for three out of these four quark masses the spatial extentions of the lattices were approximately equal or larger than four times the correlation lengths of the pion channel. We checked for a few $\beta$ values that increasing the spatial and/or temporal extensions of the lattices results in the same expectation values within our statistical uncertainties. (For 3$\cdot m_{ud}(phys)$ values the spatial lengths of the lattices were only three times the correlation length of the pion channel. However, excluding this point from the extrapolations, the results do not change.) A detailed list of our simulation points at zero and at non-zero temperature lattices are summarized in Table \[ta:points\]. ### Along LCP2 In order to perform the necessary renormalizations of the measured quantities and to fix the scale in physical units we carried out $T=0$ simulations on our new LCP (c.f. Table \[tab:T0\]). Six different $\beta$ values were used. Simulations at T=0 with physical pion masses are quite expensive and in our case unnecessary (chiral perturbation theory provides a controlled approximation at vanishing temperature). Thus, for each $\beta$ value we used four different light quark masses, which resulted in pion masses somewhat larger than the physical one (the $m_\pi$ values were approximately 250 MeV, 320 MeV, 380 MeV and 430 MeV), whereas the strange quark mass was fixed by the LCP at each $\beta$. The lattice sizes were chosen to satisfy the $m_\pi N_s\ge4$ condition. However, when calculating the systematic uncertainties of meson masses and decay constants, we have taken finite size corrections into account using continuum finite volume chiral perturbation theory [@Colangelo:2005gd] (these corrections were around or less than 1%). We have simulated between 700 and 3000 RHMC trajectories for each point in Table \[tab:T0\]. $\beta$ $m_{s}$ $m_{ud}$ lattice size --------- --------- ---------- --------------- $3.330$ 0.23847 0.02621 $12^3\cdot24$ 0.04368 $12^3\cdot24$ 0.06115 $12^3\cdot24$ 0.07862 $12^3\cdot24$ $3.450$ 0.15730 0.01729 $16^3\cdot32$ 0.02881 $12^3\cdot28$ 0.04033 $12^3\cdot28$ 0.05186 $12^3\cdot28$ $3.550$ 0.10234 0.01312 $16^3\cdot32$ 0.01874 $16^3\cdot32$ 0.02624 $12^3\cdot28$ 0.03374 $12^3\cdot28$ $3.670$ 0.06331 0.00928 $24^3\cdot32$ 0.01391 $16^3\cdot32$ 0.01739 $16^3\cdot32$ 0.02203 $14^3\cdot32$ $3.750$ 0.05025 0.00736 $24^3\cdot32$ 0.01104 $24^3\cdot32$ 0.01473 $16^3\cdot32$ 0.01841 $16^3\cdot32$ : \[tab:T0\] Lattice parameters and sizes of our zero temperature simulations. 
The strange quark mass is varied along the LCP as $\beta$ is changed. The light quark masses, listed at each ($\beta$,$m_s$) values, correspond approximately to $m_\pi$ values of 250 MeV, 320 MeV, 380 MeV and 430 MeV. The T$\neq$0 simulations (c.f. Table \[tab:T\]) were carried out along our LCP (that is at physical strange and light quark masses, which correspond to $m_K$=498 MeV and $m_\pi$=135 MeV) at four different sets of lattice spacings ($N_t=4,6,8$ and $10$) and on three different volumes ($N_s/N_t$ was ranging between 3 and 6). We have observed moderate finite volume effects on the smallest volumes for quantities which are supposed to depend strongly on light quark masses (e.g. chiral susceptibility). To determine the transition point we used $N_s/N_t\ge 4$, for which we did not observe any finite volume effect. The number of RHMC trajectories were between 1500 and 8000 for each parameter set (the integrated autocorrelation time was smaller or around 10 for all our runs). temporal size ($N_t$) $\beta$ range spatial sizes ($N_s$) ----------------------- --------------- ----------------------- $4$ $3.20-3.50$ $12,16,24$ $6$ $3.45-3.75$ $18,24,32$ $8$ $3.57-3.76$ $24,32,40$ $10$ $3.63-3.86$ $28,40,48$ : \[tab:T\]Summary of the T$\neq$0 simulation points. Improvement over previous results --------------------------------- As we have already mentioned in the introduction, there are many lattice results on QCD thermodynamics. In this section we highlight the points, where we have made improvements on previous calculations. #### Physical quark masses {#physical-quark-masses .unnumbered} We decided to use physical values for the quark masses. Owing to the computational costs this is a great challenge in lattice QCD. Previous analyses used computationally less demanding non-physically large quark masses. On the one hand, results with Wilson fermions [@AliKhan:2000iz; @AliKhan:2001ek] were obtained with pion masses $m_\pi \gtrsim 540$ MeV when approaching the thermodynamical limit (since lattice QCD can give only dimensionless combinations, it is more precise to say that $m_\pi$/$m_\rho\ge$ 0.6, where $m_\rho$ is the mass of the rho meson). On the other hand, in staggered simulations one can afford considerably smaller quark masses. The MILC collaboration [@Bernard:2004je; @Bernard:2006nj] is currently using two light quark masses (0.1 and 0.2 times $m_s$), the Bielefeld-Brookhaven-Columbia-RIKEN collaboration is studying thermodynamics down to a pion mass of $\approx$ 320 MeV on $N_t$=4 and 6 lattices [@Cheng:2006qk]. However these numbers should be taken with a grain of salt. Staggered fermions suffer from taste violation. Therefore there is a large (usually several hundred MeV), unphysical mass splitting between this lightest pion state and the higher lying other pion states. This mass splitting results in an unphysical spectrum. The artificial pion mass splitting disappears only in the continuum limit. For some choices of the actions the restoration of the proper spectrum happens only at very small lattice spacings, whereas for other actions somewhat larger lattice spacings are already satisfactory. The finite temperature transition is related to the spontaneous breaking of the chiral symmetry (which is driven by the pion sector) and the three physical pions have masses smaller than the transition temperature, thus the numerical value of $T_c$ could be sensitive to the unphysical spectrum. Furthermore, the order of the transition depends on the quark mass. 
In three-flavor QCD for vanishing quark masses the transition is of first-order. For intermediate masses it is most probably a crossover. For infinitely heavy quark masses the transition is again first-order. Therefore the physical quark masses should be used directly. It is also important to mention that though at $T$=0 chiral perturbation theory provides a technique to extrapolate to physical $m_\pi$, unfortunately no such controllable method exists around $T_c$.

#### Continuum limit {#continuum-limit .unnumbered}

The second ingredient is to remove the uncertainty associated with the lattice discretization. Discretization errors disappear in the continuum limit; however, they strongly influence the results at non-vanishing lattice spacing. For lattice spacings which are smaller than some approximate limiting value the dimensionless ratios of different physical quantities have a specific dependence on the lattice spacing (for staggered QCD the continuum value is approached in this region by corrections proportional to the square of the lattice spacing). For these lattice spacings we use the expression “$a^2$ scaling region”. Clearly, results for at least three different lattice spacings are needed to decide whether one is already in this scaling region or not (two points can always be fitted by $c_0$+$c_2$$a^2$, independently of possible large higher order terms). Only by using the $a^2$ dependencies in the scaling region is it possible to unambiguously define the absolute scale of the system. Outside the scaling region[^16] different quantities lead to different overall scales, which lead to ambiguous values for e.g. $T_c$. In three-flavour unimproved staggered QCD, using a lattice spacing of about 0.28 fm, the first-order and the crossover regions are separated by a pseudoscalar mass of $m_{\pi,c}\approx300$ MeV. Studying the same three-flavour theory with the same lattice spacing, but with an improved p4 action (which has different discretization errors) we obtain $m_{\pi,c}\approx70$ MeV. With the first action, a pseudoscalar mass of 140 MeV (which corresponds to the numerical value of the physical pion mass) would be in the first-order transition region, whereas with the second one it would be in the crossover region. The different discretisation uncertainties are solely responsible for these qualitatively different results [@Karsch:2003va; @Endrodi:lat07; @Phil:lat07]. In summary the proper approach is to extrapolate to vanishing lattice spacings using lattices which are already in the scaling regime. We approach the scaling region by using four different sets of lattice spacing, which are defined by the transition region on $N_t$=4,6,8 and 10 lattices. The results show (not surprisingly) that the coarsest lattice with $N_t$=4 is not in the $a^2$ scaling region, whereas for the other three a reliable continuum limit extrapolation can be carried out. In the case of the equation of state we only have two lattice spacings ($N_t=4$ and $6$); for the continuum limit we have to wait for results on finer lattices.

#### Lattice artefacts for $T=0$ and for $T\to \infty$ {#lattice-artefacts-for-t0-and-for-tto-infty .unnumbered}

For the staggered formulation of quarks the physically almost degenerate pion triplet has an unphysical non-degeneracy (so-called taste violation). This mass splitting $\Delta m_{\pi}^2$ vanishes in the continuum limit as $a\rightarrow$0.
Due to our smaller lattice spacing and particularly due to our stout-link improved action the splitting $\Delta m_{\pi}^2$ is much smaller than that of the previously or currently used staggered actions in thermodynamics. In order to illustrate the advantage of the stout-link action Fig. \[fig:taste\] compares the taste violation in different approaches of the literature, which have been used for staggered thermodynamics. Results on the pion mass splitting for p4 improved (used by the Bielefeld-Brookhaven-Columbia-RIKEN collaboration [@Cheng:2006qk]), asqtad improved (used by the MILC collaboration [@Bernard:2004je; @Bernard:2006nj]) and stout-link improved (this work) staggered fermions are shown. The parameters were chosen to be the ones used by the different collaborations at the finite temperature transition point. At infinitely large temperatures improved actions (the p4 [@Heller:1999xz] or asqtad [@Lepage:1998vj; @Orginos:1999cr] action) show considerably smaller discretization errors than the standard staggered action (used by this work). However, since our choice of action is about an order of magnitude faster than e.g. p4, we decided to use this less improved action, with which our CPU resources made it possible to study several lattice spacings ($N_t$=4 and 6 for the equation of state and $N_t=4,6,8$ and $10$ for determining the order of the phase transition and the transition temperature). This turned out to be extremely beneficial when converting the transition temperature into physical units. In particular the $T=0$ simulations (which are used to do this conversion) have very large lattice artefacts at the $N_t=4$ and $6$ lattice spacings and cannot be used for controlled continuum extrapolations. The high-temperature improvement is not designed to reduce these artefacts.

#### Setting the physical scale {#setting-the-physical-scale .unnumbered}

An additional problem appears if we want to give dimensionful predictions with a few percent accuracy. As we already emphasized, lattice QCD predicts dimensionless combinations of physical observables. For dimensionful predictions one calculates an experimentally known dimensionful quantity, which is then used to set the overall scale. In many analyses the overall scale is related to some quantities which strictly speaking do not even exist in full QCD (e.g. the mass of the rho eigenstate and the string tension are not well defined due to decay or string breaking). A better, though still not satisfactory, possibility is to use quantities which are well defined but cannot be measured directly in experiments. Such a quantity is the heavy quark-antiquark potential (V), or its characteristic distances: the $r_0$ or $r_1$ parameters of V [@Sommer:1993ce] ($r^2 d^2 V/dr^2$=1.65 or 1, for $r_0$ or $r_1$, respectively). For these quantities intermediate lattice calculations and/or approximations are needed to connect them to measurements. These calculations are based on bottomonium spectroscopy. This procedure leads to further, unnecessary systematic uncertainties. The ultimate solution is to use quantities which can be measured directly in experiments and on the lattice. We use the decay constant of the kaon, $f_K$=159.8 MeV, which has about 1% measurement error. Detailed additional analyses were done by using the mass of the $K^*(892)$ meson $m_{K^*}$, the pion decay constant $f_\pi$ and the value of $r_0$, which all show that we are in the $a^2$ scaling regime and our choice of overall scale is unambiguous (see subsection \[ssec:lcp\]).
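To make the $a^2$ scaling criterion concrete, the following minimal sketch (with placeholder numbers, not our measured values) shows the type of continuum extrapolation used throughout this chapter: a dimensionless ratio measured at several temporal extensions $N_t$ is fitted with $c_0+c_2/N_t^2$, and the continuum estimate is the intercept $c_0$.

```python
# Minimal a^2 (i.e. 1/N_t^2 at fixed temperature) continuum extrapolation.
# The data points are placeholders, not measured values.
import numpy as np

N_t   = np.array([6.0, 8.0, 10.0])        # temporal extensions kept in the fit
ratio = np.array([1.080, 1.046, 1.030])   # some dimensionless ratio (placeholder)

x = 1.0 / N_t**2                          # a^2 is proportional to 1/N_t^2 at fixed T
c2, c0 = np.polyfit(x, ratio, 1)          # linear fit: ratio = c0 + c2*x
print(f"continuum estimate c0 = {c0:.4f}")
```

With only two lattice spacings such a linear fit is always perfect, which is why at least three $N_t$ values are needed before the extrapolation can be trusted.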
#### Algorithm {#algorithm-1 .unnumbered} In previous staggered thermodynamics simulations the inexact R-algorithm was used exclusively to simulate three quark flavours. This algorithm has an intrinsic parameter, the stepsize which, similarly to the lattice spacing, has to be extrapolated to zero. None of the previous staggered lattice thermodynamic studies carried out this extrapolation. Using the R-algorithm without stepsize extrapolation leads to uncontrolled systematic errors. Instead of using the approximate R-algorithm this work uses the exact RHMC-algorithm (rational hybrid Monte-Carlo) [@Clark:2003na]. Order of the QCD transition --------------------------- The nature of the QCD transition affects our understanding of the Universe’s evolution (see Ref. [@Schwarz:2003du] for example). In a strong first-order phase transition the quark–gluon plasma supercools before bubbles of hadron gas are formed. These bubbles grow, collide and merge, during which gravitational waves could be produced [@Witten:1984rs]. Baryon-enriched nuggets could remain between the bubbles, contributing to dark matter. The hadronic phase is the initial condition for nucleosynthesis, so inhomogeneities in this phase could have a strong effect on nucleosynthesis [@Applegate:1985qt]. As the first-order phase transition weakens, these effects become less pronounced. Our calculations provide strong evidence that the QCD transition is a crossover and thus the above scenarios —and many others— are ruled out. There are some QCD results and model calculations to determine the order of the transition at $\mu$=0 and $\mu$$\neq$0 for different fermionic contents (compare refs [@Pisarski:1983ms; @Celik:1983wz; @Kogut:1982rt; @Gottlieb:1985ug; @Brown:1988qe; @Fukugita:1989yb; @Halasz:1998qr; @Berges:1998rc; @Schaefer:2004en; @Herpay:2005yr]). Unfortunately, none of these approaches can give an unambiguous answer for the order of the transition for physical values of the quark masses. The only known systematic technique which could give a final answer is lattice QCD. There are several lattice results for the order of the QCD transition (for the two most popular lattice fermion formulations see refs [@Brown:1990ev] and [@AliKhan:2000iz]), although they have unknown systematics. As we have already emphasized in the previous section from the lattice point of view there are two important ’ingredients’ to eliminate these systematic uncertainties: one has to use physical quark masses and carry out a continuum extrapolation. Our goal is to identify the nature of the transition for physical quark masses as we approach the continuum limit. We will study the finite size scaling of the lattice chiral susceptibilities $\chi(N_s,N_t)$=$\partial^2$/$(\partial$$m_{ud}^2)$($T/V$)$\cdot\log Z$, where $m_{ud}$ is the mass of the light u,d quarks and $N_s$ is the spatial extension. This susceptibility shows a pronounced peak around the transition temperature ($T_c$). For a real phase transition the height of the susceptibility peak increases and the width of the peak decreases when we increase the volume. For a first-order phase transition the finite size scaling is determined by the geometric dimension, the height is proportional to $V$, and the width is proportional to $1/V$. For a second-order transition the singular behaviour is given by some power of $V$, defined by the critical exponents. The picture would be completely different for an analytic crossover. 
There would be no singular behaviour and the susceptibility peak does not get sharper when we increase the volume; instead, its height and width will be $V$ independent for large volumes. Fig. \[fig:susc\_46\] shows the susceptibilities for the light quarks for $N_t=$4 and 6, for which we used aspect ratios $r=N_s/N_t$ ranging from 3 to 6 and 3 to 5, respectively. A clear signal for an analytic crossover for both lattice spacings can be seen. However, these curves do not say much about the continuum behaviour of the theory. In principle a phenomenon as unfortunate as that in the three-flavour theory could occur [@Karsch:2003va], in which the reduction of the discretization effects changed the nature of the transition for a pseudoscalar mass of $\approx$140 MeV. ![\[fig:susc\_46\] Susceptibilities for the light quarks for $N_t$=4 (left panel) and for $N_t$=6 (right panel) as a function of $6/g^2$, where $g$ is the gauge coupling ($T$ grows with $6/g^2$). The largest volume is eight times bigger than the smallest one, so a first-order phase transition would predict a susceptibility peak that is eight times higher (for a second-order phase transition the increase would be somewhat less, but still dramatic). Instead of such a significant change we do not observe any volume dependence. Error bars are s.e.m. ](susc_46.eps){width="13cm"} ![\[fig:cont\_ex\] Normalized susceptibilities $T^4/(m^2\Delta\chi)$ for the light quarks for aspect ratios r=3 (left panel) r=4 (middle panel) and r=5 (right panel) as functions of the lattice spacing. Continuum extrapolations are carried out for all three physical volumes and the results are given by the leftmost blue diamonds. Error bars are s.e.m with systematic estimates. ](cont_ex.eps){width="17cm"} Because we are interested in genuine temperature effects we subtract the $T$=0 susceptibility and study only the difference between $T$$\neq$0 and $T$=0 at different lattice spacings. To do it properly, when we approach the continuum limit the renormalization of $\chi$ has to be performed. This leads to $m^2$$\Delta$$\chi$, which we study (for the details see subsection \[ssec:chi\]). ![\[fig:cont\_scal\] Continuum extrapolated susceptibilities $T^4/(m^2\Delta\chi)$ as a function of 1/$(T_c^3V)$. For true phase transitions the infinite volume extrapolation should be consistent with zero, whereas for an analytic crossover the infinite volume extrapolation gives a non-vanishing value. The continuum-extrapolated susceptibilities show no phase-transition-like volume dependence, though the volume changes by a factor of five. The V$\rightarrow$$\infty$ extrapolated value is 22(2) which is 11$\sigma$ away from zero. For illustration, we fit the expected asymptotic behaviour for first-order and O(4) (second order) phase transitions shown by dotted and dashed lines, which results in chance probabilities of $10^{-19}$ ($7\times10^{-13}$), respectively. Error bars are s.e.m with systematic estimates. ](cont_scal.eps){width="10cm"} To give a continuum result for the order of the transition we carry out a finite size scaling analysis of the dimensionless quantity $T^4/(m^2\Delta\chi)$ directly in the continuum limit. For this study we need the height of the susceptibility peaks in the continuum limit for fixed physical volumes. The continuum extrapolations are done using four different lattice spacings ($N_t$=4,6,8 and 10). The volumes at different lattice spacings are fixed in units of $T_c$, and thus $VT_c^3$=$3^3$,$4^3$ and $5^3$ were chosen. 
(In three cases the computer architecture did not allow us to take the above ratios directly. In these cases, we used the next possible volume and interpolated or extrapolated. The height of the peak depends weakly on the volume, so these procedures were always safe.) Altogether we used twelve different lattice volumes ranging from $4\cdot12^3$ to $10\cdot48^3$ at $T>0$. For the $T=0$ runs lattice volumes from $24\cdot12^3$ up to $56\cdot28^3$ were used. The number of trajectories were between 1500 and 8000 for $T>0$ and between 1500 and 3000 for $T=0$, respectively. Fig. \[fig:cont\_ex\] shows the continuum extrapolation for the three different physical volumes. The $N_t$=4 results are slightly off but the $N_t$=6,8 and 10 results show a good $a^2$$\propto$$1/N_t^2$ scaling. Having obtained the continuum values for $T^4/(m^2\Delta\chi)$ at fixed physical volumes, we study the finite size scaling of the results. Fig. \[fig:cont\_scal\] shows our final results. The volume dependence strongly suggests that there is no true phase transition but only an analytic crossover in QCD. Transition temperature ---------------------- There are several results in the literature for $T_c$ using both staggered and Wilson fermions [@Karsch:2000kv; @Bernard:1997an; @AliKhan:2000iz; @Bornyakov:2004ii; @Bernard:2004je; @Cheng:2006qk]. There is an additional limitation of these, which has not been mentioned before. This problem is related to an implicit assumption about a a real singularity, thus ignoring the analytic cross-over feature of the finite temperature QCD transition. As we have seen before the QCD transition at non-vanishing temperatures is an analytic cross-over. Since there is no singular temperature dependence different definitions of the transition point lead to different values. The most famous example for this phenomenon is the water-vapor transition, for which the transition temperature can be defined by the peaks of $d\rho/dT$ (temperature derivative of the density) and $c_p$ (heat capacity at fixed pressure). For pressures ($p$) somewhat less than $p_c=22.064$ MPa the transition is of first order, whereas at $p=p_c$ the transition is second order. In both cases the singularity guarantees that both definitions of the transition temperature lead to the same result. For $p>p_c$ the transition is a rapid cross-over, for which e.g. both $d\rho/dT$ and $c_p$ show pronounced peaks as a function of the temperature, however these peaks are at different temperature values. Fig. \[fig:steam\] shows the phase diagram based on [@Spang]. Analogously, there is no unique transition temperature in QCD. Our goal is to eliminate all the above limitations and give the full answer. We determine $T_c$ using the sharp changes of the temperature (T) dependence of renormalized dimensionless quantities obtained from the chiral condensate ($\langle \bar \psi \psi \rangle$), quark number susceptibility ($n_q$) and Polyakov loop ($P$). We expect that all three quantities result in different transition points (similarly to the case of the water, c.f. Fig. \[fig:steam\]). ![\[fig:steam\] The phase diagram of water around its critical point (CP). For pressures below the critical value ($p_c$) the transition is first order, for $p>p_c$ values there is a rapid crossover. In the crossover region the transition temperatures defined from different quantities are not necessarily equal. This can be seen for the temperature derivative of the density ($d\rho/dT$) and the specific heat ($c_p$). 
The bands show the experimental uncertainties (see [@Spang]). ](steam.eps){width="12cm"} ### Chiral susceptibility {#ssec:chi} The chiral susceptibility of the light quarks ($\chi$) is defined as $$\chi_{\bar{\psi}\psi}=\frac{T}{V}\frac{\partial^2}{\partial m_{ud}^2} \log Z= -\frac{\partial^2}{\partial m_{ud}^2}f,$$ where $f$ is the free energy density. Since both the bare quark mass and the free energy density contain divergences, $\chi_{\bar{\psi}\psi}$ has to be renormalized. The renormalized quark mass can be written as $m_{R,ud}=Z_m\cdot m_{ud}$. If we apply a mass independent renormalization then we have $$m_{ud}^2\frac{\partial^2}{\partial m_{ud}^2}=m_{R,ud}^2\frac{\partial^2}{\partial m_{R,ud}^2}.$$ The free energy has additive, quadratic divergencies. They can be removed by subtracting the free energy at $T=0$ (this is the usual renormalization procedure for the free energy or pressure), which leads to $f_R$. Therefore, we have the following identity: $$m_{ud}^2\frac{\partial^2}{\partial m_{ud}^2}\left(f(T)-f(T=0)\right)= m_{R,ud}^2\frac{\partial^2}{\partial m_{R,ud}^2}f_R(T).$$ the right hand side contains only renormalized quantities, which can be determined by measuring the susceptibilities of the left hand side (for the above expression we use the shorthand notation $m_{ud}^2 \cdot \Delta\chi_{\bar{\psi}\psi}$). In order to obtain a dimensionless quantity it is natural to normalize the above quantity by $T^4$ (which minimizes the final errors). Alternatively, one can use combinations of $T$ and/or $m_\pi$ to construct dimensionless quantities (though these conventions lead to larger errors). Since the transition is a cross-over (c.f. discussion d of our Introduction) the maxima of $m_{ud}^2/m_\pi^2 \cdot \Delta\chi_{\bar{\psi}\psi}/T^2$ or $m_{ud}^2/m_\pi^4 \cdot \Delta\chi_{\bar{\psi}\psi}$ give somewhat different values for $T_c$. ![\[fig:susc\] [Temperature dependence of the renormalized chiral susceptibility ($m^2\Delta \chi_{\bar{\psi}\psi}/T^4$), the strange quark number susceptibility ($\chi_s/T^2$) and the renormalized Polyakov-loop ($P_R$) in the transition region. The different symbols show the results for $N_t=4,6,8$ and $10$ lattice spacings (filled and empty boxes for $N_t=4$ and $6$, filled and open circles for $N_t=8$ and $10$). The vertical bands indicate the corresponding transition temperatures and its uncertainties coming from the T$\neq$0 analyses. This error is given by the number in the first parenthesis, whereas the error of the overall scale determination is indicated by the number in the second parenthesis. The orange bands show our continuum limit estimates for the three renormalized quantities as a function of the temperature with their uncertainties.]{} ](susc.eps){height="15cm" width="8cm"} The upper panel of Fig. \[fig:susc\] shows the temperature dependence of the renormalized chiral susceptibility for different temporal extensions ($N_t$=4,6,8 and 10). For small enough lattice spacings, thus close to the continuum limit, these curves should coincide. As it can be seen, the $N_t=4$ result has considerable lattice artefacts, however the two smallest lattice spacings ($N_t=8$ and $10$) are already consistent with each other, suggesting that they are also consistent with the continuum limit extrapolation (indicated by the orange band). The curves exhibit pronounced peaks. We define the transition temperatures by the position of these peaks. We fitted a second order expression to the peak to obtain its position. 
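A minimal sketch of this quadratic peak determination (with placeholder numbers standing in for the measured susceptibility values) could look as follows:

```python
# Locate the susceptibility peak by fitting a parabola to the points
# around the maximum (placeholder data, not measured values).
import numpy as np

T   = np.array([140.0, 145.0, 150.0, 155.0, 160.0])   # temperature in MeV
chi = np.array([18.0, 21.5, 22.8, 21.9, 18.5])        # m^2*Delta chi/T^4 (placeholder)

a, b, c = np.polyfit(T, chi, 2)   # chi(T) ~ a*T^2 + b*T + c around the peak
T_c = -b / (2.0 * a)              # vertex of the parabola = peak position
print(f"peak position: T_c ~ {T_c:.1f} MeV")
```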
The slight change due to the variation of the fitting range is taken as a systematic error. The left panel of Fig. \[fig:tc\] shows the transition temperatures in physical units for different lattice spacings obtained from the chiral susceptibility. As can be seen, $N_t$=6,8 and 10 are already in the scaling region, thus a safe continuum extrapolation can be carried out. The extrapolations based on the $N_t=6,8,10$ fit and on the $N_t=8,10$ fit are consistent with each other. For our final result we use the average of these two fit results (the difference between them is added to our systematic uncertainty). Our T=0 simulations resulted in a $2\%$ error on the overall scale. Our final result for the transition temperature based on the chiral susceptibility reads:
$$T_c(\chi_{\bar{\psi}\psi})=151(3)(3) {\rm ~MeV},$$
where the first error comes from the T$\neq$0, the second from the T=0 analyses. We use the second derivative of the chiral susceptibility ($\chi''$) at the peak position to estimate the width of the peak ($(\Delta T_c)^2 = - \chi(T_c)/\chi''(T_c)$). For the continuum extrapolated width we obtained:
$$\Delta T_c(\chi_{\bar{\psi}\psi})=28(5)(1) {\rm ~MeV.}$$
Note, that for a real phase transition (first or second order) the peak would have a vanishing width (in the thermodynamic limit), yielding a unique value for the transition temperature (which would then be called the critical temperature). Due to the crossover nature of the transition there is no such unique value; there is a range ($151 \pm 28$ MeV) where the transition phenomenon takes place. Quantities other than the chiral susceptibility could result in transition temperatures within this range. The MILC collaboration also reported a continuum result on the transition temperature based on the chiral susceptibility [@Bernard:2004je]. Their result is 169(12)(4) MeV. Note, that their lattice spacings were not as small as ours (they used $N_t$=4,6 and 8), their aspect ratio was quite small ($N_s$/$N_t$=2), they used non-physical quark masses (their smallest pion mass at T$\neq$0 was $\approx$220 MeV), the non-exact R-algorithm was applied for the simulations and they did not use the renormalized susceptibility, but looked for the peak in the bare $\chi_{\bar{\psi}\psi}/T^2$. Using $T^4$ as a normalization prescription (as we did) would decrease their $T_c$ value by approximately $9$ MeV. Note, that their continuum extrapolation resulted in a quite large error. Taking into account their uncertainties our result and their result agree at the 1-sigma level.

### Quark number susceptibility

![\[fig:tc\] Continuum limit of the transition temperatures obtained from the renormalized chiral susceptibility ($m^2\Delta \chi_{\bar{\psi}\psi}/T^4$), strange quark number susceptibility ($\chi_s/T^2$) and renormalized Polyakov-loop ($P_R$). ](tc3.eps){width="16cm" height="6cm"}

For heavy-ion experiments the quark number susceptibilities are quite useful, since they could be related to event-by-event fluctuations. Our second transition temperature is obtained from the strange quark number susceptibility, which is defined via [@Bernard:2004je]
$$\frac{\chi_{s}}{T^2}=\frac{1}{TV}\left.\frac{\partial^2 \log Z}{\partial \mu_{s} ^2 }\right|_{\mu_{s}=0},$$
where $\mu_s$ is the strange quark chemical potential (in lattice units). Quark number susceptibilities have the convenient property that they automatically have a proper continuum limit; there is no need for renormalization. The middle panel of Fig.
\[fig:susc\] shows the temperature dependence of the strange quark number susceptibility for different temporal extensions ($N_t$=4,6,8 and 10). For small enough lattice spacings, thus close to the continuum limit, these curves should coincide again (our continuum limit estimate is indicated by the orange band). As can be seen, the $N_t=4$ results are quite far off; however, the two smallest lattice spacings ($N_t=8$ and $10$) are already consistent with each other, suggesting that they are also consistent with the continuum limit extrapolation. This feature indicates that their distance from the continuum result is smaller than our statistical uncertainty. We defined the transition temperature as the peak in the temperature derivative of the strange quark number susceptibility, that is the inflection point of the susceptibility curve. The position was determined in two independent ways, which yielded the same result. In the first case we fitted a cubic polynomial to the susceptibility curve, while in the second case we determined the temperature derivative numerically from neighboring points and fitted a quadratic expression to the peak. The slight change due to the variation of the fitting range is taken as a systematic error. The middle panel of Fig. \[fig:tc\] shows the transition temperatures in physical units for different lattice spacings obtained from the strange quark number susceptibility. As can be seen, $N_t$=6,8 and 10 are already in the $a^2$ scaling region, thus a safe continuum extrapolation can be carried out. The extrapolations based on the $N_t=6,8,10$ fit and on the $N_t=8,10$ fit are consistent with each other. For our final result we use the average of these two fit results (the difference between them is added to our systematic uncertainty). The continuum extrapolated value for the transition temperature based on the strange quark number susceptibility is significantly higher than the one from the chiral susceptibility. The difference is 24(4) MeV. For the transition temperature in the continuum limit one gets:
$$T_c(\chi_s)=175(2)(4) {\rm ~MeV},$$
where the first (second) error is from the T$\neq$0 (T=0) temperature analysis (note, that due to the uncertainty of the overall scale, the difference is more precisely determined than the uncertainties of $T_c(\chi_{\bar{\psi}\psi})$ and $T_c(\chi_s)$ would suggest). [^17] Similarly to the chiral susceptibility analysis, the curvature at the peak can be used to define a width for the transition.
$$\Delta T_c(\chi_s)= 42(4)(1) {\rm ~MeV}.$$

### Polyakov loop

In pure gauge theory the order parameter of the confinement transition is the Polyakov-loop:
$$P=\frac{1}{N_s^3}\sum_{\bf x} {\rm tr} [U_4({\bf x},0) U_4({\bf x},1) \dots U_4({\bf x},N_t-1)].$$
$P$ acquires a non-vanishing expectation value in the deconfined phase, signaling the spontaneous breakdown of the Z(3) symmetry. When fermions are present in the system, the physical interpretation of the Polyakov-loop expectation value is more complicated (see e.g. [@Kratochvila:2006jx]). However, its absolute value can be related to the quark-antiquark free energy at infinite separation:
$$|\langle P \rangle |^2 = \exp(-\Delta F_{q\bar{q}}(r\to \infty)/T).$$
$\Delta F_{q\bar{q}}$ is the difference of the free energies of the quark-gluon plasma with and without the quark-antiquark pair. The absolute value of the Polyakov-loop vanishes in the continuum limit. It needs renormalization. This can be done by renormalizing the free energy of the quark-antiquark pair [@Kaczmarek:2002mc].
Note, that QCD at T$\neq$0 has only the ultraviolet divergencies which are already present at T=0. In order to remove these divergencies at a given lattice spacing we used a simple renormalization condition [@Fodor:2004ft]: $$V_R(r_0)=0,$$ where the potential is measured at T=0 from Wilson-loops. The above condition fixes the additive term in the potential at a given lattice spacing. This additive term can be used at the same lattice spacings for the potential obtained from Polyakov loops, or equivalently it can be built in into the definition of the renormalized Polyakov-loop. $$|\langle P_R \rangle | = |\langle P \rangle | \exp(V(r_0)/(2T)),$$ where $V(r_0)$ is the unrenormalized potential obtained from Wilson-loops. The lower panel of Fig. \[fig:susc\] shows the temperature dependence of the renormalized Polyakov-loops for different temporal extensions ($N_t$=4,6,8 and 10). The two smallest lattice spacings ($N_t=8$ and $10$) are approximately in 1-sigma agreement (our continuum limit estimate is indicated by the orange band). Similarly to the strange quark susceptibility case we defined the transition temperature as the peak in the temperature derivative of the Polyakov-loop, that is the inflection point of the Polyakov-loop curve. To locate this point and determine its uncertainties we used the same two methods, which were used to determine $T_c(\chi_s)$. The right panel of Fig. \[fig:tc\] shows the transition temperatures in physical units for different lattice spacings obtained from the Polyakov-loop. As it can be seen $N_t$=6,8 and 10 are already in the scaling region, thus a safe continuum extrapolation can be carried out. The extrapolation and the determination of the systematic error were done as for $T_c(\chi_s)$. The continuum extrapolated value for the transition temperature based on the renormalized Polyakov-loop is significantly higher than the one from the chiral susceptibility. The difference is 25(4) MeV. For the transition temperature in the continuum limit one gets: $$T_c(P)=176(3)(4) {\rm ~MeV},$$ where the first (second) error is from the T$\neq$0 (T=0) temperature analysis (again, due to the uncertainties of the overall scale, the difference is more precisely determined than the uncertainties of $T_c(\chi)$ and $T_c(P)$ suggest). Similarly to the chiral susceptibility analysis, the curvature at the peak can be used to define a width for the transition. $$\Delta T_c(P)=38(5)(1) {\rm ~MeV}.$$ ### Comparison with the recent result of the BBCR collaboration ![\[fig:note\] Resolving the discrepancy between the transition temperature of Ref. [@Cheng:2006qk] and that of the present work (see text). The major part of the difference can be traced back to the unreliable continuum extrapolation of [@Cheng:2006qk]. Left panel: In Ref. [@Cheng:2006qk] $r_0$ was used for scale setting (filled boxes), however using the kaon decay constant (empty boxes) leads to different transition temperatures even after performing the continuum extrapolation. Right panel: in our work the extrapolations based on the finer lattices are safe, using the two different scale setting methods one obtains consistent results. ](note5.eps){width="18cm"} Let us comment here on an independent study on $T_c$ based on large scale simulations of the Bielefeld-Brookhaven-Columbia-RIKEN group [@Cheng:2006qk]. The p4fat3 action was used, which is designed to give very good results in the (T$\to$$\infty$) Stefan-Boltzmann limit (their action is not optimized at T=0, which is needed e.g. to set the scale). 
The overall scale was set by $r_0$. The $T_c$ analysis based on the chiral susceptibility peak gave in the continuum limit $T_c(\chi)$=192(7)(4) MeV. (The second error, 4 MeV, estimates the uncertainty of the continuum limit extrapolation, which we do not use in the following, since we attempt to give a more reliable estimate on that.) This result is in obvious contradiction with our continuum result from the same observable, which is $T_c(\chi)$=151(3)(3) MeV. For the same quantity (position of chiral susceptibility peak with physical quark masses in the continuum limit) one should obtain the same numerical result independently of the lattice action. Since the chance probability that we are faced with a statistical fluctuation and both of the results are correct is small, we attempted to understand the origin of the discrepancy. We repeated some of their simulations and analyses. In these cases a complete agreement was found. In addition to their T=0 analyses we carried out an $f_K$ determination, too. This $f_K$ was used to extend their work, to use an LCP based on $f_K$ and to determine $T_c$ in physical units. We summarize the origin of the contradiction between our findings and theirs. The major part of the difference can be explained by the fact, that the lattice spacings of [@Cheng:2006qk] are too large ($\gtrsim$0.20 fm), thus they are not in the $a^2$ scaling regime, in which a justified continuum extrapolation could have been done. Setting the scale by different dimensionful quantities should lead to the same result. However at their lattice spacings the overall scales obtained by $r_0$ or by $f_K$ can differ by $\gtrsim$20%, and even the continuum extrapolated $r_0f_K$ value of these scales is about 4–5$\sigma$ away from the value given by the literature [@Aubin:2004fs; @Gray:2005ur]. This scale ambiguity appears in $T_c$, too (though other uncertainties of [@Cheng:2006qk], e.g. coming from the determination of the peak-position, somewhat hide its high statistical significance). We used their $T_c$ values fixed by their $r_0$ scale, and in addition we converted their peak position of the chiral susceptibility to $T_c$ setting the scale by $f_K$. (In order to ensure the possibility of a consistent continuum limit –independently of the actual physical value of $r_0$– we used for both $r_0$ and $f_K$ the results of [@Aubin:2004fs; @Gray:2005ur] as Ref. [@Cheng:2006qk] did it for $r_0$.) Setting the overall scale by $f_K$ predicts a much smaller $T_c$ at their lattice spacings than doing it by $r_0$ (see left panel of Fig. \[fig:note\]). Even after carrying out the continuum extrapolation the difference does not vanish ($\sim 30$ MeV), which means that the lattice spacings $\gtrsim$0.20 fm used by [@Cheng:2006qk] are not in the scaling regime. Thus, results obtained with their lattice spacings can not give a consistent continuum limit for $T_c$. In our case not only $N_t=4$ and $6$ temporal extensions were used, but more realistic $N_t$=8 and $10$ simulations were carried out, which led to smaller lattice spacings. These calculations are already in the $a^2$ scaling regime and a safe continuum extrapolation can be done. For our lattice spacing different scale setting methods give consistent results. This is shown on the right panel of Fig. \[fig:note\] (independently of the scale setting one obtains the same $T_c$) and also justified with high accuracy by Fig. \[fig:spect\], where $r_0f_K$ converges to the physical value on our finer lattices. 
As it can be seen on the plot, using only our $N_t=4$ and 6 results would also give an inconsistent continuum limit. This emphasizes our conclusion that lattice spacings $\gtrsim$ 0.20 fm can not be used for consistent continuum extrapolations. The second, minor part of the difference comes from the different definitions of the transition temperatures related to the chiral susceptibility. We use the renormalized chiral susceptibility with $T^4$ normalization to obtain the peak position, which yields a $\sim 9$ MeV smaller transition temperature than the bare susceptibility normalized by $T^2$ of Ref. [@Cheng:2006qk]. Equation of state ----------------- The equilibrium description as a function of the temperature is given by the equation of state (EoS). The complete determination of the EoS needs non-perturbative inputs, out of which the lattice simulation is the most systematic approach. The EoS has been determined in the continuum limit for the pure gauge theory [@Boyd:1996bx; @Okamoto:1999hi; @Namekawa:2001ih]. In this quenched case the simulations are particularly easy as there is no fermionic degree of freedom. The simulated system is equivalent to one where all the fermions are infinitely heavy and is thus far from the physical situation. (Note, however, that even in this relatively simple case there is still a few % difference between the different approaches.) The situation in the unquenched case (QCD with dynamical quarks) is a bit baroque. There are many results with different flavor content, fermion formulation and quark masses. None of them used the proper physical quark content and none of them attempted to carry out a continuum extrapolation. Moreover, staggered fermion studies were always using the inexact R-algorithm (which can result in uncontrolled systematics, see subsection \[ssec:ralg\]). Let us give here a brief review of the literature. - There are published results for two-flavor QCD using unimproved staggered [@Blum:1994zf; @Bernard:1996cs], and improved Wilson fermions [@AliKhan:2001ek]. - There are results available for the 2+1 flavour case, among which the study done by Karsch, Laermann and Peikert in the year 2000 [@Karsch:2000ps], using p4-improved staggered fermions, has often been used as the best lattice QCD result for the EoS. An additional drawback of this result is that the concept of the LCP was ignored in the calculation. If Karsch, Laermann and Peikert had cooled down e.g. two of their systems, one at $T$=3$T_c$ and one at $T$=0.7$T_c$, down to T=0, the first system would have had approximately 4 times larger quark masses (two times larger pion masses) than the second one; this unphysical choice is known to lead to systematics, which are comparable to the difference between the interacting and non-interacting plasma. - Recently the MILC collaboration studied the equation of state along LCP’s with two light quark masses (0.1 and 0.2 times $m_s$) at $N_t$=4 and 6 lattices using the asqtad improved staggered fermionic action [@Bernard:2006nj]. There are ongoing thermodynamics projects improving on previous results. The joint Bielefeld-Brookhaven-Columbia-MILC-RIKEN collaboration (hotQCD collaboration) has presented new results at the lattice conference [@Heide:lat07]. Still the results are only available for two different lattice spacings ($N_t=4$ and $6$) and for unphysical quark masses. There are two important further issues with the equation of state, which have been overlooked in previous calculations.
Firstly the heavier quarks can have significant contribution to the pressure even at few times the transition temperature (exploratory investigations [@Cheng:lat07] show a significant jump in the pressure around $\sim 2\cdot T_c$ due to the charm quark). Secondly there is no serious practical obstacle to extend the EoS calculations well beyond the usual $4-5 \cdot T_c$ [@Szabo:lat07]. This opens the possibility to find the missing connection between conventional perturbation theory and nonperturbative methods. In this section we present our first step towards the final solution of the EoS with proper physical quark content. Though we have the EoS on two different sets of lattice spacings ($N_t=4$ and $6$) and one might attempt to do a continuum extrapolation, it is fair to say that another set of lattice spacings is needed ($N_t$=8). One of the reasons is, that in the hadronic phase, where the integration for the pressure starts, the lattice spacing is larger than 0.3 fm. In this region the lattice artefacts can not be really controlled (and in this deeply hadronic case it does not really help that an action is very good at asymptotically high temperatures in the free non-interacting gas limit). ### Integral technique We shortly review the integral technique to obtain the pressure [@Engels:1990vr]. For large homogeneous systems the pressure is proportional to the logarithm of the partition function: $$\begin{aligned} \label{eq:pa} pa^4=\frac{Ta}{V/a^3}\log Z(T,V)=\frac{1}{N_tN_s^3}\log Z(N_s,N_t;\beta,m_q).\end{aligned}$$ (Index ‘q’ refers to the ${ud}$ and $s$ flavors.) The volume and temperature are connected to the spatial and temporal extensions of the lattice: $$\begin{aligned} V=(N_sa)^3, && T=\frac{1}{N_ta}.\end{aligned}$$ The divergent zero-point energy has to be removed by subtracting the zero temperature ($N_t\to \infty$) part of Eq. (\[eq:pa\]). In practice the zero temperature subtraction is performed by using lattices with finite, but large $N_{t}$ (called $N_{t0}$, see Table \[ta:points\]). So the normalized pressure becomes: $$\begin{aligned} \frac{p}{T^4}=N_t^4\left[ \frac{1}{N_tN_s^3}\log Z(N_s,N_t;\beta,m_q) - \frac{1}{N_{t0}N_{s0}^3}\log Z(N_{s0},N_{t0};\beta,m_q) \right].\end{aligned}$$ With usual Monte-Carlo techniques one cannot measure $\log Z$ directly, but only its derivatives with respect to the bare parameters of the lattice action. Having determined the partial derivatives one integrates in the multi-dimensional parameter space: $$\label{integral} \frac{p}{T^4}= N_t^4\int^{(\beta,m_q)}_{(\beta_0,m_{q0})} d (\beta,m_q)\left[ \frac{1}{N_tN_s^3} \left(\begin{array}{c} {\partial \log Z}/{\partial \beta} \\ {\partial \log Z}/{\partial m_{q}} \end{array} \right )- \frac{1}{N_{t0}N_{s0}^3} \left(\begin{array}{c} {\partial \log Z_0}/{\partial \beta} \\ {\partial \log Z_0}/{\partial m_q} \end{array} \right ) \right],$$ where $Z/Z_0$ are shorthand notations for $Z(N_s,N_t)/Z(N_{s0},N_{t0})$. Since the integrand is a gradient, the result is by definition independent of the integration path. We need the pressure along the LCP, thus it is convenient to measure the derivatives of $\log Z$ along the LCP and perform the integration over this line in the $\beta$, $m_{ud}$ and $m_s$ parameter space. The lower limits of the integrations (indicated by $\beta_0$ and $m_{q0}$) were set sufficiently below the transition point. By this choice the pressure gets independent of the starting point (in other words it vanishes at small temperatures). 
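In practice the derivatives entering Eq. (\[integral\]) are only measured at a finite number of simulation points along the LCP, so the integral has to be approximated by a numerical quadrature. The following Haskell fragment is a minimal sketch of such a cumulative integration (our illustration only: the one-dimensional parametrization by $\beta$ and the trapezoidal rule are simplifying assumptions, and this is not the analysis code used in this work):

    -- Cumulative trapezoidal integration of the T=0-subtracted integrand
    -- along a one-dimensional parametrization of the LCP (illustration only).
    type Sample = (Double, Double)  -- (beta, subtracted derivative of log Z incl. volume factors)

    pressureAlongLCP :: Double             -- N_t^4 prefactor
                     -> [Sample]           -- samples ordered by increasing beta
                     -> [(Double, Double)] -- (beta, p/T^4), vanishing at beta0
    pressureAlongLCP nt4 samples =
      zip (map fst samples) (map (* nt4) (scanl (+) 0 steps))
      where
        steps = zipWith trapez samples (tail samples)
        trapez (b1, f1) (b2, f2) = 0.5 * (f1 + f2) * (b2 - b1)

    main :: IO ()
    main = mapM_ print (pressureAlongLCP (4 ^ 4) [(3.0, 0), (3.3, 1.0e-4), (3.6, 3.0e-4)])

Any smooth interpolation of the measured points could replace the trapezoidal rule; the essential point, as stated above, is that the integration starts sufficiently below the transition so that the pressure effectively vanishes at the lower limit.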
In the case of $2+1$ flavor staggered QCD the derivatives of $\log Z$ with respect to $\beta$ and $m_q$ are proportional to the expectation value of the gauge action ($\langle S_g \rangle$, cf. Eq. (\[action\])) and to the chiral condensates ($\langle \bar{\psi}\psi_q \rangle $), respectively. Eq. (\[integral\]) can be rewritten appropriately and the pressure is given by (in this formula we write out the flavors explicitly): $$\label{pmu0} \frac{p}{T^4}= N_t^4\int^{(\beta,m_{ud},m_s)}_{(\beta_0,m_{ud0},m_{s0})} d (\beta,m_{ud},m_s)\left[ \frac{1}{N_tN_s^3} \left(\begin{array}{c} \langle{\rm -S_g/\beta}\rangle \\ \langle\bar{\psi}\psi_{ud}\rangle \\ \langle\bar{\psi}\psi_{s}\rangle \end{array} \right) - \frac{1}{N_{t0}N_{s0}^3} \left(\begin{array}{c} \langle{\rm -S_g/\beta}\rangle_0 \\ \langle\bar{\psi}\psi_{ud}\rangle_0 \\ \langle\bar{\psi}\psi_{s}\rangle_0 \end{array} \right)\right],$$ where $\langle \dots \rangle _0$ means averaging on a $N_{s0}^3\cdot N_{t0}$ lattice. The integral method was originally introduced for the pure gauge case, for which the integral is one dimensional; it is performed along the $\beta$ axis. Many previous studies for staggered dynamical QCD (e.g. [@Bernard:1996cs; @Engels:1996ag; @Karsch:2000ps]) used a one-dimensional parameter space instead of performing it along the LCP. Note that for full QCD the integration should be performed along an LCP path in a multi-dimensional parameter space.

  $N_t$      p/$T^4$   $c_s^2$   $\chi/T^2$
  ---------- --------- --------- ------------
  4          9.12      1/3       2.24
  6          7.86      1/3       1.86
  $\infty$   5.21      1/3       1

  : \[ta:c\_values\] Summary of the results for the 2+1 flavor pressure, speed of sound and 1 flavor quark number susceptibility in the non-interacting Stefan-Boltzmann limit. $\epsilon/T^4$ is 3 times, whereas $s/T^3$ is 4 times the normalized value of the pressure ($p/T^4$) in the Stefan-Boltzmann limit. The first two lines give the results for $N_t$=4,6 and the third line contains the results in the continuum (in the thermodynamic limit).

Using appropriate thermodynamical relations one can obtain any thermal property of the system. For example the energy density ($\epsilon$), entropy density ($s$) and speed of sound ($c_s^2$) can be derived as $$\begin{aligned} \label{eq:escs} \epsilon = T(\partial p/\partial T)-p, && s = (\epsilon + p)/T, && c_s^2 =\frac{dp}{d\epsilon}.\end{aligned}$$ To be able to take these derivatives one has to know the temperature along the LCP. Since the temperature is connected to the lattice spacing as $ T=(N_t a)^{-1 }, $ we need a reliable estimate of $a$. The lattice spacings at different points of the LCP are determined by first matching the static potentials for different $\beta$ values at an intermediate distance for $m_{ud}=\{3,5\}m_{ud}(phys)$ quark masses, then extrapolating the results to the physical quark mass. Relating these distances to physical observables (determining the overall scale in physical units) will be the topic of a subsequent publication. We show the results as a function of $T/T_c$. The transition temperature ($T_c$) is defined by the inflection point of the isospin number susceptibility ($\chi_{I}$, see later). To get the energy density the literature usually uses another quantity, namely $\epsilon$-3$p$, which can also be directly measured on the lattice. In our analysis it turned out to be more appropriate to calculate first the pressure directly from the raw lattice data (Eq. (\[pmu0\])) and then determine the energy density and other quantities from the pressure (Eq. (\[eq:escs\])).
The reasons for that can be summarized as follows. As we discussed we perform T$\neq$0 simulations with physical quark masses, whereas the subtraction terms from T=0 simulations are extrapolated from larger quark masses. This sort of extrapolation is adequate for the chiral condensates, for which chiral perturbation techniques work well. Thus, one can choose an integration path for the T=0 part of the pressure, which moves along a LCP at some larger $m_{ud}$ (e.g. 9 times $m_{ud}(phys)$) and then at fixed $\beta$ goes down to the physical quark mass. No comparable analogous technique is available for the combination $\epsilon$-3$p$. We have also calculated the pressure for the larger quark masses. Plotting it as a function of the temperature the differences between them are significant. As a function of $T/T_c$ these differences are smaller, but still remain statistically significant in the $1.2...2.0 T_c$ region. Note that statements on the mass dependence are only qualitative since such an analysis requires the careful matching of the scales at different quark masses. ### Physics results $\beta$ $T/T_c$ $p/T^4$ (raw) $p/T^4$ (scaled) $\beta$ $T/T_c$ $p/T^4$ (raw) $p/T^4$ (scaled) --------- --------- --------------- ------------------ --------- --------- --------------- ------------------ 3.000 $0.90$ $0.12(0.02)$ $0.07(0.01)$ 3.450 $0.80$ $0.07(0.11)$ $0.05(0.08)$ 3.150 $0.95$ $0.32(0.07)$ $0.19(0.04)$ 3.500 $0.87$ $0.23(0.11)$ $0.15(0.08)$ 3.250 $0.98$ $0.59(0.10)$ $0.34(0.06)$ 3.550 $0.96$ $0.59(0.12)$ $0.39(0.08)$ 3.275 $0.99$ $0.73(0.10)$ $0.42(0.06)$ 3.575 $1.02$ $0.91(0.12)$ $0.60(0.08)$ 3.300 $1.01$ $0.91(0.10)$ $0.52(0.06)$ 3.600 $1.07$ $1.29(0.13)$ $0.86(0.08)$ 3.325 $1.04$ $1.13(0.10)$ $0.65(0.06)$ 3.625 $1.14$ $1.69(0.13)$ $1.12(0.09)$ 3.350 $1.06$ $1.39(0.09)$ $0.79(0.05)$ 3.650 $1.20$ $2.10(0.14)$ $1.40(0.09)$ 3.400 $1.14$ $2.04(0.10)$ $1.16(0.06)$ 3.675 $1.28$ $2.51(0.14)$ $1.66(0.10)$ 3.450 $1.23$ $2.79(0.10)$ $1.59(0.06)$ 3.700 $1.35$ $2.88(0.15)$ $1.91(0.10)$ 3.500 $1.34$ $3.56(0.11)$ $2.04(0.07)$ 3.750 $1.52$ $3.50(0.15)$ $2.32(0.10)$ 3.550 $1.49$ $4.32(0.12)$ $2.47(0.07)$ 3.800 $1.70$ $3.99(0.16)$ $2.65(0.11)$ 3.600 $1.66$ $4.96(0.12)$ $2.83(0.07)$ 3.850 $1.90$ $4.36(0.16)$ $2.89(0.11)$ 3.650 $1.86$ $5.46(0.12)$ $3.12(0.07)$ 3.930 $2.24$ $4.82(0.17)$ $3.19(0.11)$ 3.700 $2.09$ $5.84(0.12)$ $3.34(0.07)$ 4.000 $2.55$ $5.14(0.17)$ $3.41(0.11)$ 3.850 $2.93$ $6.57(0.15)$ $3.75(0.09)$ 4.000 $3.93$ $6.97(0.16)$ $3.98(0.09)$ : \[ta:p\_values\] Numerical values of the pressure for all of our simulation points. The left column shows the $N_t$=4, whereas the right column shows the $N_t$=6 data. Both the raw values and the ones scaled by $c_{cont}/c_{N_t}$ are given. Let us present the results. In order to show how the different quantities scale with the lattice spacing we show always $N_t$=4,6 results on the same plot. In addition, in order to make the relationship with the continuum limit more transparent we multiply the raw lattice results at finite temporal extensions ($N_t$=4,6) with $c_{cont}/c_{N_t}$, where the c values are the results in the free non-interacting plasma (Stefan-Boltzmann limit). These c values are summarized in Table \[ta:c\_values\] for the pressure, speed of sound, and for the quark number susceptibility at $N_t$=4,6 and in the continuum limit. By this multiplication the lattice thermodynamic quantities should approach the continuum Stefan-Boltzmann values for extreme large temperatures. Table \[ta:p\_values\] contains our most important numerical results. 
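As a concrete check of how the raw and scaled columns of Table \[ta:p\_values\] are related, the Stefan-Boltzmann coefficients of Table \[ta:c\_values\] give for the $N_t=6$ point at $\beta=3.700$, $T/T_c=1.35$: $$\frac{c_{cont}}{c_{N_t=6}}=\frac{5.21}{7.86}\approx 0.663, \qquad \left.\frac{p}{T^4}\right|_{\rm scaled}\approx 0.663\times 2.88(0.15)\approx 1.91(0.10),$$ in agreement with the corresponding entry of the table; the same factor applies to all $N_t=6$ points, while the $N_t=4$ data are multiplied by $5.21/9.12\approx 0.571$.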
We tabulated the raw and normalized pressure values for both lattice spacings and for all of our simulation points. This data set and Eq. (\[eq:escs\]) were used to obtain the following figures. Fig. \[fig:eos\_pe\] shows the equation of state on $N_t$=4,6 lattices. The pressure (left panel) and $\epsilon$ (right panel) are presented as a function of the temperature. The Stefan-Boltzmann limit is also shown. Fig. \[fig:eos\_sc\] shows the entropy density (left panel) and the speed of sound (right panel), which can be obtained by using the pressure and energy density data (cf. $sT$=$\epsilon$+$p$ and $c_s^2$=$dp$/$d\epsilon$) of the previous Fig. \[fig:eos\_pe\]. Clearly, the uncertainties of the pressure and those of the energy density accumulate in the speed of sound, therefore it is less precisely determined. Light and strange quark number susceptibilities ($\chi_{ud}$ and $\chi_s$) are defined via [@Bernard:2004je] $$\begin{aligned} \frac{\chi_{q}}{T^2}=\frac{N_t}{N_s^3}\left.\frac{\partial^2 \log Z}{\partial \mu_{q} ^2 }\right|_{\mu_{q}=0}, \end{aligned}$$ where $\mu_{ud}$ and $\mu_s$ are the light and strange quark chemical potentials (in lattice units). With the help of the quark number operators $$Q_{q}=\frac{1}{4}\frac{\partial }{\partial \mu_q} \log \det ({{D\hspace{-7pt}{/}}}+m_{q}),$$ the susceptibilities can be written as $$\frac{\chi_{q}}{T^2}=\frac{N_t}{N_s^3}\left(\langle Q_q^2 \rangle_{\mu_q=0} + \left\langle \frac{\partial Q_q}{\partial \mu_q}\right\rangle_{\mu_q=0}\right).$$ The first term is usually referred to as the disconnected, the second as the connected part. The connected part of the light quark number susceptibility is 2 times the susceptibility of the isospin number ($\chi_I$). It is presented on the left panel of Fig. \[fig:eos\_sc\]. For our statistics and evaluation method the disconnected parts are all consistent with zero and their values are far smaller than those of the connected parts. The right panel of Fig. \[fig:eos\_sc\] contains the connected part of the strange number susceptibility. Summary ======= The focus of this work was implementing and applying dynamical fermions in lattice QCD. The first part was exploring the unknown territory of dynamical algorithms for the overlap fermion. The conventional dynamical algorithm (Hybrid Monte Carlo) fails to work for this type of fermion. The failure was identified: it is due to the change in the topological charge. We have proposed a possible workaround for this problem, and shown that with this modification the algorithm works reasonably well. There were further modifications necessary to increase the performance to an acceptable level. At the end we have determined the topological susceptibility as a function of the quark mass. The result was the first in lattice QCD which has shown the suppression of the susceptibility for small quark masses at finite lattice spacing. The simulations are, however, considerably more expensive than those with other fermion formulations. Therefore it still seems to be more beneficial to investigate the current algorithms or to come up with new ones than to start larger scale physics projects with dynamical overlap fermions. The second part of the work was the determination of bulk properties of the finite temperature QCD matter using dynamical improved staggered fermions. This was a large scale project and attempted to give final answers based on a first principles approach.
In order to achieve our goal we have done the simulations for physical values of the quark masses and carried out a continuum extrapolation wherever possible. Firstly we have determined the nature of the transition based on a finite size scaling analysis of a susceptibility type quantity. The transition turned out to be a smooth crossover, that is no sign of singularity has been found in the thermodynamical limit. Secondly we have determined some typical temperatures in physical units where this crossover takes place. Since there is no singularity, there is no unique temperature which can be identified as a critical temperature. We have calculated transition temperatures from various quantities: peak of the chiral susceptibility, inflection point of the quark number susceptibility and inflection point of the Polyakov loop. The second two gave significantly higher temperature values, than the first one. Thirdly we have presented result on the equation of state towards the continuum limit. From the currently available lattice spacings a reliable continuum extrapolation cannot be carried out, this is left for the future. Acknowledgments {#acknowledgments .unnumbered} --------------- First of all I would like to thank the help of my advisor Zoltán Fodor. His very good sense in choosing the right topics was inevitable to accomplish this thesis. I am also very grateful to Sándor Katz. They both have a great part in my results with lots of ideas, hints. In the overlap project we had a nice collaboration with Győző Egri, I thank him. It was a pleasure to work together at various stages of the staggered project with Yasumichi Aoki and Gergő Endrődi. I thank the discussions with Christian Hoelbling, Dániel Nógrádi, Stefan Krieg, Tamás Kovács, Anna Tóth and Bálint Tóth. The numerical computations for this thesis were carried out on the following supercomputers: on BlueGene/L at FZ Jülich, on PC clusters at Bergische Universität, Wuppertal and on PC clusters at Eötvös University, Budapest. [^1]: $N_c$ is the number of colors, in QCD $N_c=3$. [^2]: Introductory materials covering the extended literature are [@Montvay:1994cy; @Gupta:1997nd; @Rothe:2005nw]. Annual review of the field can be found in the lattice conference proceedings. To avoid the proliferation of citations in the introduction we refer to these, and cite articles only in special cases. [^3]: For simplicity we have same quark masses for the different flavors in this introductory section. [^4]: Nonlocality is driven by the smallness of the quark mass. [^5]: For a recent review on the staggered controversy see [@Sharpe:2006re]. [^6]: Other than staggered quark discretizations are often used in the correlation functions, these are the so called mixed approaches with staggered sea quarks. [^7]: For dynamical simulation of an approximate fixed-point Dirac operator see [@Hasenfratz:2005tt]. [^8]: In case of unit length trajectories. [^9]: For studies with inverters other than conjugate gradient see [@Arnold:2003sx]. [^10]: In order to have only an $O(\epsilon^2)$ difference one has to use an improved modified reflection step as described in the previous section. [^11]: We do not deal with the possibility of degenerate zero eigenvalues which appears only on a zero measure subset of the zero eigenvalue surface. [^12]: In the previous reflection recipe, $\det X = -1$ was also true, so all together one ended up with $\det J =1$. [^13]: It is normalized so, that $\varrho_A(r_B)=\delta_{AB}$ is satisfied at the identity. 
Due to right invariance, the normalization will hold on the whole group. [^14]: For simplicity the notation of the integral curve is $g(t)$ instead of $\{g(t),p(t)\}$. [^15]: In the $\beta=3.0..3.4$ range the $T=0$ simulations were carried out at $m_{ud}(phys)$. [^16]: Note, that outside the scaling region even a seemingly small lattice spacing dependence can lead to an incorrect result. An infamous example is the Naik action [@Naik:1986bn] in the Stefan-Boltzmann limit: $N_t$=4 and 6 are consistent with each other with a few % accuracy, but since they are not in the scaling region they are 20% off the continuum value. [^17]: A continuum extrapolation using only the two coarsest lattices ($N_t=4$ and $6$) yielded $T_c \sim 190$ MeV [@Katz:2005br], where an approximate LCP (LCP1) was used, if the lattice spacing is set by $r_0$.
--- abstract: 'We aim for composing algorithmic music in an interactive way with multiple participants. To this end we have developed an interpreter for a sub-language of the non-strict functional programming language Haskell that allows the modification of a program during its execution. Our system can be used both for musical live-coding and for demonstration and education of functional programming.' --- Introduction ============ It is our goal to compose music by algorithms. We do not want to represent music as a sequence of somehow unrelated notes as it is done on a note sheet. Instead we want to describe musical structure. For example, we do not want to explicitly list the notes of an accompaniment but instead we want to express the accompaniment by a general pattern and a sequence of harmonies. A composer who wants to draw a sequence of arbitrary notes might serve as another example. The composer does not want to generate the random melody note by note but instead he wants to express the idea of randomness. Following such a general phrase like “randomness” the interpreter would be free to play a different but still random sequence of notes. The programmer shall be free to choose the degree of structuring. For instance, it should be possible to compose a melody manually, accompany it using a note pattern following a sequence of user defined harmonies and complete it with a fully automatic rhythm. With a lot of abstraction from the actual music it becomes more difficult to predict the effect of the programming on the musical result. If you are composing music that is not strictly structured by bars and voices then it becomes more difficult to listen to a certain time interval or a selection of voices for testing purposes. Also, the classical edit-compile-run loop hinders creative experiments. Even if the music program can be compiled and restarted quickly, you must terminate the running program and thus the playing music and you must start the music from the beginning. Especially if you play together with other musicians this is unacceptable. In our approach to music programming we use a purely functional [^1] programming language [@hughes1984fpmatter], that is almost a subset of Haskell 98 [@peyton-jones1998haskell]. Our contributions to live music coding are concepts and a running system offering the following: - algorithmic music composition where the program can be altered while the music is playing (), - simultaneous contributions of multiple programmers to one song led by a conductor (). Functional live programming =========================== Live coding ----------- We want to generate music as a list of MIDI events [@mma1996midi], that is events like “key pressed”, “key released”, “switched instrument”, “knob turned” and wait instructions. A tone with pitch C-5, a duration of 100 milliseconds and an average force shall be written as: main = [ Event (On c5 normalVelocity) , Wait 100 , Event (Off c5 normalVelocity) ] ; c5 = 60 ; normalVelocity = 64 ; . Using the list concatenation “`++`” we can already express a simple melody. main = note qn c ++ note qn d ++ note qn e ++ note qn f ++ note hn g ++ note hn g ; note duration pitch = [ Event (On pitch normalVelocity) , Wait duration , Event (Off pitch normalVelocity) ] ; qn = 200 ; -- quarter note hn = 2*qn ; -- half note c = 60 ; d = 62 ; e = 64 ; f = 65 ; g = 67 ; normalVelocity = 64 ; We can repeat this melody infinitely by starting it again when we reach the end of the melody. 
main = note qn c ++ note qn d ++ note qn e ++ note qn f ++ note hn g ++ note hn g ++ main ; Please note that this is not a plain recursion, but a so-called *co-recursion*. If we define the list `main` this way it is infinitely long but if we expand function applications only when necessary then we can evaluate it element by element. Thanks to this evaluation strategy (in a sense without *sharing*) we can describe music as a pure list of events. The music program does not need, and currently cannot, call any statements for interaction with the real world. Only the interpreter sends MIDI messages to other devices. In a traditional interactive interpreter like the `GHCi`[^2] we would certainly play the music this way: Prelude> playMidi main . If we wanted to modify the melody we would have to terminate it and restart the modified melody. In contrast to this we want to alter the melody while the original melody keeps playing and we want to smoothly lead over from the old melody to the new one. In other words: The current state of the interpreter consists of the program and the state of the interpretation. We want to switch the program, but we want to keep the state of interpretation. This means that the interpreter state must be stored in a way such that it stays sensible even after a program switch. We solve this problem as follows: The interpreter treats the program as a set of term rewriting rules, and executing a program means to apply rewrite rules repeatedly until the start term `main` is expanded far enough that the root of the operator tree is a terminal symbol (here a *constructor*). For the musical application the interpreter additionally tests whether the root operator is a list constructor, and if it is the constructor for the non-empty list then it completely expands the leading element and checks whether it is a MIDI event. The partially expanded term forms the state of the interpreter. For instance, while the next to last note of the loop from above is playing, that is, after the interpreter has sent its [`NoteOn`]{} event, the current interpreter state would look like: Wait 400 : (Event (Off g normalVelocity) : (note hn g ++ main)) . The interpreter will rewrite the current expression as little as possible, such that the next MIDI event can be determined. On the one hand this allows us to process a formally infinite list like `main`, and on the other hand you can still observe the structure of the remaining song. E.g. the final call to `main` is still part of the current term. If we now change the definition of `main` then the modified definition will be used when `main` is expanded next time. This way we can alter the melody within the loop, for instance to: main = note qn c ++ note qn d ++ note qn e ++ note qn f ++ note qn g ++ note qn e ++ note hn g ++ main ; . But we can also modify it to main = note qn c ++ note qn d ++ note qn e ++ note qn f ++ note hn g ++ note hn g ++ loopA ; in order to continue the melody with another one called `loopA` after another repetition of the `main` loop. We want to summarise that the meaning of an expression can change during the execution of a program. That is, we give up a fundamental feature of functional programming, namely *referential transparency*.
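The listings use the constructors `Event`, `Wait`, `On` and `Off` without showing their declarations. To try the examples in a standard Haskell system, one possible set of declarations is the following sketch (an assumption on our part; the concrete types in the Live-Sequencer may differ, e.g. in the numeric types used for times and pitches):

    -- Minimal declarations that make the example listings type check in GHC;
    -- they mirror the constructors used above, not the real Live-Sequencer types.
    data Message = On Int Int | Off Int Int deriving Show
    data Event   = Wait Int | Event Message deriving Show

Together with the definitions of `note`, `qn`, `hn` and the pitches given above, the example melodies are then ordinary, possibly infinite, Haskell lists of type `[Event]`.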
We could implement the original loop using the standard list function `cycle` main = cycle ( note qn c ++ note qn d ++ note qn e ++ note qn f ++ note hn g ++ note hn g ) ; and if `cycle` is defined by cycle xs = xs ++ cycle xs ; then this would eventually be expanded to ( note qn c ++ note qn d ++ note qn e ++ note qn f ++ note hn g ++ note hn g ) ++ cycle ( note qn c ++ note qn d ++ note qn e ++ note qn f ++ note hn g ++ note hn g ) ; . Using this definition we could leave the loop only by changing the definition of `cycle`. But such a change would affect *all* calls of `cycle` in the current term. Further, in a rigorous module system without import cycles it would be impossible to access functions of the main module from within the standard module `List` that defines the `cycle` function. But this would be necessary in order to not only leave the `cycle` loop but to continue the program in the main module. From this example we learn that a manually programmed loop in the form of `main = ... ++ main` has advantages over a loop function from the standard library, because the manual loop provides a position where we can insert new code later. In addition to the serial composition of musical events we need the parallel composition for the simultaneous playback of melodies, rhythms and so on. At the level of MIDI commands this means that the commands of two lists must be interleaved in the proper way. For details we refer the reader to the implementation of “`=:=`”. ### User interface The graphical user interface is displayed in the screenshot below. In the upper left part the user enters the program code. Using a keyboard short-cut he can check the program code and transfer it to the buffer of the interpreter. The executed program is shown in the upper right part. In this part the interpreter highlights the function calls that had to be expanded in order to rewrite the previous interpreter term into the current one. This allows the user to trace the melody visually. The current term of the interpreter is presented in the lower part of the window. The texts in the figure are essentially the ones from our introductory example. ![*The running interpreter*](screenshot){width="\hsize"} Our system can be run in three modes: “real time”, “slow motion” and “single step”. The real-time mode plays the music as required by the note durations. In contrast to that the other two modes ignore the wait instructions and insert a pause after every element of the MIDI event list. These two modes are intended for studies and debugging. You may also use them in education if you want to explain how an interpreter of a non-strict functional language works in principle. We implemented the interpreter in Haskell using the Glasgow Haskell Compiler GHC [@ghc2012], and we employ WxWidgets [@smart2011wxwidgets] for the graphical user interface. Our interpreted language supports pattern matching, a set of predefined infix operators, higher order functions, and partial function application. For the sake of a simple implementation we deviate from Haskell 98 in various respects: Our language is dynamically and weakly typed: it knows “integer”, “text” and “constructor” values. The parser does not pay attention to layout, thus you have to terminate every declaration with a semicolon. Several other syntactic features of Haskell 98 are neglected, including list comprehensions, operator sections, do notation, “let” and “case” notation, and custom infix operators. I/O operations are not supported either.
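To give an idea of what the interleaving behind “`=:=`” involves, here is a minimal sketch of a time merge of two event lists. It is our own illustration under the simplifying assumption that MIDI events consume no time; the actual implementation of “`=:=`” may differ:

    -- Illustrative time merge of two event streams; not the real "=:=".
    data Message = On Int Int | Off Int Int deriving Show
    data Event   = Wait Int | Event Message deriving Show

    merge :: [Event] -> [Event] -> [Event]
    merge (Event e : xs) ys = Event e : merge xs ys       -- events consume no time
    merge xs (Event e : ys) = Event e : merge xs ys
    merge (Wait a : xs) (Wait b : ys)
      | a <= b    = Wait a : merge xs (Wait (b - a) : ys) -- split the longer wait
      | otherwise = Wait b : merge (Wait (a - b) : xs) ys
    merge xs []   = xs
    merge [] ys   = ys

A real implementation additionally has to decide in which order simultaneous events from the two lists are emitted and may want to drop the zero-length waits that this naive version can leave behind when both wait times are equal.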
Distributed coding ------------------ Our system should allow the audience to contribute to a performance or the students to contribute to a lecture by editing program code. The typical setup is that the speaker projects the graphical interface of the sequencer at the wall, the audience can listen to music through loud speakers, and the participants can access the computer of the performer via their browsers and wireless network. Our functional language provides a simple module system. This helps the performer to divide a song into sections or tracks and to put every part in a dedicated module. Then he can assign a module to each participant. This is still not a function of the program, but must be negotiated through other means. For instance the conductor might point to people in the audience. Additionally the performer can insert a marker comment that starts the range of text that participants can edit. The leading non-editable region will usually contain the module name, the list of exported identifiers, the list of import statements, and basic definitions. This way the performer can enforce an interface for every module. A participant can load a module into his web browser. The participant sees an HTML page showing the non-editable header part as plain text and the editable region as an editable text field. (cf. ) After editing the lower part of module he can submit the modified content to the server. The server replaces the text below the marker comment with the submitted text. Subsequently the new module content is checked syntactically and on success it is loaded into the interpreter buffer. In case of syntax errors in the new code the submitted code remains in the editor field. The performer can inspect it there and can make suggestions for fixes. ![*Accessing a module via HTTP*](browser){width="\hsize"} Generally it will not be possible to start composition with many people from scratch. However, the performer can prepare a session by defining a set of modules and filling them with basic definitions. For instance he can provide a function that converts a list of zeros and ones into a rhythm, or a list of integers into a chord pattern or a bass line. By providing a meter and a sequence of harmonies he can assert that the parts contributed by the participants fit together loosely. In this application the performer no longer plays the role of the composer but the role of a conductor. Timing ------ For a good listening experience we need precise timing for sending MIDI messages. A naive approach would be to send the messages as we compute them. I.e. in every step we would determine the next element in the list of MIDI events. If it is a wait instruction then we would wait for the desired duration and if it is a MIDI event then we would send it immediately. However this leads to audible inaccuracies due to processor load caused by term rewriting, garbage collection and GUI updates. We are using the ALSA sequencer interface for sending MIDI messages. It allows us to send a MIDI message with a precise but future time stamp. However we still want that the music immediately starts if we start the interpreter and that the music immediately stops if we stop it and that we can also continue a paused song immediately. We achieve all these constraints the following way: We define a latency, say $d$ milliseconds. The interpreter will always compute as many events in advance until the computed time stamps are $d$ milliseconds ahead of current time. 
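Expressed as pure code, this look-ahead rule might look like the following sketch (our illustration only; the name `lookAhead`, the millisecond `Int` times and the event-duration function are assumptions, and the actual ALSA calls are deliberately omitted):

    -- Pure sketch of the look-ahead rule: pair events with absolute time stamps
    -- until the stamps are more than the latency d ahead of the current time.
    lookAhead :: Int              -- latency d in milliseconds
              -> Int              -- current time in milliseconds
              -> (e -> Int)       -- duration of an event (0 for MIDI messages)
              -> [e]              -- lazily computed event stream
              -> [(Int, e)]       -- time-stamped events to be queued
    lookAhead d now dur = go now
      where
        go t (e : es)
          | t > now + d = []
          | otherwise   = (t, e) : go (t + dur e) es
        go _ [] = []

The rest of the stream stays unevaluated, so the computation simply resumes from this point once the queued time stamps are no longer far enough ahead.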
This means that the interpreter will compute a lot without waiting when it is started. It will not immediately send a MIDI message because it needs to compute it first. This introduces a delay, sometimes audible, but we cannot do it faster. When the user pauses the interpreter, we halt the timer of our outgoing ALSA queue. This means that the delivery of messages is immediately stopped, but there are still messages for the next $d$ milliseconds in the queue. If the interpreter is continued these messages will be send at their scheduled time stamps. If the interpreter is stopped we simply increase the time of the ALSA queue by $d$ milliseconds in order to force ALSA to send all remaining messages. Related work ============ Algorithmic composition has a long tradition. The musical dice games by Mozart and the Illiac Suite [@hiller1959illiacsuite] might serve as two popular examples here. Further on, the Haskore project (now Euterpea) [@hudak1996haskore] provides a method for music programming in Haskell. It also lets you control synthesisers via MIDI and it supports the generation of audio files via CSound, SuperCollider or pure Haskell audio signal synthesis. Like our approach, Haskore relies upon lazy evaluation which allows for an elegant definition of formally big or even infinite songs while its interpretation actually consumes only a small amount of memory. However, the creative composition process is made more difficult by the fact that you can listen to a change to a song only after terminating the old song and starting the new one. In a sense, our system is an interactive variation of Haskore. So-called functional reactive programming is a very popular approach for programming of animations, robot controls, graphical user interfaces and MIDI processing in Haskell [@elliott1997fran]. Functional reactive programming mimics working with a time ordered infinite list of events. But working with actual lazy lists leads to fundamental problems in real-time processing, e.g. if a stream of events is divided first but merged again later. This is a problem that is solved by functional reactive programming libraries. The advantage of functional reactive MIDI processing compared to our approach is that it allows the processing of event input in realtime. The disadvantage is that usually you cannot alter a functional reactive program during execution. Erlang is another functional (but not purely functional) programming language that accepts changes to a program while the program is running [@armstrong1997erlang]. Erlang applies eager evaluation. That is, in Erlang you could not describe a sequence of MIDI commands by a lazy list of constructors. Instead you would need iterators or similar tools. You can insert new program code into a running Erlang programming in two ways: Either the running program runs functions (e.g. lambda expressions) that it receives via messages or you replace an Erlang module by a new one. If you upload a new Erlang module then the old version is kept in the interpreter in order to continue the running program. Only calls from outside the module jump into the code of the new module, but by qualification you can also simulate an external call from within the replaced module. That is, like in our approach, you need dedicated points (external calls or calls of functions received via messages) where you can later insert new code. Summarised, our approach for changing running programs is very similar to “Hot Code loading” in Erlang. 
However, the non-strict evaluation of our interpreter implies that considerable parts of the program are contained in the current term. These are not affected immediately by a change to the program. This way we do not need to hold two versions of a module in memory for a smooth transition from old to new program code. In a sense, Erlang’s external calls play the role of our top-level functions. Musical live coding, i.e. the programming of a music generating program while the music is playing, was in the beginning restricted to special purpose languages like SuperCollider/SCLang [@mccartney1996supercollider] and ChucK [@wang2004chuck] and their implementations. With respect to program control these languages adhere to the imperative programming paradigm and with respect to the type system they are object oriented languages. The main idea in these languages for creating musical patterns is constructing and altering objects at runtime, where the objects are responsible for sending commands to a server for music generation. Also in our approach the sound generation runs in parallel with the interpreter and it is controlled by (MIDI) commands. However, in our approach we do not program how to change some runtime objects but instead we modify the program directly. In the meantime, Haskell libraries for live coding have also become available, like Tidal ([@mclean2010tidal]) and Conductive ([@bell2011conductive]). They achieve interactivity by running commands from the interactive Haskell interpreter GHCi. They are similar to SCLang and ChucK in the sense that they maintain and manipulate (Haskell) objects at runtime, which in turn control SuperCollider or other software processors. Conclusions and future work =========================== Our presented technique demonstrates a new method for musical live coding. Maybe it can also be transferred to the maintenance of other long-running functional programs. However, we have shown that the user of the live-sequencer must prepare certain points for later code insertion. Additionally our system must be cautious about automatic optimisations of programs since an optimisation could remove such an insertion point. If you modify a running program then functions are no longer *referentially transparent*; that is, we give up a fundamental feature of functional programming. #### Type system A static type checker would considerably reduce the danger that a running program must be aborted due to an ill-typed or inconsistent change to the program. The type checker would not only have to test whether the complete program is type correct after a module update. Additionally it has to test whether the current interpreter term is still type correct with respect to the modified program. A type checker is even more important for distributed composition. The conductor of a multi-user programming session could declare type signatures in the non-editable part of a module and let the participants implement the corresponding functions. The type checker would assert that participants could only send modifications that fit the rest of the song. #### Evaluation strategy Currently our interpreter is very simple. The state of the interpreter is a term that is a pure tree. This representation does not allow for *sharing*. E.g. if `f` is defined by `f x = x:x:[]` then the call `f (2+3)` will be expanded to `(2+3) : (2+3) : []`. However, when the first list element is evaluated to `5`, the second element will not be evaluated. I.e. we obtain `5 : (2+3) : []` and not `5 : 5 : []`.
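The cost of this missing sharing can be made explicit with a toy evaluator that, like our interpreter, walks a pure term tree (an illustration with made-up names, not the Live-Sequencer's term representation):

    -- Toy tree evaluator without sharing: the argument (2+3) ends up at two
    -- places in the term and is therefore evaluated twice.
    data Term = Lit Int | Add Term Term | Cons Term Term | Nil deriving Show

    evalInt :: Term -> (Int, Int)              -- (value, number of additions)
    evalInt (Lit n)   = (n, 0)
    evalInt (Add a b) = (x + y, cx + cy + 1)
      where (x, cx) = evalInt a
            (y, cy) = evalInt b
    evalInt t         = error ("not a number: " ++ show t)

    evalList :: Term -> ([Int], Int)
    evalList Nil         = ([], 0)
    evalList (Cons x xs) = (v : vs, c1 + c2)
      where (v, c1)  = evalInt x
            (vs, c2) = evalList xs
    evalList t           = error ("not a list: " ++ show t)

    -- f x = x : x : []  applied to (2+3) by plain substitution
    example :: Term
    example = Cons arg (Cons arg Nil) where arg = Add (Lit 2) (Lit 3)

    main :: IO ()
    main = print (evalList example)            -- prints ([5,5],2)

Running `main` reports two additions, one per occurrence of the substituted argument; with graph reduction the shared node would be evaluated only once.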
Since the term is a tree and not a general graph we do not need a custom garbage collector. Instead we can rely upon the garbage collector of the GHC runtime system that runs our interpreter. If a sub-term is no longer needed it will be removed from the operator tree and sooner or later it will be detected and de-allocated by the GHC garbage collector. Even a simple co-recursive definition like that of the sequence of Fibonacci numbers main = fix fibs fibs x = 0 : 1 : zipWith (+) x (tail x) fix f = f (fix f) leads to an unbounded growth of term size with our evaluation strategy. In the future we want to add more strategies like the graph reduction using the STG machine [@peyton-jones1992stg]. This would solve the above and other problems. The operator tree of the current term would be replaced by an operator graph. The application of function definitions and thus the possibility of live modifications of a definition would remain. However, in our application there is the danger that program modification may have different effects depending on the evaluation strategy. On the one hand, the sharing of variable values at different places in the current term would limit the memory consumption in the Fibonacci sequence defined above, on the other hand it could make it impossible to respect a modification of the called function. Our single step mode would allow the demonstration and comparison of evaluation strategies in education. Currently we do not know, whether and how we could embed our system, including live program modifications, into an existing language like Haskell. This would simplify the study of the interdependence between program modifications, optimisations and evaluation strategies and would provide many syntactic and typing features for free. For this purpose we cannot use an interactive Haskell interpreter like GHCi directly: - GHCi does not let us access or even modify a running program and its internal representation is optimized for execution and it is not prepared for changes to the running program. - GHCi does not allow to observe execution of the program, and thus we could not highlight active parts in our program view. - GHCi does not store the current interpreter state in a human readable way that we can show in our display of the current term. Nonetheless, we can imagine that it is possible to write an embedded domain specific language. That is, we would provide functions that allow to program Haskell expressions that only generate an intermediate representation that can then be interpreted by a custom interpreter. #### Highlighting We have another interesting open problem: How can we highlight program parts according to the music? Of course, we would like to highlight the currently played note. Currently we achieve this by highlighting all symbols that were reduced since the previous pause. However if a slow melody is played parallelly to a fast sequence of controller changes this means that the notes of the melody are highlighted only for a short time, namely the time period between controller changes. Instead we would expect that the highlighting of one part of music does not interfere with the highlighting of another part of the music. We can express this property formally: Let the serial composition operator `++` and the parallel composition operator `=:=` be defined both for terms and for highlighting graphics. Consider the mapping `highl`, that assigns a term to its visualisation. 
Then for every two musical objects `a` and `b` it should hold: highl (a ++ b) = highl a ++ highl b highl (a =:= b) = highl a =:= highl b If you highlight all symbols whose expansion was necessary for generating a `NoteOn` or `NoteOff` MIDI command, then we obtain a function [`highl`]{} with these properties. However this causes accumulation of highlighted parts. In note qn c ++ note qn d ++ note qn e ++ note qn f the terms `note qn c` and `note qn d` would still be highlighted if `note qn e` is played. The reason is that `note qn c` and `note qn d` generate finite lists and this is the reason that `note qn e` can be reached. That is the expansion of `note qn c` and `note qn d` is necessary to evaluate `note qn e`. #### JACK support In the future our system should support JACK in addition to ALSA. It promises portability and synchronous control of multiple synthesisers. #### Beyond MIDI MIDI has several limitations. For example, it is restricted to 16 channels. In the current version of our sequencer the user can add more ALSA sequencer ports where each port adds 16 virtual MIDI channels. E.g. the virtual channel 40 addresses the eigth channel of the second port (zero-based counting). MIDI through wires is limited to sequential data, that is, there cannot be simultaneous events. In contrast to that the ALSA sequencer supports simultaneous events and our Live sequencer supports that, too. Thus the use of MIDI is twofold: On the one hand it is standard in hardware synthesisers and it is the only music control protocoll supported by JACK. On the other hand it has limitations. The Open Sound Control protocol lifts many of these limitations. It should also be relatively simple to add OSC support, but currently it has low priority. Acknowledgments =============== This project is based on an idea by Johannes Waldmann, that we developed into a prototype implementation. I like to thank him, Renick Bell, Alex McLean, and the anonymous reviewers for their careful reading and several suggestions for improving this article. You can get more information on this project including its development, demonstration videos, and papers at > <http://www.haskell.org/haskellwiki/Live-Sequencer> . Glossary ======== #### A constructor {#constructor} is, mathematically speaking, an injective function and, operationally speaking, a way to bundle and wrap other values. E.g. a list may be either empty, then it is represented by the empty list constructor `[]`, or it has a leading element, then it is represented by the constructor `:` for the non-empty list. For example, we represent a list containing the numbers 1, 2, 3 by `1 : (2 : (3 : []))`, or more concisely by `1 : 2 : 3 : []`, since the infix `:` is right-associative. #### Co-recursion is a kind of inverted recursion. Recursion decomposes a big problem into small ones. E.g. the factorial “$!$” of a number can be defined in terms of the factorial of a smaller number: $$n! = \begin{cases} 1 &: n=0 \\ n \cdot (n-1)! &: n>0 \end{cases}$$ A recursion always needs a base case, that is, a smallest or atomic problem that can be solved without further decomposition. In contrast to this, co-recursion solves a problem assuming that it has already solved the problem. It does not need decomposition and it does not need a base case. E.g. a co-recursive definition of an infinite list consisting entirely of zeros is: zeros = 0 : zeros #### Lazy evaluation {#lazy evaluation} is an evaluation strategy for . An alternative name is “call-by-need”. 
It means that the evaluation of a value is delayed until it is needed. Additionally it provides of common results. #### Non-strict semantics {#non-strictness} means that a function may have a defined result even if it is applied to an undefined value. It is a purely mathematical property that is independent from a particular evaluation strategy. E.g. the logical “and” operator `&&` in the C programming language is non-strict. In a strict semantics the value of `p && *p` would be undefined if [`p`]{} is [`NULL`]{}, because then [`*p`]{} would be undefined. However, `&&` allows the second operand to be undefined if the first one is [`false`]{}. #### Referential transparency {#referential transparency} means that function values depend entirely on their explicit inputs. You may express it formally by: $$\forall x, y:\quad x=y \implies f(x)=f(y) \quad.$$ For mathematical functions this is always true, e.g. whenever $x=y$ it holds $\sin x = \sin y$. However for sub-routines in imperative languages this is not true, e.g. for a function [`readByte`]{} that reads the next byte from a file, [`readByte(fileA)`]{} may differ from [`readByte(fileB)`]{} although [`fileA`]{} = [`fileB`]{}. #### Sharing means that if you read a variable multiple times it is still computed only once and then stored for later accesses. [^1]: All terms set in italics are explained in the glossary on [page ]{}. In the PDF they are also hyperlinks. [^2]: Glasgow Haskell Compiler in interactive mode
--- abstract: | We use the dynamical vertex approximation (D$\Gamma$A) with a Moriyaesque $\lambda$ correction for studying the impact of antiferromagnetic fluctuations on the spectral function of the Hubbard model in two and three dimensions. Our results show the suppression of the quasiparticle weight in three dimensions and a dramatically stronger impact of spin fluctuations in two dimensions, where the pseudogap is formed at low enough temperatures. Even in the presence of the Hubbard subbands, the origin of the pseudogap at weak-to-intermediate coupling is in the splitting of the quasiparticle peak. At stronger coupling (closer to the insulating phase) the splitting of Hubbard subbands is expected instead. The $\mathbf{k}$-dependence of the self energy appears to be also much more pronounced in two dimensions as can be observed in the $\mathbf{k}$-resolved D$\Gamma$A spectra, experimentally accessible by angular resolved photoemission spectroscopy in layered correlated systems. author: - 'A. A. Katanin$^{a,b},$ A. Toschi$^{a,c}$, and K. Held$^c$' date: 'Version 8, ' title: Comparing pertinent effects of antiferromagnetic fluctuations in the two and three dimensional Hubbard model --- Introduction ============ Since its formulation [@Hubbard], the Hubbard model has served as a minimal model for electronic correlations. Due to the complexity of electronic correlations, solving this model is however only possible in dimension $d=1$ (exactly via the Bethe Ansatz [@Bethe]) and in the limit $d=\infty$ [@DMFT; @DMFT2; @DMFTREV] (where the mapping [@DMFT2] onto an Anderson impurity model allows for an accurate numerical solution [@DMFTREV; @Bulla]). Of physical interest are however strongly correlated systems in $d=3$, for modeling the Mott-Hubbard transition [@MH] and (anti-)ferromagnetism [@AFM; @Hubbard; @FM], and in $d=2$ for describing the cuprates [@Dagotto], where the role of the antiferromagnetic fluctuations in developing pseudogap structures and superconductivity is at the center of attention. The aim of this paper is to study the difference between the effect of antiferromagnetic fluctuations on the electronic properties in $d=2$ and $d=3$. For weak coupling (small Coulomb interaction $U$), the perturbation theory and its extensions, e.g. the fluctuation-exchange approximation (FLEX) [@FLEX], the two-particle self-consistent approximation (TPSC) [@TPSC], and the functional renormalization group [@fRG], are suitable methods for this purpose. In $d=3$ antiferromagnetic fluctuations produce only quantitative changes of the electronic spectrum, although the particle-hole excitations enhance the quasiparticle scattering rate when the temperature $T$ is approaching the Néel temperature. In $d=2$ there are divergences in the self-energy diagrams and the abovementioned approximations predict pseudogap structures in the self-energy in the weak-coupling regime [@FlexSE; @TPSCSE; @fRGSE]. These techniques are however not applicable at stronger coupling, since they do not describe the strong quasiparticle renormalization due to the Mott physics. Since we are interested in intermediate-to-strong electronic correlations, we need to take a different approach. Starting point is the by-now widely employed dynamical mean-field theory (DMFT).[@DMFT; @DMFT2; @DMFTREV] This method becomes exact [@DMFT] for $d\rightarrow \infty $, and yields a major part of the electronic correlations, i.e., the local correlations.
However, any non-local correlations are neglected and hence DMFT does not differentiate between the Hubbard model in two- and three dimensions. More precisely, only differences stemming from different shapes of the density of states (DOS) are taken into account, not those resulting, e.g., from antiferromagnetic correlations since these correlations are by nature non-local. Hitherto, the focus of DMFT extensions has been on *short-range* correlations within a (finite) cluster instead of the single DMFT impurity site. These cluster extensions of DMFT [@clusterDMFT] have been used for describing pseudogaps and superconductivity in the two-dimensional Hubbard model. Due to numerical limitations, the inclusion of important *long-range* correlations and the application of this method in three dimensions or realistic multi-orbital calculations is however not possible, except for very small clusters with $\mathcal{O}(2 \div 4)$ sites. Also the $% 1/d$ expansion of DMFT [@Schiller] is restricted to *short-range* correlations, as is a recent perturbative extension. [@Tokar07a] Hence, for including *long-range* correlations, the focus of the methodological development has shifted recently to diagrammatic extensions of DMFT such as the dynamical vertex approximation (D$\Gamma $A) [DGA1,DGA2,Kusunose,Monien]{} and the dual fermion approach by Rubtsov *et al.* [@DualFermion] Even before, Kuchinskii *et al.* [Sadovskii05]{} combined the local DMFT self energy with the non-local contributions to self energy of the spin-fermion model, and included long-range correlations this way. Their procedure, however, does not rely on a rigorous diagrammatic derivation. To include long-range fluctuations in a diagrammatic way D$\Gamma $A considers the local vertex instead of the bare interaction. It includes DMFT but also long-range correlations beyond. Our understanding of the physics associated with such long-range correlation is typically based on ladder diagrams, which are considered, e.g. by the abovementioned TPSC and FLEX approximations. For example, the ladder diagrams in the particle-hole channel yield antiferromagnetic fluctuations in the paramagnetic phase (paramagnons) and (anti-)ferromagnons in the ordered state. It is natural to suppose that the contribution of the corresponding fluctuations in the intermediate coupling regime can be described by the same kind of diagrams albeit with the *renormalized* vertices. In D$\Gamma$A the local (frequency dependent) vertex is considered instead of the bare interaction. Therefore, this method reproduces the results of the weak-coupling approaches at small $U$ but can treat spatial correlations also at intermediate coupling. Hence, D$\Gamma $A is well suited for studying antiferromagnetic fluctuations in strongly correlated systems both for $d=2$ and $d=3$. The paper is organized as follows: In Section \[Sec:DGA\] we reiterate the D$\Gamma $A approach in a formulation with the three-point (instead of the four-point) vertex functions which allows for a connection to the spin fermion model in Section \[Sec:SF\] and for the analytical considerations on the D$\Gamma $A self energy in Section \[Sec:approx\]. In Section [Sec:lambda]{}, we introduce a Moriyaesque $\lambda $ correction to the susceptibility to describe correctly the two-dimensional case. Results for three dimensions are presented in Section \[Sec:3D\] and compared to those in two dimensions in Section \[Sec:2D\]. 
Special emphasis to angular resolved spectra is given in Section \[Sec:ARPES\] before we give a brief summary in Section \[Sec:Conclusion\]. Dynamical Vertex Approximation {#Sec:DGA} ============================== Starting point of our considerations is the Hubbard model on a square or cubic lattice $$H=-t\sum_{\langle ij\rangle \sigma }c_{i\sigma }^{\dagger }c_{j\sigma }+U\sum_{i}n_{i\uparrow }n_{i\downarrow } \label{H}$$where $t$ denotes the hopping amplitude between nearest-neighbors, $U$ the Coulomb interaction, $c_{i\sigma }^{\dagger }$($c_{i\sigma }$) creates (annihilates) an electron with spin $\sigma $ on site $i$; $n_{i\sigma }\!=\!c_{i\sigma }^{\dagger }c_{i\sigma }$. In the following, we restrict ourselves to the paramagnetic phase with $n=1$ electrons/site at a finite temperature $T$. The D$\Gamma $A result for the self-energy of the model (\[H\]) was derived in Ref. , see Eq. (16). For the purpose of the present paper this result for the self-energy can be written in the form $$\begin{aligned} \Sigma _{\mathbf{k},\nu } &=&\frac{1}{2}{Un}+\frac{1}{2}TU\sum\limits_{\nu ^{\prime }\nu ^{\prime \prime }\omega ,\mathbf{q}}\left[ 3\chi _{s,\mathbf{q}% }^{\nu ^{\prime }\nu ^{\prime \prime }\omega }\Gamma _{s,\text{ir}}^{\nu ^{\prime \prime }\nu \omega }-\chi _{c,\mathbf{q}}^{\nu ^{\prime }\nu ^{\prime \prime }\omega }\Gamma _{c,\text{ir}}^{\nu ^{\prime \prime }\nu \omega }\right. \nonumber \\ &&\left. +\chi _{0\mathbf{q}\omega }^{\nu ^{\prime }}(\Gamma _{c,\text{loc}% }^{\nu \nu ^{\prime }\omega }-\Gamma _{s,\text{loc}}^{\nu \nu ^{\prime }\omega })\right] G_{\mathbf{k+q},\nu +\omega }, \label{SE}\end{aligned}$$ where the non-local spin (s) and charge (c) susceptibilities$$\chi _{s(c),\mathbf{q}}^{\nu \nu ^{\prime }\omega }=[(\chi _{0\mathbf{q}% \omega }^{\nu ^{\prime }})^{-1}\delta _{\nu \nu ^{^{\prime }}}-\Gamma _{s(c),% \text{ir}}^{\nu \nu ^{\prime }\omega }]^{-1}$$can be expressed in terms of the particle-hole bubble $\chi _{0\mathbf{q}% \omega }^{\nu ^{\prime }}=-T\sum_{\mathbf{k}}G_{\mathbf{k},\nu ^{\prime }}G_{% \mathbf{k}+\mathbf{q},\nu ^{\prime }+\omega }$, $G_{\mathbf{k},\nu }=[i\nu -\epsilon _{\mathbf{k}}+\mu -\Sigma _{\text{loc}}(\nu )]^{-1}$ is the Green function, and $\Sigma _{\text{loc}}(\nu )$ the local self-energy. The spin (charge) irreducible local vertices $\Gamma _{s(c),\text{ir}}^{\nu \nu ^{\prime }\omega }$ are determined from the corresponding local problem[DGA1]{}. ![(Color online) Graphical representation of the contribution of bare Coulomb interaction (a) and spin (charge) fluctuations (b) to the self-energy in the D$\Gamma$A approach, Eq. (\[SE0\]). Solid lines correspond to the electronic Green function $G_{\mathbf{k},\protect\nu} $, dashed line to the bare Hubbard interaction $U$, wiggly lines - to the spin (charge) susceptibility $\protect\chi^{\mathrm{s(c)}}_{\mathbf{q},\protect% \omega}$; the triangle corresponds to the interaction vertex $\protect\gamma% ^{\protect\nu,\protect\omega}_{\mathrm{s(c)},\mathbf{q}}$. []{data-label="Fig:1"}](Fig1.eps){width="8cm"} The result (\[SE\]) accounts for the contribution of ladder diagrams to the self-energy in the two particle-hole channels. 
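To make the building blocks of Eq. (\[SE\]) concrete, the following minimal Python/NumPy sketch evaluates the lattice Green function $G_{\mathbf{k},\nu}$ and the particle-hole bubble $\chi_{0\mathbf{q}\omega}^{\nu}$ for a two-dimensional square lattice. The grid sizes, the restriction to positive fermionic frequencies, the normalization of the momentum sum (taken here as an average over the $k$-grid), and the function names are our own illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def lattice_green_function(sigma_loc, beta, t=1.0, mu=0.0, nk=32):
    """G_{k,nu} = [i*nu - eps_k + mu - Sigma_loc(nu)]^{-1} on a 2D square lattice.

    sigma_loc: local self-energy on the fermionic Matsubara grid nu_n = (2n+1)*pi*T,
               n = 0, ..., len(sigma_loc)-1 (positive frequencies only, for brevity).
    Returns an array G[n, i, j] over frequency and k-grid indices.
    """
    T = 1.0 / beta
    n = np.arange(len(sigma_loc))
    nu = (2 * n + 1) * np.pi * T
    k = 2.0 * np.pi * np.arange(nk) / nk
    eps = -2.0 * t * (np.cos(k)[:, None] + np.cos(k)[None, :])           # eps_k
    return 1.0 / (1j * nu[:, None, None] - eps[None, :, :] + mu
                  - sigma_loc[:, None, None])

def bubble(G, beta, q_idx, m):
    """chi0_{q,omega_m}^{nu_n} = -T * sum_k G_{k,nu_n} G_{k+q,nu_n+omega_m}.

    q_idx: (i, j) index of q on the k-grid; m >= 0 is the bosonic Matsubara index.
    The k-sum is normalized as an average over the grid (one common convention).
    """
    T = 1.0 / beta
    G_q = np.roll(np.roll(G, -q_idx[0], axis=1), -q_idx[1], axis=2)      # G_{k+q, nu}
    n_max = G.shape[0] - m
    return np.array([-T * np.mean(G[n] * G_q[n + m]) for n in range(n_max)])
```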
Following Edwards and Hertz [@HertzEdw] it is convenient to pick out parts of these ladders, which are separated by the bare on-site Coulomb interaction $U.$ This is achieved by considering the quantities $$\begin{aligned} \Phi _{s(c),q}^{\nu \nu ^{\prime }\omega } &=&[(\chi _{0\mathbf{q}\omega }^{\nu ^{\prime }})^{-1}\delta _{\nu \nu ^{^{\prime }}}-\Gamma _{s(c),\text{% ir}}^{\nu \nu ^{\prime }\omega }\pm U]^{-1}, \label{Fi} \\ \phi _{\mathbf{q},\omega }^{s(c)} &=&\sum\limits_{\nu \nu ^{\prime }}\Phi _{s(c),\mathbf{q}}^{\nu \nu ^{\prime }\omega } \nonumber\end{aligned}$$such that $\chi _{s(c),\mathbf{q}}^{\nu \nu ^{\prime }\omega }=\{[\Phi _{s(c),q}^{\nu \nu ^{\prime }\omega }]^{-1}\mp U\}^{-1}$ with the upper (lower) sign for the spin (charge) susceptibility. The nonlocal spin (charge) susceptibility is then given by$$\chi _{\mathbf{q}\omega }^{s(c)}=\sum\limits_{\nu \nu ^{\prime }}\chi _{s(c),% \mathbf{q}}^{\nu \nu ^{\prime }\omega }=[(\phi _{\mathbf{q,}\omega }^{s(c)})^{-1}\mp U]^{-1}. \label{hi}$$and therefore $\phi _{\mathbf{q,}\omega }^{s(c)}$ provided to be a particle-hole irreducible susceptibility in the spin (charge) channel. Introducing, similar to Ref. , the corresponding three-point vertex $\gamma _{s(c),\mathbf{q}}^{\nu \omega }$ of electron interaction with charge (spin) fluctuations,$$\gamma _{s(c),\mathbf{q}}^{\nu \omega }=(\chi _{0\mathbf{q}\omega }^{\nu })^{-1}\sum\limits_{\nu ^{\prime }}\Phi _{s(c),\mathbf{q}}^{\nu \nu ^{\prime }\omega }, \label{gamma}$$the irreducible susceptibility $\phi _{\mathbf{q,}\omega }^{s(c)}$ can be represented as $$\phi _{\mathbf{q,}\omega }^{s(c)}=\sum\limits_{\nu }\gamma _{s(c),\mathbf{q}% }^{\nu \omega }\chi _{0\mathbf{q}\omega }^{\nu } \label{fi}$$In these notations, the result (\[SE\]) can then be rewritten identically as$$\begin{aligned} \Sigma _{\mathbf{k},\nu } &=&\frac{1}{2}{Un}+\frac{1}{2}TU\sum\limits_{% \omega ,\mathbf{q}}\left[ 3\gamma _{s,\mathbf{q}}^{\nu \omega }-\gamma _{c,% \mathbf{q}}^{\nu \omega }-2\right. \nonumber \\ &&+3U\gamma _{s,\mathbf{q}}^{\nu \omega }\chi _{\mathbf{q}\omega }^{s}+U\gamma _{c,\mathbf{q}}^{\nu \omega }\chi _{\mathbf{q}\omega }^{c} \nonumber \\ && +\sum\limits_{\nu ^{\prime }} \left. \chi _{0\mathbf{q}\omega }^{\nu ^{\prime }}(\Gamma _{c,\text{loc}}^{\nu \nu ^{\prime }\omega }-\Gamma _{s,% \text{loc}}^{\nu \nu ^{\prime }\omega })\right] G_{\mathbf{k+q},\nu +\omega } \label{SE0}\end{aligned}$$The first three terms in the square brackets correspond to the interaction of electrons via Hubbard on-site Coulomb interaction (without forming ph-bubbles, Fig. \[Fig:1\]a), the next two terms correspond to electron interactions via charge- and spin-fluctuations (Fig. \[Fig:1\]b), the last term subtracts double counted local contribution. 
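As an illustration of the structure of Eqs. (\[Fi\])-(\[fi\]), the sketch below builds, for a single $(\mathbf{q},\omega)$ point, the ladder quantity $\Phi$, the three-point vertex $\gamma$, and the susceptibilities by plain matrix algebra on a finite fermionic-frequency grid. Possible temperature prefactors in the frequency sums and the array conventions are our assumptions; the code mirrors only the algebraic structure of the equations, not a full implementation.

```python
import numpy as np

def ladder_quantities(chi0_nu, gamma_ir, U, channel="s"):
    """Evaluate Eqs. (Fi), (gamma), (fi) and (hi) for one (q, omega) point.

    chi0_nu  : (N,) particle-hole bubble chi0_{q,omega}^{nu} on N fermionic frequencies.
    gamma_ir : (N, N) local irreducible vertex Gamma_{s/c,ir}^{nu nu' omega}.
    channel  : "s" (spin, upper sign) or "c" (charge, lower sign).
    """
    sign = +1.0 if channel == "s" else -1.0
    # Phi = [(chi0)^{-1} delta_{nu nu'} - Gamma_ir +/- U]^{-1},  Eq. (Fi)
    Phi = np.linalg.inv(np.diag(1.0 / chi0_nu) - gamma_ir + sign * U)
    phi = Phi.sum()                                   # phi_{q,omega} = sum_{nu nu'} Phi
    gamma3 = Phi.sum(axis=1) / chi0_nu                # three-point vertex,  Eq. (gamma)
    assert np.allclose(phi, np.sum(gamma3 * chi0_nu))  # consistency with Eq. (fi)
    chi = 1.0 / (1.0 / phi - sign * U)                # Eq. (hi): -U for spin, +U for charge
    return Phi, phi, gamma3, chi
```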
Relation to spin-fermion models {#Sec:SF} =============================== The contributions of bare Coulomb interaction and charge (spin) fluctuations to the self-energy (\[SE0\]) can be also obtained from the fermion-boson model with generating functional$$\begin{aligned} &&Z% \begin{array}{c} =% \end{array}% \int D[c_{k\sigma }^{\dagger },c_{k\sigma }]D\mathbf{S}_{\mathbf{q},\omega }D\rho _{\mathbf{q},\omega }\exp \{-\mathcal{L}[\mathbf{S},\rho ,c]\} \nonumber \\ &&\ \mathcal{L}[\mathbf{S},\rho ,c]% \begin{array}{c} =% \end{array}% \sum\limits_{\mathbf{k},\nu, \sigma }(i\nu _{n}-\varepsilon _{\mathbf{k}% })c_{k\sigma }^{\dagger }c_{k\sigma } \label{sf} \\ &&\ \ \ +U\sum\limits_{\mathbf{q},\omega }(\rho _{q\omega }\rho _{-q,-\omega }+\mathbf{S}_{q\omega }\mathbf{S}_{-q,-\omega }) \nonumber \\ &&\ \ \ +U\sum\limits_{\mathbf{k,q,}\nu ,\omega, \sigma,\sigma^{\prime} }(\gamma _{s,\mathbf{q}}^{\nu \omega })^{1/2}c_{\mathbf{k},\nu ,\sigma }^{\dagger }\mbox {\boldmath $\sigma $}_{\sigma \sigma ^{\prime }}c_{\mathbf{k+q},\nu +\omega ,\sigma ^{\prime }}% \mathbf{S}_{\mathbf{q},\omega } \nonumber \\ &&\ \ \ +iU\sum\limits_{\mathbf{k,q,}\nu ,\omega }(\gamma _{c,\mathbf{q}% }^{\nu \omega })^{1/2}c_{\mathbf{k},\nu ,\sigma }^{\dagger }c_{\mathbf{k+q}% ,\nu +\omega ,\sigma }\rho _{\mathbf{q},\omega } \nonumber\end{aligned}$$where $\gamma _{c(s),\mathbf{q}}^{\nu \omega }$ is determined in the present approach according to the Eq. (\[gamma\]) and $\mbox {\boldmath $\sigma $}_{\sigma \sigma ^{\prime }}$ are the Pauli matrices. The model (\[sf\]) is similar to that derived from the Hubbard model via Hubbard-Stratonovich transformation[@Hertz], but it is explicitly spin symmetric and contains the non-local frequency dependent vertices $\gamma _{c(s),\mathbf{q}}^{\nu \omega },$ which account for the local- and short range-nonlocal fluctuations. Contrary to the earlier paramagnon theories[@Moriya] and the spin-fermion model[@SpFerm; @SpFerm1], where $\gamma _{s,\mathbf{q}}^{\nu \omega }=1$ and charge fluctuations are omitted ($\gamma _{c,\mathbf{q}% }^{\nu \omega }=0$), we have $\gamma _{s(c),\mathbf{q}}^{\nu \omega }\neq 0$ and $\neq 1$. The frequency dependence of the vertices $\gamma _{s(c),% \mathbf{Q}}^{\nu 0} $ calculated in the present approach for two dimensions with $\mathbf{Q}=(\pi,\pi)$ is shown in Fig. \[Fig:2\] (in the three dimensional case we observe qualitatively similar behavior). One can see, that both charge- and spin vertices have a strong frequency dependence and approach unity only in the high-frequency limit. While in the weak-coupling regime $U=D\equiv 4t$ both vertices are suppressed at small frequencies \[which is the consequence of the particle-particle (Kanamori) screening\], closer to the DMFT Mott transition (at $U=2D\equiv 8t$) the spin vertex at small frequencies is *enhanced*. This behavior is similar to that observed in Ref.  for the three-frequency (four-point) vertex in the three dimensional case. Hence, the spin-fermion theory, which was heuristically added to the DMFT self-energy before, is included in a more systematic and consistent way in D$% \Gamma $A, which also accounts for the corrections to the electron-paramagnon vertex. The susceptibility $\chi _{q,\omega }^{s}$ which is determined phenomenologically in the spin-fermion model is obtained in our approach by dressing the bare propagator $1/U$ of charge- and spin fields by particle-hole bubbles, which reproduces the results (\[hi\]) and (\[fi\]) of the previous Section. 
![(Color online) Frequency dependence of the spin and charge three-point vertex Eq. (\[gamma\]) at $U=1$, $\protect\beta=1/T=15 $ (left) and $U=2$, $\protect\beta =10$ (right), $\protect\omega =0$, at the antiferromagnetic wave vector $\mathbf{Q}=(\protect\pi ,\protect\pi )$. All energies are in units of half the effective bandwidth $% D\equiv 4t$.[]{data-label="Fig:2"}](Fig2.eps){width="8cm"} Using the model (\[sf\]) one can also calculate the leading order non-local correction to the three-point vertices due to fermion-boson interaction, $$\begin{aligned} \widetilde{\gamma }_{s,\mathbf{k},\mathbf{q}}^{\nu \omega } &=&\gamma _{s,% \mathbf{q}}^{\nu \omega }+\frac{1}{2}TU\sum\limits_{\omega _{1},\mathbf{q}% _{1}}\gamma _{s,\mathbf{q}}^{\nu +\omega _{1},\omega }\left[ 2-\gamma _{s,% \mathbf{q}_{1}}^{\nu \omega _{1}}-\gamma _{c,\mathbf{q}_{1}}^{\nu \omega _{1}}\right. \nonumber \\ &&\left. -U\gamma _{s,\mathbf{q}_{1}}^{\nu \omega _{1}}\chi _{\mathbf{q}% _{1},\omega _{1}}^{s}+U\gamma _{c,\mathbf{q}_{1}}^{\nu \omega _{1}}\chi _{% \mathbf{q}_{1}\omega _{1}}^{c}\right] G_{\mathbf{k+q}_{1},\nu +\omega _{1}} \nonumber \\ &&\times G_{\mathbf{k+q}_{1}+\mathbf{q},\nu +\omega _{1}+\omega }^{{}}-\text{% loc,} \\ \widetilde{\gamma }_{c,\mathbf{k},\mathbf{q}}^{\nu \omega } &=&\gamma _{s,% \mathbf{q}}^{\nu \omega }+\frac{1}{2}TU\sum\limits_{\omega _{1},\mathbf{q}% _{1}}\gamma _{s,\mathbf{q}}^{\nu +\omega _{1},\omega }\left[ 3\gamma _{s,% \mathbf{q}_{1}}^{\nu \omega _{1}}-\gamma _{c,\mathbf{q}_{1}}^{\nu \omega _{1}}-2\right. \nonumber \\ &&\left. +3U\gamma _{s,\mathbf{q}_{1}}^{\nu \omega _{1}}\chi _{\mathbf{q}% _{1},\omega _{1}}^{s}+U\gamma _{c,\mathbf{q}_{1}}^{\nu \omega _{1}}\chi _{% \mathbf{q}_{1}\omega _{1}}^{c}\right] G_{\mathbf{k+q}_{1},\nu +\omega _{1}} \nonumber \\ &&\times G_{\mathbf{k+q}_{1}+\mathbf{q},\nu +\omega _{1}+\omega }^{{}}-\text{% loc,}\end{aligned}$$where loc stands for the subtraction of the local terms already included in $% \gamma _{s,\mathbf{q}}^{\nu \omega }$. The non-local corrections to the self-energy and vertex can be then treated self-consistently by substituting them into Eq. (\[fi\]). This provides an alternative simpler way of self-consistent treatment instead of the more complicated parquet approach discussed in Ref. . An even simpler way to go beyond a non-self consistent treatment of the D$\Gamma$A equations is considered in Sect. V. Analytic approximation for the D$\Gamma $A self energy {#Sec:approx} ====================================================== Similarly to the weak-coupling approach [@TPSC], in the two dimensional case the self-energy can be obtained approximately analytically. In this case the susceptibility $\chi _{\mathbf{q}\omega }^{s}$ is strongly enhanced at $\omega _{n}=0$ and $\mathbf{q}\approx \mathbf{Q}=(\pi ,\pi )$, and can be represented in the form$$\chi _{\mathbf{q}0}^{s}=\frac{A}{(\mathbf{q}-\mathbf{Q})^{2}+\xi ^{-2}} \label{hiq}$$where $\xi ^{-2}=A/(1-U\phi _{\mathbf{Q}0}^{s})$ with $A=(\nabla ^{2}\phi _{% \mathbf{q}0}^{s})_{\mathbf{q}=\mathbf{Q}}$ being the (squared) inverse spin fluctuation correlation length. Since the corresponding momentum sum in the Eq. 
(\[SE0\]) over $\mathbf{q}$ is logarithmically divergent at $\xi \rightarrow \infty$, we can approximately restrict ourselves to only the zero bosonic Matsubara frequency term in the spin-fluctuation contribution and put $\mathbf{q}\approx \mathbf{Q}$ in all the factors except $\chi _{\mathbf{q}0}^{s}$ to obtain
$$\Sigma _{\mathbf{k},\nu }\simeq \Sigma _{\text{loc}}(\nu )+\Delta ^{2}\gamma _{s,\mathbf{Q}}^{\nu ,0}G_{\mathbf{k}+\mathbf{Q},\nu } \label{SEAprox}$$
where $\Delta ^{2}=(3TU^{2}/2)\sum\limits_{\mathbf{q}}\chi _{\mathbf{q},0}^{s}.$ To study the frequency dependence of the self-energy (\[SEAprox\]) qualitatively, we first consider $\gamma _{s,\mathbf{Q}}^{\nu ,0}=1$ and choose the local self-energy in the form (see, e.g. Ref. )
$$\Sigma _{\text{loc}}(\nu )=(1-\kappa )(\Delta _{\text{loc}}^{2}/4)/(\nu -\Delta _{\text{loc}}^{2}\kappa /(4\nu )) \label{Eq:SigmaKappa}$$
where $\Delta _{\text{loc}}\simeq U$ is the size of the Hubbard gap and $\kappa $ measures the relative weight of the quasiparticle peak (QP) with respect to the Hubbard subbands ($\kappa =0$ at the Mott transition and $\kappa =1$ for $U\rightarrow 0$). Eq. (\[Eq:SigmaKappa\]) allows one to reproduce the three-peak structure of the spectral function observed in the numerical solution of the single-impurity Anderson model supplemented by the DMFT self-consistency condition. The evolution of the spectral properties calculated with the self-energies (\[SEAprox\]) and (\[Eq:SigmaKappa\]) with changing $\kappa $ for $\Delta _{\text{loc}}=1$ and $\Delta =0.1$ is shown in Fig. \[Fig:3\] (we suppose that the vector $\mathbf{k}$ is located at the Fermi surface and $\varepsilon _{\mathbf{k+Q}}=0$ due to nesting).

![(Color online) The spectral functions in $d=2$ as obtained from the approximate self-energies including local (dashed lines, Eq. (\[Eq:SigmaKappa\])) and non-local (solid lines, Eq. (\[SEAprox\])) fluctuations for $\protect\kappa =0$ (a), 0.1 (b), 0.3 (c), 0.5 (d), 0.9 (e), and 1.0 (f). []{data-label="Fig:3"}](Fig3.eps){width="10.2cm"}

One can see that at small $\kappa $, i.e. in the vicinity of the Mott transition, one finds a splitting of the Hubbard subbands, while the QP remains unsplit (Fig. 3a,b). In a narrow region of larger $\kappa $ the QP is split in two peaks, and the splitting of the Hubbard subbands remains visible (Fig. 3c). At intermediate values of $\kappa $ we find only a splitting of the QP peak, while the two other peaks, corresponding to the Hubbard subbands, are still present (Fig. 3d,3e). Finally, in the weak coupling limit $\kappa =1$ we reproduce the two-peak pseudogap discussed in Refs.  (Fig. 3f). In the more general case of $\gamma _{s,\mathbf{Q}}^{\nu ,0}\neq 1$ we expect a pseudogap of the size $\sim \Delta (\gamma _{s,\mathbf{Q}0}^{\Delta ,0})^{1/2}$ in the weak coupling regime at small enough temperatures, and more complicated structures at strong $U$; see our numerical results below.

Moriyaesque $\protect\lambda $ correction for the vertex {#Sec:lambda}
========================================================

The local approximation for the particle-hole irreducible vertex, considered in Section II, is however not exact. In particular, the magnetic transition temperature remains equal to its value in DMFT, and is therefore overestimated in both three and two dimensions. In the latter case $T_{N}$ would remain finite, contrary to the Mermin-Wagner theorem. In the D$\Gamma $A framework a reduction of $T_{N}$ would naturally arise from a self-consistent solution of the D$\Gamma $A equations.
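As a hedged illustration (not the authors' code), the qualitative picture behind Eqs. (\[SEAprox\]) and (\[Eq:SigmaKappa\]) and Fig. \[Fig:3\] can be reproduced with the short Python sketch below; the real-frequency broadening $\eta$, the frequency grid, and the parameter values are illustrative choices of ours, with $\gamma_{s,\mathbf{Q}}^{\nu,0}$ set to one and a nested Fermi-surface point ($\varepsilon_{\mathbf{k}}=\varepsilon_{\mathbf{k+Q}}=0$) as in the text.

```python
import numpy as np

def sigma_local(w, kappa, delta_loc=1.0, eta=0.01):
    """Local self-energy of Eq. (Eq:SigmaKappa), evaluated at w + i*eta on the real axis."""
    z = w + 1j * eta
    return (1.0 - kappa) * (delta_loc**2 / 4.0) / (z - delta_loc**2 * kappa / (4.0 * z))

def spectral_function(w, kappa, delta=0.1, delta_loc=1.0, eta=0.01):
    """A(k, w) from Sigma_k = Sigma_loc + Delta^2 * G_{k+Q}  (Eq. (SEAprox), gamma_s = 1),
    for a Fermi-surface k with eps_k = eps_{k+Q} = 0 (perfect nesting), as assumed for Fig. 3."""
    z = w + 1j * eta
    s_loc = sigma_local(w, kappa, delta_loc, eta)
    g_kq = 1.0 / (z - s_loc)                      # G_{k+Q} evaluated with the local self-energy
    sigma_k = s_loc + delta**2 * g_kq
    return -np.imag(1.0 / (z - sigma_k)) / np.pi

w = np.linspace(-1.5, 1.5, 3001)
for kappa in (0.0, 0.1, 0.3, 0.5, 0.9, 1.0):      # the six panels of Fig. 3
    A = spectral_function(w, kappa)               # e.g. kappa = 0.5: split QP peak, Hubbard bands intact
```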
An alternative (simpler) way to fulfill the Mermin-Wagner theorem in 2D (and to reduce the transition temperature in three dimensions) is to introduce a correction to the susceptibility similar to the Moriya theory of weak itinerant magnets[@Moriya]. To this end, we replace $$\chi _{\mathbf{q}\omega }^{s}\longrightarrow \left[ (\chi _{\mathbf{q}\omega }^{s})^{-1}+\lambda _{\mathbf{q\omega }}\right] ^{-1}. \label{his}$$Formally the r.h.s. of Eq. (\[his\]) is exact for some (unknown) $\lambda _{\mathbf{q\omega }}$; in the following we assume $\lambda _{\mathbf{q\omega }}\simeq \lambda _{\mathbf{Q}0}\equiv \lambda $ since static fluctuations with momentum $\mathbf{Q}$ predominate near the magnetic instability. Instead of determining (as it was done in Moriya theory) $\lambda $ from the fluctuation correction to the free energy, which is rather cumbersome in the present approach, we (similar to TPSC) impose the fulfillment of the sumrule$$-\int_{-\infty }^{\infty }\frac{d\nu }{\pi }\mbox{Im}\Sigma _{\mathbf{k},\nu }=U^{2}n(1-n/2)/2. \label{As1}$$This also implies$$\mbox{Re}\Sigma _{\mathbf{k},\nu }\simeq \frac{U^{2}n(1-n/2)}{2\nu } \label{As2}$$for $\nu \gg D$, ![(Color online) D$\Gamma $A self-energy on the Matsubara axis calculated with and without Moriya $\protect\lambda$ correction for two different points of the Fermi surface in the two dimensional Hubbard model (at $U=D=4t$, $\protect\beta =1/T=17$); also shown is the DMFT self energy for comparison. Notice that, without introducing the Moriya $\protect\lambda$ correction, one always observes a deviation of the high-frequency $\Sigma (% \mathbf{k},i\protect\nu _{n})$ from the correct asymptotic behavior $\sim U^2 n (1-\frac{n}{2})/(2 i \protect\nu_n) = U^{2}/(4i\protect\nu _{n})$ which is consistent with the self energy sum rule (see text).[]{data-label="FigSigwim"}](Fig4.eps){width="8cm"} according to the Kramers-Kronig relation. The latter asymptotic behavior may be very important to obtain the correct Fermi surface in the non-half-filled case, but should be fulfilled also in the half-filled case to obtain correct spectral functions. It is obviously violated in standard spin-fermion (also paramagnon) approaches in two dimensions, where the Néel temperature ($% T_N$) is finite without the $\lambda $ correction and the l.h.s. of Eqs. (\[As1\]) and (\[As2\]) are divergent at $T\longrightarrow T_{N}$. The frequency dependence of the self-energy at the imaginary axis for the two-dimensional Hubbard model ($U=D=4t$), calculated with and without $% \lambda $ correction is compared in Fig. \[FigSigwim\]. The $\lambda $ correction removes the divergence of the l.h.s. of Eqs. (\[As1\]) and ([As2]{}) at $T\rightarrow T_{N}^{\text{DMFT}}$ and leads to the correct asymptotic behavior at large $\nu _{n}$. Without $\lambda$-correction (or, alternatively, a self-consistent solution of the D$\Gamma $A equations) spin fluctuations and their pertinent effect on the self energy are overestimated. This is because the spin fluctuations result in a reduced metalicity which in a second D$\Gamma $A iteration, i.e., the recalculation of the local vertex with the less metallic Green function as an input[dga\_proc]{}, would reduce the spin fluctuations. In two dimensions the sumrules (\[As1\]) and (\[As2\]) can be fulfilled at all positive temperatures, and the actual transition temperature is zero, as required by the Mermin-Wagner theorem. As one can see from Eqs. 
(\[hiq\]) and (\[SEAprox\]), the correlation length $\xi $ in two dimensions is exponentially divergent (with $\lambda $-correction): $$\xi \propto \exp (b/T),$$where the coefficient $b$ in the exponent is proportional to $U$. This is evidently confirmed also by our numerical results shown in Fig. [FigInvChiS]{}, where we have reported the values of the inverse of the spin susceptibility at $\mathbf{Q}=(\pi ,\pi )$ calculated with the inclusion of the $\lambda $-correction: The exponential divergence of $\xi $ for $% T\rightarrow 0$ is directly reflected in an analogous behavior of the spin susceptibility ($\chi _{\mathbf{{Q},0}}^{s}\sim A\xi ^{2}$, see Eq. ([hiq]{})) at $T\rightarrow 0$. In three dimensions, on the other hand, the sumrules (\[As1\]) and ([As2]{}) with $(\chi _{\mathbf{{Q},0}}^{s})^{-1}+\lambda >0$ can be fulfilled only down to a certain temperature $T_{N}^{\text{D}\Gamma {\text{A}}}$, which is reduced in comparison with $T_{N}^{\text{DMFT}}$ and determines the phase transition temperature in the D$\Gamma $A approach. ![Temperature dependence of the $\lambda$-corrected inverse antiferromagnetic susceptibility in two dimensions (triangles) for $U=D=4t$. The data display an exponential temperature dependence, consistent with the expected behavior of $\xi$ (see text). The DMFT Néel temperature corresponding to this set of parameters is marked with an arrow.[]{data-label="FigInvChiS"}](Fig4a_ter.eps){width="8cm"} Results for the Hubbard model in three dimensions {#Sec:3D} ================================================= Let us turn to the results for the self-energy and spectral functions which are obtained applying the Moriya $\lambda $ correction to the vertex of the D$\Gamma $A for the three dimensional system (the analytical continuation to the real axis $i\nu _{n}\rightarrow \omega $ was done using the Padé algorithm). In this case, as mentioned above, the $\lambda $ correction is expected to result in small -and only quantitative- changes of the final D$% \Gamma $A results, because in $d=3$ (where the antiferromagnetic long-range order survives at finite temperatures) the $\lambda $ correction produces just a moderate reduction of the Néel temperature w.r.t. the DMFT value. Our results, shown in Fig. \[Fig3d\], clearly confirm this expectation. Specifically, we analyze the case, already considered in our previous study Ref. , i.e., the three dimensional Hubbard model with $U=1.5$ (in the units of half the variance of the non-interacting DOS, being $% D\equiv 2\sqrt{6}t$ for $d=3$), and $\beta =11.2$ (in units of $1/D$), which corresponds to a temperature *slightly above* the DMFT Néel temperature ($T_{N}^{\mathrm{DMFT}}$), but appreciably higher than the three-dimensional $T_{N}^{\text{D}\Gamma {\text{A}}}$ with $\lambda$ correction (an estimate of the $\lambda-$reduced Néel temperature gives $\beta^{\text{D}\Gamma \text{A}}=1/T_{N}^{D\Gamma A} \simeq 16.5$). In this situation, as noticed in Ref.  and shown in Fig. \[Fig3d\] (first row), the standard D$\Gamma$A results display a sizable renormalization of the quasiparticle (QP) peak present in the DMFT spectrum. However, no qualitative change in the nature of the spectral functions can be observed. The inclusion of the Moriya $\lambda $ correction, as shown in the second row of Fig. 
\[Fig3d\], reduces the renormalization effects due to non-local correlations: both the real and the imaginary part of the D$\Gamma $A self-energy at low frequency get very close to the DMFT values, and, obviously, the same happens to the QP peak in $A(\mathbf{k},\omega )$. This result is easily understood in terms of the reduction of $T_{N}$ determined by the Moriya corrections, since the enhanced distance to the second-order antiferromagnetic transition at $T_{N}$ leads to a reduction of the spin-fluctuation and corrections to the DMFT self-energy. If we reduce the temperature towards the D$\Gamma $A Néel temperature, antiferromagnetic spin fluctuations become strong again, and as shown in Fig. \[Fig3db15\], we indeed find results which are qualitatively similar to those without $% \lambda $ correction (first row of Fig.\[Fig3d\]). In particular, in both figures the quasiparticle weight is smaller in D$\Gamma $A than in DMFT in agreement with the expected effect of antiferromagnetic fluctuations. ![(Color online) DMFT self-energies and spectral functions (grey dashed line) at $\mathbf{k}_F=(\protect\pi/2,\protect\pi/2,\protect\pi/2)$ for the Hubbard model in $d=3$ at $U= 1.5 D$ ($D=2\protect\sqrt{6}t$) and $\protect\beta =11.2$ (i.e., slightly above $T_N^{\mathrm{DMFT}}$) are compared with the corresponding D$\Gamma$A results with (lower row; solid blue line) and without (upper row; black dotted line) Moriya $\protect \lambda$ correction. Note that (i) the non-local fluctuations modify only quantitatively the shape of the QP, but no pseudogap appears, and (ii) non-local correlation effects are further reduced by the inclusion of the Moriya $\protect\lambda$ correction.[]{data-label="Fig3d"}](Fig5new.eps){width="8cm"} ![(Color online) DMFT self-energies and spectral functions at $\mathbf{k}_F=(\protect\pi /2,\protect\pi /2,\protect\pi/2)$ for the Hubbard model in $d=3$ at $U= 1.5 D$ ($D=2 \protect\sqrt{6}t$) and $\protect\beta =15$ compared with the corresponding D$\Gamma$A ones with $\protect\lambda$ correction. Lowering $% T=1/\protect\beta$ towards $T_N^{D\Gamma A}$, qualitatively similar results as without $\protect\lambda$ correction at higher $T$ (upper row of Fig. \[Fig3d\], $\protect\beta=11.2$) are obtained.[]{data-label="Fig3db15"}](Fig6new.eps){width="8cm"} Summing up the results for the isotropic three dimensional system, we emphasize that the principal consequence of the inclusion of the Moriya $% \lambda $ correction is a shift of the region with appreciable non-local correlation effects (i.e., the region where the D$\Gamma $A spectra substantially differ from DMFT) to lower temperatures, i.e., to the proximity of the new line of the antiferromagnetic phase transition. Our result demonstrates that for $d=3$ -with or without lambda correction- the extension of the region characterized by relevant non-local correlations is relatively small even for intermediate values of the interactions. This indicates, hence, that for $d=3$ DMFT represents indeed a good approximation, except for the region close to the antiferromagnetic transition. Results for the Hubbard model in two dimensions {#Sec:2D} =============================================== The effects of non-local correlations are -as one can imagine- much more dramatic for a two-dimensional system. It is easy to figure out that the divergence of the ladder diagrams in the spin channel leads to huge non-local corrections in the D$\Gamma$A self-energy, which can differ also qualitatively from the DMFT one. 
At the same time, one should expect that in two dimensions the non-local correlation effects could be considerably overestimated by the D$\Gamma$A without the inclusion of the Moriya $\lambda$ correction. As we have discussed in Section \[Sec:lambda\], these corrections are essential to fulfill the Mermin-Wagner theorem, pushing the Néel temperature from the DMFT value down to zero. Hence, for any finite temperature the antiferromagnetic fluctuations are reduced. The effects of the divergence of the spin ladder diagrams are also to some extent attenuated in the formula for the D$\Gamma$A self-energy, because of the extra dimension gained at $T=0$ due to the transformation of the Matsubara summation to a frequency integral on the r.h.s. of Eq. (\[SE\]).

[![(Color online) D${\Gamma}$A results for the half-filled two-dimensional Hubbard model at $U=D=4t$, $\protect\beta=17$ (just slightly above $T_N^{\mathrm{DMFT}}$) computed without (first and third row; black dotted line) and with (second and fourth row; blue solid line) the Moriya $\protect\lambda$ correction, and compared with DMFT (grey dashed line). The D$\Gamma$A calculations in the non-self-consistent scheme show a clear pseudogap opening for the $\mathbf{k}$ points of the non-interacting FS, but more pronounced in the antinodal direction. Within the $\protect\lambda$-corrected scheme, one can still notice a pseudogap opening but only in the antinodal direction, while in the nodal direction a strongly damped QP appears. []{data-label="Fig2Db17"}](Fig7new.eps "fig:"){width="85mm"}]{}

In light of these considerations, we can more easily interpret the results of the D$\Gamma$A for the two-dimensional Hubbard model, which are presented in Figs. \[Fig2Db17\], \[Fig2Dnod\], \[Fig2Dant\]. Specifically, we start the analysis of the two dimensional case by evaluating the effects of the Moriya $\lambda$ correction on the D$\Gamma$A results computed for the half-filled Hubbard model with $U=4t$ at a temperature ($\beta=17$) *slightly above* the corresponding $T_N$ in DMFT. In the first/third row of Fig. \[Fig2Db17\], we show the D$\Gamma $A self-energy and spectral function at the Fermi surface (FS) at the nodal \[$\mathbf{q}=(\frac{\pi }{2},\frac{\pi }{2})$\]/antinodal \[$\mathbf{q}=(\pi ,0)$\] points computed without the Moriya correction. One can clearly observe that, in contrast to the three dimensional case, the D$\Gamma $A spectra qualitatively differ from the original DMFT ones because (i) a pseudogap appears at low frequencies and (ii) the spectra are markedly anisotropic in the nodal/antinodal direction, as the observed pseudogap is evidently more pronounced at the antinodal points. As discussed above, the Moriya $\lambda$ correction is however expected to be much more important in the two-dimensional than in the three dimensional case. This is confirmed by the results shown in the second/fourth row of Fig. \[Fig2Db17\]. In these panels the reduction of the non-local effects due to the inclusion of the Moriya $\lambda$ correction in D$\Gamma$A is evident. It is important to notice, however, that although the distance in the phase diagram from the actual antiferromagnetic transition (occurring at $T=0$ for $d=2$) is considerably larger than for $d=3$, non-local correlation effects are nonetheless extremely strong.
Turning to the details, we still observe a remarkable anisotropy in the D$\Gamma$A spectra after the inclusion of the Moriya correction, with a strongly renormalized QP peak in the nodal direction and a rather clear pseudogap in the antinodal direction. The results of D$\Gamma$A (implemented with the Moriya correction) hence indicate that for the two-dimensional system at half-filling antiferromagnetic fluctuation effects predominate in a wide region of the phase diagram, determining the onset of an anisotropic pseudogap in the spectra also for considerably high temperatures, qualitatively similar to that observed in underdoped cuprates.[@pseudogapCuprates] The inclusion of the Moriya correction in D$\Gamma $A allows us to extend our analysis to the low-temperature regime $T<T_{N}^{\text{DMFT}}$. In particular, we are interested in studying the evolution of the spectral function when the temperature is considerably reduced compared to the DMFT value $T_{N}^{\mathrm{DMFT}}$. In Figs. \[Fig2Dnod\] and \[Fig2Dant\] we report the D$\Gamma $A calculation of the self-energy and the spectral function for the same case considered above ($U=4t$, half-filling) for three different decreasing temperatures ($\beta =17$, shown already in Fig. \[Fig2Db17\], $\beta =25$ and $\beta =60$) in the nodal and antinodal direction, respectively. First, we note that the anisotropy in the self-energy and the spectra remains visible at all temperatures. In addition, a marked tendency towards a completely gapped spectrum can be seen: at the lowest temperature ($\beta =60$) a pseudogap appears also in the nodal direction, while the pseudogap already present in the antinodal direction becomes remarkably more profound. At this temperature, therefore, the anisotropy is reduced in comparison to the higher-$T$ cases, due to the strong depletion of spectral weight at $\omega =0$. This result can be understood in terms of the closer proximity to the antiferromagnetic instability at $T=0$, and is consistent with the marked pseudogap visible in the $k$-integrated spectral function obtained by means of cluster DMFT in Ref. .

[![(Color online) Temperature evolution of the D${\Gamma}$A results for the half-filled two-dimensional Hubbard model at $U=D=4t$ in the nodal direction, computed with the Moriya $\protect\lambda$ correction, and compared with DMFT. A clear pseudogap emerges at the lowest temperature ($\protect\beta=60$), similarly to the results of the non-self-consistent scheme in the proximity of $T_N^{\mathrm{DMFT}}$ (see Fig. \[Fig2Db17\]).[]{data-label="Fig2Dnod"}](Fig8new.eps "fig:"){width="8cm"}]{}

[![(Color online) Same as in Fig. \[Fig2Dnod\] in the antinodal direction. As expected, a very pronounced pseudogap characterizes the lowest-temperature results. The behavior of the spectral functions is not completely monotonic, as the pseudogap seems to disappear at $\protect\beta=25$. At all temperatures, however, the pseudogap features are always more marked in the antinodal than in the nodal direction.[]{data-label="Fig2Dant"}](Fig9new.eps "fig:"){width="8cm"}]{}

It is worth noticing, however, that the temperature evolution towards the formation of a fully gapped spectrum at $T\rightarrow 0$ does not appear to be completely monotonic. The effects of the non-local fluctuations seem to be slightly weaker in the D$\Gamma$A results for $\beta=25$ (second row in Figs. \[Fig2Dnod\]-\[Fig2Dant\]) than for $\beta=17$ (first row).
More specifically, this is visible in the slightly *more Fermi-liquid-like* behavior of the real and imaginary parts of the self-energies at $\beta=25$ in comparison to $\beta=17$.

[![(Color online) D${\Gamma}$A results with $\protect\lambda$ corrections for the half-filled two-dimensional Hubbard model at $U=2D=8t$, $\protect\beta=40$ compared with the corresponding DMFT ones.[]{data-label="Fig2Db40"}](Fig10new.eps "fig:"){width="85mm"}]{}

A possible interpretation of this specific feature of our results is to relate the non-monotonic temperature evolution of the D$\Gamma $A spectral function to a competition between non-local and local mechanisms capable of destroying coherent excitations: (i) the (non-local) antiferromagnetic fluctuations, which become less pronounced with increasing $T$, making the system more metallic ($\chi _{\mathbf{{Q},0}}^{s}=$ $8.9\cdot 10^{3}$, $39.26$, and $13.28$, for $\beta =60$, $25$, and $17$, respectively); and at the same time (ii) the thermal loss of coherence, which is at the origin of the so-called crossover region in the (purely local) DMFT and is reflected in the increasing values of the quasiparticle damping ($\gamma =-$Im$\Sigma (0)=0.009, 0.021, 0.034$ for the three considered temperatures, respectively). The relevance of the interplay between these two mechanisms is an interesting issue raised by our D$\Gamma $A results. It might also be related to a similar non-monotonic trend in the cluster DMFT phase diagram reported by Park *et al.* [@Park08]. The D$\Gamma $A results at stronger interaction ($U=2D$ and $\beta=40$) are presented in Fig. \[Fig2Db40\]. At the considered low temperature the local DMFT spectral functions have a peaky structure, because we solve the impurity problem of DMFT by means of exact diagonalization (ED), which treats only a finite number of sites. Note, however, that the D$\Gamma$A spectral functions are continuous due to the momentum and frequency sums in Eq. (\[SE0\]), even though ED is employed as an impurity solver. The nonlocal spectral functions show a splitting of the quasiparticle peak due to magnetic correlations, which is similar to the structure (d) in Fig. \[Fig:3\] discussed in Sect. IV [@Kusunose]. Closer to the Mott transition (i.e. at even stronger $U$) we also expect the formation of the structures (a)-(c) of Fig. \[Fig:3\]. The presented results demonstrate that the D$\Gamma $A approach, with the inclusion of the Moriya corrections, allows for a nontrivial analysis of the effects of long-range spatial correlations in every region of the phase diagrams of strongly interacting fermionic systems both in two and three dimensions.

$\mathbf{k}$-resolved spectral functions in two dimensions {#Sec:ARPES}
==========================================================

Let us now calculate the $\mathbf{k}$-dependence of the spectral functions in the directions of high symmetry, as can be observed in angular resolved photoemission spectroscopy (ARPES). It is worthwhile remarking that, in contrast to the cluster extensions of DMFT, this does not require any kind of interpolation in $\mathbf{k}$-space: Due to the diagrammatic nature of the D$\Gamma$A, the spectra for every chosen $\mathbf{k}$ point in the first Brillouin zone are easily computed via Eq. (\[SE\]). Here, in Fig. \[Fig:2DkL\], we present D$\Gamma$A results with Moriya $\lambda$ correction for the same case previously considered in Fig. \[Fig2Db17\] (second and fourth row).
As is often done, we consider two different $\mathbf{k}$-paths along the Brillouin zone, the first one along the nodal direction \[$(0,0) \rightarrow (\pi,\pi)$, left panel\] and the second one right at the border of the Brillouin zone, crossing the antinodal point at the FS \[$(\pi,\pi) \rightarrow (\pi,0)$, right panel\]. Our analysis of the $\mathbf{k}$-resolved D$\Gamma$A results allows us to appreciate the evolution of the main features of the D$\Gamma$A spectral functions. Specifically, we observe that for the points farthest from the FS, the spectral functions display similar features in the two cases: a relatively narrow peak separated from a broader maximum at higher energies, which represents the incoherent processes building up the (upper) Hubbard band. When proceeding in the direction of the FS, as expected, the narrow peak moves towards the Fermi energy, while the broad feature becomes less pronounced. A qualitative difference between the two selected paths emerges only in the vicinity of the FS: the shift of the narrow peak down to zero energy is frozen along the second path, consistent with the opening of the anisotropic pseudogap in the antinodal direction, while it continues to shift down to the Fermi level in the nodal direction. It is also worth noticing the occurrence in both cases of a slight broadening of the narrow peak while approaching the FS. This trend, which is markedly different from any FL expectation, can be understood in terms of the maximum of Im$\,\Sigma(\mathbf{k},\omega)$ appearing at zero frequency (see again Fig. \[Fig2Db17\]) for both directions. The enhanced value of Im$\,\Sigma(\mathbf{k},\omega)$ at low frequencies, which is ultimately responsible for the opening of the pseudogap starting at the antinodal points, determines a loss of coherence and, hence, the observed broadening of the peak as it moves closer to the FS.

[![(Color online) $\mathbf{k}$-resolved D$\Gamma$A spectra along the nodal (left) and antinodal (right) direction for the half-filled two-dimensional Hubbard model at $U=D=4t$ and $\protect\beta=17$, calculated with the Moriya $\protect\lambda$ correction.[]{data-label="Fig:2DkL"}](Fig11.eps "fig:"){width="8.cm"}]{}

Conclusion {#Sec:Conclusion}
==========

Based on the representation of the nonlocal self energy which accounts for the effect of the bare Coulomb interaction and charge (spin) fluctuations, we have extended the recently introduced dynamical vertex approximation (D$\Gamma$A) by including a Moriyaesque $\lambda$ correction to the local vertex in Section \[Sec:lambda\]. The value of $\lambda$ is determined from the sum rule which relates the $\omega$-integrated self energy to the occupation, and allows for a proper reduction of the DMFT Néel temperature, in two dimensions even to $T_N=0$, so that the Mermin-Wagner theorem is fulfilled. This correction is therefore particularly important in two dimensions, where spin fluctuations are especially strong. Without the Moriya $\lambda$ correction, a much more involved self-consistent solution of the D$\Gamma$A equations would be necessary to yield similar results. The method we have introduced here allows for a treatment of non-local long-range spatial correlations in finite-dimensional systems. In three dimensions, pronounced effects of non-local spin fluctuations are found only close to the antiferromagnetic phase transition.
This is in contrast to the two dimensional case where antiferromagnetic fluctuations completely reshuffle the spectrum, also far away from the antiferromagnetic phase transition at $T_N=0$, leading eventually to the formation of a pseudogap. Qualitatively, the spectral functions can be understood by means of the analytical formula for the self energy proposed in Section \[Sec:approx\]. Calculating several D$\Gamma$A self energies along the high symmetry lines of the Brillouin zone, we obtain the momentum dependence of the spectral functions, which could be directly compared with the ARPES data. D$\Gamma$A can serve as a very promising method for future studies of the Hubbard model at non-integer filling, in particular in the vicinity of the antiferromagnetic quantum critical point.[@SpFerm1; @QCP] A further important development would be also the generalization of the method to the multi-orbital case, to analyze the effects of non local correlations beyond DMFT in realistic bandstructure calculations.[@LDADMFT] *Acknowledgments.* We thank W. Metzner, M. Capone, C. Castellani, G. Sangiovanni, and R. Arita for stimulating discussions and are indebted to M. Capone also for providing the DMFT(ED) code which has served as a starting point. This work has been supported by the EU-Indian cooperative network MONAMI. The work of AK was supported by the Russian Basic Research Foundation through Grants No. 1941.2008.2 (Support of Scientific Schools) and 07-02-01264a. J. Hubbard, Proc. Roy. Soc. London A [**276**]{}, 238 (1963); M. C. Gutzwiller, Phys. Rev. Lett. [**10**]{}, 159 (1963); J. Kanamori, Progr. Theor. Phy. [**30**]{}, 275 (1963). E. H. Lieb and F. Y. Wu, Phys. Rev. Lett. [**20**]{}, 1445 (1968). W. Metzner and D. Vollhardt, , 324 (1989). A. Georges and G. Kotliar, Phys. Rev. B [**45**]{}, 6479 (1992). A. Georges, G. Kotliar, W. Krauth, and M. Rozenberg, Rev. Mod. Phys. [**68**]{}, 13 (1996). R. Bulla, Phys. Rev. Lett. [**83**]{} 136 (1999). N. F. Mott, Rev. Mod. Phys. [**40**]{}, 677 (1968), [*Metal-Insulator Transitions*]{} (Taylor & Francis, London, 1990); F. Gebhard, [*The Mott Metal-Insulator Transition*]{} (Springer, Berlin, 1997). M. Jarrell, Phys. Rev. Lett. [**69**]{} 168 (1992). D. Vollhardt, N. Blümer, K. Held, M. Kollar, J. Schlipf, and M. Ulmke, Z. Phys. B [**103**]{}, 283 (1997). E. Dagotto, Rev. Mod. Phys. [**66**]{}, 763 (1994). N. E. Bickers, D. J. Scalapino, and S. R. White, Phys. Rev. Lett. 62, 961 (1989). J. M. Vilk and A.-M. S. Tremblay, J. Phys. I (France) [**7**]{}, 1909 (1997). J. Polchinski, Nucl. Phys. B [**231**]{}, 269 (1984); D. Zanchi and H.J. Schulz, Phys. Rev. B [**54**]{}, 9509 (1996); ibid. [**61**]{}, 13609 (2000); C. J. Halboth and W. Metzner, Phys. Rev. B [**61**]{}, 7364 (2000); C. Honerkamp, M. Salmhofer, N. Furukawa, and T.M. Rice, Phys. Rev. B [**63**]{}, 035109 (2001); C. Honerkamp and M. Salmhofer, Phys. Rev. B[** 64**]{}, 184516 (2001). J. J. Deisz, D. W. Hess, and J. W. Serene, Phys. Rev. Lett. [**76**]{}, 1312 (1996); J. Altmann, W. Brenig, and A.P. Kampf, Eur. Phys. J. B [**18**]{}, 429 (2000). B. Kyung, Phys. Rev. B [**58**]{}, 16032 (1998); S. Moukouri, S. Allen, F. Lemay, B. Kyung, D. Poulin, Y. M. Vilk, and A.-M. S. Tremblay, ibid. [**61**]{}, 7887 (2000). D.Zanchi, Europhys. Lett. [**55**]{}, 376 (2001); C. Honerkamp and M. Salmhofer, Phys. Rev. B [**67**]{}, 174504 (2003); A. A. Katanin and A. P. Kampf, Phys. Rev. Lett. [**93**]{}, 106406 (2004); D. Rohe and W. Metzner, Phys. Rev. B [**71**]{}, 115116 (2005). M. H. Hettler, A. N. 
Tahvildar-Zadeh, M. Jarrell, T. Pruschke, and H. R. Krishnamurthy, Phys. Rev. B [**58**]{}, (1998) 7475; C. Huscroft, M. Jarrell, Th. Maier, S. Moukouri, and A. N. Tahvildarzadeh, Phys. Rev. Lett. [**86**]{}, 139 (2001); A. I. Lichtenstein and M. I. Katsnelson, Phys. Rev. B [**62**]{}, R9283 (2000); G. Kotliar, S. Y. Savrasov, G. Pálsson, and G. Biroli, Phys. Rev. Lett. [**87**]{}, 186401 (2001); T. A. Maier, M. Jarrell, T. Pruschke, M. H. Hettler, Rev. Mod. Phys. [**77**]{}, 1027 (2005); E. Koch, G. Sangiovanni, and O. Gunnarsson, Phys. Rev. B [**78**]{}, 115102 (2008). A. Schiller and K. Ingersent, , 113 (1995). V. I. Tokar and R. Monnier, cond-mat/0702011 (unpublished). A. Toschi, A. Katanin, and K. Held, Phys. Rev. B [**75**]{}, 045118 (2007). Indepedently, a similar method, but with a cluster instead of a single-site as a starting point, was developed by C. Slezak, M. Jarrell, Th. Maier, and J. Deisz, cond-mat/0603421 (unpublished). Similar, albeit less elaborated (complete), ideas were also indepedently developed by H. Kusunose, J. Phys. Soc. Jpn. [**75**]{}, 054713 (2006). D$\Gamma$A susceptibilities have been recently calculated also by G. Li, H. Lee, and H. Monien, Phys. Rev. B 78, 195105 (2008). A. N. Rubtsov, M. I. Katsnelson, and A. I. Lichtenstein, Phys. Rev. B [**77**]{}, 033101 (2008); S. Brener, H. Hafemann, A. N. Rubtsov, M. I. Katsnelson, and A. I. Lichtenstein, Phys. Rev. B [**77**]{}, 195105 (2008). E. Z. Kuchinskii, I. A. Nekrasov, and M. V. Sadovskii, Sov. Phys. JETP Lett. [**82**]{}, 98 (2005); M. V. Sadovskii I. A. Nekrasov, E. Z. Kuchinskii, Th. Pruschke, and V. I. Anisimov, Phys. Rev. B [**72**]{}, 155105 (2005). For an elementary introduction, see K. Held, A.A. Katanin, and A. Toschi, Progr. Theor. Phys. Suppl. 176, 117 (2008) . For a review see, e.g., T.  Timusk and B.  W.  Statt, Rep. Prog. Phys. 62 (1999) 61-122, and A.  Damascelli, Z.  Hussain, and Z.-X. Shen, Rev. Mod. Phys. [**75**]{}, 473 (2003) J. A. Hertz and D. M. Edwards, J. Phys. F [**3**]{}, 2174 (1973). J. A. Hertz, Phys. Rev. B [**14**]{}, 3 (1976). T. Moriya, “Spin fluctuations in Itinerant Electron Magnetism” (Springer, 1985). P. Monthoux, A. V. Balatsky, and D. Pines, Phys. Rev. Lett. [**67**]{}, 3448 (1991); Phys. Rev. B [**46**]{}, 14803 (1992). Ar. Abanov, A. V. Chubukov, and J. Schmalian, Adv. Phys. [**52**]{}, 119 (2003). K. Biczuk, R. Bulla, R. Caessen, and D. Vollhardt, Int. J. Mod. Phys. B [**16**]{} 3759 (2002). H. Park, K. Haule, and G. Kotliar, Phys. Rev. Lett. 101, 186403 (2008). S. Paschen, T. Lühmann, S. Wirth, P. Gegenwart, O. Trovarelli, C. Geibel, F. Steglich, P. Coleman and Q. Si, Nature [**432**]{}, 881 (2004); Custers J, Gegenwart P, Wilhelm H, Neumaier K, Tokiwa Y, Trovarelli O, Geibel C, Steglich F, Pepin C, Coleman P, 424 524 (2003). V. I. Anisimov, A. I. Poteryaev, M. A. Korotin, A. O. Anokhin and G. Kotliar, J. Phys. Cond. Matter [**9**]{} (1997), 7359; A. I. Lichtenstein and M. I. Katsnelson, Phys. Rev. B [**57**]{} 198, 6884 (1998); G. Kotliar, S. Y. Savrasov, K. Haule, V. S. Oudovenko, O. Parcollet and C. A. Marianetti, Rev. Mod. Phys. [**78**]{}, 865 (2006); K. Held, Adv. Phys. [**56**]{}, 829 (2007).
--- abstract: 'We present a new permutation-invariant network for 3D point cloud processing. Our network is composed of a recurrent set encoder and a convolutional feature aggregator. Given an unordered point set, the encoder firstly partitions its ambient space into parallel beams. Points within each beam are then modeled as a sequence and encoded into subregional geometric features by a shared recurrent neural network (RNN). The spatial layout of the beams is regular, and this allows the beam features to be further fed into an efficient 2D convolutional neural network (CNN) for hierarchical feature aggregation. Our network is effective at spatial feature learning, and competes favorably with the state-of-the-arts (SOTAs) on a number of benchmarks. Meanwhile, it is significantly more efficient compared to the SOTAs.' author: - | Pengxiang Wu$ ^1 $, Chao Chen$ ^2 $, Jingru Yi$ ^1 $, Dimitris Metaxas$ ^1 $\ $ ^1 $Department of Computer Science, Rutgers University, NJ, USA, {pw241, jy486, dnm}@cs.rutgers.edu\ $ ^2 $Department of Biomedical Informatics, Stony Brook University, NY, USA, chao.chen.cchen@gmail.com title: Point Cloud Processing via Recurrent Set Encoding --- Introduction {#sec:intro} ============ Point cloud is a simple and compact geometric representation of 3D objects, and has been broadly used as the standard output of various sensors. In recent years, the analysis of point clouds has gained much attention due to its wide application in real world problems such as autonomous driving [@CVPR_autonomous_driving], robotics [@Robotics], and navigation [@Navigation]. However, it is nontrivial to solve such tasks using traditional deep learning tools, e.g., convolutional neural networks (CNNs). Unlike a 2D image with regularly packed pixels, a point cloud consists of sparse points without a canonical order. Moreover, the spatial distribution of a point cloud is heterogeneous due to factors in data acquisition, e.g., perspective effects and radial density variations. Due to the 3D nature of the problem, various methods have been proposed to convert a point cloud into a 3D volumetric representation, to which 3D CNNs are then applied [@3D_ShapeNets; @VoxNet]. However, despite their success in analyzing 2D images, CNNs are not satisfactory in this context. The commonly used 3D CNN is extremely memory consuming, and thus can not be trained efficiently. A more serious issue is that converting a point cloud into a volumetric representation introduces quantization artifacts and loses fine-scale geometric details. ![image](RCNet.pdf){width="\linewidth"} Better performance has been achieved by deep networks that avoid the volumetric convolutional architechture and operate directly on point clouds. Representative works include PointNet [@PointNet] and PointNet++ [@PointNet++], which process point clouds by combining multi-layer perceptron (MLP) network with symmetric operations (e.g., max-pooling) to learn point features globally or hierarchically. Inspired by PointNet, several recent methods have been proposed to further improve the point feature representation [@Kernel-Graph; @ShapeContextNet; @SO-Net]. This class of networks are invariant to input permutation and have achieved state-of-the-art results. However, due to the reliance on the coarse feature pooling technique, they fail to fully exploit fine-scale geometric details. In this work, we aim to completely bypass the coarse pooling-based technique, and propose a new deep network for point cloud data. 
At the core of our method is a *recurrent set encoder*, which divides the ambient domain into parallel *beams* and encodes the points within each beam as *subregional* geometric features with an RNN. Our key observation is that when the beam is of moderate size, the RNN is approximately dealing with a sequence of points, as a beam only contains points near a 1D line. Such a sequential input largely benefits the learning of RNN. Meanwhile, noticing that the beams are packed in a regular spatial layout, we use a 2D CNN to further analyze the beam features (called the *convolutional feature aggregator*). Being efficient and powerful at feature learning, the 2D CNN can effectively aggregate the subregional features into a global one, while further benefiting the RNN learning in return. Our method (see Fig. \[fig:architecture\]) is surprisingly efficient and effective for point cloud processing. It is invariant to point order permutation, and competes favorably with the state-of-the-arts (SOTAs) in terms of both accuracy and computational efficiency. A few recent works also adopt convolution for point cloud processing. They typically utilize carefully designed domain transformations to map point data into suitable spaces, where convolution could be applied. Examples include SPLATNet [@SPLATNet] and PCNN [@PCNN_2018]. However, these methods are inefficient as they rely on sophisticated geometric transformations and complex convolutional operations, e.g., continuous volumetric convolution or sparse bilateral convolution. In contrast, our method only employs regular spatial partitioning and sorting, and leverages classic neural network architectures such as RNN and 2D CNN, which are well supported at both software and hardware levels. As a result, our network circumvents much implementation overhead and is significantly more efficient than these SOTAs in computation. It is worth mentioning that, our recurrent set encoder can be seen as a domain mapping function as well. But unlike these SOTAs, it is automatically learned via back-propagation instead of by careful handcrafted design. In this work, we focus on point cloud classification and segmentation tasks, and evaluate the proposed method on several datasets, including ModelNet10/40 [@3D_ShapeNets], ShapeNet part segmentation [@ShapePartSeg], and S3DIS [@S3DIS]. Experimental results demonstrate the superior performance of our method to the SOTAs in both accuracy and computational efficiency. In a nutshell, our main contributions are as follows: - We present a new architecture that operates directly on point clouds without relying on symmetric functions (e.g., max-pooling) to achieve permutation invariance. - We propose a recurrent set encoder for effective subregional feature extraction. To the best of our knowledge, this is the first time an RNN is effectively employed to model point clouds directly. - We propose to introduce the 2D CNN for aggregating subregional features. This design maximally utilizes the strengths of CNN while further benefiting the RNN encoder. The resulting network is efficient as well as effective at hierarchical and spatially-aware feature learning. Related Work {#sec:related_work} ============ We briefly review the existing deep learning approaches for 3D shape processing, with a focus on point cloud setting. 
### Volumetric Methods One classical approach to handling unstructured point clouds or meshes is to first rasterize them into regular voxel grids, and then apply standard 3D CNNs [@3D_ShapeNets; @VoxNet; @3D_CNN_LiDAR; @Vol_MVCNN; @Orientation_VoxelNet; @Segcloud; @3DCNN-DQN-RNN]. The major issue with such volumetric representations is that they tend to produce sparsely-occupied grids, which are unnecessarily memory-consuming. Besides, the grid resolutions are limited due to excessive memory and computational cost, causing quantization artifacts and loss of details. To remedy these issues, recent methods propose to adaptively partition the grids and place denser cells near the shape surface [@O-CNN; @OctNet; @Octree]. These methods suffer less from the computational and memory overhead, but still lose geometric details due to sampling and discretization. ### View-based Methods Another strategy is to encode the 3D shapes via a collection of 2D images which are rendered from different views. These rendered images can be fed into traditional 2D CNNs and processed via transfer learning, i.e., fine-tuning networks pre-trained on large-scale image datasets [@MVCNN; @Vol_MVCNN; @PCNN]. However, such view projections would lead to self-occlusions and consequently severe loss of geometric information. Moreover, view-based methods are mostly applied to classification tasks, and are hard to generalize to detail-focused tasks such as shape segmentation and completion. ### Non-Euclidean Methods These approaches build graphs from the input data (e.g., based on the mesh connectivity or k-nearest neighbor relationship), and apply CNNs to the graph spectral domain for shape feature learning [@GeoDL; @LocalizedSpec_CNN; @GraphCNN; @FastGraphCNN; @SemiSupervised_GraphCNN; @AdapGraphCNN]. Graph CNN models are suitable for non-rigid shape analysis due to the isometry invariance. However, it is comparatively difficult to generalize these methods across non-isometric shapes with different structures, largely because the spectral bases are domain-dependent [@SyncSpecCnn]. ### Point Cloud-based Methods PointNet [@PointNet] pioneers a new type of deep neural networks that act directly on point clouds without data conversions. Its key idea is to learn per-point features independently, and then aggregate them in a permutation-invariant manner via a symmetric function, e.g., max-pooling. While achieving impressive performance, PointNet fails to capture crucial fine-scale structure details. To address this issue, the follow-up work PointNet++ [@PointNet++] exploits local geometric information by hierarchically stacking PointNets. This leads to improved performance, but at the cost of computational efficiency. Besides, since PointNet++ still treats points individually at local scale, the relationships among points are not fully captured. In light of the above challenges, a number of recent works have been proposed for better shape modeling [@KD_net; @SO-Net; @Kernel-Graph; @SliceNet; @ShapeContextNet; @PCCN]. These methods overcome the weakness of coarse pooling operation at some degree, and achieve improved performance. Another class of methods have been recently developed without relying on pooling to guarantee permutation invariance. They typically transform the point data into another domain, where convolutions could be readily applied. In SPLATNet [@SPLATNet], the source point samples are mapped into a high-dimensional lattice, where sparse bilateral convolution is employed for shape feature learning. 
In PCNN [@PCNN_2018], a pair of extension and restriction operators are designed to translate between point clouds and volumetric functions, such that continuous volumetric convolution could be applied. Our method could be considered belonging to this category from the perspective of domain transformation. However, different from existing methods, our domain mapping function is automatically learned rather than by handcrafted design. Moreover, instead of utilizing complex convolutions, we employ the classic 2D convolution for feature aggregation. As a result, our method is more efficient in computation as well as effective at point feature learning. Method {#sec:method} ====== In this work, we focus on two tasks: point cloud classification and segmentation, and present two architectures correspondingly, as illustrated in Fig. \[fig:architecture\]. The input is a point set $ P = \{p_i \in \mathbb{R}^d, i=1, \cdots, N \} $, where each point $ p_i $ is a vector of coordinates plus additional features, such as normal and color. The output will be a $ 1\times K $ score vector for classification with $ K $ classes, or an $ N \times M $ score matrix for segmentation with $ M $ semantic labels. Our network, termed RCNet, consists of two components: the *recurrent set encoder* and the *convolutional feature aggregator*. The recurrent set encoder aims to extract subregional features from input point cloud, while convolutional feature aggregator is responsible for aggregating these extracted features hierarchically. Below we explain their details. ### Recurrent Set Encoder Given an unordered point set, the recurrent set encoder firstly partitions the ambient space into a set of parallel beams, and then divides the points into subgroups accordingly (see Fig. \[fig:architecture\]). The beams are uniformly distributed in a structured manner, spanning a 2D lattice. In particular, suppose the width, height and depth of a beam extends along $ x $, $ y $ and $ z $ axis, respectively. Let $ r $ and $ s $ be the hyper-parameters controlling the number of beams: $ w=(x_{max} - x_{min})/r $ and $ h=(y_{max} - y_{min})/s $, where $ w,h $ are the beam width and height; $ [x_{min}, x_{max}] $ and $ [y_{min}, y_{max}] $ are the maximum spanning ranges of points. Then a point with coordinate $ (x_k, y_k, z_k) $ is assigned to the $ (i,j) $-th beam if $ x_k - x_{min} \in [(i-1)w, iw) $ and $ y_k - y_{min} \in [(j-1)h, jh) $. In our implementation, since the point clouds are normalized to fit within a unit ball, we can simply set $ x_{min} = y_{min}=-1 $ and $ x_{max} = y_{max}=1 $. The subgroups of points are denoted by $ \{S_{ij}\}_{i=1, j=1}^{r,s} $. Note that depending on the tasks, it is also possible to perform non-uniform partition [@O-CNN]. In this work we only focus on uniformly partitioned beams. Given points in subgroup $ S_{ij} $, we treat them as a sequential signal and process it with an RNN. In particular, before being fed to RNN, points within each beam are sorted along the beam depth (according to their $ z $ coordinates). The RNN is single-directional, implemented using Gated Recurrent Units (GRU) [@GRU] with 2 layers. To the best of our knowledge, our network is the first to *effectively* use an RNN to handle 3D point sets directly. Interestingly, it has been previously observed that an RNN performs poorly on a 3D point cloud due to the lack of a unique and stable ordering [@PointNet; @OrderMatters]. The key to our success is the beam partition strategy. 
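To make the beam partition and the recurrent encoding concrete, the following is a minimal sketch in PyTorch (chosen purely for illustration; the paper does not tie the method to a particular framework). It assigns points to an $ r\times s $ grid of beams, sorts each beam along $ z $, and encodes it with a shared 2-layer GRU, padding empty beams with zeros as described above. The hidden size of 64, the class and function names, and the explicit Python loop (used for clarity rather than speed) are our own choices.

```python
import torch
import torch.nn as nn

def assign_beams(points, r=32, s=32, lo=-1.0, hi=1.0):
    """Map each point (x, y, z) to an (i, j) beam index on an r x s grid.

    Follows the partition rule in the text: beams tile the x-y plane
    (of extent [lo, hi] for normalized clouds) and extend along z.
    """
    w, h = (hi - lo) / r, (hi - lo) / s
    i = ((points[:, 0] - lo) / w).long().clamp(0, r - 1)
    j = ((points[:, 1] - lo) / h).long().clamp(0, s - 1)
    return i, j

class RecurrentSetEncoderSketch(nn.Module):
    """Illustrative (non-optimized) recurrent set encoder."""

    def __init__(self, in_dim=3, hidden=64, r=32, s=32):
        super().__init__()
        self.r, self.s, self.hidden = r, s, hidden
        # shared single-directional GRU with 2 layers, as in the text
        self.rnn = nn.GRU(in_dim, hidden, num_layers=2, batch_first=True)

    def forward(self, points):  # points: (N, in_dim), normalized to a unit ball
        i, j = assign_beams(points, self.r, self.s)
        feat = points.new_zeros(self.r, self.s, self.hidden)  # empty beams stay zero
        for bi in range(self.r):
            for bj in range(self.s):
                mask = (i == bi) & (j == bj)
                if mask.any():
                    beam = points[mask]
                    beam = beam[beam[:, 2].argsort()]     # sort along beam depth (z)
                    _, h_n = self.rnn(beam.unsqueeze(0))  # encode the point sequence
                    feat[bi, bj] = h_n[-1, 0]             # final hidden state of last layer
        return feat.permute(2, 0, 1)  # (hidden, r, s): 2D feature map for the CNN aggregator
```

A practical implementation would batch the per-beam sequences (e.g. via packed sequences) rather than looping in Python, but the loop keeps the correspondence with the description above explicit.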
With the relatively dense partitioning, the points within each beam is of moderate size, and can be approximately considered distributed along a 1D line. In another word, the RNN is approximately handling point signal of moderate length in a 1D space. This facilitates the learning of RNN and makes it behave quite robustly with respect to the input perturbation. The output of recurrent set encoder is a grid of 1D feature vectors, which are taken as a 2D feature map and fed into the subsequent 2D CNN aggregator: $$\label{eq:feature_map} I = \begin{bmatrix} \mathcal{R}(S_{11}) & \dots & \mathcal{R}(S_{1s})\\ \vdots & \ddots & \vdots \\ \mathcal{R}(S_{r1}) & \dots & \mathcal{R}(S_{rs}) \end{bmatrix},$$ where $ \mathcal{R} $ is a shared RNN with hidden size $ \ell $, and $ I \in \mathbb{R}^{r\times s \times \ell} $. Note that, we only utilize RNN to encode nonempty beams, and for those empty ones we pad zero vectors at the corresponding positions of $ I $. ### Convolutional Feature Aggregator We first note that the features encoded by RNN are actually *non-local*, as the points within each beam span a large range along the beam depth. To build a global shape descriptor, we need to connect these non-local features. A natural choice is using 2D convolutional neural network, given the structured output $ I $ in Eq.(\[eq:feature\_map\]). Being efficient and powerful at multi-scale feature learning, a 2D CNN aggregator brings much computational and modeling advantage compared to the sophisticated aggregators in previous methods, as shown in the experiment section. Further, the strength of a 2D CNN alleviates the modeling burden of the recurrent encoder and boosts the overall performance. In this work, we utilize a simple shallow CNN architecture to validate our idea (see Fig. \[fig:architecture\]), and leave advanced architectures for future exploration. The aggregated global feature could be used for shape classification directly, or combined with the per-point features for semantic segmentation, as illustrated in Fig. \[fig:architecture\]. Note that, for segmentation task we inject additional subregional information into the points via feature propagation, so as to facilitate the discriminative point feature learning. ### Remarks We stress a few key properties of RCNet below. 1. It is invariant to point permutation, a result derived from point sorting within beams. 2. The amount of context information embedded in the 2D feature maps can be controlled with beam sizes. Smaller beams would preserve richer spatial contexts while larger ones would contain less. In the extreme case, when the ambient space is trivially partitioned, i.e., there is only one beam, RCNet degenerates to the vanilla RNN model for point clouds [@PointNet]. The effect of beam size will be investigated in the experiment section in detail. 3. RCNet is computationally efficient and converges fast during training, due to the benefits of 2D CNN. Besides, unlike vanilla RNN, our recurrent encoder is parallelizable with each RNN processing a small portion of points. This further facilitates the computational efficiency. RCNet Ensemble -------------- In RCNet, the beam depth extends along a certain direction, i.e., $ z $ axis. While being effective at extracting subregional features in this direction, the recurrent encoder does not explicitly consider features along other directions. 
To further facilitate the point feature learning, we propose to capture geometric details in different directions and use an ensemble of RCNets, of which each single model has different beam depth directions. The ensemble unifies a set of “weak” RCNets and is able to learn richer geometric features. The resulting model, termed RCNet-E, is flexible and achieves better performance, as shown in our experiments. In practice, we implement an ensemble by independently training three RCNets, whose beam depths extend along $ x $, $ y $ and $ z $ axes respectively. Then we simply average their predictions to produce the final results. Note that, although multiple networks are used, thanks to the high efficiency of RCNet, their ensemble is still quite efficient. Moreover, such ensemble is amenable to parallelization for further speed-up. Experiments {#sec:exp} =========== In this section, we evaluate our RCNet on multiple benchmark datasets, including ModelNet10/40 [@3D_ShapeNets], ShapeNet part segmentation [@ShapePartSeg], and S3DIS [@S3DIS]. In addition, we analyze the properties of RCNet in details with extensive controlled experiments. Code can be found on the authors’ homepage. ### Ablation Study and a Baseline Model To validate the advantages of our recurrent set encoder, we compare it with the widely used pooling-based feature aggregator. In particular, we replace the recurrent encoder in RCNet with an MLP, consisting of two layers whose sizes are the same with that of the corresponding RNN hidden layers. This MLP is shared and applied to each point, followed by a global max-pooling to aggregate the subregional features. Meanwhile, the remaining parts of the model are kept the same with RCNet. We take this modified network as a baseline model. As demonstrated in the following section, our recurrent set encoder is more effective at describing the spatial layout and geometric relationships than pooling-based technique. Shape Classification -------------------- ### Datasets ModelNet10 and ModelNet40 [@3D_ShapeNets] are standard benchmarks for shape classification. ModelNet10 is composed of 3991 train and 908 test CAD models from 10 classes, while ModelNet40 consists of 12311 models from 40 categories, with 9843 models used for training and 2468 for testing. These models are originally organized with triangular meshes, and we follow the same protocol of [@PointNet; @PointNet++] to convert them into point clouds. In particular, for each model, we uniformly sample 1024 points from the mesh, and then normalize them to fit within a unit ball, centered at the origin. We only use the point positions as input features and discard the normal information. ### Training Following [@PointNet; @PointNet++; @KD_net], we apply data augmentation during the training procedure by randomly translating and scaling the objects, as well as perturbing the point positions. We set the hyper-parameters $ r=32$ and $ s=32 $. The learning rate is initialized to 0.001 with a decay of 0.1 every 30 epochs. The networks are optimized using Adam [@Adam], and it takes about $ 2 \sim 3 $ hours for the training to converge on a single NVIDIA GTX 1080 Ti GPU. ### Results We compare RCNet with several state-of-the-arts: VoxNet [@VoxNet], volumetric CNN [@Vol_MVCNN], O-CNN [@O-CNN], MVCNN [@MVCNN], ECC [@ECC], DeepSets [@Deep_Sets], vanilla RNN and PointNet [@PointNet], PointNet++ [@PointNet++], KD-Net [@KD_net], Pointwise CNN [@PointWise], SO-Net [@SO-Net], KCNet [@Kernel-Graph], SCN [@ShapeContextNet], and PCNN [@PCNN_2018]. 
The results are demonstrated in Table \[table:classification\]. We observe that a single RCNet is able to achieve competitive results against the state-of-the-arts, and with the ensemble the performance is further boosted. In particular, RCNet performs better than most existing approaches. While obtaining similar accuracy to PCNN, our network is significantly simpler in design. On the other hand, compared to the baseline model, RCNet outperforms it by a large margin. This validates the effectiveness of the recurrent encoder at modeling the relative relationships among points. It is worth noting that SO-Net [@SO-Net] also attempted to apply a standard CNN to its generated image-like feature maps, but this only led to decreased performance. In contrast, our RCNet is better at incorporating the advantages of CNN into point cloud analysis, thanks to the recurrent set encoder. Finally, our RCNet is computationally efficient. In particular, a single RCNet can be trained in about 3 hours. This is much faster than PointNet++ and PCNN, both of which require about 20 hours for training [@PointNet++; @PCNN_2018]. Besides, as shown in Table \[table:time\], on average it takes about 0.4 milliseconds for RCNet to forward a shape, while PointNet++ and PCNN require 2.8 and 16.8 milliseconds, respectively[^1]. Table \[table:time\] also summarizes the number of parameters of different networks. Interestingly, although our model has a larger size, it still runs faster than the other competitors. This validates that the classic RNN and 2D CNN, which are well supported at both software and hardware levels, contribute largely to the model efficiency. In contrast, since PointNet++ needs to perform additional K-nearest-neighbor queries on the fly on the GPU, it is much less efficient in spite of its smaller model size. Similarly, PCNN and SPLATNet$ _\text{3D} $ rely on sophisticated geometric transformations and complex convolutional operations. These operations are much less GPU-friendly and cause a lot of overhead in practice. It is worth mentioning that, since RCNet-E is naturally parallelizable, its inference time is almost the same as that of a single RCNet.

  Method                     Time Class. (ms)   Time Seg. (ms)   Size Class. (M)   Size Seg. (M)
  -------------------------- ------------------ ---------------- ----------------- ---------------
  RCNet (ours)               **0.4**            **4.5**          13.3              16.7
  RCNet-E (ours)             0.6                4.8              39.9              50.1
  PointNet++                 2.8                11.9             **1.0**           **1.7**
  PCNN                       16.8               109.3            8.1               5.4
  SPLATNet$ _{\text{3D}} $   -                  23.1             -                 2.7

  : Comparison of inference time and model size for different networks. Classification and segmentation are performed on the ModelNet40 and ShapeNet part datasets, respectively. Time is measured in milliseconds and corresponds to the average cost of forwarding one shape. The hardware used is an Intel i7-6850K CPU and a single NVIDIA GTX 1080 Ti GPU. “M" stands for million.[]{data-label="table:time"}

Shape Part Segmentation
-----------------------

### Dataset and Configuration

For shape part segmentation, the task is to classify each point of a point cloud into one of the predefined part categories. We evaluate the proposed method on the challenging ShapeNet part dataset [@ShapePartSeg], which contains 16881 shapes from 16 categories. The shapes are consistently aligned and normalized to fit within a unit ball. Each shape is annotated with 2-6 part labels, and in total there are 50 different parts. We sample 2048 points for each shape following [@PointNet; @PointNet++].
As in [@PointNet++], apart from point positions we also use normal information as input features. Following the setting in [@ShapePartSeg], we evaluate our methods assuming that the category of the input 3D shape is already known. The segmentation results are reported with the standard metric mIoU [@PointNet]. We use the official train/test split as in [@shapenet2015] in our experiment. We follow the same network configuration as in the classification task.

### Results

Table \[table:partseg\] compares RCNet with the following state-of-the-art point cloud-based methods: PointNet [@PointNet], PointNet++ [@PointNet++], Kd-Net [@KD_net], SPLATNet$ _{\text{3D}} $ [@SPLATNet], SO-Net (pre-trained) [@SO-Net], RSNet [@SliceNet], KCNet [@Kernel-Graph], A-SCN [@ShapeContextNet], and PCNN [@PCNN_2018]. In Table \[table:partseg\], we report the instance average mIoU as well as the mIoU scores for each category. ![Visualization of ShapeNet part segmentation results. From top to bottom: ground truth, baseline, baseline-E, RCNet, RCNet-E. From left to right: airplane, motorbike, lamp, table.[]{data-label="fig:shape_part_seg"}](shape_part_seg.pdf){width="1.0\linewidth"} As is shown, our method achieves better results than the state-of-the-art works. In particular, a single RCNet is able to achieve an average mIoU of 85.3, which is competitive with the performance of PointNet++ and PCNN. With the ensemble, the accuracy is further boosted and our method dominates most of the shape categories. Some qualitative segmentation results are illustrated in Fig. \[fig:shape\_part\_seg\]. Specifically, the first two columns show that both RCNet and RCNet-E are able to handle the small details of objects well. The third column indicates that the ensemble helps correct the prediction errors of a single model, and is better at capturing the fine-grained semantics than the baseline methods. The last column corresponds to a failure case, which is possibly due to imperfect model representation ability or caused by shape semantic ambiguity (i.e., the table board in the middle could be interpreted as either table support or tabletop). In Table \[table:time\], we compare the computational efficiency of different networks on the part segmentation task. As is shown, our method is more efficient than the state-of-the-arts[^2].

  Method              Mean IoU    Overall accuracy
  ------------------- ----------- ------------------
  PointNet            47.71       78.62
  A-SCN               52.72       81.59
  Pointwise CNN       -           81.50
  Baseline (ours)     50.31       81.57
  Baseline-E (ours)   52.38       82.98
  RCNet (ours)        51.40       82.01
  RCNet-E (ours)      **53.21**   **83.58**

  : Segmentation results on the S3DIS dataset. Mean IoU and point-wise accuracy are listed.[]{data-label="table:S3DIS"}

Semantic Scene Segmentation
---------------------------

### Dataset and Configuration

We evaluate our RCNet on the scene parsing task with the Stanford 3D indoor scene dataset (S3DIS) [@S3DIS]. S3DIS consists of 6 scanned large-scale areas, which in total contain 271 rooms. Each point in the scene point cloud is annotated with one of 13 semantic categories. Following [@PointNet], we pre-process the data by splitting the scene points into rooms, and then subdividing the rooms into small blocks of area 1m by 1m (measured on the floor). As in [@PointNet], we also use a k-fold strategy for training and testing. At training time we randomly sample 2048 points for each block, but use all the points during testing. We represent each point using 9 attributes, including XYZ coordinates, RGB values, and coordinates normalized with respect to the room.
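As a concrete illustration of this preprocessing, the sketch below splits one room into 1m-by-1m floor blocks and assembles the 9-dimensional point features just described (XYZ, RGB, and room-normalized coordinates). It is a minimal NumPy approximation written by us; the function name, the sampling details, and the handling of small blocks are assumptions rather than the exact official pipeline.

```python
import numpy as np

def room_to_blocks(xyz, rgb, block=1.0, npts=2048):
    """Split a room (xyz: (N, 3) in meters, rgb: (N, 3)) into block x block
    floor cells and return per-block (npts, 9) feature arrays."""
    room_min, room_max = xyz.min(0), xyz.max(0)
    extent = np.maximum(room_max - room_min, 1e-6)       # avoid division by zero
    ij = np.floor((xyz[:, :2] - room_min[:2]) / block).astype(int)
    blocks = []
    for key in {tuple(k) for k in ij}:                   # one entry per occupied cell
        idx = np.where(np.all(ij == key, axis=1))[0]
        idx = np.random.choice(idx, npts, replace=len(idx) < npts)  # sampling, as at train time
        norm = (xyz[idx] - room_min) / extent            # coordinates normalized to the room
        blocks.append(np.concatenate([xyz[idx], rgb[idx], norm], axis=1))
    return blocks
```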
The same shape segmentation RCNet is used for this task. ![Visualization of S3DIS segmentation results. From top to bottom: input scene, ground truth, baseline, baseline-E, RCNet, RCNet-E.[]{data-label="fig:S3DIS"}](S3DIS.pdf){width="1.0\linewidth"} ### Results We compare our RCNet with PointNet [@PointNet], A-SCN [@ShapeContextNet] and Pointwise CNN [@PointWise]. The results are reported in Table \[table:S3DIS\]. As is shown, our RCNet improves A-SCN by about $ 0.5\% $ in mean IoU and $ 2\% $ in overall accuracy. We visualize a few segmentation results in Fig. \[fig:S3DIS\]. It can be observed that RCNet is able to output smooth predictions and segment the small objects well. In contrast, the baseline methods tend to produce large prediction errors. This shows the benefits of our recurrent set encoder and the 2D CNN as feature aggregators. With ensemble, the segmentation accuracy is further boosted and our RCNet-E achieves the best results. Architecture Analysis --------------------- In this section we show the effects of network hyper-parameters and validate the design choices through a series of controlled experiments. We consider the following two main contributory factors on model performance: (1) the size of beams; (2) the number of points. We use ModelNet40 dataset as the test bed for comparisons of different options. Unless explicitly noted, all the experimental settings are the same with those in the shape classification experiment. ### The Size of Beams The beam size controls how much local context information would be utilized, and is a major contributory factor for the network performance. For RCNet, large beams will lead to a small feature map for the downstream CNN. This would increase the efficiency of CNN but in turn result in the loss of fine-scale geometric details. Moreover, beams with large size would be filled with too many points, and as a result the RNN would perform poorly in feature modeling. On the other hand, if the size of beams is too small, the subregions would contain insufficient amount of points, which is adverse to the feature learning. We conduct several experiments to investigate the influence of beam size on the network performance. In particular, we test RCNet with different specifications of hyper-parameters $ r $ and $ s $. The results are reported in Table \[table:beam\_size\]. As is shown, both larger and smaller beam sizes would hurt the performance, and $ r\times s =32 \times 32 $ leads to the best results. Note that, although beam size is an important parameter on the performance, our RCNet is still quite robust to this factor. In contrast, the max-pooling based encoder behaves quite sensitively and the performance decreases a lot with large beams. This further validates that pooling is a relatively coarse technique for exploiting geometric details. ### The Number of Points Point clouds obtained from sensors in real world usually suffer from data corruptions, which lead to non-uniform data with varying densities [@PointNet++]. To validate the robustness of our model to such situations, we randomly dropout the number of points in testing and conduct two different groups of experiments. In the first group, the models are trained on uniform point clouds without random point dropout, while in the second group the models are trained with random dropout as well. In the experiment, we set $ r=s=32 $ as in the shape classification task. The results are shown in Table \[table:num\_points\]. 
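The test-time subsampling used in these robustness experiments simply keeps a random subset of the input points; a minimal sketch (the function name and the evaluation loop are ours):

```python
import numpy as np

def random_point_dropout(points, keep):
    """Randomly retain `keep` points of a cloud (illustrative only)."""
    idx = np.random.choice(len(points), size=keep, replace=False)
    return points[idx]

# e.g., evaluate a trained classifier on sparser and sparser inputs:
# for n in (1024, 512, 256, 128):
#     clouds = [random_point_dropout(p, n) for p in test_clouds]
#     print(n, evaluate(model, clouds))   # `evaluate` and `model` are placeholders
```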
We observe that models trained with random point dropout (DP) during training are fairly robust to the sampling density variation, with drop of accuracy less than 3.3% when point number decreases from 1024 to 128. In contrast, those trained only on uniform data fail to generalize well to the cases of non-uniform data. Note that, despite the drop of accuracy, our RCNet still achieves better performance than the baseline model when trained without DP. This validates the superiority of RNN in subregional feature extraction compared to max-pooling. Conclusion and Discussion {#sec:conclusion} ========================= In this work we present a new deep neural network for 3D point cloud processing. Our network consists of a recurrent set encoder and a 2D CNN. The recurrent set encoder partitions the input point clouds into several parts, which are encoded via a shared RNN. The encoded part features are later assembled in a structured manner and fed into a 2D CNN for global feature learning. Such design leads to an efficient as well as effective network, thanks to the benefits of CNN and RNN. Experiments on four representative datasets show that our method competes favorably with the state-of-the-arts in terms of accuracy and efficiency. We also conduct extensive experiments to further analyze the network properties, and show that our method is quite robust to several key factors affecting the model performance. Finally, we note that the proposed recurrent set encoder can be generalized to other contexts. For example, we can build a KNN graph for the input point cloud and model the local neighborhood for each point with recurrent encoder. In particular, we can sort the $ k $ nearest neighbor points according to their distances to the query point, and then apply RNN to this point sequence for local feature learning. This is different from KCNet [@Kernel-Graph] which uses a local point-set kernel, and will be explored in the future. Acknowledgments =============== This work was partially supported by NSF IIS-1718802, CCF-1733866, and CCF-1733843. [^1]: For PCNN, we run the code released by the authors (https://github.com/matanatz/pcnn), with the default pointconv configuration. For PointNet++, we use the official implementation (https://github.com/charlesq34/pointnet2), and test the MSG model with the default network setting. [^2]: For SPLATNet$ _{\text{3D}} $, we run the code implemented by the authors (https://github.com/NVlabs/splatnet), with the default network configuration. For PointNet++, the MSG model with one hot vector is tested. For PCNN, we use the default pointconv configuration. In the experiment we sample 2048 points for each shape.
--- abstract: 'In classical thermodynamics the work cost of control can typically be neglected. On the contrary, in quantum thermodynamics the cost of control constitutes a fundamental contribution to the total work cost. Here, focusing on quantum refrigeration, we investigate how the level of control determines the fundamental limits to cooling and how much work is expended in the corresponding process. [[ We compare two extremal levels of control. First coherent operations, where the entropy of the resource is left unchanged, and second incoherent operations, where only energy at maximum entropy (i.e. heat) is extracted from the resource. For minimal machines, we find that the lowest achievable temperature and associated work cost depend strongly on the type of control, in both single-cycle and asymptotic regimes. We also extend our analysis to general machines.]{}]{} Our work provides a unified picture of the different approaches to quantum refrigeration developed in the literature, including algorithmic cooling, autonomous quantum refrigerators, and the resource theory of quantum thermodynamics.' author: - Fabien Clivaz - Ralph Silva - Géraldine Haack - Jonatan Bohr Brask - Nicolas Brunner - Marcus Huber title: | Unifying paradigms of quantum refrigeration:\ fundamental limits of cooling and associated work costs --- Introduction ============ Characterizing the ultimate performance limits of thermal machines is directly connected to the problem of understanding the fundamental laws of thermodynamics. The development of classical thermodynamics was instrumental for the realization of efficient thermal machines. Similarly, understanding the thermodynamics of quantum systems is closely related to the fundamental limits of quantum thermal machines. An intense research effort has been devoted to these questions [@book; @review; @millen; @janet], resulting in the formulation of the basic laws of quantum thermodynamics, a resource theory perspective, and a large body of work on quantum thermal machines, including first experimental demonstrations. When trying to establish fundamental limits on quantum thermodynamics tasks, one is always faced with the problem of identifying the relevant resources. For instance, one may consider different classes of allowed operations on a quantum system, or equivalently different levels of control. This challenge is particular to the quantum regime, where monitoring and manipulating systems generally affects the dynamics. Conceptually different approaches have been pursued in parallel to explore this question. [[ One approach is via]{}]{} the development of a general theory of quantum thermodynamics [[ that]{}]{} aims at placing upper bounds on the performance limits of quantum thermal machines. By establishing fundamental laws, this abstract perspective provides limits that hold for any possible quantum process (hence to all transformations achievable by quantum thermal machines). Typically, such upper bounds are obtained by characterising possible state transitions, focusing on the single-cycle regime. The intuition being that a machine cannot perform better than a perfect cycle. Here one can distinguish two paradigms. In the first, free operations are given by “thermal operations” [@wit; @resource; @TO; @coherence1; @coherence2; @coherence3], i.e. energy conserving unitaries applied to the system and a thermal bath. 
The implicit assumptions are access to i) a perfect timing device, ii) arbitrary spectra in the bath, and iii) interaction Hamiltonians of arbitrary complexity. This [[ perspective]{}]{} led to derivations of the second law [@secondlaws; @paul; @yelena]—i.e. the removal of system entropy in a thermally equilibrated environment comes at an inevitable work cost—and general formulations of the third law [@thirdlaw; @thirdlaw2; @thirdlaw3]—cooling to temperatures approaching absolute zero requires a diverging amount of resources. In the second paradigm, one considers an increased amount of classical control over a single quantum system, but no access to bath degrees of freedom. I.e. the implicit assumptions are i) a perfect timing device ii) the ability to implement any cyclic change in the Hamiltonian of a quantum system. This led to the concepts of passive states [@passive1; @passive2; @passive3; @skrzypczyk15; @passive4] and algorithmic cooling [@algo1; @algo2; @raeisi; @algo3; @algo4] and more generally to fundamental limits on single-cycle performance of coherently driven quantum machines [@ticozzi]. Another approach is via explicit models of quantum thermal machines that provide lower bounds on their performance. A wide range of such models have been discussed. In general terms, a quantum thermal machine makes use of external resources (e.g. thermal baths) to accomplish a specific task, such as work extraction or cooling. More formally, these machines are modeled as open quantum systems, where the machine consists of few interacting quantum systems coupled to external baths. Performance is usually evaluated in the asymptotic regime of non-equilibrium steady states. Machines with very different levels of control must be distinguished. Autonomous quantum thermal machines feature the lowest level of control [@auto1; @auto0; @auto2; @virtual; @venturelli; @autoexp1; @patrick; @autoexp2; @roulet]. Here the machine subsystems are coupled to thermal baths at different temperatures, and interact via time-independent Hamiltonians, thus requiring no external source of work or control. In the opposite regime, machines requiring a high level of control have been considered, such as quantum Otto engines [@abah; @rossnagel; @otto1; @otto2]. Here one assumes the ability to implement complex unitary cycles, which generally require time-dependent Hamiltonians or well-timed access to a coherent battery [@clock1; @clock2; @coh1]. Nonetheless similar statements of the second and third law are also possible in this regime [@thirdlawmachines; @silva16]. Each of the above approaches represents a perfectly reasonable paradigm for discussing the ultimate limitations of quantum thermodynamics, each featuring its own merits and drawbacks. Comparing these approaches is thus a natural and important question. It is however also a challenging one, due to the fact that each approach works within its own respective framework and set of assumptions. Recently, several works established preliminary connections between some of these approaches. Refs [@transient1; @transient2] studied autonomous machines in the transient regime and showed that a single cycle can achieve more cooling than the steady state regime. Quantum machines powered by finite-size baths have been studied [@finiteotto] to understand the impact of finite resources, and the control cost of achieving a shortcut to adiabaticity was studied in [@shortcutcost]. In [@finitesize] the authors explored the implications of finite size systems, i.e. 
thermal operations not at the thermodynamic limit. In the single-cycle regime, Refs [@Gaussianpassive; @realresource; @passiveinstability] discussed thermodynamic performance under restricted sets of thermal operations, with limited complexity. Finally, even the assumption of perfect timing control, inherent to all paradigms except autonomous machines, should arguably carry a thermodynamic cost [@clock3]. The above paradigms can instructively be split into two types of assumed control over the quantum system. For a single cycle of a thermodynamic process, we can either assume the ability to engineer time-dependent Hamiltonians, dubbed *coherent* control, or only the ability to turn on time-independent interactions, which we call *incoherent* control. We explicitly model each bath constituent that we have access to and refer to their number as the machine size. Thus for an infinite machine size, the incoherent control paradigm exactly captures the resource theory of thermodynamics. On the other hand, the explicit modeling of size adds another layer to the analysis of thermodynamic processes in terms of size/complexity.

                                Incoherent   Coherent
  ----------------------------- ------------ ----------
  Components at $T_H$           m            0
  Components at $T_R$           n-m          n
  Operations ($n\to\infty$)     TO           CPTP

  : We here summarize the important properties of both paradigms. By complexity we mean the number of components the machine is allowed to have. Each component is in principle allowed to be a qudit of arbitrary dimension. In the limit of infinitely many ancillas the single-cycle incoherent paradigm becomes the thermal operations (TO) used in the resource theory of thermodynamics (RTT), while in the single-cycle coherent paradigm one is allowed to apply any CPTP map to the target.

In the accompanying letter [@OurLetter], we used this framework to derive a universal bound for quantum refrigeration and proved that it could be obtained by all types of control, provided that complex enough machines and corresponding interactions are available. In the present work we dig deeper, reveal the intricate connection between machine complexity and control, and add the amount of resources consumed in the process to the picture. The latter, in turn, is connected to the entropy change associated with the energy drawn from the resource. We consider two extremal levels of control: first the *coherent* scenario, where the entropy of the resource is left unchanged, and second the *incoherent* scenario, where only energy at maximum entropy (i.e. heat) is extracted from the resource. Within each level of control, we investigate the lowest attainable temperature, and the work cost for attaining a certain temperature. These quantities allow us to give a direct and insightful comparison between the different approaches for quantum refrigeration. To tackle these questions, it is natural to consider machines of a given size (i.e. the number of systems that one has access to), since the size in itself also represents a form of control. We analyze this aspect of control starting from the smallest possible machines. It turns out that the two-qubit machine is the smallest one where the coherent and incoherent scenarios can be compared in a meaningful way. We also discuss the case of general machines, and in particular the limit of asymptotically large machines. Our results clearly demonstrate the expected crucial role of control for quantum cooling performance, but surprisingly unify the different operational approaches through machine complexity.
Setting and summary of results\[sec:setting\] ============================================= ![Model for the minimal thermal machine achieving cooling and allowing for the comparison of two paradigmatic scenarios of quantum refrigeration. After initialization of the machine and target qubit with a thermal bath at room temperature $T_R$, two scenarios are proposed. In scenario 1, the free energy is provided by a hot bath. This corresponds to a low level of control, i.e. maximal entropy change. In contrast, scenario 2 describes a thermal machine requiring a high level of control (e.g. via a coherent battery) that can implement arbitrary unitary operations at zero entropy change.[]{data-label="fig:model"}](setup.pdf){width="9cm"} Cooling a quantum system could have several meanings. For a system initially in a thermal state, one can drive it to a thermal state of lower temperature. Alternatively, one could consider increasing the ground-state population, or decreasing the entropy or the energy. These notions are in general inequivalent for target systems of arbitrary dimension. Determining the fundamental limits to cooling is therefore a complex problem in general. It turns out, however, that for the case of a qubit target, all the above notions of cooling coincide. Because of the clarity that this offers, but also because the bounds set on target qubits imply bounds for target qudits (see our accompanying article [@OurLetter]), we focus on qubit targets only in this article. Specifically we consider cooling a single qubit which is initially in a thermal state set by the environment temperature $T_R$ and then isolated from any environment [@footnote0]. The goal is to increase the ground state population of the qubit (without changing its energy gap). In order to cool the target qubit, we couple it to a quantum thermal machine. We consider two scenarios for the operation of this machine, which represent the two extremal levels of control (coherent and incoherent) introduced above. For each of these scenarios we are interested in the limits to cooling performance (see our accompanying article [@OurLetter] for a complementary treatment) as well as in the associated work cost. We characterize the work cost by the free energy change, a well-established monotone across thermodynamic paradigms (see e.g. [@work; @work2]). This quantifies the maximum extractable work from a resource in the presence of an environment at equilibrium, and hence measures to what degree the resource is out of equilibrium with the environment, a property necessary to induce non-trivial transformations of the target system. More precisely, the two scenarios are defined as follows: - [**[Scenario 1: Incoherent operations.]{}**]{} The source of free energy is a hot bath at a temperature $T_H > T_R$. The machine (or any of its subsystems) can be coupled to the hot bath or rethermalized with the environment at any stage. The machine interacts with the target qubit via an energy conserving unitary operation. The work cost of the operation corresponds to the decrease in free energy of the hot bath. - [**[Scenario 2: Coherent operations.]{}**]{} Here the source of free energy is coherent in the sense of allowing for energy non-conserving unitary operations between the machine and the target qubit. This effectively assumes a coherent battery or classical control field as the source of free energy. There is no additional thermal bath, and the machine may only be coupled to the environment (at temperature $T_R$).
As the entropy is unaffected, the work cost, i.e. the change in free energy, is simply the change in energy. In order to compare these two scenarios and understand the fundamental limits to cooling performance, we investigate - the lowest attainable temperature $T^*$, - the work cost for attaining any given temperature, in particular $T^*$. In contrast to our accompanying article [@OurLetter] where we focus on the unbounded number of cycles regime, we are here interested in the single-cycle, repeated and asymptotic regimes. In the single-cycle regime, an initial thermalisation step is followed by a single unitary operation on the machine and the target qubit (energy conserving or arbitrary, for scenario 1 and 2, respectively). In the repeated operations regime, thermalisation and unitary operations are alternated a finite number of times. In the asymptotic regime, this cycle of steps is repeated indefinitely. Turning our attention to the machine more closely, we consider that distinct subsystems of the machine can connect to baths at different temperatures, but we do not allow individual transitions in the machine to be separately thermalised at different temperatures. With that in mind, while bounds on the performance of general machines can be set for both paradigms, see our accompanying article [@OurLetter], the incoherent paradigm is trivial unless the machine has a tensor product structure. Since we are here focusing in comparing both paradigms, in particular with respect to their associated work cost, we will consider machines with such a structure only. Furthermore, besides the more practical aspect of small machines, which are arguably easier to realize, especially in the incoherent scenario where increasing the machine size usually comes at the price of decreased interaction strengths [@autoexp1; @autoexp2], they also already suffice to saturate the cooling bounds of each scenario, see our accompanying article [@OurLetter]. This as such motivates our interest to focus most our analysis on the minimal settings. The two smallest possible machines consist of either a single qubit or two qubits. Of these, only the latter allows for a non-trivial comparison between the incoherent and coherent scenarios as a single-qubit machine allows for cooling only in scenario 2. Figure \[fig:summary\] summarises the results of our comparison, and demonstrates the crucial role of control for the fundamental limits of quantum refrigeration. It shows the minimal achievable temperature of the target qubit vs. the associated work cost in each scenario and for the single-cycle, finite-repetition, and asymptotic regimes. Surprisingly, in the single-cycle regime, we find that neither scenario is universally superior. While scenario 2 always achieves the lowest temperature when no restriction is placed on the work cost, there is a threshold work cost below which scenario 1 outperforms scenario 2. For finite repetitions, additional cooling starts from the end points of maximal single-cycle cooling in each scenario. For scenario 1, one can think of this as repeated thermal operations with a locality restriction, i.e. access to a single qubit from each of the two baths in every round, and for scenario 2 it corresponds to multiple cycles of coherently driven quantum machines (such as e.g. quantum Otto cycles). In the asymptotic regime scenario 1 corresponds to the minimal autonomous quantum thermal refrigerator, as shown in [@Raam] and discussed in our accompanying article [@OurLetter]. 
Scenario 2 [[ leads to heat bath algorithmic cooling, when augmented with the ability to individually rethermalise the machine qubits to the environment temperature $T_R$]{}]{}. Moreover, like in the single-cycle regime, scenario 2 always achieves a lower temperature, although generally at a higher work cost. While minimal machines saturate the cooling bounds, they do so in a very ineffective way from a work cost perspective. Extending our analysis to the case of N-qubit machines, by considering cooling to a fixed target temperature, we finally show that both coherent and incoherent machines can achieve minimal work cost, *i.e* saturate the second law, in the limit of large size.\ ![Comparison of achievable temperatures and associated work costs for scenarios 1 and 2 in the single-cycle, finite repetitions, and asymptotic regimes. The ratio $T/T_R$ is the relative cooling, $T$ being the final temperature and $T_R$ the initial one. The symbols (dots, etc) correspond to maximal cooling (i.e. achieving minimal temperature $T^*$) in each scenario. [[ Here we use $T_R=1$.]{}]{} \[fig:summary\] ](summaryPlot.png){width="9cm"} The rest of the paper is organized as follows. In [Sec. \[sec:model\]]{}, we introduce notation and definitions. Section \[sec:onequbit\] deals with the case of the one-qubit machine. In Secs \[sec:prelim\] and \[sec:single-cycle\], we investigate the cooling performance and associated work cost of the two-qubit machine, focusing on the single-cycle regime. In [Sec. \[sec:repeat\]]{}, we discuss repeated operations and the asymptotic regime of the two-qubit machine. We then discuss the saturation of the second law by more general machines in [Sec. \[sec:secondlaw\]]{} before concluding in [Sec. \[sec:conclusion\]]{}. Notation and definitions \[sec:model\] ====================================== As argued in section \[sec:setting\], we consider machines consisting of a given number of qubits. We take the energies of all qubit ground states to be zero, denote the excited state energy of qubit $i$ by $E_i$, and the energy eigenstates by ${\vert{0}\rangle}_i$ and ${\vert{1}\rangle}_i$. Thus, the local Hamiltonian for each qubit is $H_i = E_i {\vert{1}\rangle}_i{\langle{1}\vert}$, and the total Hamiltonian of target and machine is $$H = \sum_i E_i {\vert{1}\rangle}_i{\langle{1}\vert} .$$ The initial state, prior to cooling, is the same for the incoherent and coherent scenarios. Every qubit is in a thermal state of its local Hamiltonian at the environment temperature $T_R$. In general, a thermal state of a qubit with energy gap $\varepsilon$ and temperature $T$ is given by $$\tau(\varepsilon,T) = r(\varepsilon,T) {\vert{0}\rangle}{\langle{0}\vert} + [1-r(\varepsilon,T)] {\vert{1}\rangle}{\langle{1}\vert} ,$$ where the populations are determined by the Boltzmann distribution (throughout the paper we work in natural units, $k_B=\hbar=1$) $$\label{eq:bdist} r(\varepsilon,T) = \frac{1}{1 + e^{-\varepsilon/T}} = \frac{1}{\mathcal{Z}(\varepsilon,T)} ,$$ where $\mathcal{Z}(\epsilon,T)$ is the partition function corresponding to the qubit Hamiltonian and temperature. We denote the ground state populations at the environmental temperature by $r_i = r(E_i,T_R)$, and the corresponding thermal states by $\tau_i$. 
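The Boltzmann relation and its inverse, which we use throughout to convert between populations and temperatures, are straightforward to evaluate; a minimal numerical sketch in the paper's units ($k_B=\hbar=1$), with function names of our own choosing:

```python
import numpy as np

def ground_population(gap, T):
    """r(eps, T) = 1 / (1 + exp(-eps / T)): ground-state population of a qubit."""
    return 1.0 / (1.0 + np.exp(-gap / T))

def temperature_from_population(gap, r):
    """Inverse of the relation above: T = eps / ln(r / (1 - r)), valid for 1/2 < r < 1."""
    return gap / np.log(r / (1.0 - r))

# Example: a qubit with gap E = 1 at T_R = 1 has r ~= 0.731, and
# temperature_from_population(1.0, ground_population(1.0, 1.0)) recovers T_R = 1.
```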
We will refer to the target to be cooled as qubit $A$, but for convenience, we will generally drop the subscript for the target qubit, such that $$E {\vcentcolon=}E_A, \hspace{0.5cm} r {\vcentcolon=}r_A, \hspace{0.5cm} \tau {\vcentcolon=}\tau_A .$$ Note that we can choose a unit of energy such that $E=1$ without loss of generality, which we do for all our numerical analysis. In scenario 1, one (or more) of the machine qubits is first heated to a higher temperature $T_H$. This is followed by an energy conserving unitary acting jointly on the target and the machine, i.e. any unitary $U$ for which $[U,H]=0$. In scenario 2, an energy non-conserving unitary is applied directly to the initial state of target and machine. We extract the temperature of the target qubit by reading its ground state population and inverting the relation . When the target qubit is diagonal, which will turn out to be the case for all our relevant operations, the target has a well defined temperature and this is a valid way to extract it. When the target state is not diagonal it strictly speaking has no temperature. One way to nevertheless extend the notion of temperature to these states also is as presented above. The work cost is accounted for from the perspective of the work reservoir, i.e. the free energy change of the resource. This is not necessarily equal to the free energy change of the system itself, but is nonetheless the appropriate way to quantify consumed resources. For completeness, we have also worked out the two scenarios for the two-qubit machine case from a system perspective in [App. \[app:internal\]]{}.\ One-qubit machine {#sec:onequbit} ================= Denoting the machine qubit by $B$, the Hamiltonian is $H = H_A + H_B$, and the initial state is [[ simply]{}]{} $$\rho^{in} = \tau \otimes \tau_B .$$ Scenario 1: incoherent operations --------------------------------- In this scenario, the machine qubit is first heated to a higher temperature $T_H$, resulting in the state $$\rho^{H} = \tau \otimes \tau_B^H ,$$ where $\tau_B^H = \tau(E_B,T_H)$ is the thermal state of qubit B at the temperature of the hot bath. This is followed by an energy conserving unitary. However there is no such unitary that can cool the target, as we demonstrate now. For the action of the unitary to be nontrivial (and hence, for any cooling of the target to happen), the spectrum of the joint Hamiltonian $H$ must have some degeneracy, allowing one to shift population between distinct energy eigenstates of the same energy. The only possibilities are that *(i)* one of the energies vanish $E=0$ or $E_B=0$, or *(ii)* the gaps are equal $E=E_B$. In case *(i)*, the thermal state $\rho^H$ will be proportional to the identity in the degenerate subspace, and hence $U\rho^HU^\dagger = \rho^H$ for any energy conserving unitary $U$. In case *(ii)*, because the matrix elements (in the product basis of $H_A$, $H_B$) fulfill $\rho^H_{01,01} = r(1-r_B^H) > r_B^H(1-r) = \rho^H_{10,10}$, unitaries acting on the degenerate subspace can only heat up the target. Thus, for the single-qubit machine, cooling is impossible in the incoherent scenario. Scenario 2: coherent operations\[sec:OneMco\] --------------------------------------------- Using coherent operations, it is possible to cool, and we now derive the minimal attainable temperature of the target, and the work cost of cooling. This will also provide some intuition for how to tackle the two-qubit machine, where coherent and incoherent cooling can be compared. 
Cooling corresponds to increasing the ground state population of the target using an arbitrary joint unitary $U$ on target and machine. This population $r_{coh}$ is given by the sum of the first two diagonal entries of the final state $U\rho^{in}U^\dagger$, when expressed in the product basis of $H_{\text{A}} \otimes H_{\text{B}}$. From the Schur-Horn theorem one learns that this sum can at most be the sum of the two greatest eigenvalues of $U \rho^{\text{in}} U^{\dagger}$, which, since $U$ cannot change the eigenvalues of the state and since $\rho^{\text{in}}$ is diagonal, are the two largest diagonal entries of $\rho^{\text{in}}$. Maximal cooling is thus achieved when $r_{coh}$ equals the sum of the two largest diagonal entries of $\rho^{in}$. One readily sees that $\rho_{00,00}^{in} = r r_B$ is the largest element and $\rho_{11,11}^{in} = (1-r)(1-r_B)$ the smallest, while $$\frac{\rho_{01,01}^{in}}{\rho_{10,10}^{in}} = \frac{r(1-r_B)}{(1-r)r_B} = e^{\frac{E-E_B}{T_R}} .$$ Cooling is only possible if the initial ground state population $r = \rho_{00,00}^{in} + \rho_{01,01}^{in}$ is not already maximal, i.e. if $E < E_B$. In this case, the maximal final population is $r_{coh}^* = \rho_{00,00}^{in} + \rho_{10,10}^{in} = r_B$, corresponding (by inverting the Boltzmann relation) to $$T_{coh}^* = \frac{E}{\ln\left(\frac{r_{coh}^*}{1-r_{coh}^*}\right)} = \frac{E}{E_B} T_R .$$ This temperature can be achieved by a unitary which swaps the states ${\vert{01}\rangle}$ and ${\vert{10}\rangle}$, and in fact this also minimises the associated work cost. More generally, we can identify an optimal unitary which minimises the work cost of cooling to any temperature in the attainable range, i.e. any ground state population $r_{coh}$ between $r$ and $r_{coh}^*$. The optimal work cost is given by $$\label{eq:workcostonequbitcoh} \Delta F_{coh} = (r_{coh} - r)(E_B - E) ,$$ and it is achieved by a unitary of the form $$\label{eq:optimalUonequbitcoh} U = e^{-i t L} ,$$ where $$\label{eq:Lonequbit} L = i {\vert{01}\rangle}{\langle{10}\vert} - i {\vert{10}\rangle}{\langle{01}\vert}$$ is a Hamiltonian which generates swapping of excitations between the target and machine qubits, and $t = \arcsin(\sqrt{\mu})$ with $$\mu = \frac{r_{coh} - r}{r_B - r} .$$ The optimality of this unitary can be proven using the Schur-Horn theorem and majorization [@MarshalMajorization; @NielsenMajorization]. The idea of the proof is as follows. [[ By scanning through all the unitarily attainable $\rho^{\text{coh}}= U \rho^{\text{in}} U^{\dagger}$ we are looking at all the Hermitian matrices $\rho^{\text{coh}}$ with spectrum $\vv{\rho}^{\text{in}}$ (given an $n \times n$ matrix $\mu=(\mu_{ij})$ we generically denote its vectorized diagonal $(\mu_{11},\dots,\mu_{nn})$ by $\vv{\mu}$). According to the Schur-Horn theorem there exists a Hermitian matrix $\rho^{\text{coh}}$ with diagonal $\vv{\rho}^{\text{coh}}$ and spectrum $\vv{\rho}^{\text{in}}$ if and only if the majorization condition $\vv{\rho}^{\text{coh}} \prec \vv{\rho}^{\text{in}}$ holds. Hence, a state $\rho^{coh}$ is reachable by a unitary starting from $\rho^{in}$ if and only if the diagonals fulfill $\vv{\rho}^{\text{coh}} \prec \vv{\rho}^{\text{in}}$. In the coherent scenario, the free energy difference and hence the work cost is simply given by $$\begin{aligned} \Delta F = \operatorname{Tr}\left[(\rho^{coh} - \rho^{in}) H\right] = (\vv{\rho}^{\text{coh}} - \vv{\rho}^{\text{in}})\cdot \vv{H},\end{aligned}$$ where $\vv{H}$ is the diagonal of $H$.
The last term is constant, and so minimising the work cost for a given final $r_{coh}$ is equivalent to $$\label{eq:workminimisation_onequbit} \underset{\vv{\rho}\prec\vv{\rho}^{\text{in}}}{\min} \, \vv{\rho}\cdot \vv{H} \hspace{0.5cm} \text{s.t.} \hspace{0.5cm} \rho_1 + \rho_2 = r_{coh}.$$ As shown in [App. \[app:Uopt\]]{}, this minimisation can be solved analytically, leading to and . ]{}]{} Two-qubit machine: Model {#sec:prelim} ======================== [[ When considering the two-qubit machine, the total Hamiltonian of target and machine]{}]{} is $H = H_A + H_B + H_C$, with qubits B and C forming the machine. [[ The setup, as well as the two scenarios, are illustrated in [Fig. \[fig:model\]]{}]{}]{}. The starting point for both scenarios 1 and 2 is the initial state $$\label{eq:initialstate} \rho^{\text{in}} = \tau \otimes \tau_B \otimes \tau_C.$$ In scenario 2, an energy non-conserving unitary is applied directly to the initial state $\rho^{\text{in}}$, while in scenario 1, qubit C is first heated to a higher temperature $T_H$, resulting in the state $$\begin{aligned} &&\rho^H = \tau \otimes \tau_B \otimes \tau_C^H,\end{aligned}$$ where $\tau_C^H = \tau(E_C,T_H)$ is the thermal state of qubit C at the temperature of the hot bath. This is followed by an energy conserving unitary acting on the three qubits. To allow for non-trivial energy conserving unitaries, there must be a degeneracy in the spectrum of $H$ with an associated degenerate subspace. In [App. \[app:degeneracy\]]{}, we show that the only degeneracy which enables cooling of the target is obtained by setting $$\label{eq:deg} E = E_B - E_C\,.$$ Hence, we work with this convention throughout the following. \ \ Two-qubit machine: single-cycle regime {#sec:single-cycle} ====================================== In this section, we discuss the single-cycle regime [[ of the two-qubit machine]{}]{}. We show that scenario 2 (coherent operations) always reaches lower temperatures when the work cost is unrestricted. However, for sufficiently low work cost, it turns out that scenario 1 (incoherent operations) outperforms scenario 2.\ Scenario 1: incoherent operations {#sec:single-cycle1} --------------------------------- We first identify the energy-conserving unitary that is optimal for cooling the target qubit. From the relation it follows that there is only one subspace that is degenerate in energy (relevant for cooling), which is spanned by the states ${\vert{010}\rangle}$ and ${\vert{101}\rangle}$. Optimal cooling is simply achieved by swapping these two states, i.e. the unitary is given by (see [App. \[app:degeneracy\]]{} for more details) $$\label{eq:u_incoh} U = {\vert{010}\rangle} {\langle{101}\vert} + {\vert{101}\rangle} {\langle{010}\vert} + \mathds{1}_{\text{non-deg}},$$ where $\mathds{1}_{\text{non-deg}}$ is the identity operation on the complement space. We can thus directly compute the final temperature of the target qubit. We first compute the final ground state population $r_{\text{inc}}$ $$\begin{aligned} \label{eq:rSf} r_{\text{inc}} (T_H) = r r_B + [(1-r) r_B + r (1-r_B)] (1-r_C^H),\end{aligned}$$ where $r_C^H = r(E_C,T_H)$ denotes the ground state population of qubit C after heating and $r$ and $r_B$ denote the ground state populations of the target qubit and qubit B at room temperature $T_R$. The final temperature is found by inverting Eq.  
$$\label{eq:TSf} T_{\text{inc}} (T_H) = \frac{E}{\ln(\frac{r_{\text{inc}}}{1-r_{\text{inc}}})}.$$ Not limiting the work cost, optimal cooling is obtained in the limit $T_H \rightarrow \infty$. In this case $r_C^H = \frac{1}{2}$, and thus $$r_{\text{inc}}^* = \lim_{T_H \rightarrow \infty} r_{\text{inc}} (T_H) = \frac{1}{2}(r+r_B).$$ We thus obtain the lowest achievable temperature for scenario 1: $$\label{eq:Tminincoh} T_{\text{inc}}^* = \lim_{T_H \rightarrow \infty} T_{\text{inc}} (T_H) = \frac{E}{\ln(\frac{r+r_B}{2-(r+r_B)})}.$$ We are now interested in the work cost of cooling. For scenario 1, the hot bath is the only resource, implying that the free energy decrease in the hot bath represents the cooling cost. The free energy difference is $\Delta F = \Delta U - T_R \Delta S$, where $\Delta U$ is the internal energy change. For a thermal bath $\Delta U$ is defined as the heat drawn from the bath, $Q$, which from the first law equals the change in energy of qubit C. We follow the convention of counting as positive what is taken from the bath. The change in entropy $\Delta S$ also takes a simple form for a thermal bath, $\Delta S = Q/T_H$. This gives $$\label{eq:deltaF1} \begin{aligned} \Delta F_{\text{inc}} (T_H) &= Q (1-\frac{T_R}{T_H}) \\ &= E_C (r_C - r_C^H) (1-\frac{T_R}{T_H}) . \end{aligned}$$ The above equation shows that the work cost is determined directly by the hot bath temperature $T_H$. The work cost associated to maximal cooling is given by $$\label{eq:deltaFmaxincoh} \Delta F_{\text{inc}}^*= \lim_{T_H \rightarrow \infty} \Delta F_{\text{inc}}(T_H) = E_C (r_C-\frac{1}{2}).$$ Note that despite appearances, the above expression is not independent of $E_B$, as the machine qubits are mutually constrained by the degeneracy condition . More generally, as the ground state population $r_{\text{inc}}$ is monotonic in $r_C^H$, see Eq. , and thus in $T_H$, one can cool to any temperature between $T_R$ and $T_{\text{inc}}^*$ by varying $T_H$ continuously between $T_R$ and infinity. The associated work cost is given by Eq. ; see [Fig. \[fig:Tf\_single\]]{}. Note that the minimum achievable temperature in this scenario is lower bounded away from absolute zero. Taking the limits $T_H \rightarrow \infty$ and then $E_B \rightarrow \infty$, $r_{\text{inc}}$ tends to $(1+r)/2$. The work cost diverges in this limit. This is in contrast to scenario 2 presented in the following section, where for an unbounded work cost, one can cool arbitrarily close to absolute zero. Scenario 2: coherent operations {#sec:single-cycle2} ------------------------------- We now turn to the second scenario, where any joint unitary operation can be applied to the target and machine qubits. The freedom in unitary operation means that the resonance condition $E_B = E + E_C$ is in principle not required to allow cooling, in contrast to scenario 1. However, as the cooling in either scenario depends on the choice of machine qubits, the freedom to choose them represents an extra level of control. In order to make a meaningful comparison between coherent and incoherent operations, we will therefore enforce the resonance condition for scenario 2 as well. We first investigate the lowest achievable temperature. By definition this is obtained by maximizing the ground state population of the target qubit. If we express the state of all three qubits as a density matrix $\rho$ in the energy eigenbasis, then the initial state is seen to be diagonal from Eq.  and the reduced state of the target is given by $\text{Tr}_{BC}(\rho)$. 
Its ground state population is then simply given by adding the populations (diagonal elements) of the following four states: $\{{\vert{000}\rangle},{\vert{001}\rangle},{\vert{010}\rangle},{\vert{011}\rangle}\}$. Making use of the Schur-Horn theorem as argued in [Sec. \[sec:OneMco\]]{}, one reaches optimal cooling by unitarily rearranging the populations such that the four largest populations of the initial state are mapped to the four levels contributing to the ground state population of the target. Labeling the population of the state ${\vert{ijk}\rangle}$ in the corresponding initial density operator $\rho^{\text{in}}$ by $p_{ijk}$, and arranging them in decreasing order of magnitude, we find $$\begin{aligned} p_{000} \!>\! \{ p_{001}, p_{100} \} \!>\! p_{010} \!=\! p_{101} \!>\! \{ p_{011}, p_{110} \} \!>\! p_{111},\end{aligned}$$ where $\{\}$ denotes populations whose ordering depends on whether $E_C > E$ or $E_C < E$. Thus the only change necessary to optimize cooling is to swap the populations of ${\vert{100}\rangle}$ and ${\vert{011}\rangle}$, and this leads to a final ground state population of $r_{\text{coh}}^* = r_B$, corresponding to the remarkably simple final temperature $$\label{eq:Tmincoh} T_{\text{coh}}^* = T_R \frac{E}{E_B}.$$ This is the lowest achievable temperature in scenario 2, when the work cost is unrestricted.\ We now turn to the question of optimizing the work cost. Indeed, on inspection of the end point of the above procedure, one finds that within the ground and excited subspaces of the target qubit, one can perform unitaries that rearrange populations without affecting cooling, but that extract energy back from the system, hence decreasing the work cost of the cooling procedure. We illustrate this subtlety with the end point of the simple swap above. The only modified populations after the swap are those of the states ${\vert{100}\rangle}$ and ${\vert{011}\rangle}$. Denoting the new population of energy level ${\vert{ijk}\rangle}$ by $p_{ijk}^\prime$, we have that $p_{011}^\prime = p_{100}$ and $p_{100}^\prime = p_{011}$, with the rest unchanged. Thus the new ordering is $$\begin{aligned} p^\prime_{000} \!>\! \{ p^\prime_{001}, p^\prime_{011} \} \!>\! p^\prime_{010} \!=\! p^\prime_{101} \!>\! \{ p^\prime_{100}, p^\prime_{110} \} \!>\! p^\prime_{111}.\end{aligned}$$ Although the ground state population is maximized by this swap, one sees that its energy is not minimal, since e.g. $p^\prime_{011} > p^\prime_{010}$. As a consequence, one could now extract energy without changing the ground state population by simply swapping the levels ${\vert{011}\rangle}$ and ${\vert{010}\rangle}$. Formally, this implies that within each subspace of the target qubit (ground and excited), the state is not *passive* [@passive1], i.e. the populations are not in decreasing order with respect to energy within each subspace. This showcases the general fact that the state to which the optimal unitary drives the system necessarily has to be passive within each of these subspaces. In the maximal cooling case, as shown in [App. \[subsec:endpoint\]]{}, this passivity condition remarkably turns out to also be sufficient. If one thus performs the unitary that reorders each subspace to be passive, and subtracts the extracted energy from the work cost, one arrives at the optimal work cost corresponding to maximal cooling, $\Delta F_{\text{coh}}^*$.\ We find that there are two cases. 
If $E_C \leq E$, then $$\begin{aligned} \label{eq:deltaFmaxcohBig} \Delta F_{\text{coh}}^* &= E_C \left( r_B - r \right).\end{aligned}$$ Note that this end point can be achieved by simply performing the unitary that swaps the states of qubits $A$ and $B$. On the other hand, if $E_C > E$, then $$\begin{aligned} \label{eq:deltaFmaxcohSmall} \Delta F_{\text{coh}}^* &= \left( E_C - E \right) \left( r_C - r \right) + E_C \left( r_B - r_C \right).\end{aligned}$$ The unitary that achieves this result is the sequence of two swaps: first the swap between the target and qubit $C$, followed by the swap between the target and qubit $B$. Remarkably, these two expressions can be intuitively understood in the following manner. In order to achieve cooling on the target qubit, one would swap its state with a qubit of the machine (or a qubit subspace of the machine, also called a “virtual qubit” [@virtual], see [App. \[app:virtualqubit\]]{}) that has a larger energy gap between its ground and excited states than the target qubit. However, doing so requires moving population against the energy difference between the target and the specific machine qubit. Minimizing the work cost of the cooling procedure therefore amounts to swapping the state of the target qubit with the state of the machine qubit with the smallest energy gap that is still larger than the energy $E$ of the target qubit. If $E_C \leq E$, then the machine qubit subspace with the smallest energy gap larger than that of the target is qubit $B$, and the optimal procedure is to swap the states of those two qubits. This has a work cost $E_B - E = E_C$ per unit of population transferred. In contrast, when $E_C > E$, then qubit $C$ is the machine qubit with the smallest energy gap bigger than $E$ ($E_C < E_B$ by definition). We thus begin by swapping the target qubit with qubit $C$, at a work cost per population of $E_C - E < E_C$, and only afterwards proceed to cool further by swapping the target qubit with qubit $B$, at higher work cost. These two cases respectively lead to Eqs.  and when the work cost is unrestricted. We now move to the case where the work cost is restricted. Equivalently, we consider the problem of cooling to a certain temperature (above $T_{\text{coh}}^*$), and derive the minimal associated work cost. Intuitively, as the lowest temperature given by Eq.  can be reached by a full swap (or a sequence of two full swaps if $E_C > E$), we might expect that an optimal strategy for reaching an intermediate temperature will be a partial swap. This is indeed the case. In analogy with , the minimal work cost for a given target temperature $T_{coh}$ and corresponding ground-state population $r_{coh}$ is given by $$\label{eq:workminimisation_onequbit} \underset{\vv{\rho}\prec\vv{\rho^{\text{in}}}}{\min} \, \vv{\rho}\cdot \vv{H} \hspace{0.5cm} \text{s.t.} \hspace{0.5cm} \sum_{i=1}^4\rho_i = r_{coh},$$ where $\vv{\rho}$, $\vv{\rho^{\text{in}}}$, and $\vv{H}$ represent the diagonals of $\rho$, $\rho^{in}$, and $H$. This minimisation can be solved analytically, as shown in [App. \[app:Uopt\]]{}. The optimal unitary and associated work cost depend on whether $E_C \leq E$ or $E_C > E$. For the case $E_C \leq E$, we can parametrise a partial swap of the target with machine qubit B as in by $$U_\leq(\mu) = e^{-i t L_{AB}} ,$$ where $$\label{eq:LAB} L_{AB} = i {\vert{01}\rangle}_{AB}{\langle{10}\vert} - i {\vert{10}\rangle}_{AB}{\langle{01}\vert}$$ and $t = \arcsin(\sqrt{\mu})$, with $\mu\in[0,1]$ a swapping parameter. 
The ground state population of the target qubit and the free energy cost are given by $$\begin{aligned} r_{\text{coh},\leq} (\mu) &= r + \mu \left( r_B - r \right), \label{eq:rAf2leq}\\ \Delta F_{\text{coh},\leq} (\mu) &= \mu E_C \left( r_B - r \right),\label{eq:deltaF2leq}\end{aligned}$$ with $\mu=0$ corresponding to no swap and $\mu=1$ to a full swap, which is the limit of maximal cooling, as previously discussed, see Eqs.  and . Similarly, for the case $E_C > E$ we employ the unitary that first swaps qubit $A$ with qubit $C$ until the required temperature is reached and, if this is not achieved even after a full swap, continues by swapping the new state of qubit $A$ with qubit $B$. This unitary can be parametrised as $$U_>(\mu) = e^{-i g(\mu) L_{AB}} e^{-i f(\mu) L_{AC}} ,$$ where $f(\mu) = \arcsin(\sqrt{\min\{2\mu,1\}})$, $g(\mu) = \arcsin(\sqrt{\max\{2\mu-1,0\}})$, and $L_{AC}$ is defined analogously to Eq. . Again $\mu\in[0,1]$, such that for $\mu \leq \frac{1}{2}$ a partial swap between A and C is performed, and for $\frac{1}{2}<\mu\leq 1$ an additional partial swap between A and B is performed. The ground state population for the strategy defined by $U_>$ is: $$\begin{aligned} \label{eq:rAf2gtr} r_{\text{coh},>}(\mu) &= \begin{cases} r + 2\mu(r_C-r), & \mu\in[0,\frac{1}{2}] \\ r_C + (2\mu-1)(r_B-r_C), & \mu\in(\frac{1}{2},1] \end{cases},\end{aligned}$$ and the work cost for the same strategy is given by $$\begin{aligned} \label{eq:deltaF2gtr} \Delta F_{\text{coh},>}(\mu) &= \begin{cases} 2\mu (E_C-E) (r_C-r), & \mu\in[0,\frac{1}{2}] \\ \begin{aligned} &(E_C - E) (r_C-r)\\ &+ (2\mu-1) E_C (r_B-r_C) \end{aligned}, & \mu\in(\frac{1}{2},1] \end{cases}.\end{aligned}$$ The final temperature can again be computed by inverting Eq.  using the ground state population $r_{\text{coh}}$ as given by Eq.  or Eq.  according to the relative size of $E$ and $E_C$. Since both $\Delta F_{\text{coh}}$ and $T_{\text{coh}}$ are given as functions of $\mu$, by varying $\mu$ from 0 to 1 we can parametrically map out the amount of cooling and the associated work cost, as shown in [Fig. \[fig:Tf\_single\]]{} and discussed in [Sec. \[subsec:comparison\]]{}. Comparison of scenarios 1 and 2 {#subsec:comparison} ------------------------------- Our main results in the single-cycle regime are summarised in [Fig. \[fig:Tf\_single\]]{}. There we map out the amount of cooling vs. the associated work cost for both scenarios 1 and 2. In the first case, the curve is generated from Eqs.  and (inverting Eq.  to extract $T_{\text{inc}}$) and is parametric in the hot bath temperature $T_H$. In the second case, the curve is generated from Eqs.  and (inverting Eq.  to extract $T_{\text{coh}}$) and is parametrised by the swapping parameter $\mu$. We selected $E_C \leq E$ for [Fig. \[fig:Tf\_single\]]{}, but note that the behavior of the curve for $E_C > E$ is similar, the only difference being that the coherent curve has a discontinuity in its first derivative at $\mu = \frac{1}{2}$. ![Parametric plot of the relative temperature of the target qubit $\frac{T}{T_R}$ as a function of its work cost $\Delta F$ for $E_C=0.4$ and $T_R=1$. The red solid curve corresponds to scenario 1 (incoherent operations), the blue dashed to scenario 2 (coherent operations). When the cooling is maximal (i.e. the work cost is unrestricted), scenario 2 always outperforms scenario 1, $T_{\text{coh}}^{*} < T_{\text{inc}}^{*}$ and $\Delta F_{\text{coh}}^* < \Delta F_{\text{inc}}^*$. 
However, below a critical work cost $\Delta F_\text{crit}$, scenario 1 always outperforms scenario 2.\[fig:Tf\_single\]](Tsingle.png){width="8.5cm"} The plot illustrates several interesting observations. First, comparing the endpoints of the curves, we see that coherent operations achieve a lower minimal temperature (i.e. stronger cooling) and that the associated work cost is lower than the one for achieving the minimal temperature with incoherent operations. This is true generally. As can be seen by comparing Eqs.  and , $T_{\text{coh}}^* < T_{\text{inc}}^*$ since $$\ln \left( \frac{r+r_B}{2-(r+r_B)} \right) < \frac{E_B}{T_R},$$ where we use that $E_B>E$. Similarly, comparing Eqs.  and , we see that $\Delta F_{\text{coh}}^* < \Delta F_{\text{inc}}^*$, see [App. \[app:endpoint\]]{}. Thus, for maximal cooling, coherent operations always perform better than incoherent ones in the single-cycle regime. Second, perhaps surprisingly, for non-maximal cooling with low work cost, incoherent operations may outperform coherent ones. In fact, for sufficiently low work cost, this is always the case. This can be seen by looking at the derivatives of the two curves in [Fig. \[fig:Tf\_single\]]{} with respect to $\Delta F$, close to $\Delta F = 0$. For the incoherent scenario, using the parametrization w.r.t. $T_H$, we have $$\lim_{\Delta F_{\text{inc}} \rightarrow 0} \frac{dT_{\text{inc}}}{d\Delta F_{\text{inc}}} = \lim_{\Delta F_{\text{inc}} \rightarrow 0} \frac{dT_{\text{inc}}}{dT_H} \left( \frac{d \Delta F_{\text{inc}}}{dT_H} \right)^{-1} = -\infty .$$ On the other hand, for the coherent scenario, using the parametrization in terms of $\mu$, we find that $$\lim_{\Delta F_{\text{coh}} \rightarrow 0} \frac{dT_{\text{coh}}}{d\Delta F_{\text{coh}}} = - \frac{1}{E_C' r (1-r)\ln^2(\frac{1-r}{r})},$$ where $E_C' = E_C$ for $E_C\leq E$ and $E_C'=E_C-E$ for $E_C > E$. This expression is negative but finite. Hence, since both curves begin at the same point, the incoherent curve must lie below the coherent one for sufficiently small $\Delta F$. From the previous observations, it follows that the curves must cross at least once. Numerically we find that there is always exactly one such crossing. Hence, there exists a critical work cost $\Delta F_{\text{crit}}$ below which incoherent operations perform better than coherent ones, while the reverse is true above some $\Delta F_{\text{crit}}' \geq \Delta F_{\text{crit}}$; numerical evidence strongly supports $\Delta F_{\text{crit}}'=\Delta F_{\text{crit}}$. We denote the temperature of the target qubit at the crossing point by $T_{\text{crit}}$. In [App. \[app:cross\]]{} we study the behaviour of $T_{\text{crit}}$ and $\Delta F_{\text{crit}}$ as functions of $T_R$ and $E_C$. Two-qubit machine: Repeated operations and asymptotic regime {#sec:repeat} ============================================================ In this section we go beyond the single-cycle regime discussed above. In the repeated and asymptotic regimes, the cooling unitaries of either scenario can be repeated a finite number of times or indefinitely, interspersed with steps in which the machine qubits (B and C) are rethermalised to the temperatures of their baths, *i.e.* respectively $T_R$ and $T_H$ in scenario 1 and $T_R$ for both machine qubits in scenario 2. The target qubit is assumed not to rethermalise during the cooling process. In this way, the bounds we obtain on achievable temperature and work cost are general. 
Moreover, these bounds can be attained in the limit where the thermal coupling of the target qubit is much smaller than other couplings in the system. \ Before going into details, we first summarize the main results of this section.\ 1. Repeated operations do enhance the cooling, as the lowest achievable temperatures in both scenarios are strictly lower than in the single-cycle case. 2. For incoherent operations (scenario 1), the asymptotic regime (the limit of infinite repetitions) corresponds to autonomous refrigeration. Specifically, we recover the cooling and work cost obtained in the steady-state of a three-qubit autonomous refrigerator [@auto0; @footnote1]. 3. For coherent operations (scenario 2), the asymptotic regime corresponds to algorithmic cooling. In particular, the cooling bounds correspond to known results [@raeisi; @algo3]. 4. In the asymptotic regime, incoherent operations (scenario 1, autonomous cooling) achieve the same maximal cooling (for $T_H \rightarrow \infty$) as that of a single-cycle coherent operation (scenario 2). See our accompanying article [@OurLetter] for more details on this relation. 5. In both scenarios, the approach to the asymptotic state of the target qubit (w.r.t. its ground state population) is exponential in the number of repetitions. In the following, we will start by discussing repeated operations in scenario 1 and then move to scenario 2. Scenario 1: repeated incoherent operations {#sec:repeatinc} ------------------------------------------ ![Scenario 1, repeated incoherent operations. Each cycle comprises the steps of 1. the environment reset of qubit $B$ and resource input into qubit $C$, and 2. the cooling unitary operation.\[fig:repeat\_inc\]](repeat_inc.pdf){width="9cm"} As mentioned above, the scenario of repeated incoherent operations involves a rethermalisation of the machine qubits to their respective baths in every step. This is followed by an energy-conserving unitary operation between the machine and the target. Thus, the cooling cycle consists of the following steps (see [Fig. \[fig:repeat\_inc\]]{}), which can be repeated any number of times. 1. *Environment reset and resource input -* Qubit C is heated to $T_H$ after the machine has been brought back to the environment temperature $T_R$. 2. *Cooling step -* The energy-preserving unitary given by Eq.  (swapping the degenerate states ${\vert{101}\rangle} \leftrightarrow {\vert{010}\rangle}$) is applied. Prior to the first step, all three qubits are at temperature $T_R$. Then qubit C is heated to $T_H$. After this, every cooling step lowers the temperature of the target qubit $A$, but also cools down qubit $C$ while heating qubit $B$, which necessitates the reset of $B$ to $T_R$ and the heating of $C$ to $T_H$ before the swap can be repeated. This process can be conveniently characterized using the notion of a virtual qubit [@virtual]. The virtual qubit corresponds to the subspace of the machine which is involved in the cooling swap with the target qubit. See [App. \[app:virtualqubit\]]{} and [App. \[app:incohop\]]{} for a detailed explanation. It is thus the properties of the virtual qubit that determine the cooling in each step. For the unitary operation here, the virtual qubit is spanned by the states $\{{\vert{01}\rangle}_{BC},{\vert{10}\rangle}_{BC}\}$. In each repetition, the rethermalisation of qubits $B$ and $C$ (Step 1) resets the virtual qubit. 
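As an illustration, the following minimal sketch (Python/NumPy; illustrative parameter values, $k_B = 1$, and the degeneracy condition $E = E_B - E_C$ assumed) iterates this reset-and-swap cycle on the ground state population of the target and checks it against the closed-form expressions given below.

```python
import numpy as np

def r_th(E, T):
    # thermal ground-state population of a qubit with gap E at temperature T (k_B = 1)
    return 1.0 / (1.0 + np.exp(-E / T))

# illustrative (assumed) parameters; E_B is fixed by the degeneracy E = E_B - E_C
E, E_C, T_R, T_H = 1.0, 0.4, 1.0, 20.0
E_B = E + E_C

r, r_B, r_CH = r_th(E, T_R), r_th(E_B, T_R), r_th(E_C, T_H)
N_V = r_B * (1 - r_CH) + (1 - r_B) * r_CH      # norm of the virtual qubit {|01>,|10>}_BC
r_inf = r_B * (1 - r_CH) / N_V                 # its bias, i.e. the asymptotic target population

r_n = r
for n in range(1, 11):
    # one cycle: reset B to T_R, heat C to T_H, swap the populations of |010> and |101>
    r_n += (1 - r_n) * r_B * (1 - r_CH) - r_n * (1 - r_B) * r_CH
    # agrees with the closed form r_inc,n = r_inf - (r_inf - r)(1 - N_V)^n
    assert np.isclose(r_n, r_inf - (r_inf - r) * (1 - N_V) ** n)

T_n = E / np.log(r_n / (1 - r_n))              # target temperature after 10 cycles
T_V = E / (E_B / T_R - E_C / T_H)              # virtual-qubit temperature (asymptotic limit)
print(T_n, T_V)
```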
In the asymptotic limit of infinite repetitions, we find that the ground state population of the target goes to (see [App. \[app:incohop\]]{}) $$\begin{aligned} \label{equ:rincinf} r_{\text{inc},\infty} = \frac{1}{1 + e^{-E/T_{\text{inc},\infty}}}, \end{aligned}$$ where $T_{\text{inc},\infty}$ is equal to the temperature of the virtual qubit, $$\begin{aligned} \label{eq:TVinc} T_{\text{inc},\infty} = T_{V,\text{inc}} = \frac{E}{\frac{E_B}{T_R} - \frac{E_C}{T_H}}.\end{aligned}$$ For a finite number $n$ of repetitions, the ground state population of the target qubit approaches the asymptotic value as $$\begin{aligned} \label{eq:incoherentnstep} r_{\text{inc,n}} &= r_{\text{inc},\infty} - \left( r_{\text{inc},\infty} - r \right) \left( 1 - N_{V,\text{inc}} \right)^n,\end{aligned}$$ where $N_{V,\text{inc}} = r_B (1-r_C^H)+(1-r_B) r_C^H$ is the norm of the virtual qubit (i.e. the total population in the subspace $\{{\vert{01}\rangle}_{BC},{\vert{10}\rangle}_{BC}\}$). Note that all of the quantities in the above expressions are functions of $T_H$. As argued also in [App. \[app:incoherentauto\]]{}, the asymptotic temperature given by Eq.  is exactly equal to the temperature obtained in the steady state of an autonomous refrigerator [@auto0], and thus the asymptotic state of the target qubit under repeated incoherent operations is the same as the steady state of the autonomous fridge. More precisely, $$\begin{aligned} r_{\text{inc},\infty} &= r_{\text{auto}} & &\text{i.e.} \quad T_{\text{inc},\infty} = T_{\text{auto}}.\end{aligned}$$ This highlights an interesting connection between discrete and continuous cooling procedures; see also [@Raam]. Furthermore, showcasing one of the results of our accompanying article [@OurLetter], the maximal cooling in either case, obtained in the limit $T_H \rightarrow \infty$, is the same as for a single-cycle coherent operation (cf. Eq. ) $$T_{\text{auto}}^* = \lim_{T_H \rightarrow \infty} T_{\text{auto}} = \frac{E}{E_B}T_R = T_{\text{coh}}^* \,.$$ Note that in this limit we have that $N_{V,\text{inc}} = \frac{1}{2} $. Hence in each repetition the difference between the current and asymptotic ground state population is halved. Finally, we discuss the work cost of cooling. Detailed calculations are given in [App. \[app:incohop\]]{}. Intuitively, the free energy drawn from the hot bath can be divided into two parts: i) the energy required in the first instance of step 1, to initially heat up qubit C to temperature $T_H$, and ii) the energy required in all subsequent repetitions of step 1, to bring qubit C back to $T_H$. This is straightforwardly calculated from the change in population of qubit $C$, which is equal to the change in population of qubit $A$, due to the form of the energy-preserving unitary in step 2. The total heat drawn from the hot bath for $n$ repetitions is $$\begin{aligned} Q^H_n &= E_C \left( r_C - r_C^H \right) + E_C \left( r_{\text{inc},n-1} - r \right).\end{aligned}$$ In the asymptotic case, we find that the total heat drawn from the hot bath is exactly the same as if we had run the autonomous refrigerator beginning from the initial state, i.e. $Q^H_{\infty} = Q^H_{\text{auto}}$. See [App. \[app:incoherentauto\]]{} for a detailed proof. In order to cool to a given temperature, it is possible to vary the number of repetitions as well as the temperature of the hot bath $T_H$. One may therefore ask which is the most cost-efficient strategy. Generically, we observe (see [Fig. 
\[fig:compare\_incoherent\]]{}) that for a given final temperature, implementing many cooling swaps has a lower work cost than using fewer swaps (at higher temperature $T_H$). As implementing a higher number of swaps would take a longer time, this observation is reminiscent of the power vs efficiency trade-off in continuously operated machines [@ContinuousDevices]. ![Cooling vs the work cost for different numbers of repetitions of incoherent operations. Each curve is parametrized by the temperature of the hot bath, $T_H$. $E_C$, $E$ and $T_R$ are all set to $1$.\[fig:compare\_incoherent\]](compareInco.png){width="8.5cm"} Scenario 2: repeated coherent operations ---------------------------------------- ![Scenario 2, coherent operations, in the regime of repeated operations. Each cycle comprises the steps of 1. the environment reset of the machine, and 2. cooling.\[fig:repeat\_coh\]](repeat_coh.pdf){width="9cm"} When discussing single-cycle cooling via coherent operations in [Sec. \[sec:single-cycle2\]]{}, we found that, according to the relative size of $E_C$ and $E$, there were two different sets of unitaries which lead to the lowest achievable temperature $T_{\text{coh}}^*$ of the target qubit. The first procedure involved only qubits A and B, and maximal cooling could be achieved with a single-qubit machine (i.e. without qubit C). This procedure was found to be optimal when $E_C \leq E$. However, although this procedure was also valid when $E_C > E$, we showed that in this case a different procedure, involving all three qubits, could reach the same temperature, but at a lower work cost. In the present section we discuss cooling via repeated coherent operations. We find that, after the first cycle, a procedure similar to the second single-cycle procedure must be applied in order to cool further. In fact, one can immediately see that for a single-qubit machine, repetitions do not lower the temperature further beyond the single-cycle case. Since the single-qubit machine simply swaps qubits A and B, there is no unitary operation that can cool further, even after $B$ is re-thermalised to the ambient temperature $T_R$. On the contrary, using a two-qubit machine one can enhance the cooling beyond the single-cycle case. This is achieved by repeating the following steps (see [Fig. \[fig:repeat\_coh\]]{}, and [App. \[app:repeatcoherent\]]{} for more details): 1. *Environment reset -* Qubits B and C are brought back to the environment temperature $T_R$. 2. *Cooling step -* The unitary swapping the populations of the states $\{{\vert{100}\rangle},{\vert{011}\rangle}\}$ is applied. As qubit A is cooled by swapping with the subspace $\{{\vert{00}\rangle}_{BC},{\vert{11}\rangle}_{BC}\}$ of the machine, we identify this subspace as the relevant virtual qubit of the machine, and denote its norm as $N_{V,\text{coh}}$. Following calculations given in [App. \[app:repeatcoherent\]]{}, one finds that in the asymptotic limit (infinite repetitions), the ground state population of the target goes to $$\begin{aligned} r^*_{\text{coh},\infty}= \frac{1}{1+e^{-E/T^*_{\text{coh},\infty}}},\end{aligned}$$ where the asymptotic temperature takes the simple form $$\begin{aligned} T^*_{\text{coh},\infty} = T_R \frac{E}{E_B+E_C}. \end{aligned}$$ This recovers the result of our accompanying article [@OurLetter] and the results of heat bath algorithmic cooling with no compression qubit. 
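As a quick consistency check, the following minimal sketch (again with illustrative, assumed parameter values, $k_B = 1$, and $E_B = E + E_C$) iterates the reset-and-swap cycle just described and confirms numerically that the target temperature approaches $T_R\, E/(E_B+E_C)$.

```python
import numpy as np

def r_th(E, T):
    # thermal ground-state population of a qubit with gap E at temperature T (k_B = 1)
    return 1.0 / (1.0 + np.exp(-E / T))

# illustrative (assumed) parameters, with E_B fixed by the degeneracy condition
E, E_C, T_R = 1.0, 0.4, 1.0
E_B = E + E_C
r, r_B, r_C = r_th(E, T_R), r_th(E_B, T_R), r_th(E_C, T_R)

r_n = r
for n in range(30):
    # one cycle: rethermalise B and C to T_R, then swap the populations of |100> and |011>
    r_n += (1 - r_n) * r_B * r_C - r_n * (1 - r_B) * (1 - r_C)

print(E / np.log(r_n / (1 - r_n)))   # converges towards T_R * E / (E_B + E_C)
print(T_R * E / (E_B + E_C))
```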
Note that in the coherent case, the temperature of the virtual qubit is just $T_R$, since both the machine qubits are at $T_R$ after rethermalization. However, due to the swap, the final temperature of the target qubit is not simply the virtual temperature, but rather is modulated by the ratio of energies of the target and virtual qubits, see [App. \[app:virtualqubit\]]{} for more detail. This is why maximal cooling in the asymptotic case is attained by picking the virtual qubit of the largest energy gap, which for the two qubit machine is $\{{\vert{00}\rangle}_{BC},{\vert{11}\rangle}_{BC}\}$. For a finite number $n$ of repetitions, the ground state population of the target approaches its asymptotic value as $$\begin{aligned} \label{eq:coherentnstep} r_{\text{coh},n}^* &= r^*_{\text{coh},\infty} - \left( r^*_{\text{coh},\infty} - r \right) \left( 1 - N_{V,\text{coh}} \right)^n.\end{aligned}$$ Thus we see that cooling is enhanced compared to the single-cycle case, i.e. $T^*_{\text{coh},n} < T_{\text{coh}}^*$. (Note that we use $*$ here to denote the lowest achievable temperature for a fixed number of repetitions.) We proceed to discuss the work cost of this process. Note that the optimal work cost of the first coherent operation has already been discussed in [Sec. \[sec:single-cycle2\]]{}, and is denoted by $\Delta F^*_{\text{coh}}$. For further repetitions of the steps presented above, free energy is needed to implement the unitary in step 2, as populations of states with different energies are swapped. (Step 1 is free as it involves thermalisation of the machine qubits to the environment temperature $T_R$). The work cost of $n$ full repetitions of the cycle is given by (details in [App. \[app:repeatcoherent\]]{}) $$\begin{aligned} \label{eq:Fcohn} \Delta F_{\text{coh},n}^* &= \Delta F_{\text{coh}}^* + 2 E_C \left( r_{\text{coh},n}^* - r_B \right),\end{aligned}$$ where $\Delta F_{\text{coh}}^*$ is the work cost in the single-cycle regime given by Eq. . In the asymptotic regime, the work cost becomes $$\begin{aligned} \Delta F^*_{\text{coh},\infty} &= \Delta F_{\text{coh}}^* + 2E_C \left( r^*_{\text{coh},\infty} - r_B \right),\end{aligned}$$ where $r^*_{\text{coh},\infty}$ is the final ground-state population for the target qubit corresponding to $T^*_{\text{coh},\infty}$. Following the argument expanded in full detail in [App. \[app:repeatcoherent\]]{}, the steps presented above are the only way to cool the target after the first (optimal) coherent operation, and thus $\Delta F^*_{\text{coh},n}$ represents the minimum work cost given the lowest achievable temperature after $n$ repetitions. Scenario 2: algorithmic cooling ------------------------------- ![Scenario 2 in the regime of algorithmic cooling. Each cycle comprises the steps of 1. environment reset, 2. precooling, 3. environment reset, and 4. cooling.\[fig:repeat\_algo\]](repeat_algo.pdf){width="8cm"} It turns out that even stronger cooling can be obtained, by increasing the level of control compared to the above model of repeated coherent operations, specifically, by allowing for individual rethermalisation of each machine qubit separately. This model is equivalent to heat bath algorithmic cooling, this time with a compression qubit, as we will demonstrate shortly. The procedure consists in repeating the following steps, shown schematically in [Fig. \[fig:repeat\_algo\]]{}: 1. *Environment reset -* Qubit $B$ is brought back to the environment temperature $T_R$. 2. *Precooling -* The states of qubits $B$ and $C$ are swapped. 3. 
*Environment reset -* Qubit $B$ is brought back to the environment temperature $T_R$. 4. *Cooling step -* The unitary swapping the populations of the states ${\vert{100}\rangle}\leftrightarrow{\vert{011}\rangle}$ is applied. As before, the target qubit is swapped with the qubit subspace of the machine that has the highest energy gap, spanned by ${\vert{00}\rangle}_{BC}$ and ${\vert{11}\rangle}_{BC}$. However, thanks to the precooling step, the virtual temperature of this coldest qubit subspace is decreased, from $T_R$ to $$\begin{aligned} T_{V,\text{algo}} &= T_R \frac{E_B + E_C}{2E_B}.\end{aligned}$$ The final temperature is again determined by the virtual temperature. Following calculations given in [App. \[app:algo\]]{}, in the asymptotic limit of infinite repetitions, the ground state population of the target qubit tends to $$\begin{aligned} r_{\text{algo},\infty}^* &= \frac{1}{1+e^{-E/T_{\text{algo},\infty}^*}},\end{aligned}$$ where the asymptotic temperature is given by $$\begin{aligned} \label{eq:Talgo} T_{\text{algo},\infty}^* = T_R \frac{E}{2E_B} = \frac{T_{\text{coh}}^*}{2}.\end{aligned}$$ The final temperature is thus half the temperature achieved via single-cycle coherent operations. Note that it is also half of the minimal achievable temperature $T_{\text{auto}}^*$ in the asymptotic incoherent regime. Moreover, since $E_B >E_C$, we see that the lowest achievable temperature of algorithmic cooling is strictly lower than that of repeated coherent operations. It is worth noting that the expression for the minimal temperature of Eq.  perfectly matches known results in algorithmic cooling: specifically Eq. (7) of Ref. [@raeisi] (for the case of two reset qubits), as well as Eq. (10) of Ref. [@algo3]. For a finite number of repetitions of the above cycle of steps, one finds that the ground state population of the target approaches $r_{\text{algo},\infty}^*$ as $$\begin{aligned} \label{eq:ralgon} r_{\text{algo},n} & = r_{\text{algo},\infty}^* - \left( r_{\text{algo},\infty}^* - r_0 \right) \left( 1 - N_{V,\text{algo}} \right)^n,\end{aligned}$$ where $r_0$ is the population of the ground state before the first application of the procedure, and $N_{V,\text{algo}}$ is the norm of the virtual qubit $\{{\vert{00}\rangle}_{BC},{\vert{11}\rangle}_{BC}\}$ right before step 4 (i.e. after qubit $C$ has been pre-cooled and qubit $B$ rethermalized). Finally, we discuss the work cost of this process. Free energy is needed to implement the unitaries in steps 2 and 4, as populations of states with different energies are swapped. Steps 1 and 3 have zero cost, since they only involve the environment bath. As detailed in [App. \[app:algo\]]{}, the work cost after $n$ full repetitions is given by $$\label{eq:Falgon} \begin{aligned} \Delta F_{\text{algo},n} = &E ( r_B - r_C ) + 2E_C ( r_{\text{algo},n} - r_0 ) \\ &+ E ( r_{\text{algo},n-1} - r_0 ) . \end{aligned}$$ Let us first remark that for cooling to a temperature that would be achievable with repeated coherent operations, algorithmic cooling has a higher work cost, as is argued in [App. \[app:optimalcoherent\]]{}, and on comparison of Eqs.  and . Thus, in order to minimize the work cost, a better strategy consists in first cooling using repeated coherent operations, until the temperature cannot be lowered any further, and only then switching to algorithmic cooling. A detailed discussion of this sequence of operations may be found in [App. \[app:optimalcoherent\]]{}. 
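The cooling dynamics of this cycle can be checked with a minimal numerical sketch (illustrative, assumed parameter values, $k_B = 1$, $E_B = E + E_C$): right before the cooling swap, qubit $C$ carries qubit $B$'s thermal populations, so the virtual qubit $\{{\vert{00}\rangle}_{BC},{\vert{11}\rangle}_{BC}\}$ has populations $r_B^2$ and $(1-r_B)^2$, and iterating the cycle drives the target towards $T_R\,E/(2E_B)$.

```python
import numpy as np

def r_th(E, T):
    # thermal ground-state population of a qubit with gap E at temperature T (k_B = 1)
    return 1.0 / (1.0 + np.exp(-E / T))

# illustrative (assumed) parameters, with E_B = E + E_C
E, E_C, T_R = 1.0, 0.4, 1.0
E_B = E + E_C
r, r_B = r_th(E, T_R), r_th(E_B, T_R)

# virtual qubit {|00>,|11>}_BC right before the cooling swap: populations r_B^2 and (1-r_B)^2
N_V = r_B**2 + (1 - r_B)**2
r_inf = r_B**2 / N_V

r_n = r
for n in range(1, 41):
    # steps 1-4 of one cycle, reduced to the net population transfer on the target
    r_n += (1 - r_n) * r_B**2 - r_n * (1 - r_B)**2
    # matches the closed form for r_algo,n with r_0 = r
    assert np.isclose(r_n, r_inf - (r_inf - r) * (1 - N_V) ** n)

print(E / np.log(r_n / (1 - r_n)), T_R * E / (2 * E_B))   # both approach T_R*E/(2*E_B)
```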
In the asymptotic case of infinite repetitions, the work cost of this procedure (denoted by $\Delta F^*$) becomes $$\begin{split} \Delta F^*_{\text{algo},\infty} &= \Delta F^*_{\text{coh},\infty} + E \left( r_B - r_C \right) \\ &\quad + (2 E_C+E) \left( r^*_{\text{algo},\infty} - r_{\text{coh},\infty}^* \right). \end{split}$$ This procedure turns out to be optimal with respect to the work cost if one is interested in reaching the lowest achievable temperature $T_{\text{algo},\infty}^*$. If, however, one is interested in cooling the target to a temperature between $T_{\text{algo},\infty}^*$ and $T_{\text{coh},\infty}^*$, fully precooling qubit $C$ is unnecessary and there exists a better manner of proceeding after repeated coherent operations, where, given the desired final temperature of the target, one tunes the precooling of qubit $C$ to be a partial rather than a full swap. In [Fig. \[fig:plot\_compare\_coherent\]]{}, we compare the work cost of the optimal sequence of operations (first repeated coherent, then optimized algorithmic cooling) against that of using standard algorithmic cooling from the beginning. ![Comparison of the work cost of using algorithmic cooling from the beginning (orange dot-dashed), as opposed to the optimal sequence of coherent operations (blue solid line), and of an autonomous refrigerator (red dashed, parametrized w.r.t. $T_H$). $E_C$, $E$ and $T_R$ are all set to $1$.\[fig:plot\_compare\_coherent\]](algoVScoh.png){width="9cm"} Finally, it is worth noting that algorithmic cooling is rather expensive even when compared to autonomous cooling, for the same target temperature, see [Fig. \[fig:plot\_compare\_coherent\]]{}. Thus, while algorithmic cooling can achieve the lowest temperatures, it may be the case, depending on the parameters of the problem, that an autonomous refrigerator can cool to any $T \geq T^*_{\text{auto}}$ more efficiently. Saturating the second law \[sec:secondlaw\] =========================================== Upon comparing the cooling performance of the minimal machines presented in this article with the ultimate performance bound set by each paradigm in our accompanying article [@OurLetter], it is quite striking that, simple as they are, the minimal machines already suffice to saturate the bound. The next natural question is whether these machines are also optimal in terms of the associated work cost. In this section we answer this question in the negative. Clearly, fundamental limitations on the work cost arise from the second law. Specifically, the free energy change of the target qubit is a lower bound on the work cost. Here we present a family of $N$-qubit coherent machines which asymptotically saturate this bound. These machines have been introduced in Ref. [@paul] for demonstrating optimal work extraction from quantum states. Moreover, for any machine in the family, we construct an incoherent machine of $2N$ qubits achieving the same temperature. In the limit where the hot bath becomes infinite, the associated work cost is the same up to a constant offset that can be made arbitrarily small. As we have learned from section \[sec:onequbit\], a given temperature $T$ can be achieved via a single-qubit machine with energy gap $E_N = E \frac{T_R}{T}$. In order to minimize the work cost, we now introduce $N-1$ additional qubits with energy gaps (evenly) spaced between $E$ and $E_N$. The single swap is now replaced by a sequence of swaps between the target qubit and machine qubits in order of increasing energy gaps. 
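As a numerical illustration of this construction (a sketch with assumed parameter values, $k_B = 1$, evenly spaced gaps, and every machine qubit initially thermal at $T_R$), one can sum the work costs of the successive full swaps and compare the total with the free energy change of the target, which is the second-law lower bound mentioned above.

```python
import numpy as np

def r_th(E, T):
    # thermal ground-state population of a qubit with gap E at temperature T (k_B = 1)
    return 1.0 / (1.0 + np.exp(-E / T))

def binary_entropy(p):
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

# illustrative (assumed) parameters: cool a qubit of gap E from T_R to a target temperature T
E, T_R, T = 1.0, 1.0, 0.5
E_N = E * T_R / T                           # gap of the last (largest) machine qubit
r, r_final = r_th(E, T_R), r_th(E_N, T_R)

# free-energy change of the target: the second-law lower bound on the work cost
dF_target = E * (r - r_final) - T_R * (binary_entropy(r_final) - binary_entropy(r))

for N in (1, 10, 100, 1000):
    gaps = np.linspace(E, E_N, N + 1)[1:]   # N machine qubits with evenly spaced gaps in (E, E_N]
    p_prev, work = r, 0.0
    for E_k in gaps:
        p_k = r_th(E_k, T_R)
        work += (E_k - E) * (p_k - p_prev) # cost of a full swap with the gap-E_k machine qubit
        p_prev = p_k
    print(N, work, dF_target)               # the total work cost approaches dF_target as N grows
```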
This can be understood intuitively by noticing that the energy difference $\Delta E$ when swapping two qubits represents the work cost per unit of population transferred $\Delta r$ (see [App. \[app:virtualqubit\]]{}) $$\begin{aligned} \Delta F &= \Delta r \Delta E.\end{aligned}$$ Hence, for a given final population transfer, replacing one single swap at large $\Delta E$ by $N$ swaps at smaller $\Delta E $ reduces the work cost. As shown in Ref. [@paul], the total work cost of this procedure is given by $$\begin{aligned} \Delta F &= \Delta F_{\text{target}} + O\left( \frac{1}{N} \right),\end{aligned}$$ where $\Delta F_{\text{target}}$ is the increase in the free energy of the target qubit. In the limit $N \rightarrow \infty$, the work cost is exactly the free energy transferred to the system, which is the lower bound provided by the second law. See [App. \[app:secondlaw\]]{} for details. The next question is whether we can find an incoherent machine which also saturates the second law. A first possibility is to transform the above coherent machine into an incoherent one, using the same idea as discussed in our accompanying article [@OurLetter]. Specifically, each swap can be made energy preserving by adding an extra qubit to the machine. Therefore, the $N$-qubit coherent machine discussed above, can be made incoherent by adding $N$ extra qubits. The temperature achieved by the incoherent machine will match that of the coherent if either $T_H \rightarrow \infty$, or if the energies of the machine qubits are increased so as to match the desired temperature on the target qubit. In the former case, the work cost will diverge when $N \rightarrow \infty$, as each additional qubit must now be heated from $T_R$ to infinite temperature. Nevertheless, in the second case, this problem can be circumvented by noting that these $N$ additional qubits do not need to correspond to physical qubits, but can be taken as virtual qubits. For instance, one can consider a single evenly spaced $(N+1)$-level system providing all these virtual qubits. By embedding this $(N+1)$-level system into a larger system, the initial work cost can be made arbitrarily small, and we can thus approach the work cost of the corresponding coherent machine arbitrarily closely. Consequently, we have constructed an incoherent machine which also saturates the second law in the limit of $N \rightarrow \infty$. See [App. \[app:incsecondlaw\]]{} for an explicit construction and proof. Conclusion and outlook {#sec:conclusion} ====================== We have presented a unified view of quantum refrigeration, allowing us to compare various paradigms. In particular, our framework incorporates autonomous quantum thermal machines, algorithmic cooling, single-cycle refrigeration and the resource theory of thermodynamics. 
Lowest achievable temperature $T^*$ and associated free energy change of the resource $\Delta F^*$ for different cooling paradigms and machine sizes. One-qubit machine: $$\begin{aligned} T_{coh}^* = \frac{E}{E_B} T_R&&\Delta F_{coh}^* = (r_{coh} - r)(E_B - E)\nonumber \end{aligned}$$ Two-qubit machine: $$\begin{aligned} &T^*_{\text{inc}}=\frac{E}{\ln \left( \frac{r^*_{\text{inc}}}{1-r^*_{\text{inc}}}\right)} &&r^*_{\text{inc}}=\frac{1}{2} (r+r_B) &\Delta F^*_{\text{inc}}= E_C (r_C- \frac{1}{2}) > \Delta F_{\text{coh}}^* \nonumber\\ &T^*_{\text{auto}}=\frac{E}{E_B}T_R&& r^*_{\text{auto}}=r_B&\Delta F^*_{\text{auto}}= E_C (r_C- \frac{1}{2}+r_B-r)\nonumber\\ &T^*_{\text{coh}}=T^*_{\text{auto}} && r_{\text{coh}}^*=r^*_{\text{auto}}&\Delta F^*_{\text{coh}}= \begin{cases} E_C (r_B- r),& E_C \leq E\nonumber\\ E_C (r_B-r)-E(r_C-r),& E_C > E \end{cases}\nonumber\\ &T^*_{\text{coh,}\infty}=\frac{E}{E_B+E_C} T_R && r^*_{\text{coh},\infty}= \frac{1}{1+e^{-\frac{E_B+E_C}{T_R}}}&\Delta F^*_{\text{coh,}\infty}=\Delta F^*_{\text{coh}}+2 E_C (r^*_{\text{coh},\infty}- r_B)\nonumber\\ &T^*_{\text{algo,}\infty}=\frac{E}{2E_B} T_R && r^*_{\text{algo},\infty}= \frac{1}{1+e^{-\frac{2E_B}{T_R}}}&\Delta F^*_{\text{algo,}\infty}= \Delta F^*_{\text{coh,}\infty}+E(r_B-r_C)+\nonumber\\ &\,&&\,&(2 E_C+E) (r^*_{\text{algo},\infty}-r^*_{\text{coh},\infty})\nonumber \end{aligned}$$ $N$-qubit machines saturating the second law: $$\begin{aligned} T^* = \frac{E}{E_{max}}T_R&&\Delta F^* = \Delta F_{\text{target}} \nonumber \end{aligned}$$ $T_R:$ room temperature; $E:$ target qubit energy gap; $E_B:$ energy gap of first machine qubit; $E_C:$ energy gap of second machine qubit; $E_{max}:$ maximum energy gap of the machine; $r:$ initial target qubit ground state population; $r^*:$ final target qubit ground state population. The subscripts denote the different operational paradigms, i.e. *coh* refers to the coherent scenario, *inc* to the incoherent scenario, *auto* to the autonomous scenario and *algo* to algorithmic cooling. We characterize fundamental limits of cooling, in terms of achievable temperature and work cost, for both coherent and incoherent operations, in single-cycle, finite-repetition, and asymptotic regimes. The main formulas are summarized in the boxes shown. We find that, contrary to classical thermodynamics, the fundamental limits crucially depend on the level of control available. In particular, this implies that the free energy does not uniquely determine the minimal achievable temperature. Moreover, the size of the machine represents an additional form of control, which also influences thermodynamic performance. On the one hand, for minimal machines, the difference between coherent and incoherent control is strongly pronounced. On the other, in the asymptotic limit, the two scenarios become mostly equivalent. While we focused here on the task of cooling a single qubit, it is natural to ask what the fundamental limits to cooling larger systems are. Understanding the qubit case already provides significant insight into the general case. For the task of increasing the ground-state population, we showed that qubit bounds apply in general. Repeating every scenario for every possible notion of cooling would, while possible, not add much insight without a more physical motivation of the respective target Hamiltonian and setting. 
It would furthermore be interesting to characterize the work cost of cooling general systems, although this will also depend on the exact Hamiltonian structure of the target, and so one cannot expect to obtain a single-letter formula (as in the qubit case). ]{}]{} Finally, it would indeed be interesting to discuss different tasks than cooling, e.g. work extraction, and determine if similar conclusions can be drawn. **Acknowledgements.**We are grateful to Flavien Hirsch, Patrick P. Hofer, Marc-Olivier Renou, and Tamás Krivachy for fruitful discussions. We would also like to acknowledge the referees of our initial submission for productive and challenging comments. Several authors acknowledge the support from the Swiss NFS and the NCCR QSIT: R.S. through the grants Nos. $200021\_169002$ and $200020\_165843$, N.B. through the Starting grant DIAQ and grant $200021\_169002$, F.C. through the AMBIZIONE grant PZ00P2$\_$161351, and G.H. through the PRIMA grant $PR00P2\_179748$ and Marie-Heim Vögtlin grant 164466. MH acknowledges support from the Austrian Science Fund (FWF) through the START project Y879-N27. JBB acknowledges support from the Independent Research Fund Denmark. [99]{} J. Gemmer, M. Michel, G. Mahler, Quantum Thermodynamics, Lecture Notes in Physics, Springer (2009). J. Goold, M. Huber, A. Riera, L. del Rio, P. Skrzypczyk, [*The role of quantum information in thermodynamics — a topical review*]{}, J. Phys. A: Math. Theor. [**49**]{}, 143001 (2016). S. Vinjanampathy, J. Anders, [*Quantum thermodynamics*]{}, Contemp. Phys. [**57**]{}, 545 (2016). J. Millen, A. Xuereb, [*Perspective on quantum thermodynamics*]{}, New J. Phys. [**18**]{}, 011002 (2016). M. Horodecki, J. Oppenheim, [*Fundamental limitations for quantum and nano thermodynamics*]{}, Nat. Comm. [**4**]{}, 2059 (2013). F. G. S. L. Brandao, M. Horodecki, J. Oppenheim, J. M. Renes, R. W. Spekkens, [*The Resource Theory of Quantum States Out of Thermal Equilibrium*]{}, Phys. Rev. Lett. [**111**]{}, 250404 (2013). G. Gour, M. P. Müller, V. Narasimhachar, R. W. Spekkens, N. Yunger Halpern, [*The resource theory of informational nonequilibrium in thermodynamics*]{}, Phys. Rep. [**583**]{} 1-58 (2015). M. Lostaglio, K. Korzekwa, D. Jennings, T. Rudolph, [*Quantum coherence, time-translation symmetry and thermodynamics*]{}, Phys. Rev. X [**5**]{}, 021001 (2015). M. Lostaglio, D. Jennings, T. Rudolph, [*Description of quantum coherence in thermodynamic processes requires constraints beyond free energy*]{}, Nat. Comm. [**6**]{}, 6383 (2015). P. Cwiklinski, M. Studzinski, M. Horodecki, J. Oppenheim, [*Towards fully quantum second laws of thermodynamics: limitations on the evolution of quantum coherences*]{}, Phys. Rev. Lett. [**115**]{}, 210403 (2015). F. G.S.L. Brandao, M. Horodecki, N. H. Y. Ng, J. Oppenheim, S. Wehner, [*The second laws of quantum thermodynamics*]{}, PNAS [**112**]{}, 3275 (2015). P. Skrzypczyk, A.J. Short, S. Popescu, [*Work extraction and thermodynamics for individual quantum systems*]{}, Nat. Commun. [**5**]{}, 4185 (2014). Y. Guryanova, S. Popescu, A. J. Short, R. Silva, P. Skrzypczyk, [*Thermodynamics of quantum systems with multiple conserved quantities*]{}, Nat. Comm. [**7**]{}, 12049 (2016). L. Masanes, J. Oppenheim, [*A general derivation and quantification of the third law of thermodynamics*]{}, Nat. Comm. [**8**]{}, 14538 (2017). H. Wilming, R. Gallego, [*The third law as a single inequality*]{}, Phys. Rev. X [**7**]{}, 041033 (2017). J. Scharlau, M. P. 
M[ü]{}ller, [*Quantum Horn’s lemma, finite heat baths, and the third law of thermodynamics*]{}, Quantum [**2**]{}, 54 (2018). W. Pusz and S. L. Woronowicz, [*Passive states and KMS states for general quantum systems*]{}, Commun. Math. Phys. [**58**]{}, 273 (1978). R. Alicki and M. Fannes, [*Entanglement boost for extractable work from ensembles of quantum batteries*]{}, Phys. Rev. E [**87**]{}, 042123 (2013). K. V. Hovhannisyan, M. Perarnau-Llobet, M. Huber, A. Acin, [*Entanglement Generation is Not Necessary for Optimal Work Extraction*]{}, Phys. Rev. Lett. [**111**]{}, 240401 (2013). P. Skrzypczyk, R. Silva, N. Brunner, [*Passivity, complete passivity, and virtual temperatures*]{}, Phys. Rev. E [**91**]{}, 052133 (2015). M. Perarnau-Llobet, K. V. Hovhannisyan, M. Huber, P. Skrzypczyk, J. Tura, A. Acin, [*Most energetic passive states*]{}, Phys. Rev. E [**92**]{}, 042147 (2015). L. J. Schulman, U. Vazirani, [*Scalable NMR Quantum Computation*]{}, Proc. 31’st ACM STOC (Symp. Theory of Computing), 322-329, (1999). P. Oscar Boykin, T. Mor, V. Roychowdhury, F. Vatan, R. Vrijen, [*Algorithmic Cooling and Scalable NMR Quantum Computers*]{}, PNAS [**99**]{} (6) 3388-3393 (2002). S. Raeisi, M. Mosca, [*The Asymptotic Cooling of Heat-Bath Algorithmic Cooling*]{}, Phys. Rev. Lett. [**114**]{}, 100404 (2015). N. A. Rodriguez-Briones, Raymond Laflamme, [*Achievable polarization for Heat-Bath Algorithmic Cooling*]{}, Phys. Rev. Lett. [**116**]{}, 170501 (2016). N. A. Rodriguez-Briones, E. Martin-Martinez, A. Kempf, R. Laflamme, [*Correlation-Enhanced Algorithmic Cooling*]{}, Phys. Rev. Lett. [**119**]{}, 050502 (2017). F. Ticozzi, L. Viola, [*Quantum resources for purification and cooling: fundamental limits and opportunities*]{}, Scientific Reports [**4**]{}, 5192 (2014). N. Linden, S. Popescu, P. Skrzypczyk, [*How small can thermal machines be? The smallest possible refrigerator*]{} Phys. Rev. Lett. [**105**]{}, 130401 (2010). P. Skrzypczyk, N. Brunner, N. Linden, S. Popescu, [*The smallest refrigerators can reach maximal efficiency*]{}, J. Phys. A [**44**]{}, 492002 (2011). A. Levy, R. Kosloff, [*The Quantum Absorption Refrigerator*]{}, Phys. Rev. Lett. [**108**]{}, 070604 (2012). N. Brunner, N. Linden, S. Popescu, P. Skrzypczyk, Phys. Rev. E [**85**]{}, 051117 (2012). D. Venturelli, R. Fazio, V. Giovannetti, Phys. Rev. Lett. [**110**]{}, 256801 (2013). M. T. Mitchison, M. Huber, J. Prior, M. P. Woods, M. B. Plenio, [*Realising a quantum absorption refrigerator with an atom-cavity system*]{}, Quantum Science and Technology [**1**]{}, 015001 (2016). P. P. Hofer, J.-R. Souquet, and A. A. Clerk. [*Quantum heat engine based on photon-assisted Cooper pair tunneling*]{}, Phys. Rev. B [**93**]{}, 041418(R) (2016). P. P. Hofer, M. Perarnau-Llobet, J. Bohr Brask, R. Silva, M. Huber, N. Brunner, [*Autonomous Quantum Refrigerator in a Circuit-QED Architecture Based on a Josephson Junction*]{}, Phys. Rev. B [**94**]{}, 235420 (2016). G. Maslennikov, S. Ding, R. Hablutzel, J. Gan, A. Roulet, S. Nimmrichter, J. Dai, V. Scarani, and D. Matsukevich. [*Quantum absorption refrigerator with trapped ions*]{}, ArXiv:1702.08672 (2017). O. Abah, J. Rossnagel, G Jacob, S. Deffner, F. Schmidt-Kaler, K. Singer, E. Lutz, [*Single ion heat engine with maximum efficiency at maximum power*]{}, Phys. Rev. Lett [**109**]{}, 203006 (2012). J. Rossnagel, S. T. Dawkins, K. N. Tolazzi, O. Abah, E. Lutz, F. Schmidt-Kaler, K. Singer [*A single-atom heat engine*]{}, Science [**352**]{}, 325 (2016). W. Niedenzu, D. 
Gelbwaser-Klimovsky, A. G. Kofman, G. Kurizki, [*On the operation of machines powered by quantum non-thermal baths*]{} New J. Phys. [**18**]{}, 083012 (2016). W. Niedenzu, V. Mukherjee, A. Ghosh, A. G. Kofman, G. Kurizki, [*Universal thermodynamic limit of quantum engine efficiency*]{}, Nat. Commun. [**9**]{}, 165 (2018). A. S.L. Malabarba, A. J. Short, P. Kammerlander, [*Clock-Driven Quantum Thermal Engines*]{}, New J. Phys. [**17**]{} 045027 (2015). M. P. Woods, R. Silva, J. Oppenheim, [*Autonomous quantum machines and finite sized clocks*]{}, arXiv:1607.04591 (2016). J. [Å]{}berg, [*Catalytic Coherence*]{}, Phys. Rev. Lett. [**113**]{}, 150402 (2014). A. Levy, R. Alicki, R. Kosloff, [*Quantum Refrigerator and the III-law of Thermodynamics*]{}, Phys.Rev. E. [**85**]{}, 061126 (2012). R. Silva, G. Manzano, P. Skrzypczyk, N. Brunner, [*Performance of autonomous quantum thermal machines: Hilbert space dimension as a thermodynamic resource*]{}, Phys. Rev. E [**94**]{}, 032120 (2016). M. T. Mitchison, M. P. Woods, J. Prior, M. Huber, [*Coherence-assisted single-shot cooling by quantum absorption refrigerators*]{} New J. Phys. [**17**]{}, 115013 (2015). J. B. Brask, N. Brunner, [*Small quantum absorption refrigerator in the transient regime: time scales, enhanced cooling and entanglement*]{}, Phys. Rev. E [**92**]{}, 062101 (2015). A. Pozas-Kerstjens, K. V. Hovhannisyan, E. G. Brown, [*A quantum Otto engine with finite heat baths: energy, correlations, and degradation*]{}, New J. Phys. [**20**]{}, 043034 (2018). E. Torrontegui, I. Lizuain, S. Gonzalez-Resines, A. Tobalina, A. Ruschhaupt, R. Kosloff and J. G. Muga, [*Energy consumption for shortcuts to adiabaticity*]{}, Phys. Rev. A [**96**]{}, 022133 (2017). Ch. T. Chubb, M. Tomamichel, K. Korzekwa, [*Beyond the thermodynamic limit: finite-size corrections to state interconversion rates*]{}, arXiv:1711.01193 (2017). E. G. Brown, N. Friis, M. Huber, [*Passivity and practical work extraction using Gaussian operations*]{} New J. Phys. [**18**]{}, 113028 (2016). C. Perry, P. Ciwiklinski, J. Anders, M. Horodecki, J. Oppenheim, [*A sufficient set of experimentally implementable thermal operations*]{}, arXiv:1511.06553 (2015). C. Sparaciari, D. Jennings, J. Oppenheim, [*Energetic instability of passive states in thermodynamics*]{}, Nat. Commun. [**8**]{}, 1895 (2017). P. Erker, M. T. Mitchison, R. Silva, M. P. Woods, N. Brunner, M. Huber, [*Autonomous quantum clocks: does thermodynamics limit our ability to measure time?*]{}, Phys. Rev. X [**7**]{}, 031022 (2017). see the article *Unifying paradigms of quantum refrigeration: A universal and attainable bound on cooling*, arxiv (2019) We consider the system to be isolated from the environment during the operations. Taking the system-environment interaction into account can only weaken cooling, and requires a comparison of the time-scales of different operations, which we do not go into in this work. R. Gallego, J. Eisert, H. Wilming, [*Thermodynamic work from operational principles*]{}, New J. Phys. [**18**]{}, 103017 (2016). M. P. Müller, [*Correlating thermal machines and the second law at the nanoscale*]{}, arXiv:1707.03451 (2017). R. Uzdin, A. Levy, R. Kosloff, Phys. Rev. X [**5**]{}, 031044 (2015). A. Marshall, I. Olkin, and B. Arnold. [*Inequalities: Theory of Majorization and its Applications*]{}. Springer, Second edition, (2011). Michael A. Nielsen.[*An introduction to majorization and its applications to quantum mechanics*]{}. 
Notes found on http://michaelnielsen.org/blog/talks/2002/maj/book.ps, October 2002. In recent literature, the term *self-contained* is sometimes used to mean the same as autonomous, i.e. a machine with no external source of work or control. R. Kosloff and A. Levy, [*Quantum Heat Engines and Refrigerators: Continuous Devices*]{}. Annual Review of Physical Chemistry [**65**]{}:1, 365-393 (2014). Degeneracy condition for cooling\[app:degeneracy\] ================================================== Here we investigate the conditions for cooling to be possible in the incoherent scenario. First we will see that, for an arbitrary machine, the total Hamiltonian of target and machine must have some degeneracy for cooling to be possible at all with that machine in this scenario. This is the content of Lemma \[lemma:degexist\]. We will then move on to the specific case of the two-qubit machine and prove that, given such a machine, cooling is only possible when $E_B=E_A+E_C$. This is what Lemma \[lemma:twoqubitdeg\] proves. \[lemma:degexist\] As $[U,H]=0$, $U$ can only cool the target by acting on the degenerate eigenspaces of $H$. Let $\text{Eig}_{H}(E)$ be the eigenspace of $H$ with eigenvalue $E$. Let ${\vert{v}\rangle} \in \text{Eig}_{H}(E)$. By definition $H {\vert{v}\rangle} = E {\vert{v}\rangle}$. Furthermore, as $[U,H]=0$, we have $$\label{equ:UonE} \begin{aligned} &U H {\vert{v}\rangle} = H U {\vert{v}\rangle}\\ \Leftrightarrow & E (U {\vert{v}\rangle})= H (U {\vert{v}\rangle}), \end{aligned}$$ which shows that $U {\vert{v}\rangle} \in \text{Eig}_{H}(E)$. This means that every energy eigenspace is invariant under $U$, and so, as the whole vector space can be decomposed as a direct sum of the $\text{Eig}_H(E)$, we have $U= \oplus_E U_E$. It remains to prove that if $\text{dim}(\text{Eig}_H(E))=1$, $U_E$ does not affect the temperature of the target. For this, let $E$ be an eigenvalue of $H$ with $\text{dim}(\text{Eig}_H(E))=1$. Let ${\vert{v}\rangle} \in \text{Eig}_H(E)$. From Equation and $\text{dim}(\text{Eig}_H(E))=1$, $U_E {\vert{v}\rangle}=U {\vert{v}\rangle}=\lambda {\vert{v}\rangle}$, meaning that ${\vert{v}\rangle}$ is an eigenvector of $U_E$ with eigenvalue $\lambda$. Since $U_E$ is unitary, $\lambda = e^{i \theta}$ and so $ U_E {\vert{v}\rangle} {\langle{v}\vert} U_E^{\dagger}= {\vert{v}\rangle}{\langle{v}\vert}$, which proves our assertion, since only the diagonal elements of the density matrix $U_E \rho U_E^{\dagger}$ contribute to the temperature of the target and since we can write any $\rho$ as $$\rho= \sum_{ij} a_{ij} {\vert{v_i}\rangle} {\langle{v_j}\vert},$$ with $({\vert{v_i}\rangle})_i$ being an ONB of eigenvectors of $H$ and ${\vert{v_1}\rangle}={\vert{v}\rangle}$. So $$\begin{aligned} U_E \rho U_E^{\dagger} &= U_E (\sum_{ij} a_{ij} {\vert{v_i}\rangle} {\langle{v_j}\vert}) U_E^{\dagger}\\ &= U_E (\sum_{i,j \neq 1} a_{ij} {\vert{v_i}\rangle} {\langle{v_j}\vert}+\sum_{j \neq 1} a_{1j} {\vert{v}\rangle} {\langle{v_j}\vert}+\sum_{i \neq 1} a_{i1} {\vert{v_i}\rangle} {\langle{v}\vert}+ a_{11} {\vert{v}\rangle} {\langle{v}\vert}) U_E^{\dagger}\\ &= \sum_{i,j \neq 1} a_{ij} {\vert{v_i}\rangle} {\langle{v_j}\vert}+\sum_{j \neq 1} \lambda a_{1j} {\vert{v}\rangle} {\langle{v_j}\vert}+\sum_{i \neq 1} a_{i1} \bar{\lambda} {\vert{v_i}\rangle} {\langle{v}\vert}+ a_{11} {\vert{v}\rangle} {\langle{v}\vert}. \end{aligned}$$ implying that $$(U_E \rho U_E^{\dagger})_{kk}=a_{kk} {\vert{v_k}\rangle} {\langle{v_k}\vert}=\rho_{kk},$$ i.e. 
the diagonal elements of $U_E \rho U_E^{\dagger}$ are the original ones. Next we want to argue that \[lemma:twoqubitdeg\] Among all the possible degeneracies of $H$, only $E_B=E_A+E_C$ enables cooling of qubit A. Going through all the possible eigenvalue degeneracies of $H= H_A+H_B+H_C, \; H_i=E_i {\vert{1}\rangle} {\langle{1}\vert}_i \otimes \mathds{1}_{\bar{i}}, i \in \{A,B,C\}$, we see that we can have 3 different types of degeneracies: 1. $E_i = E_j , \quad i,j \in \{A,B,C\}$ 2. $ E_i =0, \quad i \in \{A,B,C\}$ 3. $E_i = E_j + E_k, \quad i,j,k \in \{A,B,C\}$ We first look at type 2. Imposing $E_A=0$ we get 4 degenerate subspaces: $A_{mn}:=\text{span}\{{\vert{0}\rangle}_A{\vert{mn}\rangle}_{BC}, {\vert{1}\rangle}_A{\vert{mn}\rangle}_{BC}\}$, where $m,n \in \{0,1\}$. Our unitary can act within each $A_{mn}$ subspace on $\rho^H = \tau \otimes \tau_B \otimes \tau_C^H$. However note that as $r_A= \frac{1}{1+e^{- \frac{E_A}{T_R}}}=\frac{1}{2} = 1-r_A$, we have $$\begin{aligned} \rho^H_{0mn,0mn}&=r_A (m+ (-1)^m r_B) (n+ (-1)^n r_C^H)\\ &=(1-r_A) (m+ (-1)^m r_B) (n+ (-1)^n r_C^H)=\rho^H_{1mn,1mn} \end{aligned}$$ such that in each of the $A_{mn}$ $\rho$ is proportional to the identity and hence $U \rho U^{\dagger}=\rho$ for all unitaries $U$ acting only within those subspaces. Note that this argument actually holds for any permutation of A, B and C thus also treating the $E_B=0$ and $E_C=0$ cases and showing that Type 2 degeneracies do not enable cooling qubit A.\ Turning to type 1, if $E_B=E_C$, we have two 2-dim. degenerate subspaces $\text{span}({\vert{001}\rangle},{\vert{010}\rangle})$ and $\text{span}({\vert{101}\rangle},{\vert{110}\rangle})$. In order to cool qubit A, one should maximize $[\text{Tr}_{BC}(U \rho^H U^{\dagger})]_{1,1}= (U\rho^H U^{\dagger})_{000,000}+(U\rho^H U^{\dagger})_{001,001}+(U\rho^H U^{\dagger})_{010,010}+(U\rho^H U^{\dagger})_{011,011}$. As unitaries are trace preserving, acting with U within the first subspace leaves $\rho^H_{001,001}+\rho^H_{010,010}$ unchanged. Acting with U within the second one does not alter any term in $[\text{Tr}_{BC}( \rho^H)]_{1,1}$, meaning that this degeneracy does not allow us to cool qubit A. For $E_A=E_B$, the degenerate subspaces are $\text{span}({\vert{010}\rangle}, {\vert{100}\rangle})$ and $\text{span}({\vert{011}\rangle}, {\vert{101}\rangle})$. Doing the same analysis as before shows that in general the unitary doesn’t leave $[\text{Tr}_{BC}( \rho )]_{1,1}$ invariant, unfortunately it does for our $\rho^H$ since with this condition $$\rho^H_{010,010} = r_A (1-r_B) r^H_C= r_B(1-r_A) r^H_C=\rho^H_{100,100}$$ and similarly $\rho_{011,011}=\rho_{101,101}$. Imposing $E_A = E_C$ we have as degenerate subspaces $\text{span}({\vert{001}\rangle},{\vert{100}\rangle})$ and $\text{span}({\vert{011}\rangle},{\vert{110}\rangle})$. As above the unitary does not in general leave $[\text{Tr}_{BC}(\rho )]_{1,1}$ invariant. For our $\rho^H$ it is also the case but since $T_H \geq T_R$, we have $-\frac{E_C}{T_H}\geq -\frac{E_A}{T_R}$ meaning $r_C^H \leq r_A$, such that $$\rho^H_{001,001}=r_A r_B (1-r_C^H) \geq r_C^H r_B (1-r_A) = \rho_{100,100}^H$$ and $$\rho^H_{011,011}= r_A (1-r_B) (1-r_C^H)\geq r_C^H (1-r_B) (1-r_A)\geq \rho_{110,110}^H.$$ The unitary that maximizes $[\text{Tr}_{BC}(U \rho^H U^{\dagger})]_{1,1}$ is therefore the trivial one. 
Indeed any 2 dimensional unitary can be written as $$U= \begin{pmatrix} a & b \\ -b^* e^{i \theta} & a^* e^{i \theta} \end{pmatrix},$$ with $\lvert a \rvert ^2 + \lvert b \rvert ^2 =1$ and $\theta \in [0,2\pi)$. And so $$\begin{split} \left[ U \begin{pmatrix} \rho^H_{001,001} &0 \\ 0 & \rho^H_{100,100} \end{pmatrix} U^{\dagger} \right]_{1,1} &= \begin{pmatrix} \lvert a \rvert^2 \rho^H_{001,001} + \lvert b \rvert ^2 \rho^H_{100,100} & a b e^{- i \theta} (\rho^H_{100,100}- \rho^H_{001,001} ) \\ a^* b^* e^{i \theta} (\rho^H_{100,100} - \rho^H_{001,001} ) & \lvert b \rvert ^2 \rho^H_{001,001} + \lvert a \rvert ^2 \rho^H_{100,100} \end{pmatrix}_{1,1}\\ &=\lvert a \rvert^2 \rho^H_{001,001} + \lvert b \rvert ^2 \rho^H_{100,100} \end{split}$$ is maximal for $\lvert a \rvert =1$, $b=0$ and any choice of $\theta$, which exactly corresponds to the unitary of $\text{span}({\vert{001}\rangle}, {\vert{100}\rangle})$ acting trivially on our $\rho^H$. The same obviously holds for the unitaries of $\text{span}({\vert{011}\rangle}, {\vert{110}\rangle})$. This type of degeneracy hence also does not allow any cooling to happen.\ We are left with the last type of degeneracy, type 3. Looking at $E_A = E_B + E_C$ we have that the degenerate subspace is $\text{span} ({\vert{011}\rangle}, {\vert{100}\rangle})$. As after heating we have that $T_R\leq T_H$, which implies $e^{-E_C/T_R} \leq e^{-E_C/T_H}$, we have, $$\begin{aligned} \rho^H_{011,011}&= r_A (1-r_B) (1-r_C^H)\\ &= r_A e^{-\frac{E_B}{T_R}} e^{-\frac{E_C}{T_H}} r_B r_C^H\\ &\geq r_A e^{-\frac{E_B+E_C}{T_R}} r_B r_C^H= (1-r_A) r_B r_C^H= \rho^H_{100,100} \end{aligned}$$ meaning that our unitary can only decrease $[\text{Tr}_{BC}(U \rho U^{\dagger})]_{1,1}$ by making use of this degeneracy (that corresponds to heating qubit A). Similarly for $E_C = E_A + E_B$ (here the subspace is $\text{span}({\vert{001}\rangle}, {\vert{110}\rangle})$ and we have $\rho^H_{001,001} \geq \rho^H_{110,110}$).\ However, for $E_B=E_A+E_C$, we have that our unitary can increase $[\text{Tr}_{BC}( \rho^H )]_{1,1}$ by acting in the degenerate subspace $\text{span}({\vert{010}\rangle}, {\vert{101}\rangle})$ since $$\begin{aligned} \rho^H_{010,010}&= r_A (1-r_B) r_C^H\\ &= r_A e^{-\frac{E_A}{T_R}} e^{-\frac{E_C}{T_R}} r_B r_C^H\\ &\leq r_A e^{-\frac{E_A}{T_R}} e^{-\frac{E_C}{T_H}} r_B r_C^H= (1-r_A) r_B r_C^H= \rho^H_{101,101}. \end{aligned}$$ This shows that the only single degeneracy enabling cooling is $E_B=E_A+E_C$. To finish the proof one needs to prove that there is no way of selecting some of the above degeneracies and achieve cooling without also having to select the degeneracy $E_B=E_A+E_C$. All the ways of selecting two degeneracies can be listed as 1. $E_i=E_j=0, \quad i,j \in \{A,B,C\}$ 2. $E_i=E_j, E_k=0, \quad \{i,j,k\}=\{A,B,C\}$ 3. $E_A = E_B=E_C=0$ 4. $E_A=E_B=E_C$ 5. $E_i=E_j, E_k=2 E_i, \quad \{i,j,k\}=\{A,B,C\}$ In case a), $\rho \propto \mathds{1}$ within the degenerate subspaces and so no cooling can occur. In case b) the degenerate subspaces are $\text{span}({\vert{00}\rangle}_{ij} {\vert{0}\rangle}_k, {\vert{00}\rangle}_{ij} {\vert{1}\rangle}_k)$, $\text{span}({\vert{11}\rangle}_{ij} {\vert{0}\rangle}_k, {\vert{11}\rangle}_{ij} {\vert{1}\rangle}_k)$, and $\text{span}({\vert{01}\rangle}_{ij} {\vert{0}\rangle}_k, {\vert{01}\rangle}_{ij} {\vert{1}\rangle}_k, {\vert{10}\rangle}_{ij} {\vert{0}\rangle}_k,{\vert{10}\rangle}_{ij} {\vert{1}\rangle}_k)$. In the first two subspaces $\rho \propto \mathds{1} $. 
In the third if $(i,j,k)=(A,B,C)$ then $\rho \propto \mathds{1}$, if $(i,j,k) = (B,C,A)$ then cooling is possible as $\rho^H_{001,001}=\rho^H_{101,101} \geq \rho^H_{010,010}=\rho^H_{110,110}$, but this is no contradiction to our claim since in this case $E_B= E_A+E_C$ holds, and if $(i,j,k)= (C,A,B)$, $\rho^H_{100,100}= \rho^H_{110,110} \leq \rho^H_{001,001}= \rho^H_{011,011}$, meaning that no cooling is possible. In case c) $\rho \propto \mathds{1}$ and so no cooling is possible. In case d) the degenerate subspaces are $\text{span}({\vert{001}\rangle},{\vert{010}\rangle},{\vert{100}\rangle})$ and $\text{span}({\vert{011}\rangle}, {\vert{101}\rangle}, {\vert{110}\rangle})$ and as $\rho^H_{001,001} \geq \rho^H_{010,010}= \rho^H_{100,100}$ and $\rho^H_{011,011} = \rho^H_{101,101} \geq \rho^H_{110,110}$, no cooling is possible. Finally in case e) the degenerate subspaces are $\text{span}({\vert{01}\rangle}_{ij} {\vert{0}\rangle}_k, {\vert{10}\rangle}_{ij} {\vert{0}\rangle}_k)$, $\text{span}({\vert{00}\rangle}_{ij} {\vert{1}\rangle}_k, {\vert{11}\rangle}_{ij} {\vert{0}\rangle}_k)$, and $\text{span}({\vert{01}\rangle}_{ij} {\vert{1}\rangle}_k, {\vert{10}\rangle}_{ij} {\vert{1}\rangle}_k)$. If $(i,j,k) = (A,B,C)$ then $\rho^H_{010,010}=\rho^H_{100,100}$, $\rho^H_{011,011}=\rho^H_{100,100}$, and $\rho^H_{001,001}\geq \rho^H_{110,110}$ so that no cooling is possible. If $(i,j,k) = (B,C,A)$ then $\rho^H_{001,001}\geq \rho^H_{010,010}$, $\rho^H_{101,101}\geq \rho^H_{110,110}$, and $\rho^H_{100,100}\leq \rho^H_{011,011}$ so that no cooling is possible either. If $(i,j,k) = (C,A,B)$, $\rho^H_{010,010}\leq \rho^H_{101,101}$ so that one can cool in that subspace but as in this case $E_B=E_A+E_C$ also happens to hold; this again is no contradiction to our claim. If one selects more than two different degeneracies from the list 1.,2., and 3., either three linearly independant degeneracies are selected, which results in $E_A=E_B=E_C=0$ and leads to no cooling as shown above, or less than three of the selected degeneracies are linearly independant and the situation reduces to one of the above treated case. This ends the proof. Optimal incoherent thermalisation {#app:thermalinc} ================================= Here we want to argue that for the case of the two-qubit machine in order to cool the target qubit maximally, the best way to make use of both thermal baths at $T_R$ and $T_H$ respectively, is to thermalise qubit B at $T_R$ and qubit C at $T_H$. To begin with, note that the only allowed unitaries are those within the degenerate subspace, as these are the only ones that preserve energy, see [Sec. \[app:degeneracy\]]{}. Any unitary within this qubit subspace can be viewed as a partial swap between the populations of the two levels (up to a change in relative phase, which does not affect cooling). Thus the maximum cooling is achieved by either swapping the populations fully, or not at all, since these are the two extremes of the achievable populations. Thus given two thermal baths, at temperatures $T_R$ and $T_H$, the optimal manner of cooling would be to thermalize qubits $B$ and $C$ in such a way as to maximize the difference in the populations of the two degenerate levels ${\vert{101}\rangle}$ and ${\vert{010}\rangle}$ before applying a full swap; i.e. maximize $p_{101} - p_{010}$, where $p_{ijk}$ denotes the population of level ${\vert{ijk}\rangle}$. 
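As a quick numerical illustration of this claim (the analytic argument follows below), the following sketch tabulates $p_{101}-p_{010}$ over a grid of machine temperatures and locates its maximum. The parameter values and the helper `ground_pop` are purely illustrative assumptions for this sketch, not quantities taken from the main text.

```python
import numpy as np

# Illustrative parameters (not from the main text): target gap E, machine gaps E_B = E + E_C
E, E_C, T_R, T_H = 1.0, 0.4, 1.0, 4.0
E_B = E + E_C

def ground_pop(gap, T):
    # Ground-state population of a qubit with energy gap `gap` at temperature T
    return 1.0 / (1.0 + np.exp(-gap / T))

r = ground_pop(E, T_R)  # target qubit A stays at the environment temperature

def bias(T_B, T_C):
    # p_101 - p_010 for the product state tau_A(T_R) x tau_B(T_B) x tau_C(T_C)
    r_B, r_C = ground_pop(E_B, T_B), ground_pop(E_C, T_C)
    return (1 - r) * r_B * (1 - r_C) - r * (1 - r_B) * r_C

temps = np.linspace(T_R, T_H, 201)
grid = np.array([[bias(tb, tc) for tc in temps] for tb in temps])
i, j = np.unravel_index(np.argmax(grid), grid.shape)
print(temps[i], temps[j])  # maximum at (T_R, T_H): B as cold and C as hot as possible
```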
Consider that we thermalize $B$ and $C$ to temperatures between those of the environment and of the hot bath (these are the extremes of temperatures available to us, and thus any temperature in between is also attainable). It is straightforward to check that $$\begin{aligned} \frac{d}{d T_B} \Big( p_{101} - p_{010} \Big) &= - \frac{ E_B r_B (1-r_B)}{T_B^2} \left( r r_C + (1-r) (1-r_C) \right) < 0 \quad \forall T_B, \\ \text{and} \quad \frac{d}{d T_C} \Big( p_{101} - p_{010} \Big) &= + \frac{E_C r_C (1-r_C)}{T_C^2} \left( r (1-r_B) + (1-r) r_B \right) > 0 \quad \forall T_C.\end{aligned}$$ Therefore, it is optimal to have qubit $B$ be as cold as possible (the environment temperature $T_R$), and qubit $C$ be as hot as possible (the temperature of the hot bath $T_H$). Thus, although the whole machine has access to a hotter thermal bath at temperature $T_H$, it is best to only put qubit $C$ in contact with it, leaving $B$ at the room temperature $T_R$. Note that the above argument also holds if the population on qubit A is set to be some other value than $r$, meaning that in the repeated incoherent operations one should also rethermalise qubit B to $T_R$ and qubit C to $T_H$ before applying the swap operation in order to maximally cool the target qubit. Single-cycle coherent machines {#app:Uopt} ============================== We want here to discuss the solution of the single-cycle coherent machines presented in the main text. More precisely, we are interested in finding the unitary $U_{\text{opt}}$ (or equivalently the state $\rho_{\text{opt}}= U_{\text{opt}} \rho^{\text{in}} U_{\text{opt}}^{\dagger}$) that enables us to cool the target to a given temperature, i.e. ground state, $r_{\text{coh}} \in [r^{\text{in}},r_{\text{coh}}^*]$ at a minimal work cost. From the discussion of the main text we know that using the Schur-Horn theorem, finding $\rho_{\text{opt}}$ for a system comprised of a target qubit and a machine of $n/2$ energy gaps amounts to solving $$\label{equ:genqubitprob} \min_{\vv{\rho} \prec \vv{\rho^{\text{in}}}} \vv{\rho} \cdot \vv{H}, \quad \text{s.t. } \sum_{i=1}^{n/2} \rho_i=r_{\text{coh}}.$$ Indeed, the solution of Equation gives us $\vv{\rho_{\text{coh}}}$ from which we can easily reconstruct $\rho_{\text{coh}}$ and $U_{\text{opt}}$. We in the following solve the problem for the one qubit machine ($n=4$) and the two qubit machine ($n=8$). We then show that , given a general machine, it is sufficient to solve two marginal problems in order to find the optimal unitary cooling the target to the lowest temperature $r_{\text{coh}}^*$. We also provide $\vv{\rho_{\text{coh}}^*}$. Coherent One-Qubit Machine\[sec:Uopt1\] --------------------------------------- We want to solve $$\label{equ:onequbitprob} \min_{\vv{\rho} \prec \vv{\rho^{\text{in}}}} \vv{\rho} \cdot \vv{H}, \quad \text{s.t. } \rho_1+\rho_2=r_{\text{coh}},$$ where the majorization conditions are simply given by $$\label{equ:onequbitMaj} \sum_{i=1}^l \rho_i^{\downarrow} \leq \sum_{i=1}^l \rho_i^{\text{in},\downarrow}, \quad \forall l=1,\dots,4,$$ with equality for $l=4$. First note that $\rho_1+\rho_2=r$ with the trace condition implies that $\rho_3+\rho_4=1-r$ such that $$\begin{aligned} \vec{\rho} \cdot \vec{H} &= \rho_2 E_B + \rho_3 E_A + \rho_4 (E_A+E_B)\\ &= (r-\rho_1 +\rho_4) E_B+ \underbrace{(1-r)E_A}_{=\text{cste}} \end{aligned}$$ such that in order to minimise $\vec{\rho} \cdot \vec{H}$, one should minimise $\rho_4-\rho_1$. 
This means that $\rho_1$ should be the greatest component of $\vec{\rho}$, namely $\rho_1= \rho_1^{\downarrow}$ and $\rho_4$ the smallest, namely $\rho_4=\rho_4^{\downarrow}$. From equation with $l=1$ we have $\rho_1=\rho_1^{\downarrow} \leq \rho_1^{\text{in}}$ and combining equation with $l=3$ with the trace condition we get $$\begin{aligned} \rho_1^{\text{in},\downarrow}+\rho_2^{\text{in},\downarrow}+\rho_3^{\text{in},\downarrow}+\rho_4^{\text{in},\downarrow}&=\rho_1^{\downarrow}+\rho_2^{\downarrow}+\rho_3^{\downarrow}+\rho_4^{\downarrow}\\ & \leq \rho_1^{\text{in},\downarrow}+\rho_2^{\text{in},\downarrow}+\rho_3^{\text{in},\downarrow}+\rho_4^{\downarrow}, \end{aligned}$$ such that $\rho_4 = \rho_4^{\downarrow} \geq \rho_4^{\text{in},\downarrow}$. In order to minimise $\rho_4-\rho_1$ we therefore have to choose $\vec{\rho}$ such that $$\begin{aligned} \rho_1&=\rho_1^{\text{in}}\\ \rho_4&=\rho_4^{\text{in}}. \end{aligned}$$ Plugging these values in the majorization conditions (equation ), we are left with $$\label{eq:majT} \begin{aligned} \rho_2^{\downarrow} &\leq \rho_3^{\text{in}}\\ \rho_2^{\downarrow}+\rho_3^{\downarrow}&= \rho_2^{\text{in}}+\rho_3^{\text{in}}. \end{aligned}$$ As $\rho_2^{\downarrow} = \max \{\rho_2,\rho_3 \}$ and $\rho_2^{\downarrow}+\rho_3^{\downarrow} = \rho_2+\rho_3$, these are exactly the conditions for $$(\rho_2,\rho_3) \prec (\rho_2^{\text{in}},\rho_3^{\text{in}}),$$ which means that one can get the majorized vector by applying a T-transform to the initial vector. That is, for some $t \in [0,1]$, $$\label{eq:Ttrafo} \begin{pmatrix} \rho_2\\ \rho_3 \end{pmatrix} = T \begin{pmatrix} \rho_2^{\text{in}}\\ \rho_3^{\text{in}} \end{pmatrix}, \quad T= \begin{pmatrix} t & 1-t \\ 1-t & t \end{pmatrix}.$$ This simply follows from the fact that in general $r \prec s$ iff there exists some doubly stochastic matrix $D$ such that $r=Ds$, and that the most general $2 \times 2$ doubly stochastic matrices are T-tranforms. Now we just have to choose t such that $\rho_1+\rho_2=r$, that is $$t=\frac{\rho_1^{\text{in}}+\rho_3^{\text{in}}-r}{\rho_3^{\text{in}}-\rho_2^{\text{in}}},$$ or equivalently $$1-t = \frac{r-r^{\text{in}}}{r_B-r^{\text{in}}}$$ or $$r=r^{\text{in}}+(1-t) (r_B-r^{\text{in}}).$$ The associated work cost is $$\begin{aligned} \Delta F &= (\vec{\rho} - \vv{\rho^{\text{in}}}) \cdot \vec{H} \\ &=(1-t) (\rho_3^{\text{in}}-\rho_2^{\text{in}}) (E_B-E_A)\\ &= (1-t) (r_B-r_A) (E_B-E_A). \end{aligned}$$ A unitary U such that $\vec{\rho}=\vv{\text{Diag}}(U\rho^{\text{in}}U^{\dagger})$ is for example given by $$U = \begin{pmatrix} 1&0&0&0\\ 0& \sqrt{1-\mu}& \sqrt{\mu}&0\\ 0& - \sqrt{\mu} & \sqrt{1-\mu} &0\\ 0&0&0&1 \end{pmatrix},$$ where $\mu = 1-t$ and can be written more compactly as $$U= e^{-i \arcsin(\sqrt{\mu}) L_{AB}},$$ with $L_{AB}= i {\vert{01}\rangle}{\langle{10}\vert} - i {\vert{10}\rangle} {\langle{01}\vert}$. Coherent Two-Qubit Machine -------------------------- We want to solve $$\label{equ:twoqubitprob} \min_{\vv{\rho} \prec \vv{\rho^{\text{in}}}} \vv{\rho} \cdot \vv{H}, \quad \text{s.t. } \sum_{i=1}^4\rho_i=r_{\text{coh}},$$ where the majorization conditions are simply given by $$\label{equ:majcond} \begin{aligned} \sum_{i=1}^k \rho^{\downarrow}_i &\leq \sum_{i=1}^k \rho^{\text{in},\downarrow}_i, \quad \forall k=1,\dots, 7\\ \sum_{i=1}^8 \rho^{\downarrow}_i &= \sum_{i=1}^8 \rho^{\text{in},\downarrow}_i. \end{aligned}$$ The ordering of the original entries is crucial to the solving of the problem. 
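Since the argument below hinges on this ordering, the following sketch simply tabulates the diagonal of $\rho^{\text{in}}$ for the two-qubit machine with $E_B=E_A+E_C$ and sorts it, reproducing the orderings used in the next paragraphs. The energy and temperature values, as well as the helper `sorted_indices`, are illustrative assumptions made only for this sketch.

```python
import numpy as np
from itertools import product

def sorted_indices(E_A, E_C, T_R=1.0):
    """Diagonal of tau_A x tau_B x tau_C at temperature T_R with E_B = E_A + E_C,
    indexed 1..8 in the order |000>, |001>, ..., |111> (qubits A, B, C; 1 = excited),
    returned as the index list sorted by decreasing population."""
    E_B = E_A + E_C
    def pops(gap):
        r = 1.0 / (1.0 + np.exp(-gap / T_R))
        return (r, 1.0 - r)                      # (ground, excited) populations
    pA, pB, pC = pops(E_A), pops(E_B), pops(E_C)
    diag = [pA[a] * pB[b] * pC[c] for a, b, c in product((0, 1), repeat=3)]
    return np.argsort(-np.array(diag)) + 1       # 1-based indices, largest first

# E_C < E_A regime: rho_1 > rho_2 > rho_5 > rho_3 = rho_6 > rho_4 > rho_7 > rho_8
print(sorted_indices(E_A=1.0, E_C=0.4))
# E_C > E_A regime: rho_1 > rho_5 > rho_2 > rho_3 = rho_6 > rho_7 > rho_4 > rho_8
print(sorted_indices(E_A=1.0, E_C=1.7))
```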
There are hence two regimes that one needs to investigate, namely $E_C \leq E_A$ and $E_C > E_A$. We begin with the $E_C \leq E_A$ regime. #### The $E_C \leq E_A$ regime. In this regime the ordering of the original diagonal entries is given by $$\label{equ:smallorder} \rho_1^{\text{in}} >\rho_2^{\text{in}}>\rho_5^{\text{in}}>\rho_3^{\text{in}}=\rho_6^{\text{in}}>\rho_4^{\text{in}}>\rho_7^{\text{in}}>\rho_8^{\text{in}}.$$ We furthermore have: $$\label{equ:rhoHsmall} \begin{aligned} \vv{\rho} \cdot \vv{H} &= \rho_2 E_C + \rho_3 E_B+ \rho_4 (E_B+E_C)+ \rho_5 E_A \\ & \quad +\rho_6 (E_A+E_C) + \rho_7 (E_A+E_B) \\ & \quad + \rho_8 (E_A+E_B+E_C). \end{aligned}$$ We will next rewrite equation , keeping in mind the ordering of the original state of equation , in a way that the majorization conditions of equation can easily be applied. First we use that $\rho_5+\rho_6+\rho_7+\rho_8=1-r$ and get $$\label{eq:reformsmall} \begin{aligned} \vv{\rho} \cdot \vv{H} &= \underbrace{(\rho_2+\rho_4)}_{r-\rho_1-\rho_3} E_C + \underbrace{(\rho_3+\rho_4)}_{r-\rho_1-\rho_2} E_B \\ & \quad + (1-r) E_A + \underbrace{(\rho_6+\rho_8)}_{1-r-\rho_5-\rho_7} E_C + (\rho_7+\rho_8) E_B\\ &=(r-\rho_1) E_C - \rho_3 E_C + (r-\rho_1-\rho_2) E_C \\ &\quad + (r-\rho_1-\rho_2)E_A+ (1-r) E_A + (1-r-\rho_5) E_C \\ &\quad - \rho_7 E_C + (\rho_7+\rho_8) E_C + (\rho_7+\rho_8) E_A\\ &= (r-\rho_1) E_C + (1-(\rho_1+\rho_2+\rho_5+\rho_3)) E_C\\ &\quad + (1-(\rho_1+\rho_2)) E_A+\rho_8 E_C + (\rho_7+\rho_8) E_A, \end{aligned}$$ where in the second step we used that $E_B=E_A+E_C$. Note that the sum of the the minima of each summand of $\vv{\rho} \cdot \vv{H}$ is for sure a lower bound to the minimum of $\vv{\rho} \cdot \vv{H}$, such that if one can pick a $\rho$ achieving this lower bound, we will have reached the minimum of $\vv{\rho} \cdot \vv{H}$. Using the last reformulation of $\vv{\rho} \cdot \vv{H}$, this is luckily possible, indeed: $$\label{equ:mincond} \begin{aligned} \rho_1 &\leq \rho_1^{\downarrow} \leq \rho_1^{\text{in}, \downarrow} =\rho_1^{\text{in}}\\ \rho_1+\rho_2+\rho_5+\rho_3 &\leq \sum_{i=1}^4 \rho_i^{\downarrow}\leq \sum_{i=1}^4 \rho_{i}^{\text{in},\downarrow}=\rho_1^{\text{in}}+\rho_2^{\text{in}}+\rho_5^{\text{in}}+\rho_3^{\text{in}}\\ \rho_1+\rho_2 &\leq \rho_1^{\downarrow} + \rho_2^{\downarrow} \leq \rho_1^{\text{in},\downarrow}+\rho_2^{\text{in},\downarrow}=\rho_1^{\text{in}}+\rho_2^{\text{in}}\\ \rho_8& \geq \rho_8^{\downarrow} \geq \rho_8^{\text{in},\downarrow} = \rho_8^{\text{in}}\\ \rho_7+\rho_8 &\geq \rho_7^{\downarrow}+\rho_8^{\downarrow} \geq\rho_7^{\text{in},\downarrow}+\rho_8^{\text{in},\downarrow}=\rho_7^{\text{in}}+\rho_8^{\text{in}}. \end{aligned}$$ To minimise the first summand we hence have to choose $\rho_1=\rho_1^{\text{in}}$. To minimise the third summand, since $\rho_1=\rho_1^{\text{in}}$, we have to pick $\rho_2=\rho_2^{\text{in}}$. To minimise the fourth summand we have to choose $\rho_8=\rho_8^{\text{in}}$ which forces us to choose $\rho_7=\rho_7^{\text{in}}$ in order to minimise the last summand. We are hence left with the minimisation of the second summand that is achieved if $$\label{equ:equal35} \rho_5+\rho_3=\rho_5^{\text{in}}+\rho_3^{\text{in}}$$ is satisfied. 
Now note that we have $$\begin{aligned} \rho_1 + \rho_2+ \max\{\rho_3, \rho_5\} &\leq\rho_1^{\downarrow} + \rho_2^{\downarrow} + \rho_3^{\downarrow} \\ &\leq \rho_1^{\text{in},\downarrow} + \rho_2^{\text{in},\downarrow} + \rho_3^{\text{in},\downarrow} \\ &= \rho_1^{\text{in}}+\rho_2^{\text{in}}+\rho_5^{\text{in}}\\ &= \rho_1+\rho_2+\rho_5^{\text{in}} \end{aligned}$$ such that $$\label{equ:bound3} \max\{\rho_3,\rho_5\} \leq \rho_5^{\text{in}}.$$ Equation and together mean that $(\rho_3,\rho_5) \prec (\rho_5^{\text{in}},\rho_3^{\text{in}})$, which we know from [Sec. \[sec:Uopt1\]]{} to be equivalent to $$\begin{pmatrix} \rho_3 \\ \rho_5 \end{pmatrix} = T_1 \begin{pmatrix} \rho_3^{\text{in}} \\ \rho_5^{\text{in}} \end{pmatrix}, \quad T_1 = \begin{pmatrix} t_1 & 1-t_1 \\ 1-t_1 & t_1 \end{pmatrix},$$ for some $t_1 \in [0,1]$. Similarly, as $\sum_{i=1}^8 \rho_i = \sum_{i=1}^8 \rho_i^{\text{in}}$ we find that $$\label{eq:equal46} \rho_4+\rho_6 = \rho_4^{\text{in}}+ \rho_6^{\text{in}}$$ and using that the second line of equation is satisfied with equality we find that $$\sum_{i=1}^4 \rho_i^{\downarrow} +\max\{\rho_4,\rho_6\} \leq \sum_{i=1}^5 \rho_i^{\downarrow} = \sum_{i=1}^5 \rho_i^{\text{in},\downarrow} = \sum_{i=1}^4 \rho_i^{\downarrow} + \rho_6^{\text{in}}$$ such that $$\label{eq:maj46} \max\{\rho_4,\rho_6\} \leq \rho_6^{\text{in}}.$$ Now equations and together mean that $(\rho_4 , \rho_6) \prec (\rho_4^{\text{in}}, \rho_6^{\text{in}})$, which as before is equivalent to $$\begin{pmatrix} \rho_4 \\ \rho_6 \end{pmatrix} = T_2 \begin{pmatrix} \rho_4^{\text{in}} \\ \rho_6^{\text{in}} \end{pmatrix}, \quad T_2 = \begin{pmatrix} t_2 & 1-t_2 \\ 1-t_2 & t_2 \end{pmatrix},$$ for some $t_2 \in [0,1]$. This means that for any $t_1$ and $t_2$, we have found a vector $\rho$ that achieves the minimum of each summand in and that therefore is the solution of our problem. Of course, for a given $r$, only some $t_1$ and $t_2$ will solve our problem, namely the ones satisfying $$\begin{aligned} r&= \rho_1+\rho_2 +\rho_3+\rho_4\\ &= \rho_1^{\text{in}}+\rho_2^{\text{in}}+t_1 \rho_3^{\text{in}}+(1-t_1) \rho_5^{\text{in}}+ t_2\rho_4^{\text{in}}+ (1-t_2) \rho_6^{\text{in}}\\ &= \sum_{i=1}^4 \rho_i^{\text{in}} + (t_1-1) \rho_3^{\text{in}} + (1-t_1) \rho_5^{\text{in}} \\ & \quad + (t_2-1) \rho_4^{\text{in}}+ (1-t_2) \rho_6^{\text{in}}\\ &= r^{\text{in}} + (1-t_1) (\rho_5^{\text{in}}-\rho_3^{\text{in}}) + (1-t_2) (\rho_6^{\text{in}} - \rho_4^{\text{in}}). 
\end{aligned}$$ Next note that $$\begin{aligned} \rho_5^{\text{in}} - \rho_3^{\text{in}}&= (1-r_A) r_B r_C - r_A (1-r_B) r_C \\ &= r_B r_C - r_A r_B r_C - r_A r_C + r_A r_B r_C \\ &= (r_B-r_A) r_C\\ \rho_6^{\text{in}}-\rho_4^{\text{in}}&=(1-r_A) r_B (1-r_C) - r_A (1-r_B) (1-r_C)\\ &= (1-r_C) ( r_B-r_A r_B -r_A + r_A r_B)\\ &= (1-r_C) (r_B-r_A) \end{aligned}$$ such that $$r= r^{\text{in}} + [(1-t_1) r_C + (1-t_2) (1-r_C)] (r_B-r_A).$$ If we were to choose $t_1=t_2 = t$ then we would have $$r= r^{\text{in}} + (1-t) (r_B-r_A).$$ Now the work cost of carrying this process is $$\begin{aligned} \Delta F &= (\vv{\rho}- \vv{\rho^{\text{in}}}) \cdot \vv{H}\\ &= (t_1 \rho_3^{\text{in}}+ (1-t_1) \rho_5^{\text{in}}-\rho_3^{\text{in}}) E_B\\ & \quad + (t_2 \rho_4^{\text{in}} + (1-t_2) \rho_6^{\text{in}}-\rho_4^{\text{in}}) (E_B+E_C)\\ &\quad + ((1-t_1) \rho_3^{\text{in}}+ t_1 \rho_5^{\text{in}}-\rho_5^{\text{in}}) E_A\\ & \quad +((1-t_2) \rho_4^{\text{in}}+ t_2 \rho_6^{\text{in}}-\rho_6^{\text{in}}) (E_A+E_C)\\ &= (1-t_1) (\rho_5^{\text{in}}-\rho_3^{\text{in}}) E_B\\ &\quad + (1-t_2) (\rho_6^{\text{in}} -\rho_4^{\text{in}}) (E_B+E_C)\\ & \quad + (1-t_1) (\rho_3^{\text{in}}-\rho_5^{\text{in}}) E_A\\ & \quad + (1-t_2) (\rho_4^{\text{in}}-\rho_6^{\text{in}}) (E_A+ E_C)\\ &=[(1-t_1) (\rho_5^{\text{in}}-\rho_3^{\text{in}}) + (1-t_2) (\rho_6^{\text{in}}-\rho_4^{\text{in}}) ] E_C\\ &=[(1-t_1) r_C + (1-t_2) (1-r_C) ] (r_B-r_A) E_C. \end{aligned}$$ If we choose $t_1=t_2=t$ then $$\Delta F = (1-t) (r_B-r_A) E_C.$$ A unitary $U$ such that $\vv{\rho} = \vv{\text{Diag}} (U \rho^{\text{in}} U^{\dagger})$ is for example given by $$U=\begin{pmatrix} 1&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0\\ 0&0&\sqrt{1-\mu_1}&0&\sqrt{\mu_1}&0&0&0\\ 0&0&0&\sqrt{1-\mu_2}&0&\sqrt{\mu_2}&0&0\\ 0&0&- \sqrt{\mu_1}&0&\sqrt{1-\mu_1}&0&0&0\\ 0&0&0&-\sqrt{\mu_2}&0&\sqrt{1-\mu_2}&0&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&1\\ \end{pmatrix},$$ where $\mu_1=1-t_1$ and $\mu_2= 1-t_2$. If we choose $t_1=t_2=t$ then $\mu_1=\mu_2=\mu$ and $$U=\begin{pmatrix} 1&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0\\ 0&0&\sqrt{1-\mu}&0&\sqrt{\mu}&0&0&0\\ 0&0&0&\sqrt{1-\mu}&0&\sqrt{\mu}&0&0\\ 0&0&- \sqrt{\mu}&0&\sqrt{1-\mu}&0&0&0\\ 0&0&0&-\sqrt{\mu}&0&\sqrt{1-\mu}&0&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&1\\ \end{pmatrix}$$ can be compactly written as $$U=e^{-i \arcsin(\sqrt{\mu}) L_{AB}}, \quad L_{AB}=- i {\vert{01}\rangle} {\langle{10}\vert}_{AB} - i {\vert{10}\rangle}{\langle{01}\vert}_{AB}.$$ #### The $E_C > E_A$ regime In this regime the ordering of the original diagonal entries is given by $$\label{equ:bigorder} \rho_1^{\text{in}} >\rho_5^{\text{in}}>\rho_2^{\text{in}}>\rho_3^{\text{in}}=\rho_6^{\text{in}}>\rho_7^{\text{in}}>\rho_4^{\text{in}}>\rho_8^{\text{in}}.$$ As before, we would like to reshuffle the terms of $$\label{equ:rhoHbig} \begin{aligned} \vv{\rho} \cdot \vv{H} &= \rho_2 E_C + \rho_3 E_B+ \rho_4 (E_B+E_C)+ \rho_5 E_A \\ & \quad +\rho_6 (E_A+E_C) + \rho_7 (E_A+E_B) \\ &\quad + \rho_8 (E_A+E_B+E_C) \end{aligned}$$ such that each summand can be minimised. 
So we get $$\begin{aligned} \vv{\rho} \cdot \vv{H} &= (\rho_2+\rho_4) E_C+ (r-\rho_1-\rho_2) \underbrace{E_B}_{=E_A+E_C}+\rho_5 E_A\\ &\quad +\rho_6 (E_A+E_C)+(1-r-\rho_5-\rho_6) (E_A+E_B)\\ & \quad +\rho_8 E_C\\ &= -\rho_1 E_C+ (-\rho_1-\rho_2) E_A +(-\rho_5) (E_A+E_C)\\ & \quad +\rho_4 E_C +(-\rho_6) E_A+\rho_8 E_C\\ &\quad +\underbrace{(1-r) (E_A+E_B)+r E_B}_c\\ &= -\rho_1 E_C+ (-\rho_1-\rho_5-\rho_2) E_A +\\ &\quad (r-1+\rho_6+\rho_7+\rho_8) E_C+ (\rho_4+\rho_8) E_C \\ & \quad + (-\rho_6) E_A+c\\ &= -\rho_1 E_C+ (-\rho_1-\rho_5-\rho_2) E_A\\ &\quad + \rho_6 (E_C-E_A)+(\rho_7+\rho_4+\rho_8) E_C +\rho_8 E_C \\ & \quad +\underbrace{c+(r-1)E_C}_d\\ &= -\rho_1 E_C+ (-\rho_1-\rho_5-\rho_2) E_A + d+ \rho_8 E_C\\ & \quad+(\rho_6+\rho_7+\rho_4+\rho_8) (E_C-E_A) \\ & \quad +(\rho_7+\rho_4+\rho_8) E_A. \end{aligned}$$ Now looking at each summand and applying the majorization conditions with the order of the original vector that we know we get: $$\label{equ:fixbigEC} \begin{alignedat}{3} &\rho_1 \leq \rho_1^{\text{in}} &&\Rightarrow \rho_1=\rho_1^{\text{in}}\\ &\rho_1+\rho_5+\rho_2 \leq \rho_1^{\text{in}}+\rho_5^{\text{in}}+\rho_2^{\text{in}} && \hspace{-0.76cm}\Rightarrow \rho_5+\rho_2=\rho_5^{\text{in}}+\rho_2^{\text{in}}\\ &\rho_8 \geq \rho_8^{\text{in}}&& \Rightarrow \rho_8= \rho_8^{\text{in}}\\ &\rho_7+\rho_4+\rho_8 \geq \rho_7^{\text{in}}+\rho_4^{\text{in}}+\rho_8^{\text{in}} &&\hspace{-0.76cm}\Rightarrow \rho_7+\rho_4=\rho_7^{\text{in}}+\rho_4^{\text{in}}\\ &\rho_6+\rho_7+\rho_4+\rho_8 \geq \rho_6^{\text{in}}+\rho_7^{\text{in}}+\rho_4^{\text{in}}+\rho_8^{\text{in}} &&\Rightarrow \rho_6=\rho_6^{\text{in}}. \end{alignedat}$$ Furthermore, note that out of $\sum_{i=1}^8 \rho_i=\sum_{i=1}^8 \rho_i^{\text{in}}$ and the above fixed values we get $$\rho_3=\rho_3^{\text{in}}.$$ Also using the majorization conditions, we have $$\begin{aligned} \rho_1+\max\{\rho_5,\rho_2\}\leq \rho_1^{\text{in}}+\rho_5^{\text{in}} \Rightarrow \max\{\rho_5,\rho_2\}\leq \rho_5^{\text{in}}\\ \min\{\rho_5,\rho_2\}+\rho_8 \geq \rho_4^{\text{in}}+\rho_8^{\text{in}}\Rightarrow \min\{\rho_5,\rho_2\} \geq \rho_4^{\text{in}},\\ \end{aligned}$$ which together with means that $(\rho_5 , \rho_2) \prec (\rho_5^{\text{in}}, \rho_2^{\text{in}})$ and $(\rho_4 , \rho_7) \prec (\rho_4^{\text{in}}, \rho_7^{\text{in}})$ which is equivalent to $$\begin{pmatrix} \rho_5 \\ \rho_2 \end{pmatrix} = T_1 \begin{pmatrix} \rho_5^{\text{in}} \\ \rho_2^{\text{in}} \end{pmatrix}, \quad T_1 = \begin{pmatrix} t_1 & 1-t_1 \\ 1-t_1 & t_1 \end{pmatrix}$$ and $$\begin{pmatrix} \rho_4 \\ \rho_7 \end{pmatrix} = T_2 \begin{pmatrix} \rho_4^{\text{in}} \\ \rho_7^{\text{in}} \end{pmatrix}, \quad T_2 = \begin{pmatrix} t_2 & 1-t_2 \\ 1-t_2 & t_2 \end{pmatrix},$$ for some $t_1 \in [0,1]$ and $t_2 \in [0,1]$. This means that for any $t_1$ and $t_2$, the vector $\rho$ is the solution of our problem. Of course, for a given $r$, only some $t_1$ and $t_2$ will solve our problem, namely the ones satisfying $$\begin{aligned} r&= \rho_1+\rho_2 +\rho_3+\rho_4\\ &= r^{\text{in}} + (1-t_1) (\rho_5^{\text{in}}-\rho_2^{\text{in}}) + (1-t_2) (\rho_7^{\text{in}} - \rho_4^{\text{in}})\\ &= r^{\text{in}} + [(1-t_1) r_B + (1-t_2) (1-r_B)] (r_C-r_A). \end{aligned}$$ If we were to choose $t_1=t_2 = t$ then we would have $$r= r^{\text{in}} + (1-t) (r_C-r_A).$$ Now the work cost of carrying this process is $$\begin{aligned} \Delta F &= (\vv{\rho}- \vv{\rho^{\text{in}}}) \cdot \vv{H}\\ &=[(1-t_1) r_B + (1-t_2) (1-r_B) ]\\ &\quad \cdot (r_C-r_A) (E_C-E_A). 
\end{aligned}$$ If we choose $t_1=t_2=t$ then $$\Delta F = (1-t) (r_C-r_A) (E_C-E_A).$$ A unitary $U$ such that $\vv{\rho} = \vv{\text{Diag}} (U \rho^{\text{in}} U^{\dagger})$ is for example given by $$U=\begin{pmatrix} 1&0&0&0&0&0&0&0\\ 0&\sqrt{1-\mu_1}&0&0&\sqrt{\mu_1}&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&\sqrt{1-\mu_2}&0&0&\sqrt{\mu_2}&0\\ 0&- \sqrt{\mu_1}&0&0&\sqrt{1-\mu_1}&0&0&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&-\sqrt{\mu_2}&0&0&\sqrt{1-\mu_2}&0\\ 0&0&0&0&0&0&0&1\\ \end{pmatrix},$$ where $\mu_1=1-t_1$ and $\mu_2= 1-t_2$. If we choose $t_1=t_2=t$ then $\mu_1=\mu_2=\mu$ and $U$ can be compactly written as $$U=e^{-i \arcsin(\sqrt{\mu}) L_{AC}}, \quad L_{AC}=- i {\vert{01}\rangle} {\langle{10}\vert}_{AC} - i {\vert{10}\rangle}{\langle{01}\vert}_{AC}.$$ Note however that upon applying this procedure one only finds the solution of our problem for $$r_A \leq r \leq \rho_1^{\text{in}}+\rho_5^{\text{in}}+\rho_3^{\text{in}}+\rho_7^{\text{in}}.$$ To find the solution for $$\rho_1^{\text{in}}+\rho_5^{\text{in}}+\rho_3^{\text{in}}+\rho_7^{\text{in}} \leq r \leq \rho_1^{\text{in}}+\rho_5^{\text{in}}+\rho_2^{\text{in}}+\rho_3^{\text{in}}$$ we use another rearrangement of terms of $\vv{\rho} \cdot \vv{H}$, namely $$\begin{aligned} \vv{\rho} \cdot \vv{H} &= \rho_2 E_C + (r-\rho_1-\rho_2) E_B+\rho_4 E_C\\ & \quad +(1-r-\rho_7-\rho_8) E_A + \rho_6 E_C \\ &\quad + (\rho_7+\rho_8) (E_A+E_B) +\rho_8 E_C\\ &= -\rho_1 E_C + (-\rho_1-\rho_2) E_A + r E_B + (1-r) E_A\\ &\quad + (\rho_4+\rho_6+\rho_7+\rho_8) E_C + (\rho_7+ \rho_8) E_A + \rho_8 E_C. \end{aligned}$$ By looking at each summand individually we find that $$\begin{alignedat}{3} &\rho_1 \leq \rho_1^{\text{in}} &&\Rightarrow \rho_1=\rho_1^{\text{in}}\\ &\rho_1+\rho_2 \leq \rho_1^{\text{in}}+\rho_5^{\text{in}} &&\Rightarrow \rho_2=\rho_5^{\text{in}}\\ &\rho_8 \geq \rho_8^{\text{in}}&& \Rightarrow \rho_8= \rho_8^{\text{in}}\\ &\rho_7+\rho_8 \geq \rho_4^{\text{in}}+\rho_8^{\text{in}} &&\Rightarrow \rho_7=\rho_4^{\text{in}}\\ &\rho_6+\rho_7+\rho_4+\rho_8 \geq \rho_6^{\text{in}}+\rho_7^{\text{in}}+\rho_4^{\text{in}}+\rho_8^{\text{in}} && \\ & &&\hspace{-1.6 cm}\Rightarrow \rho_4+\rho_6=\rho_6^{\text{in}}+\rho_7^{\text{in}}. \end{alignedat}$$ Using the trace condition we find that $$\rho_3+\rho_5=\rho_2^{\text{in}}+\rho_3^{\text{in}}.$$ As before this leads to $(\rho_5 , \rho_3) \prec (\rho_2^{\text{in}}, \rho_3^{\text{in}})$ and $(\rho_6 , \rho_4) \prec (\rho_6^{\text{in}}, \rho_7^{\text{in}})$ which is equivalent to $$\begin{pmatrix} \rho_5 \\ \rho_3 \end{pmatrix} = T_1 \begin{pmatrix} \rho_2^{\text{in}} \\ \rho_3^{\text{in}} \end{pmatrix}, \quad T_1 = \begin{pmatrix} t_1 & 1-t_1 \\ 1-t_1 & t_1 \end{pmatrix}$$ and $$\begin{pmatrix} \rho_6 \\ \rho_4 \end{pmatrix} = T_2 \begin{pmatrix} \rho_6^{\text{in}} \\ \rho_7^{\text{in}} \end{pmatrix}, \quad T_2 = \begin{pmatrix} t_2 & 1-t_2 \\ 1-t_2 & t_2 \end{pmatrix},$$ for some $t_1 \in [0,1]$ and $t_2 \in [0,1]$. This means that for any $t_1$ and $t_2$, the vector $\rho$ is the solution of our problem. Of course, for a given $r$, only some $t_1$ and $t_2$ will solve our problem, namely the ones satisfying $$\begin{aligned} r&= \rho_1+\rho_2 +\rho_3+\rho_4\\ &= r_C + [(1-t_1) r_A + (1-t_2) (1-r_A)] (r_B-r_C). \end{aligned}$$ If we were to choose $t_1=t_2 = t$ then we would have $$r= r_C+ (1-t) (r_B-r_C).$$ Now the work cost of carrying this process is $$\begin{aligned} \Delta F &= (\vv{\rho}- \vv{\rho^{\text{in}}}) \cdot \vv{H}\\ &=(r_C-r_A) (E_C-E_A)\\ &\quad+[(1-t_1) r_A + (1-t_2) (1-r_A) ] (r_B-r_C) E_C. 
\end{aligned}$$ If we choose $t_1=t_2=t$ then $$\Delta F = (r_C-r_A) (E_C-E_A)+(1-t) (r_B-r_C) E_C.$$ A unitary U such that $\vv{\rho}=\vv{\text{Diag}} (U \rho^{\text{in}} U^{\dagger})$ is given by $$U=U_{35}(\mu_1) U_{46}(\mu_2) U_{25}(1) U_{57}(1),$$ where $\mu_1=1-t_1$, $\mu_2= 1-t_2$, and, for $\mu \in [0,1]$, $$U_{ij}(\mu):= \begin{pmatrix} \sqrt{1-\mu}&\sqrt{\mu}\\ -\sqrt{\mu}&\sqrt{1-\mu} \end{pmatrix}_{ij} \oplus \mathds{1}_{\bar{ij}}.$$ If we choose $t_1=t_2=t$ then $\mu_1=\mu_2=\mu$ and $U$ can be written as $$U=e^{-i \arcsin(\sqrt{\mu}) L_{AB}} e^{-i \pi/2 L_{AC}},$$ with $L_{AC}=- i {\vert{01}\rangle} {\langle{10}\vert}_{AC} - i {\vert{10}\rangle}{\langle{01}\vert}_{AC}$ and $L_{AB}=- i {\vert{01}\rangle} {\langle{10}\vert}_{AB} - i {\vert{10}\rangle}{\langle{01}\vert}_{AB}$. We can also summarise both parts of the solution in one unitary. Then U looks like $$U=U_{35}(\mu_2) U_{46}(\mu_2) U_{25}(\mu_1) U_{57}(\mu_1),$$ with $\mu_1= \min(2 \mu,1), \mu_2= \max(2 \mu-1,0)$, and $\mu \in [0,1]$. Or $$U=e^{-i \arcsin(\sqrt{\mu_2}) L_{AB}} e^{-i \arcsin(\sqrt{\mu_1}) L_{AC}}.$$ Endpoint of Arbitrary Single-Cycle Machines\[subsec:endpoint\] -------------------------------------------------------------- We here want to find the solution of the problem of Equation when r is chosen to be the maximally allowed r. We set $k=n/2$. We know that r is at most $r_{\text{coh}}^*=\sum_{i=1}^k \rho_i^{\text{in},\downarrow}$ since $$r= \sum_{i=1}^k \rho_i \leq \sum_{i=1}^k \rho_i^{\downarrow} \leq \sum_{i=1}^k \rho_i^{\text{in},\downarrow}$$ and choosing $\rho_i=\rho_i^{\text{in},\downarrow},\, i=1,\dots,n$ achieves this upper bound, i.e. then $\vv{\rho} \prec \vv{\rho^{\text{in}}}$ and $r=r_{\text{coh}}^*$. We next want to show that \[lemma:biggest\_entries\] For any state $\rho$ such that $r_{\rho}=r_{\text{coh}}^*$, the first k entries of the state are its biggest entries. Suppose not, i.e. there exists a $\rho$ as in the statement for which there exist $ i \leq k, \text{ and } j >k \text{ such that } \rho_i < \rho_j$. Then $\vv{\rho'}:= P_{ij} \vv{\rho} \prec \vv{\rho} \prec \vv{\rho^{\text{in}}}$ and $r' = \sum_{l=1}^k \rho_l' = \sum_{l=1, l \neq i}^k \rho_l+\rho_j > \sum_{l=1, l \neq i}^k \rho_l+\rho_i= \sum_{l=1}^k \rho_l=r_{\rho}$. As $\vv{\rho'} \prec \vv{\rho^{\text{in}}}, \, r' \leq r_{\text{coh}}^*$ so $r_{\rho} < r_{\text{coh}}^*$ in contradiction with the assumption. Writing $\vv{\rho}$ as $$\vv{\rho} =(v_{\rho}, w_{\rho}),$$ with $$\begin{aligned} v_{\rho}&=((v_{\rho})_1,\dots,(v_{\rho})_k) :=(\rho_1,\dots, \rho_k)\\ w_{\rho}&=((w_{\rho})_{1},\dots,(w_{\rho})_{n-k}) :=(\rho_{k+1},\dots, \rho_n), \end{aligned}$$ the above lemma can be reformulated as $$v_{\rho}^{\downarrow} = (\rho_1^{\downarrow}, \dots, \rho_k^{\downarrow}).$$ What makes the above equation non trivial is that on the left hand side only the first k entries of $\rho$ are reordered whereas on the right hand side all the entries of $\rho$ are reordered. Using that $\vv{\rho} \prec \vv{\rho^{\text{in}}}$ we have for all $l=1,\dots, k$ that $$\sum_{i=1}^l (v_{\rho}^{\downarrow})_i = \sum_{i=1}^l \rho_i^{\downarrow} \leq \sum_{i=1}^l \rho_i^{\text{in},\downarrow},$$ with equality for $l=k$. This is equivalent to $v_{\rho} \prec v_{\rho^{\text{in},\downarrow}}$. 
Also note that the lemma implies that the last $n-k$ entries of $\vv{\rho}$ are the smallest $n-k$ ones, that is $$w_{\rho}^{\downarrow} =(\rho_{k+1}^{\downarrow}, \dots, \rho_{n}^{\downarrow}).$$ Again using that $\vv{\rho} \prec \vv{\rho^{\text{in}}}$ we find that for all $l=1,\dots,n-k$, $$\sum_{i=1}^{l} (w_{\rho}^{\downarrow})_i= \sum_{i=1}^{l} \rho_{k+i}^{\downarrow} \leq \sum_{i=1}^{l} \rho_{k+i}^{\text{in},\downarrow}$$ with equality for $l=n-k$. This is equivalent to $w_{\rho} \prec w_{\rho^{\text{in},\downarrow}}$. So we have proven that If $\vv{\rho}$ satisfies $r_{\rho} = r_{\text{coh}}^*$, then $$\vv{\rho} \prec \vv{\rho^{\text{in}}} \Leftrightarrow v_{\rho} \prec v_{\rho^{\text{in},\downarrow}} \text{ and } w_{\rho} \prec w_{\rho^{\text{in},\downarrow}},$$ where $\vv{\rho} = (\underbrace{\rho_1,\dots \rho_k}_{v_{\rho}},\underbrace{\rho_{k+1},\dots \rho_n}_{ w_{\rho}})$. Indeed, the reverse implication is trivially satisfied. This means that $\rho$ is the solution of $$\label{equ:endprob} \min_{\rho \prec \rho^{\text{in}}} \vv{\rho}\cdot \vv{H} , \text{ s.t. } \sum_{i=1}^k \rho_i = r_{\text{coh}}^*$$ iff $v_{\rho}$ and $w_{\rho}$ are solutions of $$\begin{aligned} \min_{v_{\rho} \prec v_{\rho^{\text{in},\downarrow}}}& v_{H} \cdot v_{\rho}\\ \min_{w_{\rho} \prec w_{\rho^{\text{in},\downarrow}}}& w_{H} \cdot w_{\rho}, \end{aligned}$$ where we split $H$ in the same way as $\rho$ in $H=(v_H,w_H)$. That is, we have reformulated the original constrained problem into two marginal unconstrained problems. The minima of $v_H \cdot v_{\rho}$ and $w_H \cdot w_{\rho}$ are given by $\sum_{i=1}^k (v_H^{\downarrow})_i (v_{\rho}^{\uparrow})_i$ and $\sum_{i=1}^{n-k} (w_H^{\downarrow})_i (w_{\rho}^{\uparrow})_i$, that is, they are attained when the entries of $v_{\rho}$ and $w_{\rho}$ are inversely ordered with respect to the ones of $v_H$ and $w_H$, respectively. This uniquely defines the $\vv{\rho}$ that minimises and solves the endpoint problem. Single-cycle endpoint free energy {#app:endpoint} ================================= We want here to argue that in the two-qubit machine one always needs less free energy to reach the endpoint in the single-cycle coherent scenario than in the single-cycle incoherent scenario. This is formulated in the following: $\Delta F_{\text{coh}}^* \leq \Delta F^*_{\text{inc}}$ with equality iff $T_R \rightarrow +\infty$, $E \rightarrow 0$, $E_C \rightarrow 0$, or $E_C \rightarrow +\infty$. If $T_R \rightarrow +\infty$ one sees directly that for both cases of $E_C \leq E$ and $E_C > E$, $\Delta F_{\text{coh}}^* =0= \Delta F^*_{\text{inc}}$. If $E \rightarrow 0$, then also in both cases $\Delta F_{\text{coh}}^*=\Delta F^*_{\text{inc}}$. If $E_C \rightarrow 0$ we also trivially have $\Delta F_{\text{coh}}^* =0= \Delta F^*_{\text{inc}}$. If $E_C \rightarrow +\infty$ both terms go to infinity as $\mathcal{O}(E_C)$ and are in that sense equal. Similarly one sees that if $T_R \rightarrow 0$ or $E \rightarrow + \infty$, $E_C (r_B-r) < \frac{E_C}{2}= \Delta F^*_{\text{inc}}.$ Else, assuming $E_C, E, T_R \notin \{0, + \infty\}$, note that as for $E_C >E$ we have $$\Delta F_{\text{coh}}^*= E_C (r_B-r)-E (r_C-r),$$ the work cost in the coherent scenario is always bounded by $E_C (r_B-r)$. In order to prove our point we hence only need to prove that $E_C (r_B-r) < \Delta F^*_{\text{inc}}$. To do so we look at $$f(T_R) = r_C+r-r_B-\frac{1}{2}$$ for fixed $E_C, E \in ]0, + \infty[$. As $$f(0)=\frac{1}{2}, \quad f(+\infty)=0,$$ if $f'(T_R) <0$ our point is proven. 
We hence calculate $$\begin{aligned} f'(T_R)&= -\frac{1}{T_R^2} \left[ E_C r_C (1-r_C) + E r (1-r)-E_B r_B (1-r_B) \right]\\ &= -\frac{1}{T_R^2} \left[ E_C \underbrace{[ r_C (1-r_C)- r_B (1-r_B)]}_{>0}+ E \underbrace{ [ r (1-r)- r_B (1-r_B)]}_{>0}\right]<0, \end{aligned}$$ where in the second step we used that $g(r)= r (1-r)$ is strictly decreasing on $]\frac{1}{2}, 1[$ as well as $\frac{1}{2} < r_C < r_B <1$ and $\frac{1}{2} < r < r_B <1$. This ends the proof. Crossing point {#app:cross} ============== In Sec. VI C of the main text we demonstrate the existence of a critical point $(\Delta F_{\text{crit}},T_{\text{crit}})$ beyond which the coherent scenario outperforms the incoherent one in the single-cycle regime. Note that as both curves start at the same point, this critical point is not the only crossing point of both curves. Our numerical results though strongly suggest that those are the only two. We want here to study the behavior of the more interesting crossing point, $(\Delta F_{\text{crit}},T_{\text{crit}})$, when one varies the environment temperature $T_R$ and the energy gap $E_C$. In [Fig. \[fig:crossingFT\]]{} we analyse the behaviour of $\Delta F_{\text{crit}}$ as a function of $T_R$ for fixed $E_C$. Apart from the fact that the curves seem smooth, it is interesting to note that they all exhibit a maximum for some environmental temperature. This point corresponds to the environmental temperature for which the crossing between coherent and incoherent occurs at the lowest temperature of the target qubit (i.e. at maximum cooling). Treating the resource internally {#app:internal} ================================ Instead of treating the resource as an external supply, one can instead consider part of the machine to be the resource itself. We showcase here what such a standpoint would lead to for the two-qubit machine when considering qubit C to be the resource. One can then ask the same question, namely how the fully entropic (incoherent) and the non-entropic (coherent) ways of supplying free energy compare in terms of - reachable temperatures - reachable temperatures for a given work cost. The incoherent scenario translates to exchanging qubit C with a qubit at a hotter temperature $T_H$ and then performing the energy conserving unitary in the subspace $\text{span}({\vert{010}\rangle},{\vert{101}\rangle})$. The free energy difference is now calculated in terms of the system state since the state itself is the resource. We hence have for the final free energy $$\begin{aligned} F^{\text{fin}} &= \langle H\rangle_{\rho^{\text{fin}}} -T_R S_{\rho^{\text{fin}}}\\ &=\text{Tr}(\rho^{\text{fin}} H) + T_R \; \text{Tr}[\rho^{\text{fin}} \ln(\rho^{\text{fin}})]\\ &=\text{Tr}(\rho^{\text{fin}} [H + T_R \ln(\rho^{\text{fin}})])\\ &=T_R \ln (r r_B r^H_C) + E_C ( 1- \frac{T_R}{T_H}) (1-r^H_C) . \end{aligned}$$ To calculate the initial free energy note that the initial state $\rho^{\text{in}} = \tau \otimes \tau_B \otimes \tau_C$ is the same as $\rho^H = \tau \otimes \tau_B \otimes \tau^H_C$ with $T_H = T_R$. Hence by setting $T_H=T_R$ in the above result $$F^{\text{in}} = T_R \ln(r r_B r_C).$$ Therefore $$\Delta F_{\text{inc,int}} = F^{\text{fin}} - F^{\text{in}}= E_C \frac{T_H-T_R}{T_H} (1-r^H_C) + T_R \ln(\frac{r^H_C}{r_C}).$$ The temperature achieved on the target qubit is the same as in the single-cycle incoherent scenario of Sec. 
VI A of the main text and reads $$\begin{aligned} r_{\text{inc,int}}&= r r_B+ (1-r_C^H)((1-r) r_B+r (1-r_B))\\ T_{\text{inc,int}}&= \frac{E}{\ln{\frac{r_{\text{inc,int}}}{1-r_{\text{inc,int}}}}}.\end{aligned}$$ The coherent scenario allows one to implement any unitary on qubit C and then perfoming the energy conserving unitary on the 3 qubit system in the subspace $\text{span }({\vert{010}\rangle},{\vert{101}\rangle})$. After applying the unitary to qubit C the state looks like $$\rho^U= \tau \otimes \tau_B \otimes U \tau_C U^{\dagger},$$ where $$U=\begin{pmatrix} a&b\\ -b^* e^{i \theta}&a^* e^{i \theta} \end{pmatrix},$$ with $\theta \in [0, 2 \pi]$ and $\lvert a \rvert ^2+ \lvert b \rvert ^2 =1, \, a,b \in \mathbb{C}$. And hence $$\begin{split} U \tau_C U^{\dagger} &= \begin{pmatrix} (1-\lvert b \rvert^2) r_C+ \lvert b \rvert^2 (1-r_C)& ab e^{-i \theta} (1-2r_C)\\ a^* b^* e^{i \theta} (1-2r_C) & \lvert b\rvert^2 r_C+ (1-\lvert b \rvert^2) (1-r_C) \end{pmatrix}\\ &=\begin{pmatrix} (1-\mu) r_C+\mu(1-r_C)& \sqrt{\mu (1-\mu)} (1-2r_C)\\ \sqrt{\mu (1-\mu)} (1-2r_C)&\mu r_C+(1-\mu) (1-r_C) \end{pmatrix}\\ &=:\begin{pmatrix} r_C^U&z\\ z& 1-r_C^U \end{pmatrix} \end{split},$$ where in the second step we made the choice of a and b being real, $\theta=0$, and $b^2=\mu$. Note that making this choice does not influence the perfomance of U since for this only the value of $r_C^U$, which is not altered by the choice, matters. In any case, to maximally cool the target qubit for a given state of qubit C, one notices that the energy conserving unitary $U_{\text{cons}}$ need be chosen as $U_{\text{cons}}= \begin{pmatrix} 0&1\\ 1&0 \end{pmatrix}$ in the $\text{span}({\vert{010}\rangle}, {\vert{101}\rangle})$ subspace and as identity elsewhere, such that for the final state $\rho^{\text{fin}}:= U_{\text{cons}} \rho^U U_{\text{cons}}^{\dagger}$ we have $$\text{Tr}_{BC}(\rho^{\text{fin}})=\begin{pmatrix} r_{\text{coh,int}}&0\\ 0&1-r_{\text{coh,int}}^f \end{pmatrix},$$ with $r_{\text{coh,int}}:=r r_B+ (1-r_C^U) [(1-r) r_B+ r (1-r_B)]$. And so the final temperature is obtained as usual by $$T_{\text{coh,int}}= \frac{E}{\ln{\frac{r_{\text{coh,int}}}{1-r_{\text{coh,int}}}}}.$$ The free energy cost is obtained as $$\Delta F_{\text{coh,int}}= \Delta \langle H\rangle_{\rho} - T_R \Delta S_{\rho}$$ Note that as the transformations are all unitaries, $\Delta S_{\rho}=0$ and so we have $$\begin{aligned} \Delta F_{\text{coh,int}}&= \Delta \langle H\rangle_{\rho}\\ &=\text{Tr}\left((\rho^U-\rho^{\text{in}}) H\right)\\ &=(r_C-r_C^U) E_C. \end{aligned}$$ We are now in a position to map out the amount of cooling vs. the associated work cost for both scenarios and compare them. This is displayed in [Fig. \[fig:cohVSincohinternal\]]{}. Note however that those plots will never cross. Indeed by choosing the same cooling in both scenarios, i.e. $T_{\text{inc,int}}=T_{\text{coh,int}}$, we have $$\begin{aligned} T_{\text{coh,int}}=T_{\text{inc,int}} &\Leftrightarrow r_{\text{coh,int}}=r_{\text{inc,int}}\\ &\Leftrightarrow r_C^U=r_C^H\\ &\Rightarrow \langle H \rangle_{\rho^U}=\langle H \rangle_{\rho^H}\\ &\Rightarrow \Delta F_{\text{coh,int}}=\langle H \rangle_{\rho^U} > \Delta F_{\text{inc,int}}=\langle H \rangle_{\rho^H}-T_R\Delta S_{\rho^H}; \end{aligned}$$ meaning that for each temperature that both the incoherent and the coherent scenarios can reach, the incoherent scenario outperforms the coherent one. 
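As a numerical check of this chain of inequalities, the following sketch compares the two costs when the coherent operation is tuned to reach the same final temperature as the incoherent one (i.e. $r_C^U=r_C^H$). All parameter values and the helpers `ground_pop` and `cooled_pop` are illustrative assumptions made for this sketch only; the free-energy expressions are the ones derived above.

```python
import numpy as np

# Illustrative parameters (hypothetical values)
E, E_C, T_R, T_H = 1.0, 0.6, 1.0, 5.0
E_B = E + E_C

def ground_pop(gap, T):
    # Ground-state population of a qubit with energy gap `gap` at temperature T
    return 1.0 / (1.0 + np.exp(-gap / T))

r, r_B, r_C = ground_pop(E, T_R), ground_pop(E_B, T_R), ground_pop(E_C, T_R)
r_C_H = ground_pop(E_C, T_H)

def cooled_pop(r_C_resource):
    # Ground-state population of the target after the full swap on span{|010>,|101>}
    return r * r_B + (1 - r_C_resource) * ((1 - r) * r_B + r * (1 - r_B))

# Incoherent internal scenario: qubit C thermalised to T_H
dF_inc = E_C * (1 - T_R / T_H) * (1 - r_C_H) + T_R * np.log(r_C_H / r_C)

# Coherent internal scenario tuned to the same final temperature: choose r_C^U = r_C^H
dF_coh = (r_C - r_C_H) * E_C

print(cooled_pop(r_C_H))   # identical final ground-state population in both scenarios
print(dF_inc, dF_coh)      # dF_coh exceeds dF_inc, as argued above
```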
However, the coherent scenario can always reach lower temperatures than the incoherent one, that is $T_{\text{coh,int}}^* < T_{\text{inc,int}}^*$. This hence settles the comparison of both scenarios in a much more trivial way than in the external resource case. The swap operation and the virtual qubit as a basis for cooling operations {#app:virtualqubit} ========================================================================== In all of the paradigms discussed in this work, the operation that causes the target qubit to be cooled down is a swap operation between the target qubit and a qubit subspace in the joint system of the machine qubits. The latter can, but need not be, either one of the machine qubits. The effect of this swap operation can be very simply understood in terms of the “virtual qubit" subspace of the machine qubits. This section presents the cooling effect of the swap in terms of the virtual qubit, as was done in [@silva16]. All of the results in the case of repeated operations (and some of those in the single-cycle regime) follow from this argument. For a proof of the statement, see [@silva16], Appendix A. Let $A$ be a real (target) qubit system that begins in a state that is diagonal w.r.t. the energy eigenbasis (denoted by $\{{\vert{0}\rangle},{\vert{1}\rangle}\}$, with the population of its ground state (i.e. the corresponding diagonal element in the density matrix) denoted by $r$. Denote the energy difference between the two levels by $E$. In addition, consider another system $M$ (representing the machine), that has in particular a two-dimensional subspace spanned by the energy eigenstates $\{{\vert{E_g}\rangle},{\vert{E_e}\rangle}\}$, this subspace is referred to as the “virtual qubit". We denote by $E_V = E_e - E_g$ the energy gap of the virtual qubit. The initial state of the machine, expressed as a density matrix in the energy eigenbasis, is assumed to have no coherence w.r.t. the eigenstates of the virtual qubit, i.e. the coefficients of ${\vert{E_g}\rangle}{\langle{E_i}\vert}$ are zero for all $i$ (except the diagonal element $i=g$), and similarly for ${\vert{E_e}\rangle}{\langle{E_i}\vert}$. Let the population in the ${\vert{E_g}\rangle}$ state (the coefficient of ${\vert{E_g}\rangle}{\langle{E_g}\vert}$ in the density matrix) be denoted as $p_g$, and that in the ${\vert{E_e}\rangle}$ state be denoted by $p_e$. We label by - $N_V$ (the “norm" of the virtual qubit), the total population in the virtual qubit, $N_V = p_g + p_e$. - $r_V$ the normalized ground state population of the virtual qubit, $r_V = p_g/N_V$, i.e. the population if the virtual qubit was normalized to have $N_V = 1$, - $Z_V$ the bias of the virtual qubit, also normalized, $Z_V = (p_g - p_e)/N_V$. - $T_V$ the virtual temperature of the virtual qubit, calculated by inverting its Gibb’s ratio, $$\begin{aligned} \frac{p_e}{p_g} &= e^{-E_V/T_V}. \end{aligned}$$ Alternatively, the virtual temperature can also be expressed in terms of the bias, via the relation $$\begin{aligned} Z_V &= \tanh\left( \frac{E_V}{2T_V} \right). 
\end{aligned}$$ Let a swap operation be performed between the real and virtual qubits, described by the unitary $$\begin{aligned} U = \openone_{AM} - {\vert{0,E_e}\rangle}_{AM}\!{\langle{0,E_e}\vert} - {\vert{1,E_g}\rangle}_{AM}\!{\langle{1,E_g}\vert} + {\vert{1,E_g}\rangle}_{AM}\!{\langle{0,E_e}\vert} + {\vert{0,E_e}\rangle}_{AM}\!{\langle{1,E_g}\vert},\end{aligned}$$ Then the final reduced state of the target qubit will have a new ground state population given by $$\label{app:virtualsingler} \begin{aligned} r^\prime &= r + (1-r) p_g- r p_e\\ &= N_V r_V + \left( 1 - N_V \right) r, \end{aligned}$$ i.e. with probability $N_V$, the new populations of the target qubit are those of the normalized virtual qubit, and with probability $1-N_V$, there is no change. We assume $N_V \neq 0$, as this corresponds to the virtual qubit being empty. One can also express the above in the form $$\begin{aligned} \label{app:virtualrecursiver} \frac{r_V - r^\prime}{r_V - r} &= 1 - N_V.\end{aligned}$$ Thus, if after a single swap, the machine is restored to its state before the unitary, and then the swap is repeated, the recursive relation between $r$ and $r^\prime$ will hold for the new population $r^{\prime\prime}$ in terms of $r^\prime$. In general, if the reset of the machine and the swap are repeated in turn $n$ times, then the ground state population of the target qubit after the $n^{th}$ step will be \[app:virtualrepeatedr\] $$\begin{aligned} \frac{r_V - r^{(n)}}{r_V - r} &= \left( 1 - N_V \right)^n, \\ \text{Equivalently,} \quad r^{(n)} &= r_V - \left( r_V - r \right) \left( 1 - N_V \right)^n. \end{aligned}$$ In the asymptotic limit of infinite swaps, $r \rightarrow r_V$. This is equivalent to the Gibbs ratio of the target qubit approaching that of the virtual qubit, and the bias of the target qubit approaching $Z_V$. In terms of temperature, if the target qubit and virtual qubit have the same energy gap ($E=E_V$), then the temperature of the target qubit approaches the virtual temperature with each swap, and in the asymptotic limit, $T\rightarrow T_V$. However, if the energies are unequal, then $$\begin{aligned} \label{app:virtualtemperaturefinal} T \longrightarrow T_V \frac{E}{E_V},\end{aligned}$$ since it is the Gibbs ratio that equilibrates to that of the virtual qubit. Finally, one can calculate the work cost of the swap operation. Since it is unitary, the energy difference and free energy difference are the same, and given by $$\begin{aligned} \Delta F &= Tr \left( \rho^\prime H \right) - Tr \left( \rho H \right),\end{aligned}$$ where $\{\rho,\rho^\prime\}$ are the initial and final states of the system and machine, and $H$ is the Hamiltonian of the system and machine. For the degenerate case, i.e. $E=E_V$, one finds the work cost to be zero. For the non-degenerate case, the work cost of a single swap is given by $$\begin{aligned} \label{app:virtualworkcost} \Delta F = \left( r^\prime - r \right) \left( E_V - E \right),\end{aligned}$$ To end this section, we list the relevant virtual qubits for each of the paradigms used in this work: (see further sections for details) - For single shot and repeated incoherent operations, the virtual qubit is spanned by the two levels $\{{\vert{01}\rangle}_{BC},{\vert{10}\rangle}_{BC}\}$ of the machine qubits, with the energy gap of the virtual qubit equal to that of the target qubit $E_B-E_C=E$. 
- For repeated coherent operations and algorithmic cooling, the virtual qubit is spanned by the levels $\{{\vert{00}\rangle}_{BC},{\vert{11}\rangle}_{BC}\}$, with the energy gap being $E_B + E_C$. - For single shot coherent operations, one requires the swap between the target qubit $A$ and the machine qubit $B$, which also falls under the above analysis, here the virtual qubit is simply the machine qubit $B$ (thus $N_V = 1$). The energy gap is thus $E_B$. If $E_C > E$, one also requires the swap between qubits $A$ and $C$, where $C$ can be treated as a virtual qubit. Repeated incoherent operations {#app:incohop} ============================== The rate of cooling with repeated incoherent operations {#app:coolingincoherent} ------------------------------------------------------- In the case of incoherent operations, the relevant virtual qubit (see [Sec. \[app:virtualqubit\]]{}) is the subspace $\{{\vert{01}\rangle}_{BC},{\vert{10}\rangle}_{BC}\}$ of the machine qubits. When qubit $B$ is at the environment temperature $T_R$ and qubit $C$ at the hot temperature $T_H$, one can calculate the populations and variables of the virtual qubit: $$\begin{aligned} p_{01} &= r_B \left( 1 - r_C^H \right) \\ p_{10} &= \left( 1 - r_B \right) r_C^H \\ N_{V,\text{inc}} = p_{01} + p_{10} &= r_B \left( 1 - r_C^H \right) + \left( 1 - r_B \right) r_C^H \\ r_{V,\text{inc}} \; (=r_{\text{inc},\infty}) \; &= \frac{ r_B \left( 1 - r_C^H \right)}{ r_B \left( 1 - r_C^H \right) + \left( 1 - r_B \right) r_C^H },\end{aligned}$$ where the labelling of $r_{V,\text{inc}}$ as $r_{\text{inc},\infty}$ will become clear shortly. Equivalently, $r_{V,\text{inc}}$ can be expressed in terms of the virtual temperature of the virtual qubit, $$\begin{aligned} \label{app:incohrepfinalT} r_{V,\text{inc}} &= \frac{1}{1 + e^{-E/T_V}}, & &\text{where} \quad T_{V,\text{inc}} (=T_{\text{inc},\infty}) = \frac{E}{\frac{E_B}{T_R} - \frac{E_C}{T_H}}.\end{aligned}$$ Thus following the argument in [Sec. \[app:virtualqubit\]]{}, the ground state population after $n$ repetitions of the incoherent cycle will be given by $$\begin{aligned} r_{\text{inc},n} &= r_{V,\text{inc}} - \left( r_{V,\text{inc}} - r \right) \left( 1 - N_{V,\text{inc}} \right)^n.\end{aligned}$$ Thus in the asymptotic limit of infinite repetitions, as $0<N_V\leq 1$, we recover $r_{\text{inc},\infty} = r_{V,\text{inc}}$, and the temperature of the target qubit in this limit is the virtual temperature, i.e. $T_{\text{inc},\infty}=T_{V,\text{inc}}$. In particular, in the limit that the hot bath is at infinite temperature, $T_H \rightarrow \infty$, $$\begin{aligned} N_{V,\text{inc}}^* &= \frac{1}{2}, \\ r_{\text{inc},\infty}^* &= r_B, \\ T_{\text{inc},\infty}^* &= T_R \frac{E}{E_B}, \\ r_{\text{inc},n}^* &= r_B - \frac{(r_B - r)}{2^n}.\end{aligned}$$ The free energy cost of repeated incoherent operations {#app:costincoherent} ------------------------------------------------------ Here we calculate the free energy cost of repeating the incoherent operations a finite number of times. Since the resource is the hot bath, we will account for $Q_h$, the heat drawn from it. Among all of the steps involved, only the thermalization of qubit $C$ involves the hot bath, and so it is sufficient to keep track of the populations of the reduced state of qubit $C$ in order to calculate $Q^H$. We can divide the total heat current into two parts, first off, the amount required to heat up qubit $C$ from the environment temperature $T_R$ to the temperature of the hot bath $T_H$. 
Following that, there is the repeated heat current required to bring back qubit $C$ to $T_H$ after a cooling swap has been performed. The first heat current is trivial to calculate, from the difference in the ground state population of $C$ due to heating, $$\begin{aligned} Q^H_1 &= E_C \left( r_C - r_C^H \right).\end{aligned}$$ For the second part, we have to determine the population change, specifically the reduction in the excited state population of qubit $C$, every time that the cooling swap is performed. However, since the swap is between the levels ${\vert{010}\rangle}$ and ${\vert{101}\rangle}$, we see that whatever the change in the reduced state populations of $C$, the change in the corresponding reduced state populations of $A$ is exactly the same. More precisely, the heat required to reset qubit $C$ before the $n^{th}$ swap (i.e. after the $(n-1)^{th}$ cooling swap) is $$\begin{aligned} Q^H_n &= E_C \left( r_{\text{inc},n-1} - r_{\text{inc},n-2} \right),\end{aligned}$$ which holds for $n\geq 2$. From the above two expressions, we thus have the cumulative heat current required for $n$ cooling steps, $$\begin{aligned} \label{app:heatcurrentrepeatedincoherent} Q^H_n &= E_C \left( r_C - r_C^H \right) + E_C \left( r_{\text{inc},n-1} - r \right).\end{aligned}$$ In the asymptotic limit of infinite repetitions, $r_{\text{inc},n-1}$ goes to $r_{\text{inc},\infty}$, and the resultant expression demonstrates that the total heat current is asymptotically finite. The work cost is given by the decrease in the free energy of the hot bath w.r.t. the temperature of the environment, which is defined as $$\begin{aligned} \Delta F_{\text{inc},n} &= Q^H_n - T_R \Delta S_{H,n},\end{aligned}$$ where $\Delta S_{H,n}$ is the *decrease* in the entropy of the bath after $n$ repetitions of the swap. For a thermal bath that stays at equilibrium, as we assume throughout, $\Delta S_{H,n} = Q^H_n/T_H$, leading to $$\begin{aligned} \Delta F_{\text{inc},n} &= Q^H_n \left( 1 - \frac{T_R}{T_H} \right).\end{aligned}$$ In particular, for the case that $T_H \rightarrow \infty$, in the asymptotic limit of infinite repetitions of the swap, $$\begin{aligned} \Delta F_{\text{inc},\infty}^* &= E_C \left( r_C - \frac{1}{2} + r_B - r \right).\end{aligned}$$ Asymptotic equivalence of incoherent operations and autonomous refrigerator {#app:incoherentauto} =========================================================================== In this section, we show that in the two-qubit machine the final state, and hence the final temperature of the target, as well as the total work cost, are the same as if we had run an autonomous refrigerator between the 3 qubits and waited for the steady state. In other words, since the autonomous refrigerator runs continuously, repeated incoherent operations can be understood as a discretized version of the continuous process. For a discussion on the connection between continuous and discretized versions of quantum thermal machines, see [@Raam]. Here we simply review the autonomous 3-qubit fridge introduced in [@auto0], and the equivalence of its steady state parameters with the asymptotic end point of repeated incoherent operations. In the case of the autonomous fridge, rather than having repeated unitary operations, there is a time-independent interaction Hamiltonian between the three qubits given by $$\label{eq:intHauto} H_{\text{int}} = g \left( {\vert{010}\rangle}_{ABC}{\langle{101}\vert} + h.c. \right),$$ that acts on the degenerate subspace. 
Note that this Hamiltonian is a generator of the unitary that swaps the population of the degenerate levels, specifically, $U = \text{exp} \left(-i \frac{\pi}{2g} H_{\text{int}} \right)$. At the same time, each qubit is coupled to a thermal bath, qubit $B$ to the environment, qubit $C$ to the hot bath. For completeness one could also consider qubit $A$ to be coupled to its own environment, but for simplicity we ignore this effect here. This is to be consistent with the repeated incoherent operations picture, where we did not take into account any coupling between qubit $A$ and an environment in between the cooling operations. As proven in [@auto0], the three qubits approach a steady state, that is particularly simple in the case that qubit $A$ has no coupling to a bath, $$\label{eq:finalautostate} \tau_{\text{auto}} \otimes \tau_B \otimes \tau_C^H.$$ That is, the steady state is a tensor product state, with qubits $B$ and $C$ thermal at the temperatures of the baths they are respectively coupled to, and qubit $A$ in Gibbs state with temperature $$T_{\text{auto}} = \frac{E}{\frac{E_B}{T_R} - \frac{E_C}{T_H}}.$$ This is the same as $T_{\text{inc},\infty}$, see Eq. , that is the asymptotic limit of repeated incoherent operations. Furthermore, it is clear that in the repeated operations, when the number of operations approaches infinite, the cooling swaps stop having an effect, and thus the final states of qubits $B$ and $C$ are Gibbs states at $T_R$ and $T_H$ respectively as these are the temperatures they are reset to after each cooling cycle. Thus the final state of all three qubits is the same in both the autonomous and repeated operations scenario. Free energy equivalence. {#app:costauto} ------------------------ Here we calculate the free energy consumed by the autonomous fridge to go from the initial state to the final state. As the resource is the hot bath, we will calculate the free energy from $Q^H_{\text{auto}}$, the heat drawn from the hot bath. The initial state is that of all three qubits being at the environment temperature $T_R$, while the final state is the tensor product of Gibbs states derived above, see Eq. . Consider the entire system to be comprised of three parts. Each part consists one of the qubits and the bath that it is attached to (in the case of qubits $B$ and $C$). The only way that energy is exchanged between the different parts is via the energy-preserving interaction Hamiltonian given by Eq. . This swaps the populations of the two energy eigenstates ${\vert{010}\rangle}$ and ${\vert{101}\rangle}$, and thus the change in population of qubit $A$ due to the interaction is exactly the same as that in qubit $C$. Since the energy change is given by the population times the energy gap this implies that the energy change of the three parts (at all times during the operation of the machine) must be in proportion to $E:-E_B:E_C$, from the form of $H_{\text{int}}$. Since part $A$ consists only of the target qubit, the total energy change is simply the difference in energy from the initial to the final state, $E(r - r_{\text{auto}})$. For part $C$, the total energy change is the sum of that of qubit $C$, and that of the hot bath, $E_C (r_C - r_C^H) - Q^H_{\text{auto}}$. 
Via the preceding argument, $$\begin{aligned} \frac{E(r-r_{\text{auto}})}{E} &= \frac{E_C (r_C - r_C^H) - Q^H_{\text{auto}}}{E_C}.\end{aligned}$$ Solving for $Q^H_{\text{auto}}$, we find that $$Q_{\text{auto}}^H = E_C \left( r_C - r_C^H + r_{\text{auto}} - r \right).$$ As $r_{\text{auto}} = r_{\text{inc},\infty}$, this is the same heat as that drawn in the asymptotic limit of infinite repetitions of the incoherent operation, see Eq. .

Repeated coherent operations {#app:repeatcoherent}
============================

Choosing the best virtual qubit from the machine {#app:repeatedcoherentproof}
------------------------------------------------

In this section we investigate the effect of, and the optimal strategy for, repeated coherent operations. Here we are allowed to repeatedly perform arbitrary unitary operations on the joint system of the target and machine qubits, with the machine qubits being reset to the temperature of the environment in between (see [Fig. \[fig:repeatedcoherentoperations\]]{}). To begin with, we demonstrate that in terms of asymptotic cooling, the best virtual qubit of the machine to choose is that spanned by $\{{\vert{00}\rangle}_{BC},{\vert{11}\rangle}_{BC}\}$.

![Cooling via repeated coherent operations after the first coherent operation is completed. First the machine qubits $B$ and $C$ are thermalized to the environment temperature $T_R$, following which one performs a unitary that swaps the populations of the levels ${\vert{011}\rangle}$ and ${\vert{100}\rangle}$.\[fig:repeatedcoherentoperations\]](repeat_coh.pdf){width="10cm"}

First off, w.r.t. the virtual qubit picture, here we can choose any qubit subspace of the machine qubits to swap with the target qubit, unlike in the incoherent case, where we were forced to choose the subspace $\{{\vert{01}\rangle}_{BC},{\vert{10}\rangle}_{BC}\}$, so as to be degenerate ($E_V = E$) with the target system. However, in the coherent case, there is only a single temperature available ($T_R$), thus the state of the machine after it is rethermalized to the environment will simply be the thermal state of qubits $B$ and $C$ at $T_R$. Given that the entire state of $B$ and $C$ is thermal, every qubit subspace of the machine has the same virtual temperature, $T_V = T_R$. From [Sec. \[app:virtualqubit\]]{}, Eq. , we conclude that if we pick a virtual qubit from the machine with energy gap $E_V$, then the temperature of the target system after many repetitions of the swap between the virtual qubit and the target (with the reset of the machine in between) will tend to $$\begin{aligned} T \longrightarrow T_R \frac{E}{E_V}.\end{aligned}$$ Thus to cool by the maximum amount in the asymptotic limit of infinite repetitions, we should pick the largest energy gap, i.e. the qubit subspace $\{{\vert{00}\rangle}_{BC},{\vert{11}\rangle}_{BC}\}$. In what follows, we show that in fact, after the first coherent operation (that was dealt with in [Sec. \[sec:single-cycle2\]]{}), this is the *only* virtual qubit that allows for cooling the target.

The target qubit after $n$ repetitions of coherent operations {#app:repeatedcoherentcooling}
-------------------------------------------------------------

Consider the state of the three qubits at the end of the single coherent operation. The initial state before the operation was the thermal state of all three qubits at the environment temperature $T_R$.
If the energies satisfy $E \geq E_C$, then the optimal coherent operation is simply to swap the states of $A$ and $B$, leaving the three qubits in the state $$\begin{aligned} \rho^\prime &= \tau_B \otimes \tau_A \otimes \tau_C,\end{aligned}$$ whereas if $E < E_C$, then the optimal coherent operation is to first swap the states of $A$ and $C$, and then proceed by swapping $A$ with $B$, leading to the final state $$\begin{aligned} \rho^\prime &= \tau_B \otimes \tau_C \otimes \tau_A.\end{aligned}$$ In either case, no further cooling of qubit $A$ is possible by any unitary operation. Thus the only option to continue is to bring the machine back to the environment temperature. At this point, the state is given by $$\begin{aligned} \tilde{\rho} &= \tau_B \otimes \tau_B \otimes \tau_C \\ &= \mathrm{diag}\left( r_B^2 r_C,\; r_B^2 \bar{r}_C,\; r_B \bar{r}_B r_C,\; r_B \bar{r}_B \bar{r}_C,\; r_B \bar{r}_B r_C,\; r_B \bar{r}_B \bar{r}_C,\; \bar{r}_B^2 r_C,\; \bar{r}_B^2 \bar{r}_C \right),\end{aligned}$$ where $\bar{r} \equiv 1-r$, and the populations are listed in the order ${\vert{000}\rangle},{\vert{001}\rangle},\ldots,{\vert{111}\rangle}$. Recall that the first four populations (i.e. eigenvalues) are those of the states in which qubit $A$ is in its ground state. Labelling all of the populations from $p_{000}$ to $p_{111}$, one can verify (using $E_B > E_C$) that $$\begin{aligned} p_{000} > p_{001} > p_{010} = p_{100} > p_{011} = p_{101} > p_{110} > p_{111}.\end{aligned}$$ Thus from the perspective of maximizing the ground state population of $A$, the only two populations that are not in the optimal location are $p_{011}$ and $p_{100}$, which should be swapped, corresponding to unitarily swapping the two energy levels ${\vert{011}\rangle}$ and ${\vert{100}\rangle}$. This is unlike the initial state before the first coherent operation, where there were a number of possible level swaps that achieved cooling. There one had to optimize over all possible swap operations to minimize the work cost, whereas here there is only one possible cooling swap. Thus the second coherent operation continues with the ${\vert{100}\rangle}\leftrightarrow{\vert{011}\rangle}$ swap, cooling down qubit $A$ further, followed by bringing back the machine qubits $B$ and $C$ to the environment temperature. One can verify that after resetting the machine qubits, the populations once again satisfy $p_{011} < p_{100}$, allowing cooling to continue by repetition of this cycle of steps. In the same manner as for repeated incoherent operations, from the arguments of [Sec. \[app:virtualqubit\]]{}, one can identify the properties of the relevant virtual qubit in this case, spanned by the states ${\vert{00}\rangle}_{BC}$ and ${\vert{11}\rangle}_{BC}$ of the machine, $$\begin{aligned} p_{00} &= r_B r_C \\ p_{11} &= \left( 1 - r_B \right) \left( 1 - r_C \right) \\ N_{V,\text{coh}} = p_{00} + p_{11} &= r_B r_C + \left( 1 - r_B \right) \left( 1 - r_C \right) \\ r_{V,\text{coh}} \; (=r_{\text{coh},\infty}) \; &= \frac{ r_B r_C }{ r_B r_C + \left( 1 - r_B \right) \left( 1 - r_C \right) },\end{aligned}$$ where the labelling of $r_{V,\text{coh}}$ as $r_{\text{coh},\infty}$ will become clear shortly.
Equivalently, $r_{V,\text{coh}}$ can be expressed in terms of the virtual temperature of the virtual qubit, $$\begin{aligned} \label{app:icohrepfinalT} r_{V,\text{coh}} &= \frac{1}{1 + e^{-E_V/T_{V,\text{coh}}}}, & &\text{where} \quad E_V = E_B + E_C \quad \text{and} \quad T_{V,\text{coh}} = T_R.\end{aligned}$$ Following the argument in [Sec. \[app:virtualqubit\]]{}, the ground state population after $n$ repetitions of the coherent cycle will be given by $$\begin{aligned} r_{\text{coh},n}^* &= r_{V,\text{coh}} - \left( r_{V,\text{coh}} - r \right) \left( 1 - N_{V,\text{coh}} \right)^n.\end{aligned}$$ Thus in the asymptotic limit of infinite repetitions, $r^*_{\text{coh},\infty} = r_{V,\text{coh}}$, and the temperature of the target qubit in this limit is $$\begin{aligned} T^*_{\text{coh},\infty} &= T_{V,\text{coh}} \frac{E}{E_{V,\text{coh}}} = T_R \frac{E}{E_B + E_C}.\end{aligned}$$

The free energy cost of cooling with repeated coherent operations {#app:costcoherent}
-----------------------------------------------------------------

In the case of repeated coherent operations, the work cost is only calculated from the unitary swap operations, as the other step is the thermalization of the machine to the environment temperature, which comes at no cost. To calculate the work cost of the unitary operations, we follow the argument in [Sec. \[app:virtualqubit\]]{}. From the argument therein (Eq. ), the free energy input in each repeated coherent operation is given by $$\begin{aligned} \Delta F_{\text{coh},n}^* - \Delta F_{\text{coh},n-1}^* &= \left( r_{\text{coh},n}^* - r_{\text{coh},n-1}^* \right) \left( E_B + E_C - E \right) \\ &= 2 E_C \left( r_{\text{coh},n}^* - r_{\text{coh},n-1}^* \right).\end{aligned}$$ This only applies for $n \geq 2$ since the first coherent operation is different, and the optimal work cost of the latter ($\Delta F^*_{\text{coh}}$) has been calculated in Sec. VI B of the main text. Recalling that the ground state population of the target qubit after a single coherent operation is $r_B$, we can calculate the work cost of $n$ repetitions of coherent operations, $$\begin{aligned} \Delta F_{\text{coh},n}^* &= \Delta F_{\text{coh}}^* + 2 E_C \left( r_{\text{coh},n}^* - r_B \right), \\ \text{where} \quad \Delta F_{\text{coh}}^* &= \begin{cases} E_C \left( r_B - r \right) & \text{if $E_C \leq E$,} \\ \left( E_C - E \right) \left( r_C - r \right) + E_C \left( r_B - r_C \right) & \text{if $E_C \geq E$.} \end{cases}\end{aligned}$$
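As for the incoherent case, these expressions are easy to evaluate. The following Python sketch (illustrative parameter values, $k_B=1$; an assumption-based illustration, not part of the original text) computes the virtual qubit $\{{\vert{00}\rangle}_{BC},{\vert{11}\rangle}_{BC}\}$, the asymptotic temperature $T_R\,E/(E_B+E_C)$, and the cumulative work cost $\Delta F^*_{\text{coh},n}$, implementing the displayed formulas verbatim.

```python
import numpy as np

# Illustrative parameters (assumptions); k_B = 1 and E_B = E + E_C.
E, E_C, T_R = 1.0, 0.5, 1.0
E_B = E + E_C

g = lambda gap, T: 1.0 / (1.0 + np.exp(-gap / T))   # thermal ground population
r, r_B, r_C = g(E, T_R), g(E_B, T_R), g(E_C, T_R)

# Virtual qubit {|00>,|11>} of the machine, with both machine qubits thermal at T_R
N_V = r_B * r_C + (1 - r_B) * (1 - r_C)
r_V = r_B * r_C / N_V
T_coh_inf = T_R * E / (E_B + E_C)
assert np.isclose(r_V, g(E, T_coh_inf))             # asymptotic Gibbs population

# First (single-shot) coherent operation: optimal work cost, by cases
dF_first = E_C * (r_B - r) if E_C <= E else (E_C - E) * (r_C - r) + E_C * (r_B - r_C)

for n in [1, 2, 5, 20]:
    r_n = r_V - (r_V - r) * (1 - N_V) ** n          # population after n cycles
    dF_n = dF_first + 2 * E_C * (r_n - r_B)         # cumulative work cost
    print(n, r_n, dF_n)
```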
Algorithmic cooling {#app:algo}
===================

In the case of repeated coherent operations, the minimum temperature achievable by the target qubit is bounded by the maximum bias $Z_V$ (see [Sec. \[app:virtualqubit\]]{}) that can be engineered on any qubit subspace of the machine qubits $B$ and $C$. When the qubits are both thermalized to the environment temperature $T_R$, the maximum bias is on the virtual qubit $\{{\vert{00}\rangle}_{BC},{\vert{11}\rangle}_{BC}\}$. However, if one is allowed to thermalize the machine qubits separately, then an even higher bias can be engineered on the same subspace, by pre-cooling qubit $C$. Specifically, after the cooling swap of the target qubit with the virtual qubit $\{{\vert{00}\rangle}_{BC},{\vert{11}\rangle}_{BC}\}$, only qubit $B$ is rethermalized to the environment temperature, and then its state is swapped with that of qubit $C$, thus cooling the state of $C$. Qubit $B$ is then rethermalized to $T_R$, and then the cooling swap involving all three qubits is repeated.

![The cycle of steps corresponding to algorithmic cooling. Steps 1 and 3 thermalize qubit $B$ to the environment. Step 2 is the pre-cooling of qubit $C$ by a swap with $B$. Step 4 is the cooling of the target qubit via the usual coherent operation. In the case of optimizing algorithmic cooling w.r.t. the work cost (see [Sec. \[app:algopara\]]{}), Step 2 is replaced by a partial rather than a full swap.\[fig:algorithmiccooling\]](repeat_algo.pdf){width="10cm"}

The state of the machine qubits prior to the swap is now a tensor product of two copies of the thermal state of qubit $B$ w.r.t. $T_R$, and so the virtual qubit $\{{\vert{00}\rangle}_{BC},{\vert{11}\rangle}_{BC}\}$ has the following properties $$\begin{aligned} p_{00} &= r_B^2 \\ p_{11} &= \left( 1 - r_B \right)^2 \\ N_{V,\text{algo}} = p_{00} + p_{11} &= r_B^2 + \left( 1 - r_B \right)^2 \\ r_{V,\text{algo}} \; (=r_{\text{algo},\infty}^*) \; &= \frac{ r_B^2}{ r_B^2 + \left( 1 - r_B \right)^2 },\end{aligned}$$ where the labelling of $r_{V,\text{algo}}$ as $r_{\text{algo},\infty}^*$ will become clear shortly. Equivalently, $r_{V,\text{algo}}$ can be expressed in terms of the virtual temperature of the virtual qubit, $$\begin{aligned} \label{app:algofinalT} r_{V,\text{algo}} &= \frac{1}{1 + e^{-E_{V,\text{algo}} /T_{V,\text{algo}}}}, & &\text{where} \quad E_{V,\text{algo}} = E_B + E_C \quad \text{and} \quad T_{V,\text{algo}} = T_R \frac{E_B+E_C}{2 E_B}.\end{aligned}$$ Following the argument in [Sec. \[app:virtualqubit\]]{}, the ground state population after $n$ repetitions of algorithmic cooling will be given by $$\begin{aligned} r_{\text{algo},n} &= r_{V,\text{algo}} - \left( r_{V,\text{algo}} - r_0 \right) \left( 1 - N_{V,\text{algo}} \right)^n,\end{aligned}$$ where $r_0$ is the ground state population of the target before starting the algorithmic cooling procedure. $r_0$ can be $r$, in the case that we begin with algorithmic cooling from the initial state, but can also be anything else, in particular some population greater than $r$, corresponding to the endpoint of a different type of cooling operation. Finally note that in the asymptotic limit of infinite repetitions, $r_{\text{algo},\infty}^* = r_{V,\text{algo}}$, and the temperature of the target qubit in this limit is given by $$\begin{aligned} T_{\text{algo},\infty}^* &= T_{V,\text{algo}} \frac{E}{E_{V,\text{algo}}} = T_R \frac{E}{2E_B},\end{aligned}$$ which is independent of $r_0$, the initial ground state population of the target.

The free energy cost of algorithmic cooling {#app:costalgo}
-------------------------------------------

Analogous to the case of repeated coherent operations, here the work cost is invested during the unitary operations. However, in addition to the cooling swap involving all three qubits, whose cost is calculated in exactly the same way as in the repeated coherent case, see [Sec. \[app:costcoherent\]]{}, there is also the pre-cooling of qubit $C$, which is a non-energy preserving unitary operation. Since this is effected by a swap between qubits $B$ and $C$, the work cost per population swapped (in the direction of cooling $C$) is $E_B - E_C = E$. The work cost of pre-cooling $C$ can be split into two contributions: first, the initial cost of cooling $C$ from the environment temperature $T_R$ to the state that has the same populations as the initial state of $B$, which costs $E (r_B - r_C)$, and then the work cost of returning it to the pre-cooled state after every successive three-qubit swap.
Since the three qubit swap is between the states ${\vert{011}\rangle}$ and ${\vert{100}\rangle}$, we see that whatever the change in the population of the ground state of the target qubit, there is exactly the same decrease in the ground state population of qubit $C$. Adding up all of these contributions, one finds that the free energy cost of algorithmic cooling is given by $$\begin{aligned} \Delta F_{\text{algo},n} &= 2 E_C \left( r_{\text{algo},n} - r_0 \right) + E \left( r_B - r_C \right) + E \left( r_{\text{algo},n-1} - r_0 \right),\end{aligned}$$ where the first term is the total work cost of the cooling swap on all three qubits, the second term is the cost of pre-cooling qubit $C$ from its initial state thermal at $T_R$, and the third represents the cost of returning qubit $C$ to the pre-cooled state prior to the $n^{th}$ cooling swap. As before, $r_0$ is the ground state population of the target before starting the algorithmic cooling procedure. Optimizing the repetition of coherent operations w.r.t. the work cost {#app:optimalcoherent} ===================================================================== In the case of coherent operations, we now have a number of different procedures for cooling. Recall that in the single-cycle case, we found that we could cool by simply swapping the target qubit $A$ with $B$. Furthermore, if it is the case that $E < E_C$, then a lower work-cost can be achieved by swapping the target qubit with $C$ to begin with. For repeated coherent operations, we have to swap the target qubit with the virtual qubit $\{{\vert{00}\rangle}_{BC},{\vert{11}\rangle}_{BC}\}$. And finally, to cool the maximum we should precool qubit $C$ (which is a swap between qubits $B$ and $C$) prior to the same cooling swap. Each of these processes has a different work cost, and it is illuminating to construct the optimal manner of combining them to have the minimum work cost. Following the argument in [Sec. \[app:virtualqubit\]]{}, Eq. , we understand that to optimize the work cost, we should always seek to swap the target qubit with a virtual (or real) qubit of as small an energy gap as we can find, given it has a greater normalized ground state population $r_V$ than the ground state population of the target. This way we minimize the energy gradient over which we move population, and thus minimize the work cost. At the beginning, when the target and machine qubits are at the environment temperature, if $E_C > E$, then one can verify from the machine state that among all the virtual qubits of the machine with greater normalized ground state population $r_V$ than r, qubit $C$ (here it is a real qubit, rather than virtual) is the one that has the smallest energy difference with E, $E_V-E$. Thus the minimal cost of cooling is to swap these states, taking $r\rightarrow r_C$ at a gradient of $E_C - E$. Once this procedure is exhausted and the ground state population of qubit A has become $r_C$, we find that among the above virtual qubits of the machine, qubit B has the second smallest energy difference with $E$, and so one proceeds by swapping the target qubit with qubit $B$, taking $r\rightarrow r_B$, at a gradient of $E_B - E$. One then rethermalises the machine qubits to $T_R$. Note that qubit C could have equivalently been rethermalised at any point between the end of the first swap and now without affecting the cooling and the work-cost of the procedure. 
At this point, after resetting the machine qubits, we find that the only virtual or real qubit in the machine that allows for cooling is the virtual qubit $\{{\vert{00}\rangle}_{BC},{\vert{11}\rangle}_{BC}\}$, and one proceeds by repeatedly swapping the target qubit with this virtual qubit, until $r\rightarrow r_{V,\text{coh}}$. This is performed at a gradient of $E_B + E_C - E$. Finally, one proceeds via algorithmic cooling, where one precools qubit $C$, at a gradient of $E_B - E_C$, before applying the same cooling swap as in the case of repeated coherent operations. The reason one exhausts the repeated coherent operations procedure before proceeding with algorithmic cooling is that precooling qubit $C$ carries an additional work cost, while the subsequent cooling swap still moves population at the same gradient of $2 E_C$. Thus, as long as cooling without this extra work cost is possible, it is more efficient to do so. The work cost at an intermediate stage in this process can be simply calculated from the above; here we present the total work cost of the entire procedure, $$\begin{aligned} \label{eq:finalworkcost} \Delta F_{\text{algo},\infty}^* &= \quad \left( r_C - r \right) \left( E_C - E \right) + \left( r_B - r_C \right) \left( E_B - E \right) \nonumber\\ &\quad + \left( r_{\text{coh},\infty}^* - r_B \right) 2E_C \nonumber\\ &\quad + \left( r_B - r_C \right) \left( E_B - E_C \right) + \left( r_{\text{algo},\infty}^* - r_{\text{coh},\infty}^* \right) \left( \left( E_B - E_C \right) + 2 E_C \right),\end{aligned}$$ where the first, second and third lines correspond to the work cost of the single-cycle, repeated and algorithmic sections of the protocol respectively. In the case of $E_C \leq E$, the single shot case simplifies to directly swapping the target qubit with qubit $B$, and thus the first line of the work cost becomes $\left( r_B - r \right) \left( E_B - E \right)$. It is interesting to observe that, subdividing the entire procedure in this manner, the temperature of the target qubit evolves through the successive stages as $$\begin{aligned} T \xrightarrow{E<E_C} T_R \frac{E}{E_C} \quad\longrightarrow\quad T_R \frac{E}{E_B} \quad\longrightarrow\quad T_R \frac{E}{E_B+E_C} \quad\longrightarrow\quad T_R \frac{E}{2E_B}.\end{aligned}$$
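To make the staircase concrete, here is a short Python sketch (illustrative values with $E < E_C$, $k_B=1$; an assumption-based illustration rather than part of the original derivation) that evaluates the four stage temperatures and the total work cost of Eq. \[eq:finalworkcost\].

```python
import numpy as np

# Illustrative parameters with E < E_C (assumptions); k_B = 1, E_B = E + E_C.
E, E_C, T_R = 1.0, 1.5, 1.0
E_B = E + E_C

g = lambda gap, T: 1.0 / (1.0 + np.exp(-gap / T))
r, r_B, r_C = g(E, T_R), g(E_B, T_R), g(E_C, T_R)

# Asymptotic populations of the repeated-coherent and algorithmic stages
r_coh = r_B * r_C / (r_B * r_C + (1 - r_B) * (1 - r_C))
r_algo = r_B**2 / (r_B**2 + (1 - r_B)**2)

# Temperature staircase of the optimal sequence
stages = [T_R * E / E_C, T_R * E / E_B, T_R * E / (E_B + E_C), T_R * E / (2 * E_B)]
print("stage temperatures:", stages)

# Total work cost, Eq. (finalworkcost): single-cycle + repeated + algorithmic parts
dF_single = (r_C - r) * (E_C - E) + (r_B - r_C) * (E_B - E)
dF_repeat = (r_coh - r_B) * 2 * E_C
dF_algo = (r_B - r_C) * (E_B - E_C) + (r_algo - r_coh) * ((E_B - E_C) + 2 * E_C)
print("total work cost:", dF_single + dF_repeat + dF_algo)
```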
An optimal cooling sequence in the regime of algorithmic cooling {#app:algopara}
----------------------------------------------------------------

In the analysis above, we noted that algorithmic cooling is more expensive, as it requires the pre-cooling of qubit $C$. Furthermore, if one pre-cools $C$ via a full swap with $B$, as presented above, this entails an initial work cost which does not cool down the target at all, and thus a discontinuity in the curve of cooling vs work cost. This is especially relevant if the desired final temperature is not that corresponding to algorithmic cooling, but is rather somewhere in-between algorithmic cooling and the endpoint of repeated coherent operations. In this case, one can optimize the work cost by using the same cycle of steps as in Fig. \[fig:algorithmiccooling\], but only *partially* pre-cooling qubit $C$ in Step 2, to exactly the temperature required to achieve the desired final temperature on the target. More precisely, consider that during Step 2, one performs a *partial* swap between qubits $B$ and $C$, such that the final ground state population of qubit $C$ is given by $$\begin{aligned} r_C(\nu) &= r_C + \nu \left( r_B - r_C \right),\end{aligned}$$ where $\nu \in [0,1]$. On inspection of the virtual qubit $\{{\vert{00}\rangle}_{BC},{\vert{11}\rangle}_{BC}\}$, we can calculate the normalized ground state population $r_V (\nu)$, $$\begin{aligned} \label{eq:algopara} r_{V,\nu \text{algo}} &= \frac{ r_B \cdot r_C(\nu)}{r_B \cdot r_C(\nu) + (1-r_B) (1-r_C(\nu))}.\end{aligned}$$ Note that $r^*_{\text{coh},\infty} \leq r_{V,\nu \text{algo}} \leq r^*_{\text{algo},\infty}$, with $r_{V,0 \text{algo}} = r_{\text{coh},\infty}^*$ and $r_{V,1 \text{algo}} = r^*_{\text{algo},\infty}$, and thus $\nu$ parametrizes the entire regime of cooling between the endpoint of repeated coherent operations and full algorithmic cooling. In the limit of infinite repetitions of the cycle of steps, the ground state population of the target becomes $r_{V,\nu \text{algo}}$, such that, given the desired final temperature of the target, $T_{\nu \text{algo},\infty}^*$, the swapping parameter $\nu$ needs to be chosen such that $$r_{V,\nu \text{algo}}= \frac{1}{1+e^{-\frac{E}{T_{\nu \text{algo},\infty}^*}}}.$$ The work cost of cooling the target to $r_{V,\nu \text{algo}}$, given that we began with the target ground state population of $r_0$, is found by adding up the cost of pre-cooling qubit $C$, the cost of returning it to the pre-cooled state, and the cost of the repeated cooling swaps on the target, $$\begin{aligned} \label{eq:algoparawork} \Delta F_{\nu \text{algo},\infty} &= E \left( r_C (\nu) - r_C \right) + E \left( r_{V,\nu \text{algo}} - r_0 \right) + 2 E_C \left( r_{V,\nu \text{algo}} - r_0 \right).\end{aligned}$$ Thus, given the endpoint of repeated coherent operations (where $r_0 = r^*_{\text{coh},\infty}$), the above expression represents the optimal extra work cost for cooling the target to a ground state population (Eq. \[eq:algopara\]) that is between the end points of repeated coherent operations and algorithmic cooling. The total work cost of the optimal sequence is in this case therefore given by $$\begin{aligned} \label{eq:nufinalworkcost} \Delta F_{\nu \text{algo},\infty}^* &= \quad \left( r_C - r \right) \left( E_C - E \right) + \left( r_B - r_C \right) \left( E_B - E \right) \nonumber\\ &\quad + \left( r_{\text{coh},\infty}^* - r_B \right) 2E_C \nonumber\\ &\quad + \left( r_C(\nu) - r_C \right) \left( E_B - E_C \right) + \left( r_{V,\nu \text{algo}} - r_{\text{coh},\infty}^* \right) \left( \left( E_B - E_C \right) + 2 E_C \right).\end{aligned}$$ Note that for $\nu=1$, we recover the previously discussed total work cost of the optimal sequence of coherent operations of Eq. \[eq:finalworkcost\].

Optimizing the work cost {#app:secondlaw}
========================

The N qubit coherent machine
----------------------------

In this section we review a result demonstrated in [@paul] (within a different context), that given a final cold temperature, there exists a family of coherent machines, each member of increasing size, that can attain the final temperature, and that saturate the second law of thermodynamics in the limit of infinite size. We do this for coherent machines first, and prove the same for incoherent machines in the next section. Consider the system to be a qubit of energy $E$ (the result may be generalized by cooling individual qubit subspaces of a more complicated system), that begins at the environment temperature $T_R$. The final temperature that we would like to attain is labelled $T_C$, where $T_C < T_R$. The simplest protocol would be to take a single machine qubit of energy $$\begin{aligned} E_{coh,max} &= E \frac{T_R}{T_C},\end{aligned}$$ as in Sec.
IV B of the main text, and perform a swap in the energy eigenbasis. Note that the machine is assumed, as always, to begin at $T_R$. As discussed in the main text and in [Sec. \[app:virtualqubit\]]{} on the virtual qubit, the work cost of this protocol involves pushing population against the energy gradient between the machine and system, $E_{coh,max} - E$, $$\begin{aligned} W &= (r_{max} - r)(E_{coh,max} - E) = (r_{max} - r)E \left( \frac{T_R}{T_C} - 1 \right).\end{aligned}$$ where $r$ and $r_{max}$ are the initial and final ground state populations of the target. One can reduce this work cost by splitting the protocol into a number of steps. Consider that the machine is constructed out of a sequence of $N$ qubits, with linearly increasing energy, $$\begin{aligned} E_{coh,i} &= E + \frac{i}{N} (E_{coh,max} - E) = E \left( 1 + \frac{i}{N} \left( \frac{T_R}{T_C} - 1 \right) \right), \quad i \in \{1,2,...,N\}.\end{aligned}$$ The protocol now consists in performing swap operations between the target qubit and each of the machine qubits in sequence. The final temperature is the same as before, since the final qubit has energy $E_{coh,max}$. At each intermediate step, the temperature attained by the target is given by $$\begin{aligned} \frac{E}{T_i} &= \frac{E_{coh,i}}{T_R} = \frac{E}{T_R} + \frac{i}{N} \left( \frac{E_{coh,max} - E}{T_R} \right) \\ \therefore \frac{1}{T_i} &= \frac{1}{T_R} + \frac{i}{N} \left( \frac{1}{T_C} - \frac{1}{T_R} \right).\label{eq:cohseqtemp}\end{aligned}$$ Correspondingly, the ground state population of the target after the $i^{th}$ step is given by $$\begin{aligned} r_i &= \frac{1}{1 + e^{-E/T_i}}.\end{aligned}$$ The work cost of the $i^{th}$ step is now $$\begin{aligned} W_{coh,i} &= (r_i - r_{i-1}) (E_{coh,i} - E),\end{aligned}$$ from which the total cost follows as $$\begin{aligned} W_{coh} = \sum_i W_{coh,i} &= \sum_{i=1}^N \left( r_i - r_{i-1} \right) \left( E_{coh,i} - E \right) \\ &= E\sum_{i=1}^N \left( r_i - r_{i-1} \right) \frac{i}{N} \left( \frac{T_R}{T_C} - 1 \right).\label{eq:workcostcohseq}\end{aligned}$$ In [@paul], this protocol was studied, and it was shown that the total work cost was equal to $$\begin{aligned} W_{coh} &= \Delta F + O \left( \frac{1}{N} \right),\end{aligned}$$ where $\Delta F$ is the increase in free energy of the system from its initial to final temperature, and where the free energy is defined w.r.t. the environment temperature, $$\begin{aligned} F &= \langle{E}\rangle - T_R S,\end{aligned}$$ $\langle{E}\rangle$ and $S$ being the average energy and entropy of the system. Thus one can get arbitrarily close to saturating the second law of thermodynamics by increasing the number of steps involved in the protocol. Note that the qubits in the coherent machine need not be real qubits, but qubit subspaces (virtual qubits) embedded in a larger space. In this case, rather than a single swap for each of the machine qubits, one would require repeated swaps (inter-spaced with rethermalization of the machine) to approach the asymptotic temperature corresponding to that qubit. This does not however change the work cost of the procedure, since the cost is always given by the amount of population changed multiplied by the energy gradient, so repeating the swap with the same virtual qubit a number of times to achieve the same population difference as with a real qubit of the same energy gap results in the same work cost. 
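The approach to the second-law bound is easy to see numerically. The following Python sketch (illustrative parameters, $k_B=1$; an assumption-based illustration of the protocol just described) computes $W_{coh}$ for increasing $N$ and compares it with the free energy change $\Delta F$ of the target, defined w.r.t. $T_R$.

```python
import numpy as np

# Illustrative parameters (assumptions); k_B = 1.
E, T_R, T_C = 1.0, 1.0, 0.5

def ground_pop(T):
    return 1.0 / (1.0 + np.exp(-E / T))

def free_energy(T):
    """Free energy <E> - T_R * S of the target qubit thermal at temperature T."""
    p = ground_pop(T)                      # ground state at energy 0, excited at E
    S = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    return E * (1 - p) - T_R * S

dF = free_energy(T_C) - free_energy(T_R)   # second-law lower bound on the work

for N in [1, 2, 10, 100, 1000]:
    W = 0.0
    r_prev = ground_pop(T_R)
    for i in range(1, N + 1):
        E_i = E * (1 + (i / N) * (T_R / T_C - 1))    # i-th machine qubit energy
        T_i = 1.0 / (1.0 / T_R + (i / N) * (1.0 / T_C - 1.0 / T_R))
        r_i = ground_pop(T_i)                        # target population after step i
        W += (r_i - r_prev) * (E_i - E)              # cost of the i-th swap
        r_prev = r_i
    print(N, W, dF, W - dF)   # the excess W - dF shrinks roughly like 1/N
```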
The 2N qubit incoherent machine {#app:incsecondlaw}
-------------------------------

Consider as before that we wish to cool a target qubit of energy $E$ from the environment temperature $T_R$ to $T_C$, but only using incoherent operations, which include energy-preserving unitaries and the heating up of parts of our machine to a given hot temperature $T_H$. The simplest manner of achieving this temperature is via the minimal incoherent machine, comprised of two qubits (as in VII.A of the main text), of energies $E_B = E_{inc,max}$ and $E_C = E_{inc,max} - E$. The machine may be run in the repeated operations regime, where the energy preserving swap operation between the states ${\vert{010}\rangle}_{ABC}$ and ${\vert{101}\rangle}_{ABC}$ is repeatedly applied, inter-spaced by re-thermalising qubits $B$ and $C$ to the environment $T_R$ and hot bath $T_H$ respectively, or in the autonomous mode, where the Hamiltonian that generates the swap is left running continuously, while the qubits are kept coupled to their respective baths. The final temperature achieved by such a machine is given by $$\begin{aligned} \frac{E}{T^f} &= \frac{E_{inc,max}}{T_R} - \frac{E_{inc,max} - E}{T_H},\end{aligned}$$ and so we choose $E_{inc,max}$ such that $T^f = T_C$, the final desired cold temperature, resulting in $$\begin{aligned} \label{eq:cohincmatchtemp} E_{inc,max} &= E \left( \frac{ \frac{1}{T_C} - \frac{1}{T_H} }{ \frac{1}{T_R} - \frac{1}{T_H} } \right) \quad\quad \Leftrightarrow \quad\quad E_{inc,max} - E = E \left( \frac{ \frac{1}{T_C} - \frac{1}{T_R} }{ \frac{1}{T_R} - \frac{1}{T_H} } \right).\end{aligned}$$ The work cost of this protocol was discussed in [Sec. \[app:incoherentauto\]]{}, and may be calculated from the heat drawn from the hot bath during the protocol. The heat is comprised of two parts: the preheating of qubit $C$, which we label $Q_{init}$, followed by the heat required to keep it at the hot temperature over the repeated incoherent operations. In the limit of infinite repetitions (or the steady state of the autonomous machine), this second contribution is given by $$\begin{aligned} Q_H &= (r_{max} - r) E_C = \left( r_{max} - r \right) \left( E_{inc,max} - E \right).\end{aligned}$$ Note that as the final temperature of the target is the same as in the coherent case, the final ground state population $r_{max}$ is also identical. Note that of the two heat contributions, the initial heat cost to bring qubit $C$ to $T_H$ from $T_R$ depends on whether it is a real or virtual qubit, and in the latter case, depends on the spectrum of the larger space in which the virtual qubit is embedded. However, the heat required to keep it at $T_H$ remains the same, as it only depends on the population flow between the system and the machine, which, in the limit of infinite repetitions or the autonomous steady state, only depends on the Gibbs ratio of qubit $C$.
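A quick numerical check of this matching: the sketch below (Python, $k_B=1$, illustrative temperatures; an assumption-based illustration) chooses $E_{inc,max}$ via Eq. \[eq:cohincmatchtemp\], verifies that the virtual temperature equals $T_C$, and evaluates the two heat contributions together with the associated free energy cost.

```python
import numpy as np

# Illustrative parameters (assumptions); k_B = 1.
E, T_R, T_C, T_H = 1.0, 1.0, 0.8, 5.0

g = lambda gap, T: 1.0 / (1.0 + np.exp(-gap / T))    # thermal ground population

# Match E_inc,max so that the two-qubit incoherent machine reaches T_C
E_inc_max = E * (1 / T_C - 1 / T_H) / (1 / T_R - 1 / T_H)
E_B, E_C = E_inc_max, E_inc_max - E
assert np.isclose(E / (E_B / T_R - E_C / T_H), T_C)  # virtual temperature = T_C

r, r_max = g(E, T_R), g(E, T_C)                      # initial and final target populations
r_C, r_CH = g(E_C, T_R), g(E_C, T_H)                 # qubit C before/after preheating

Q_init = E_C * (r_C - r_CH)        # heat to preheat qubit C from T_R to T_H
Q_maint = (r_max - r) * E_C        # heat to keep C at T_H over the repetitions
W_inc = (Q_init + Q_maint) * (1 - T_R / T_H)   # free energy drawn from the hot bath
print(W_inc)
```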
In a similar manner as in the coherent case, one can decrease the work cost by using a machine made out of a sequence of $N$ two-qubit systems, with linearly increasing energies given by \[eq:incseqenergies\] $$\begin{aligned} E_{B,i} &= E + \frac{i}{N} \left( E_{inc,max} - E \right) \\ E_{C,i} &= E_{B,i} - E = \frac{i}{N} \left( E_{inc,max} - E \right).\end{aligned}$$ For each of the two-qubit systems, one runs the same protocol as before, and thus the temperature attained after the $i^{th}$ step is given by $$\begin{aligned} \frac{E}{T_{inc,i}} &= \frac{E_{B,i}}{T_R} - \frac{E_{C,i}}{T_H} \\ &= \frac{E}{T_R} + \left( \frac{1}{T_R} - \frac{1}{T_H} \right) \frac{i}{N} \left( E_{inc,max} - E \right) \\ &= \frac{E}{T_R} + \frac{i}{N} E \left( \frac{1}{T_C} - \frac{1}{T_R} \right),\end{aligned}$$ by using Eq. \[eq:cohincmatchtemp\] for $E_{inc,max}$. Thus $T_{inc,i} = T_i$ from the coherent machine (see Eq. \[eq:cohseqtemp\]), and we keep the notation $T_i$. This implies that the ground state population of the target after the $i^{th}$ step is also the same as in the coherent machine, and we keep the notation $r_i$. The heat drawn in each step is again comprised of the two contributions of pre-heating and maintenance of the $i^{th}$ qubit $C$, $$\begin{aligned} Q_i &= Q_{init,i} + \left( r_i - r_{i-1} \right) E_{C,i} \\ &= Q_{init,i} + \left( r_i - r_{i-1} \right) \frac{i}{N} \left( E_{inc,max} - E \right),\end{aligned}$$ where we label the initial heat drawn for pre-heating as $Q_{init,i}$. Simplifying the rest of the expression using Eq. \[eq:cohincmatchtemp\], and summing up for the total heat, $$\begin{aligned} Q &= \sum_{i=1}^N Q_{init,i} + E \sum_{i=1}^N \left( r_i - r_{i-1} \right) \frac{i}{N} \left( \frac{ \frac{1}{T_C} - \frac{1}{T_R} }{ \frac{1}{T_R} - \frac{1}{T_H} } \right) \\ &= Q_{init} + \frac{W_{coh}}{ 1 - \frac{T_R}{T_H} },\end{aligned}$$ using Eq. \[eq:workcostcohseq\] for the total work cost in the coherent case. We have denoted by $Q_{init}$ the total cost of pre-heating each of the qubits $\{C,i\}$. The above is the heat drawn from the hot bath. To compare with the coherent case we take the work cost instead, which is the decrease in free energy of the hot bath, $$\begin{aligned} W = \Delta F_H = Q - T_R \Delta S = Q \left( 1 - \frac{T_R}{T_H} \right).\end{aligned}$$ Thus the work cost in the incoherent case is $$\begin{aligned} W_{inc} &= Q_{init} \left( 1 - \frac{T_R}{T_H} \right) + W_{coh}.\end{aligned}$$ Thus the incoherent cost is very similar to the coherent cost, with the sole addition of the cost of bringing the additional qubits from $T_R$ to $T_H$. At first glance, this may appear to be a finite disadvantage, however it is possible to make this additional cost as small as possible, as we now demonstrate. The $N$ qubits $\{C,i\}$ need not be real qubits, but virtual ones. Consider, for instance, a system with Hamiltonian $$\begin{aligned} H_C &= - E_g {\vert{E_g}\rangle}{\langle{E_g}\vert} + \sum_{i=0}^N \frac{i}{N} (E_{inc,max} - E)\,{\vert{i}\rangle}{\langle{i}\vert},\end{aligned}$$ which is an evenly spaced ladder of $N+1$ levels plus a single ground state that lies at an energy $E_g$ below the bottom of the ladder. Labelling the levels by $\{g,0,1,2,...,N\}$, we observe that $E_i - E_0 = E_{C,i}$, and thus the pair of levels $0$ and $i$ may be employed as the $i^{th}$ virtual qubit $C$ in the incoherent machine. However, for any fixed $N$, $T_H$ and $E_{inc,max}$, the cost of preheating this system can be made as small as we like, by pushing the ground state energy further downward.
For high enough values of $E_g$, the population in the ladder will be small enough at both $T_H$ and $T_R$ that the difference in average energy between the two thermal states is vanishingly small. Note that this implies that the machine will run slower (in the autonomous case) or require many more repeated operations in the discrete case. However, in principle the final temperature attained is still the same, and thus the incoherent machine can achieve a work cost as close as one likes to that of the coherent case. Together with the fact that the coherent machine can get arbitrarily close to saturating the second law, this shows that, with machines of arbitrary size, both coherent and incoherent operations can approach the limit set by the second law of thermodynamics.
--- abstract: 'We consider the adiabatic evolution of the Dirac equation in order to compute its Berry curvature in momentum space. It is found that the position operator acquires an anomalous contribution due to the non Abelian Berry gauge connection making the quantum mechanical algebra noncommutative. A generalization to any known spinning particles is possible by using the Bargmann-Wigner equation of motions. The non-commutativity of the coordinates is responsible for the topological spin transport of spinning particles similarly to the spin Hall effect in spintronic physics or the Magnus effect in optics. As an application we predict new dynamics for non-relativistic particles in an electric field and for photons in a gravitational field.' author: - Alain Bérard and Hervé Mohrbach title: Non Abelian Berry Phase in Noncommutative Quantum Mechanics --- Introduction ============ Recently, Quantum Mechanics involving non-commutative space time coordinates has led to numerous works. The assumption that the coordinate operators do not commute was originally introduced by Snyder [@SNYDER] as a short distance regularization to resolve the problem of infinite energies in Quantum Field Theory. This idea became popular when Connes [@CONNES] analyzed Yang Mills theories on non-commutative space. More recently a correspondence between a non-commutative gauge theory and a conventional gauge theory was introduced by Seiberg and Witten [@SEIBERG-WITTEN]. Non-commutative gauge theories are also naturally related to string and M-theory [@KONECHNY] and to Galilean symmetry in the plane [@HORVATHY1]. Applications of non-commutative theories were also found in condensed matter physics, for instance in the Quantum Hall effect [@BELLISSARD] and the non-commutative Landau problem [@GAMBOA]. Recently, it was found that a non-commutative geometry also underlies the semiclassical dynamics of electrons in semi-conductors [@MURAKAMI]. In this case, the non-commutativity property of the coordinates originates from the presence of a Berry phase which by changing drastically the equations of motion, induces a purely topological and dissipationless spin current. Other equations of motion including a contribution of a Berry phase were also recently found for the propagation of Bloch electrons [@NIU]. In this paper we show that a non-commutative geometry underlies the algebraic structure of all known spinning particles. In the Foldy-Wouthuysen representation of the Dirac equation, the position operator acquires a spin-orbit contribution which turns out to be a gauge potential (Berry connection). It is important to mention that anomalous contributions to the position operator were already found some time ago in different contexts, for instance in the Bloch representation of electrons in a periodic potential [@LANDAU] and for electrons in narrow-gap semi-conductors (where the spin-orbit term is called a Yafet term [@YAFET]). The common feature in all these cases is that an anomalous contribution to the position operator stems from the representation where the kinetic energy is diagonal (FW or Bloch representation). When interband transitions (adiabatic motion) are neglected the algebraic structure of the coordinates becomes non-commutative. Then, after having determined the new position operator for spinning particles we propose to explore its consequences at the level of semi-classical equations of motion in several physical situations. 
In particular, our approach provides a new interpretation of the Magnus effect, which was observed experimentally in optics. But first of all, we recall some previous results that we derived in the framework of non-commutative quantum mechanics from symmetry arguments only [@NOUS].

Monopole in momentum space
==========================

In non-commutative quantum mechanics an antisymmetric parameter $\theta ^{ij}$, usually taken to be constant [@DERIGLAZOV], is introduced in the commutation relation of the coordinates in the space manifold $$\left[ x^{i},x^{j}\right] =i\hbar \theta ^{ij}.$$ In a recent paper [@NOUS] we generalized quantum mechanics in noncommutative geometry by considering a quantum particle whose coordinates satisfy the deformed Heisenberg algebra $$\left[ x^{i},x^{j}\right] =i\hbar \theta ^{ij}(\mathbf{x},\mathbf{p}),$$ $$\lbrack x^{i},p^{j}]=i\hbar \delta ^{ij},\text{ and }[p^{i},p^{j}]=0.$$ From the Jacobi identity $$\lbrack p^{i},\left[ x^{j},x^{k}\right] ]+[x^{j},[x^{k},p^{i}]]+[x^{k},[p^{i},x^{j}]]=0,$$ we deduced the important property that the $\theta$ field is only momentum dependent. An important consequence of the non-commutativity between the coordinates is that the position operator no longer satisfies the usual law $[x^{i},L^{j}]=i\hbar \varepsilon ^{ijk}x_{k}$, nor do the angular momenta satisfy the standard $so(3)$ algebra $[L^{i},L^{j}]=i\hbar \varepsilon ^{ijk}L_{k}$. In fact we have $$\lbrack x^{i},L^{j}]=i\hbar \varepsilon ^{ijk}x_{k}+i\hbar \varepsilon ^{j}{}_{kl}p^{l}\theta ^{ik}(\mathbf{p}),$$ and $$\lbrack L^{i},L^{j}]=i\hbar \varepsilon ^{ij}{}_{k}L^{k}+i\hbar \varepsilon ^{i}{}_{kl}\varepsilon ^{j}{}_{mn}p^{l}p^{n}\theta ^{km}(\mathbf{p}).$$ To remedy this absence of generators of rotation in the noncommutative geometry we had to introduce a generalized angular momentum $$\mathbf{J}=\mathbf{r}\wedge \mathbf{p}+\lambda \frac{\mathbf{p}}{p},$$ that satisfies the $so(3)$ algebra. The position operator then transforms as a vector under rotations, i.e., $[x^{i},J^{j}]=i\hbar \varepsilon ^{ijk}x_{k}$. The presence of the dual Poincaré momentum $\lambda \mathbf{p}/p$ leads to a dual Dirac monopole in momentum space for the position algebra $$\left[ x^{i},x^{j}\right] =-i\hbar \lambda \varepsilon ^{ijk}\frac{p^{k}}{p^{3}}. \label{xxnc}$$ This result immediately implies that the coordinates of spinless particles are commuting. Another consequence is the quantization of the helicity, $\lambda =n\hbar /2$, which arises from the restoration of the translation group in momentum space that is broken by the monopole [@NOUS][@JACKIW]. Note also that other recent theoretical works concerning the anomalous Hall effect in two-dimensional ferromagnets predicted a topological singularity in the Brillouin zone [@ONODA]. In addition, in recent experiments a monopole in the crystal momentum space was discovered and interpreted in terms of an Abelian Berry curvature [@FANG]. In quantum mechanics this construction may look formal because it is always possible to introduce commuting coordinates with the transformation $\mathbf{R}=\mathbf{r}-\mathbf{p}\wedge \mathbf{S}/p^{2}$. The angular momentum is then $\mathbf{J}=\mathbf{R}\wedge \mathbf{p}+\mathbf{S}$, which satisfies the usual $so(3)$ algebra, whereas the potential energy term in the Hamiltonian now contains spin-orbit interactions $V(\mathbf{R}+\mathbf{p}\wedge \mathbf{S}/p^{2})$.
In fact, the inverse procedure is usually more efficient: considering an Hamiltonian with a particular spin-orbit interaction one can try to obtain a trivial Hamiltonian with a dynamics due to the noncommutative coordinates algebra. This procedure has been applied with success to the study of adiabatic transport in semiconductor with spin-orbit couplings [@MURAKAMI] where the particular dynamics of charges is governed by the commutation relation (\[xxnc\]). The important point is to determine which one of the two position operators $\mathbf{r}$ or $\mathbf{R% }$ gives rise to the real mean trajectory of the particle. In fact it is well known that $\mathbf{R}$ does not have the genuine property of a position operator for a relativistic particle. As we shall see this crucial remark implies a new prediction concerning the non-relativistic limit of a Dirac particle. In particle physics it is by now well known that the non-commutativity of the coordinates of massless particles is a fundamental property because the position operator does not transform like a vector unless it satisfies equation (\[xxnc\]) and that $\theta ^{ij}(% \mathbf{p})$ is the Berry curvature for a massless particle with a given helicity $\lambda $ [@SKAGERSTAM]. In this letter we present another point of view of the origin of the monopole in high energy and condensed matter physics by considering the adiabatic evolution of relativistic massive spinning particles. In particular the computation of the Berry curvature of Dirac particles gives rise to a noncommutative position operator that was already postulated by Bacry [@BACRY] some time ago. A generalization to any spin is possible via the Bargmann-Wigner [@BARGMANN] equations of motion. By doing that construction, we are brought to make a generalization of noncommutative algebra by considering a $\theta $ field which is momentum as well as spin dependent. The associated connection is then non Abelian but becomes Abelian in the limit of vanishing mass leading to a monopole configuration for the Berry curvature. In this respect our approach is different from [@SKAGERSTAM] because the description of the photons is obtained by taking the zero mass limit of the massive representation of a spin one particle. The Foldy-Wouthuysen representation =================================== The Dirac’s Hamiltonian for a relativistic particle of mass $m$ has the form $$\hat{H}=\mathbf{\alpha .p}+\beta m+\hat{V}\left( \mathbf{R}\right) ,$$ where $\hat{V}$ is an operator that acts only on the orbital degrees of freedom. 
Using the Foldy-Wouthuysen unitary transformation $$U(\mathbf{p})=\frac{E_{p}+mc^{2}+c\beta \mathbf{\alpha .p}}{\sqrt{% 2E_{p}\left( E_{p}+mc^{2}\right) }},$$ with $E_{p}=\sqrt{p^{2}c^{2}+m^{2}c^{4}}$, we obtain the following transformed Hamiltonian $$U(\mathbf{p})\hat{H}U(\mathbf{p})^{+}=E_{p}\beta +U(\mathbf{p})\hat{V}% (i\hbar \partial _{\mathbf{p}})U(\mathbf{p})^{+}.$$ The kinetic energy is now diagonal whereas the potential term becomes $\hat{V% }(\mathbf{D})$ with the covariant derivative defined by $\mathbf{\ D=}i\hbar \partial _{\mathbf{p}}+\mathbf{A}$, and with the gauge potential $\mathbf{A}% =i\hbar U(\mathbf{p})\partial _{\mathbf{p}}U(\mathbf{p})^{+}$, which reads $$\mathbf{A}=\frac{\hbar c\left( ic^{2}\mathbf{p}(\mathbf{\alpha .p})\beta +i\beta \left( E_{p}+mc^{2}\right) E_{p}\mathbf{\alpha -}cE_{p}\mathbf{\ \Sigma \wedge p}\right) }{2E_{p}^{2}\left( E_{p}+mc^{2}\right) },$$ where $\mathbf{\Sigma }=1\otimes \mathbf{\sigma }$, is a $\left( 4\times 4\right) $ matrix. We consider the adiabatic approximation by identifying the momentum degree of freedom as slow and the spin degree of freedom as fast, similarly to the nuclear configuration in adiabatic treatment of molecular problems, which allows us to neglect the interband transition. We then keep only the block diagonal matrix element in the gauge potential and project on the subspace of positive energy. This projection cancels the zitterbewegung which corresponds to an oscillatory motion around the mean position of the particle that mixes the positive and negative energies. In this way we obtain a non trivial gauge connection allowing us to define a new position operator $\mathbf{r}$ for this particle $$\mathbf{r=}i\hslash \partial _{\mathbf{p}}+\frac{c^{2}\hslash \left( \mathbf{% \ p}\wedge \mathbf{\sigma }\right) }{2E_{p}\left( E_{p}+mc^{2}\right) }\text{% \textbf{\ ,}} \label{r}$$ which is a $\left( 2\times 2\right) $ matrix. The position operator (\[r\]) is not new, as it was postulated by H. Bacry [@BACRY]. By considering the irreducible representation of the Poincare group, this author proposed to adopt a general position operator for free massive or massless particles with any spin. In our approach which is easily generalizable to any known spin (see formula (\[rs\])) the anomalous part of the position operator arises from an adiabatic process of an interacting system and as we will now see is related to the Berry connection. For a different work with operator valued position connected to the spin-degree of freedom see [@GHOSH2]. Zitterbewegung-free noncommutative coordinates were also introduced for massless particles with rigidity and in the context of anyons [@PLYUSHCHAY]. It is straightforward to prove that the anomalous part of the position operator can be interpreted as a Berry connection in momentum space which, by definition is the $(4\times 4)$ matrix $$\mathbf{A}_{\alpha \beta }(\mathbf{p})=i\hbar <\alpha \mathbf{p}+\mid \partial _{\mathbf{p}}\mid \beta \mathbf{p}+>$$ where $\mid \alpha $ $\mathbf{p}+>$ is an eigenvector of the free Dirac equation of positive energy. The Berry connection can also be written as $$\mathbf{A}_{\alpha \beta }(\mathbf{p})=i\hbar <\phi _{\alpha }\mid U\partial _{\mathbf{p}}U^{+}\mid \phi _{\beta }>,$$ in terms of the canonical base vectors $\mid \phi _{\alpha }>=\left( \begin{array}{llll} 1 & 0 & 0 & 0 \end{array} \right) $ and $\mid \phi _{\beta }>=\left( \begin{array}{llll} 0 & 1 & 0 & 0 \end{array} \right) $. 
With the non zero element belonging only to the positive subspace, we can define the Berry connection by considering a $% 2\times 2$ matrix $$\mathbf{\ A}(\mathbf{p})=i\hbar \mathcal{P}(U\partial _{\mathbf{p}}U^{+}),$$ where $\mathcal{P}$ is a projector on the positive energy subspace. In this context the $\theta $ field we postulated in [@NOUS] emerges naturally as a consequence of the adiabatic motion of a Dirac particle and corresponds to a non-Abelian gauge curvature satisfying the relation $$\theta ^{ij}(\mathbf{p,\sigma })=\partial _{p^{i}}A^{j}-\partial _{p^{j}}A^{i}+\left[ A^{i},A^{j}\right] .$$ The commutation relations between the coordinates are then $$\left[ x^{i},x^{j}\right] =i\hslash \theta ^{ij}(\mathbf{p},\mathbf{\sigma }% )=-i\hbar ^{2}\varepsilon _{ijk}\frac{c^{4}}{2E_{p}^{3}}\left( m\sigma ^{k}+% \frac{p^{k}(\mathbf{p}.\mathbf{\sigma )}}{E_{p}+mc^{2}}\right). \label{nc}$$ This relation has very important consequences as it implies the nonlocalizability of the spinning particles. This is an intrinsic property and is not related to the creation of a pair during the measurement process (for a detailed discussion of this important point see [@BACRY]) To generalize the construction of the position operator for a particle with unspecified $n/2$ $(n>1)$ spin, we start with the Bargmann-Wigner equations $$(\gamma _{\mu }^{(i)}\partial _{\mu }+m+\hat{V})\psi _{(a_{1}...a_{n})}=0\ \ \ \ \ \ \ \ \ (i=1,2...n),$$ where $\psi _{(a_{1}...a_{n})}$ is a Bargmann-Wigner amplitude and $\gamma ^{(i)}$ are matrices acting on $a_{i}$. For each equation we have a Hamiltonian $$\hat{H}^{(i)}=\mathbf{\alpha }^{(i)}\mathbf{.p}+\beta m+\hat{V},$$ then $$(\prod\limits_{j=1}^{n}U^{(j)}(\mathbf{p}))\hat{H^{(i)}}(\prod% \limits_{j=1}^{n}U^{(j)}(\mathbf{p})^{+})=E_{p}\beta ^{(i)}+\hat{V}(\mathbf{D% }),$$ with $\mathbf{D=}i\hbar \partial _{\mathbf{p}}+\sum\limits_{i=1}^{n}\mathbf{A% }^{(i)},$ and $\mathbf{A}^{(i)}=i\hbar U^{(i)}(\mathbf{p})\partial _{\mathbf{% p}}U^{(i)}(\mathbf{p})^{+}$. Again by considering the adiabatic approximation we deduce a general position operator $\mathbf{r}$ for spinning particles $$\mathbf{r=}i\hslash \partial _{\mathbf{p}}+\frac{c^{2}\left( \mathbf{p}% \wedge \mathbf{S}\right) }{E_{p}\left( E_{p}+mc^{2}\right) }\mathbf{,} \label{rs}$$ with $\mathbf{S=}\hslash \left( \mathbf{\sigma }^{(1)}+...+\mathbf{\sigma }% ^{(n)}\right) /2$. The generalization of (\[nc\]) is then $$\left[ x^{i},x^{j}\right] =i\hslash \theta ^{ij}(\mathbf{p},\mathbf{S}% )=-i\hbar \varepsilon _{ijk}\frac{c^{4}}{E_{p}^{3}}\left( mS^{k}+\frac{p^{k}(% \mathbf{p}.\mathbf{S)}}{E_{p}+mc^{2}}\right) .$$ For a massless particle we recover the relation $\mathbf{r=}i\hslash \partial _{\mathbf{p}}+\mathbf{p}\wedge \mathbf{S/}p^{2}$,  with the commutation relation giving rise to the monopole $\left[ x^{i},x^{j}\right] =i\hslash \theta ^{ij}(\mathbf{p})=-i\hbar \varepsilon _{ijk}\lambda \frac{% p^{k}}{p^{3}}$. The monopole in momentum introduced in [@NOUS] in order to construct genuine angular momenta has then a very simple physical interpretation. It corresponds to the Berry curvature resulting from an adiabatic process of a massless particle with helicity $\lambda $. For $% \lambda =\pm 1$ we have the position operator of the photon, whose non-commutativity property agrees with the weak localizability of the photon which is certainly an experimental fact. 
It is not surprising that a massless particle has a monopole Berry curvature as it is well known that the band touching point acts as a monopole in momentum space [@BERRY]. This is precisely the case for massless particles for which the positive and negative energy band are degenerate in $p=0$. In our approach, the monopole appears as a limiting case of a more general Non Abelian Berry curvature arising from an adiabatic process of massive spinning particles. The spin-orbit coupling term in (\[rs\]) is a very small correction to the usual operator in the particle physics context but it may be strongly enhanced and observable in solid state physics because of the spin-orbit effect being more pronounced than in the vacuum. For instance in narrow gap semiconductors the equations of the band theory are similar to the Dirac equation with the forbidden gap $E_{G}$ between the valence and conduction bands instead of the Dirac gap $2mc^{2}$ [@RASHBA2].The monopole in momentum space predicted and observed in semiconductors results from the limit of vanishing gap $E_{G}\rightarrow 0$ between the valence and conduction bands. It is also interesting to consider the symmetry properties of the position operator with respect to the group of spatial rotations. In terms of commuting coordinates $\mathbf{R}$ the angular momentum is by definition $% \mathbf{J}=\mathbf{R\wedge p}+\mathbf{S}$, whereas in terms of the noncommuting coordinates the angular momentum reads $\mathbf{J}=\mathbf{\ r\wedge p}+\mathbf{M,}$ where $$\mathbf{M}=\mathbf{S-A\wedge p.} \label{m}$$ One can explicitly check that in terms of the non commuting coordinates the relation $[x^{i},J^{j}]=i\hbar \varepsilon ^{ijk}x_{k}$ is satisfied, so $% \mathbf{r}$ like $\mathbf{R}$ transforms as a vector under space rotations, but $d\mathbf{R}/dt=c\mathbf{\alpha }$ which is physically unacceptable. For a massless particle (\[m\]) leads to the Poincaré momentum associated to the monopole in momentum space deduced in [@NOUS]. Dynamical equations of motion ============================= Let us now look at some physical consequences of the non-commuting position operator on the dynamics of a quantum particle in an arbitrary potential. Due to the Berry phase in the definition of the position the equation of motion should be changed. But to compute commutators like $\left[ x^{k},V(x)% \right] $ one resorts to the semiclassical approximation $\left[ x^{k},V(x)% \right] =i\hbar \partial _{l}V(x)\theta ^{kl}+O(\hbar ^{2})$ leading to new equations of motion $$\stackrel{.}{\mathbf{r}}=\frac{\mathbf{p}}{E_{p}}-\stackrel{.}{\mathbf{p}}% \mathbf{\wedge \theta }\text{, \qquad and \qquad }\stackrel{.}{\mathbf{p}}=-% \mathbf{\nabla }V\mathbf{(r)} \label{rp}$$ with $\theta ^{i}=\varepsilon ^{ijk}\theta _{jk}/2$. While the equation for the momentum is as usual, the one for the velocity acquires a topological contribution due to the Berry phase. The latter is responsible for the relativistic topological spin transport as in the context of semi-conductors where similar non-relativistic equations [@MURAKAMI] lead to the spin Hall effect [@HIRSCH]. Applications ============ Non-relativistic Dirac particle in an electric potential -------------------------------------------------------- As a particular application, consider the nonrelativistic limit of a charged spinning Dirac particle in an electric potential $\hat{V}(\mathbf{r})$. 
In the NR limit the Hamiltonian reads $$\widetilde{H}(\mathbf{R,p})\approx mc^{2}+\frac{p^{2}}{2m}+\hat{V}(\mathbf{R}% )+\frac{e\hbar }{4m^{2}c^{2}}\mathbf{\sigma .}\left( \mathbf{\nabla \hat{V}(% \mathbf{r})\wedge p}\right) , \label{H}$$ which is a Pauli Hamiltonian with a spin-orbit term. As shown in [@MATHUR], the nonrelativistic Berry phase $\theta ^{ij}=-\varepsilon _{ijk}\sigma ^{k}/2mc^{2}$ results also from the Born-Oppenheimer approximation of the Dirac equation which leads to the same non-relativistic Hamiltonian. In the same paper, it was also proved that the adiabaticity condition is satisfied for slowly varying potential such that $L>>\tilde{% \lambda}$, where $L$ is the length scale over which $\hat{V}(\mathbf{r})$ varies and $\tilde{\lambda}$ is the de Broglie wave length of the particle. From Hamiltonian (\[H\]) we deduce the dynamics of the Galilean Schrödinger position operator $\mathbf{R}$ $$\frac{dX^{i}}{dt}=\frac{p^{i}}{m}+\frac{e\hslash }{4m^{2}c^{2}}\varepsilon ^{ijk}\sigma _{j}\partial _{k}\hat{V}(\mathbf{r}), \label{xnr1}$$ whereas the non relativistic limit of $\left( \ref{rp}\right) $ leads to the following velocity $$\frac{dx^{i}}{dt}=\frac{p^{i}}{m}+\frac{e\hslash }{2m^{2}c^{2}}\varepsilon ^{ijk}\sigma _{j}\partial _{k}\hat{V}(\mathbf{r}). \label{xnr2}$$ We then predict an enhancement of the spin-orbit coupling when the new position operator is considered. One can appreciate the similarity between this result and the Thomas precession as it offers another manifestation of the difference between the Galilean limit $\left( \ref{xnr1}\right) $ and the non-relativistic limit $\left( \ref{xnr2}\right) $. Rashba coupling --------------- Another interesting non relativistic situation concerns a parabolic quantum well with an asymmetric confining potential $V(z)=m\omega^2z^2/2$ in a normal electric field $E_{z}$ producing the structure inversion asymmetry. By considering again the NR limit of the position operator (\[rs\]), we get a spin orbit coupling of the form $\frac{\hbar }{4m^{2}c^{2}}% (eE_{z}+m\omega _{0}^{2}Z)\left( p_{x}\sigma_{y}-p_{y}\sigma_{x}\right) +O\left( 1/m^{3}\right) $, which for strong confinement in the $(x,y)$ plane is similar to the Rashba spin-orbit coupling well known in semi-conductor spintronics [@RASHBA]. This effect is very small for non-relativistic momenta, but as already said, it is greatly enhanced in semiconductors by a factor of about $mc^2/E_G$. Ultrarelativistic particle in an electric field ----------------------------------------------- Another example of topological spin transport that we consider now arises in the ultrarelativistic limit. In this limiting case $E_{p}\approx pc$ and the equations of motion of the spinning particle in a constant electric field are $$\frac{dx^{i}}{dt}=\frac{cp^{i}}{p}+\lambda e\varepsilon ^{ijk}\frac{p^{j}}{% p^{3}}E_{k}.$$ Taking the electric field in the $z$ direction and as initial conditions $% p_{1}(0)=p_{3}(0)=0$ and $p_{2}(0)=p_{0}>>mc^{2}$, we obtain the coordinates in the Heisenberg representation $$x\left( t\right) =\frac{\lambda }{p_{0}}\frac{eEt}{\left( p_{0}^{2}+e^{2}E^{2}t^{2}\right) ^{1/2}},$$ $$y(t)=\frac{p_{0}c}{eE}\arg \sinh \left( \frac{eEt}{p_{0}}\right) ,$$ $$z(t)=\frac{c}{eE}\left[ \left( p_{0}^{2}+e^{2}E^{2}t^{2}\right) ^{1/2}-p_{0}% \right] .$$ We observe an unusual displacement in the $x$ direction perpendicular to the electric field which depends on the value of the helicity. 
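For the record, the first of these coordinates is simply the integral of the anomalous part of the velocity along $\mathbf{p}(t)=(0,p_{0},eEt)$: $$x(t)=\lambda ep_{0}E\int_{0}^{t}\frac{ds}{\left( p_{0}^{2}+e^{2}E^{2}s^{2}\right) ^{3/2}}=\frac{\lambda }{p_{0}}\frac{eEt}{\left( p_{0}^{2}+e^{2}E^{2}t^{2}\right) ^{1/2}},$$ so that the transverse shift saturates at $\left| \Delta x\right| =\left| \lambda \right| /p_{0}$ once $eEt\gg p_{0}$.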
This topological spin transport can be considered as a relativistic generalization of the spin Hall effect discussed in [@MURAKAMI]. At large time the shift is of the order of the particle wave length $\left| \Delta x\right| \sim \tilde{% \lambda}$ which in ultrarelativistic limit is of the order of the Compton wave length. This very small effect is obviously very difficult to observe in particular due to the creation of pair particles during the measurement process itself. Actually this effect has been already observed however only in the case of photons propagating in an inhomogeneous medium. Spin Hall effect of light ------------------------- Experimentally what we call a topological spin transport has been first observed in the case of the photon propagation in an inhomogeneous medium [@ZELDOVICH], where the right and left circular polarization propagate along different trajectories in a wave guide (the transverse shift is observable due to the multiple reflections), a phenomena interpreted quantum mechanically as arising from the interaction between the orbital momentum and the spin of the photon [@ZELDOVICH]. To interpret the experiments these authors introduced a complicated phenomenological Hamiltonian leading to generalized geometrical optic equation. Our approach provides a new satisfactory interpretation as this effect, also called optical Magnus effect, is now interpreted in terms of the non-commuting property of the position operator containing the Berry phase. Note that the adiabaticity conditions in this case are given in [@chao]. To illustrate our purpose consider the Hamiltonian of a photon in an inhomogeneous medium $H=pc/n(r)$. The equations of motion $\stackrel{\cdot }{x^{i}}=\frac{1}{i\hbar }\left[ x^{i},H\right] $ and $\stackrel{\cdot }{p^{i}}=\frac{1}{i\hbar }\left[ p^{i},H\right] $ in the semi-classical approximation leads to following relations between velocities and momenta $$\frac{dx^{i}}{dt}=\frac{c}{n}\left( \frac{p^{i}}{p}+\frac{\lambda \varepsilon ^{ijk}p_{k}}{p^{2}}\frac{\partial \ln n}{\partial x^{j}}\right) \label{nopt}$$ which are similar to those introduced phenomenologically in [@ZELDOVICH]. However, here they are deduced rigorously from different physical considerations. We readily observe that the Berry phase gives rise to an ”ultra-relativistic spin-Hall effect” which in turn implies that the velocity is no more equal to $c/n$. Note that similar equations are also given in [@BLIOKH] where the optical Magnus effect is also interpreted in terms of a monopole Berry curvature but in the context of geometric optics. Photon in a static gravitational field -------------------------------------- Our theory is easily generalizable to the photon propagation in an anisotropic medium, a situation which is simply mentioned in [@ZELDOVICH] but could not be studied with their phenomenological approach. 
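The computation below for the anisotropic case is the same one-line semiclassical evaluation that produced (\[nopt\]): with $H=pc/n(\mathbf{r})$, $\left[ x^{i},p^{j}\right] =i\hbar \delta ^{ij}$, $\left[ x^{i},n^{-1}(\mathbf{r})\right] \simeq i\hbar \partial _{j}(n^{-1})\theta ^{ij}$ and $\theta ^{ij}=-\lambda \varepsilon _{ijk}p^{k}/p^{3}$, one finds $$\frac{dx^{i}}{dt}=\frac{1}{i\hbar }\left[ x^{i},H\right] =\frac{c}{n}\frac{p^{i}}{p}-\frac{pc}{n^{2}}\,\partial _{j}n\,\theta ^{ij}=\frac{c}{n}\left( \frac{p^{i}}{p}+\frac{\lambda \varepsilon ^{ijk}p_{k}}{p^{2}}\frac{\partial \ln n}{\partial x^{j}}\right) ;$$ only the Hamiltonian changes in the gravitational case treated now.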
As a typical anisotropic medium consider the photon propagation in a static gravitational field whose metric $g^{ij}(x)$ is supposed to be time independent $\left( g^{0i}=0\right) $ and having a Hamiltonian $H=c\left( -\frac{% p_{i}g^{ij}(x)p_{j}}{g^{00}(x)}\right) ^{1/2}.$ In the semi-classical approximation the equations of motion are $$\frac{dp_{k}}{dt}=\frac{c^{2}p_{i}p_{j}}{2H}\partial _{k}\left( \frac{% g^{ij}(x)}{g^{00}(x)}\right) \label{pt}$$ and $$\frac{dx^{k}}{dt}=\frac{c\sqrt{g_{00}}g^{ki}p_{i}}{\sqrt{-g^{ij}p_{i}p_{j}}}+% \frac{dp_{l}}{dt}\theta ^{kl} \label{xt}$$ For a static gravitational field the velocity is then $$v^{i}=\frac{c}{\sqrt{g_{00}}}\frac{dx^{i}}{dx^{0}}=c\frac{g^{ij}p_{j}}{\sqrt{% -g^{ij}p_{i}p_{j}}}+\frac{1}{\sqrt{g_{00}}}\frac{dp_{l}}{dt}\theta ^{kl}$$ with $x^{0}=ct$. Equations (\[pt\]) and (\[xt\]) are our new equations for the semiclassical propagation of light which take into account the non-commutative nature of the position operator, i.e the spin-orbit coupling of the photon. The spinning nature of photon introduces a quantum Berry phase, which affects the propagation of light in a static background gravitational field at the semi-classical level. This new fundamental prediction will be studied in more detail in a future paper, but we can already observe that the Berry phase implies a speed of light different from the universal value $c$. This effect which is still very small could become important for a photon being propagated in the gravitational field of a black hole. This result goes in the same direction as recent works on the possibility of a variable speed of light [@magueijo] but here this variation has a physical origin. Conclusion ========== In summary, we looked at the adiabatic evolution of the Dirac equation in order to clarify the relation between monopole and Berry curvature in momentum space. It was found that the position operator acquires naturally an anomalous contribution due to a non Abelian Berry gauge connection making the quantum mechanical algebra non-commutative. Using the Bargmann-Wigner equation of motions we generalized our formalism to all known spinning particles. The non-commutativity of the coordinates is responsible for the topological spin transport of spinning particles similarly to the spin Hall effect in spintronic physics or the optical Magnus effect in optics. In particular we predict two new effects. One is an unusual spin-orbit contribution of a non-relativistic particle in an external field. The other one concerns the effect of the Berry phase on the propagation of light in a static background gravitational field. [99]{} H. Snyder, Phys. Rev.**71** (1947) 38; A. Connes, ”Noncommutative Geometry”, Academic press, San Diego (1994); N. Seiberg, E. Witten, JHEP. **09** (1999) 32; A. Konechny, A. Schwarz, Phys. Rep. **360** (2002) 353. C. Duval and P. A. Horvathy Phys. lett. B. **479** (2000) 284; J. Phys. A **34** (2001) 10097. J. Belissard, Lecture Notes in Physics **257** (1986) 99; J. Belissard, A. Van Elst and H. Schulz-Baldes, cond-mat/ 9301005. J. Gamboa et al., Mod. Phys. Lett. A **16** (2001) 2075; P. A. Horvathy, Ann. Phys. **299** (2002) 128. S. Murakami, N. Nagaosa, S.C. Zhang, Science **301** (2003) 1348. G. Sundaram, Q. Niu, Phys. Rev. B **59** (1999) 14915. E. M. Lifshitz and L. P. Pitaevskii, ”Statistical Physics”, Vol **9**, Pergamon Press (1981). Y. Yafet, ”Solid State Physics”, Vol **14**, Academic, New York, (1963) A. Bérard, H. Mohrbach, Phys. Rev. D **69** (2004) 127701; A. Bérard, Y. 
Grandati, H. Mohrbach, Phys. Lett. A **254** (1999) 133; A. Bérard, J. Lages, H. Mohrbach, Eur. Phys. J. C **35** (2004) 373. A. A. Deriglazov, JHEP **03** (2003) 021; Phys. Lett. B **555** (2003) 83; M. Sheikh-Jabbari, Nucl. Phys. B **611** (2001) 383; M. Chaichian, M. Sheikh-Jabbari and A. Tureanu, Phys. Rev. Lett. **86** (2001) 2716; S. Ghosh, Phys. Rev. D **66** (2002) 045031; J. Romero, J. Santiago and J. D. Vergara, Phys. Lett. A **310** (2003) 9. R. Jackiw, Phys. Rev. Lett. **54** (1985) 159 and hep-th/0212058. M. Onoda, N. Nagaosa, J. Phys. Soc. Jpn. **71** (2002) 19. Z. Fang et al., Science **302** (2003) 92. B.S. Skagerstam, hep-th/9210054. H. Bacry, ”Localizability and Space in Quantum Physics”, Lecture Notes in Physics, Vol **308**, Heidelberg, Springer-Verlag, (1988). V. Bargmann, E.P. Wigner, Proc. Nat. Acad. Sci. **34** (1948) 211. S. Ghosh, Phys. Lett. B **571** (2003) 97. M.S. Plyushchay, Mod. Phys. Lett. A **4** (1989) 837; Phys. Lett. B **243** (1990) 383; J.L. Cortes, M.S. Plyushchay, Int. J. Mod. Phys. A **11** (1996) 3331; P.A. Horvathy, M.S. Plyushchay, JHEP **0206** (2002) 033; hep-th/0404137. M.V. Berry, Proc. R. Soc. London A **392** (1984) 45. E.I. Rashba, Physica E **20** (2004) 189. J.E. Hirsch, Phys. Rev. Lett. **83** (1999) 1834. H. Mathur, Phys. Rev. Lett. **67** (1991) 3325. Yu. Bychkov and E.I. Rashba, JETP Lett. **39** (1984) 78. A.V. Dooghin et al., Phys. Rev. A **45** (1992) 8204; V.S. Liberman, B.Y. Zeldovich, Phys. Rev. A **46** (1992) 5199. R.Y. Chiao, Y. Wu, Phys. Rev. Lett. **57** (1986) 933. K.Y. Bliokh, Y.P. Bliokh, Phys. Lett. A **333** (2004) 181; Phys. Rev. E **70** (2004) 026605. M. Onoda, S. Murakami, N. Nagaosa, Phys. Rev. Lett. **93** (2004) 083901. J. Magueijo, Rep. Prog. Phys. **66** (2003) 2025.
--- abstract: 'We discuss the existence and regularity of periodic traveling-wave solutions of a class of nonlocal equations with homogeneous symbol of order $-r$, where $r>1$. Based on the properties of the nonlocal convolution operator, we apply analytic bifurcation theory and show that a highest, peaked, periodic traveling-wave solution is reached as the limiting case at the end of the main bifurcation curve. The regularity of the highest wave is proved to be exactly Lipschitz. As an application of our analysis, we reformulate the steady reduced Ostrovsky equation in a nonlocal form in terms of a Fourier multiplier operator with symbol $m(k)=k^{-2}$. Thereby we recover its unique highest $2\pi$-periodic, peaked traveling-wave solution, having the property of being exactly Lipschitz at the crest.' address: - 'Institute for Analysis, Karlsruher Institute of Technology (KIT), D-76128 Karlsruhe, Germany' - 'Department for Mathematical Sciences, Norwegian University of Science and Technology (NTNU), NO-7491 Trondheim, Norway' author: - Gabriele Bruell - Raj Narayan Dhara bibliography: - 'BD\_Reduced\_Ostrovsky.bib' title: Waves of maximal height for a class of nonlocal equations with homogeneous symbols --- Introduction ============= The present study is concerned with the existence and regularity of a highest, periodic traveling-wave solution of the nonlocal equation $$\label{eq:nonlocal} u_t + L_ru_x + uu_x=0,$$ where $L_r$ denotes the Fourier multiplier operator with symbol $m(k)=|k|^{-r}$, $r>1$. Equation is also known as the *fractional Korteweg–de Vries equation*. We are looking for $2\pi$-periodic traveling-wave solutions $u(t,x)=\phi(x-\mu t)$, where $\mu>0$ denotes the speed of the right-propagating wave. In this context equation reduces after integration to $$\label{eq:steady} -\mu \phi + L_r\phi + \frac{1}{2}\phi^2=B,$$ where $B\in {\mathbb{R}}$ is an integration constant. Since the symbol of $L_r$ is homogeneous, any bounded solution of the above equation has necessarily zero mean; in turn this implies that the integration constant $B$ is uniquely determined to be $$B=\frac{1}{4\pi}\int_{-\pi}^\pi\phi^2(x)\,dx.$$ The question about singular, highest waves was already raised by Stokes. In 1880 Stokes conjectured that the Euler equations admit a highest, periodic traveling-wave having a corner singularity at each crest with an interior angle of exactly $120^\circ$. About 100 years later (in 1982) Stokes’ conjecture was answered in the affirmative by Amick, Fraenkel, and Toland [@AFT]. Subject of a recent investigation by Ehrnström and Wahlén [@EW] is the existence and precise regularity of a highest, periodic traveling-wave solution for the Whitham equation; thereby proving Whitham’s conjecture on the existence of such a singular solution. The (unidirectional) Whitham equation is a genuinely nonlocal equation, which can be recovered from the well known Korteweg–de Vries equation by replacing its dispersion relation by one branch of the full Euler dispersion relation. The resulting equation takes (up to a scaling factor) the form of , where the symbol of the Fourier multiplier is given by $m(k)=\sqrt{\frac{\tanh(k)}{k}}$. In order to prove their result, Ehrnström and Wahlén developed a general approach based on the regularity and monotonicity properties of the convolution kernel induced by the Fourier multiplier. 
The highest, periodic traveling-wave solution for the Whitham equation is exactly $C^\frac{1}{2}$-Hölder continuous at its crests; thus exhibiting exactly half the regularity of the highest wave for the Euler equations. In a subsequent paper, Ehrnström, Johnson, and Claasen [@EJC] studied the existence and regularity of a highest wave for the bidirectional Whitham equation incorporating the full Euler dispersion relation leading to a nonlocal equation with cubic nonlinearity and a Fourier multiplier with symbol $m(k)=\frac{\tanh(k)}{k}$. The question addressed in [@EJC] is whether this equation gives rise to a highest, periodic, traveling wave, which is peaked (that is, whether it has a corner at each crest), such as the corresponding solution to the Euler equations? Overcoming the additional challenge of the cubic nonlinearity, the authors in [@EJC] follow a similar approach as implemented for the Whitham equation in [@EW] and prove that the highest wave has a singularity at its crest of the form $|x\log(|x|)|$; thereby still being a cusped wave. Concerning a different model equation arising in the context of shallow-water equations, Arnesen [@A] investigated the existence and regularity of a highest, periodic, traveling-wave solution for the Degasperis–Procesi equation. The Degasperis–Procesi equation is a local equation, but it can also be written in a nonlocal form with quadratic nonlinearity and a Fourier multiplier with symbol $m(k)=(1+k^2)^{-1}$, which is acting itself –in contrast to the previously mentioned equations– on a quadratic nonlinearity. For the Degasperis–Procesi and indeed for all equations in the so-called *b-family* (the famous Camassa–Holm equation being also such a member), explicit peaked, periodic, traveling-wave solutions are known [@CH; @DHK]. Using the nonlocal approach introduced originally for the Whitham equation in [@EW], the author of [@A] adapts the method to the nonlocal form of the Degasperis–Procesi equation and recovers not only the existence of a highest, peaked, periodic traveling wave, but also proves that any even, periodic, highest wave of the Degasperis–Procesi equation is exactly Lipschitz continuous at each crest; thereby excluding the existence of even, periodic, *cusped* traveling-wave solutions. Of our concern is the existence and regularity of highest, traveling waves for the fractional Korteweg–de Vries equation , where $r>1$. In the case when $r=2$, can be viewed as the nonlocal form of the *reduced Ostrovsky equation* $$(u_t+uu_x)_x=u.$$ For the reduced Ostrovsky equation, a highest, periodic, peaked traveling-wave solution is known explicitly [@Ostrovsky1978] and its regularity at each crest is exactly Lipschitz continuous. Recently, the existence and stability of smooth, periodic traveling-wave solutions for the reduced Ostrovsky equation, was investigated in [@GP; @HSS]. In [@GP2], the authors prove that the (unique) highest, $2\pi$-periodic traveling-wave solutions of the reduced Ostrovsky equation is linearly and nonlinearly unstable. We are going to investigate the existence and precise regularity of highest, periodic traveling-wave solutions of the entire family of equations $\eqref{eq:nonlocal}$ for Fourier multipliers $L_r$, where $r>1$. 
Based on the nonlocal approach introduced for the Whitham equation [@EW], we adapt the method in a way which is convenient to treat homogeneous symbols, and prove the existence and precise Lipschitz regularity of highest, periodic, traveling-wave solutions of corresponding to the symbol $m(k)=|k|^{-r}$, where $r>1$. The advantage of this nonlocal approach relies not only in the fact that it can be applied to various equations of local and nonlocal type, but in particular, that it is suitable to study entire families of equations simultaneously; thereby providing an insight into the interplay between a certain nonlinearity and varying order of linearity. The main novelty in our work relies upon implementing the approach used in [@EW; @EJC; @A] for equations exhibiting *homogeneous* symbols. For a homogeneous symbol, the associated convolution kernel can not be identified with a positive, decaying function on the real line. Instead we have to work with a periodic convolution kernel. The lack of positivity of the kernel can be compensated by working within the class of zero mean function, though. Moreover, we affirm that starting with a linear operator of order strictly smaller than $-1$ in equation a further decrease of order does not affect the regularity of the corresponding highest, periodic traveling-wave. Main result and outline of the paper ------------------------------------ Let us formulate our main theorem, which provides the existence of a global bifurcation branch of nontrivial, smooth, periodic and even traveling-wave solutions of equation , which reaches a limiting peaked, precisely Lipschitz continuous, solution at the end of the bifurcation curve. \[thm:main\] For each integer $k\geq 1$ there exists a wave speed $\mu^*_{k}>0$ and a global bifurcation branch $$s\mapsto (\phi_{k}(s),\mu_{k}(s)),\qquad s>0,$$ of nontrivial, $\frac{2\pi}{k}$-periodic, smooth, even solutions to the steady equation for $r>1$, emerging from the bifurcation point $(0,\mu^*_{k})$. Moreover, given any unbounded sequence $(s_n)_{n\in{\mathbb{N}}}$ of positive numbers $s_n$, there exists a subsequence of $(\phi_{k}(s_n))_{n\in {\mathbb{N}}}$, which converges uniformly to a limiting traveling-wave solution $(\bar \phi_{k},\bar\mu_{k})$ that solves and satisfies $$\bar \phi_{k}(0)=\bar \mu_{k}.$$ The limiting wave is strictly increasing on $(-\frac{\pi}{k},0)$ and exactly Lipschitz at $x\in \frac{2\pi}{k}{\mathbb{Z}}$. It is worth to notify that the regularity of peaked traveling-wave solutions is Lipschitz for *all* $r>1$. The reason mainly relies in the smoothing properties of the Fourier multiplier, which is of order strictly bigger than $1$, see Theorem \[thm:regularity\]. The outline of the paper is as follows: In Section \[S:Setting\] we introduce the functional-analytic setting, notations, and some general conventions. Properties of general Fourier multipliers with homogeneous symbol and a representation formula for the corresponding convolution kernel are discussed in Section \[S:Fourier\]. Section \[S:Properties\] is the heart of the present work, where we use the regularity and monotonicity properties of the convolution kernel to study a priori properties of bounded, traveling wave solutions of . In particular, we prove that an even, periodic traveling-wave solution $\phi$, which is monotone on a half period and whose maximum equals the wave speed, is precisely Lipschitz continuous. Eventually, in Section \[S:Global\] we investigate the global bifurcation result. 
By excluding certain alternatives for the bifurcation curve, we conclude the main theorem. In Section \[S:RO\] we apply our result to the reduced Ostrovsky equation, which can be reformulated as a nonlocal equation of the form with Fourier symbol $m(k)=k^{-2}$. We recover the well known explicit, even, peaked, periodic traveling-wave given by $$\phi(x)= \frac{2\pi^2-x^2}{18},\qquad \mbox{for}\quad \mu=\frac{\pi^2}{9}$$ on $[-\pi,\pi]$ and extended periodically. Moreover, we prove that any periodic traveling-wave $\phi\leq \mu$ is *at least* Lipschitz continuous at its crests; thereby excluding the possibility of periodic, traveling-waves $\phi\leq \mu$ exhibiting a cusp at its crests. Let us mention that the Fourier multiplier $L_2$ for the reduced Ostrovsky equation can be written as a convolution operator, whose kernel can be computed explicitly, see Remark \[rem:ker\]. Furthermore, relying on a priori bounds on the wave speed coming from a dynamical system approach for the reduced Ostrovsky equation in [@GP], we are able to obtain a better understanding of the behavior of the global bifurcation branch. Functional-analytic setting and general conventions {#S:Setting} =================================================== Let us introduce the relevant function spaces for our analysis and fix some notation. We are seeking for $2\pi$-periodic solutions of the steady equation . Let us set ${\mathbb{T}}:=[-\pi,\pi]$, where we identify $-\pi$ with $\pi$. In view of the nonlocal approach via Fourier multipliers, the Besov spaces on torus ${\mathbb{T}}$ form a natural scale of spaces to work in. We recall the definition and some basic properties of periodic Besov spaces. Denote by $\mathcal{D}({\mathbb{T}})$ the space of test functions on ${\mathbb{T}}$, whose dual space, the space of distributions on ${\mathbb{T}}$, is $\mathcal{D}^\prime({\mathbb{T}})$. If $\mathcal{S}({\mathbb{Z}})$ is the space of rapidly decaying functions from ${\mathbb{Z}}$ to ${\mathbb{C}}$ and $\mathcal{S}^\prime({\mathbb{Z}})$ denotes its dual space, let $\mathcal{F}:\mathcal{D}^\prime({\mathbb{T}})\to \mathcal{S}^\prime( {\mathbb{Z}})$ be the Fourier transformation on the torus defined by duality on $\mathcal{D}({\mathbb{T}})$ via $$\mathcal{F}f (k)=\hat f(k):=\frac{1}{2\pi}\int_{{\mathbb{T}}} f(x)e^{-ixk}\,dx, \qquad f\in \mathcal{D}({\mathbb{T}}).$$ Let $(\varphi)_{j\geq 0}\subset C_c^\infty({\mathbb{R}})$ be a family of smooth, compactly supported functions satisfying $$\operatorname{supp}\varphi_0 \subset [-2,2],\qquad \operatorname{supp}\varphi_j \subset [-2^{j+1},-2^{j-1}]\cap [2^{j-1},2^{j+1}] \quad\mbox{ for}\quad j\geq 1,$$ $$\sum_{j\geq 0}\varphi_j(\xi)=1\qquad\mbox{for all}\quad \xi\in{\mathbb{R}},$$ and for any $n\in{\mathbb{N}}$, there exists a constant $c_n>0$ such that $$\sup_{j\geq 0}2^{jn}\|\varphi^{(n)}_j\|_\infty\leq c_n.$$ For $p,q\in[1,\infty]$ and $s\in{\mathbb{R}}$, the [periodic Besov spaces]{} are defined by $$B_{p,q}^s({\mathbb{T}}):=\left\{ f\in \mathcal{D}^\prime({\mathbb{T}})\mid \|f\|_{B^s_{p,q}}^q:=\sum_{j\geq 0}2^{sjq}\left\|\sum_{k\in {\mathbb{Z}}} e^{ik(\cdot)} \varphi_j(k)\hat f(k)\right\|_{L^p}^{q}<\infty\right\},$$ with the common modification when $q=\infty$[^1]. 
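As a simple illustration of the scaling encoded in this norm (stated here only for orientation, in the case $p=q=\infty$ used most often below), a single nonzero Fourier mode satisfies $$\left\| e^{ik(\cdot )}\right\| _{B_{\infty ,\infty }^{s}}=\sup_{j\geq 0}2^{sj}|\varphi _{j}(k)|,\qquad k\neq 0,$$ which is bounded above and below by constant multiples of $|k|^{s}$, because $\varphi _{j}(k)\neq 0$ forces $2^{j-1}\leq |k|\leq 2^{j+1}$ when $j\geq 1$, while $\sum_{j\geq 0}\varphi _{j}(k)=1$.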
If $s>0$ and $p\in [1,\infty]$, then $$W^{s,p}({\mathbb{T}})\subset B^s_{p,q}({\mathbb{T}})\subset L^p({\mathbb{T}})\qquad \mbox{for any} \quad q\in [1,\infty].$$ Moreover, for $s>0$, the Besov space $B^s_{\infty,\infty}({\mathbb{T}})$ consisting of functions $f$ satisfying $$\|f\|_{B^s_{\infty,\infty}}=\sup_{j\geq 0}2^{sj}\left\|\sum_{k\in {\mathbb{Z}}} e^{ik(\cdot)} \varphi_j(k)\hat f(k)\right\|_\infty < \infty$$ is called [periodic Zygmund space]{} of order $s$ and we write $$\mathcal{C}^s({\mathbb{T}}):=B^s_{\infty,\infty}({\mathbb{T}}).$$ Eventually, for $\alpha \in (0,1)$, we denote by $C^\alpha({\mathbb{T}})$ the space of $\alpha$-Hölder continuous functions on ${\mathbb{T}}$. If $k\in {\mathbb{N}}$ and $\alpha\in (0,1)$, then $C^{k,\alpha}({\mathbb{T}})$ denotes the space of $k$-times continuously differentiable functions whose $k$-th derivative is $\alpha$-Hölder continuous on ${\mathbb{T}}$. To lighten the notation we write $C^s({\mathbb{T}})=C^{\left \lfloor{s}\right \rfloor, s- \left \lfloor{s}\right \rfloor }({\mathbb{T}})$ for $s\geq 0$. As a consequence of Littlewood–Paley theory, we have the relation $\mathcal{C}^s({\mathbb{T}})=C^s({\mathbb{T}})$ for any $s>0$ with $s\notin {\mathbb{N}}$; that is, the Hölder spaces on the torus are completely characterized by Fourier series. If $s\in {\mathbb{N}}$, then $C^s({\mathbb{T}})$ is a proper subset of $\mathcal{C}^s({\mathbb{T}})$ and $$C^1({\mathbb{T}})\subsetneq C^{1-}({\mathbb{T}})\subsetneq \mathcal{C}^1({\mathbb{T}}).$$ Here, $C^{1-}({\mathbb{T}})$ denotes the space of Lipschitz continuous functions on ${\mathbb{T}}$. For more details we refer to [@T3 Chapter 13]. We are looking for solutions in the class of $2\pi$-periodic, bounded functions with zero mean, the class being denoted by $$L^\infty_0({\mathbb{T}}):= \{f\in L^\infty({\mathbb{T}}) \mid f \mbox{ has zero mean} \}.$$ In the sequel we continue to use the subscript $0$ to denote the restriction of a respective space to its subset of functions with zero mean. If $f$ and $g$ are elements in an ordered Banach space, we write $f\lesssim g$ ($f\gtrsim g$) if there exists a constant $c>0$ such that $f\leq c g$ ($f\geq cg$). Moreover, the notation $f\eqsim g$ is used whenever $f\lesssim g$ and $f\gtrsim g$. We denote by ${\mathbb{R}}_+$ the nonnegative real half axis ${\mathbb{R}}_+:=[0,\infty]$ and by ${\mathbb{N}}_0$ the set of natural numbers including zero. The space $\mathcal{L}(X;Y)$ denotes the set of all bounded linear operators from $X$ to $Y$. Fourier multipliers with homogeneous symbol {#S:Fourier} =========================================== The following result is an analogous statement to the classical Fourier multiplier theorems for nonhomogeneous symbols on Besov spaces (e.g. [@BCD Proposition 2.78]): \[prop:FM\] Let $m>0$ and $\sigma:{\mathbb{R}}\to {\mathbb{R}}$ be a function, which is smooth outside the origin and satisfies $$|\partial^a \sigma(\xi)|\lesssim |\xi|^{-m-a}\qquad \mbox{for all}\quad \xi\neq 0,\quad a\in {\mathbb{N}}_0.$$ Then, the Fourier multiplier $L$ defined by $$Lf=\sum_{k\neq 0}\sigma(k)\hat f(k)e^{ik(\cdot)}$$ belongs to the space ${\mathcal{L}(B^s_{\infty,\infty}}_0({\mathbb{T}});{B^{s+m}_{\infty,\infty}}_0({\mathbb{T}}))$. 
In view of the zero mean property of $f$, the proof can be carried out in a similar way as in [@AB Theorem 2.3 (v)], where it is shown that a function $f$ belongs to ${B^s_{\infty,\infty}}({\mathbb{T}})$ if and only if $$\sum_{k\neq 0}\hat f(k)(ik)^{-m}e^{ik(\cdot)} \in {B^{s+m}_{\infty,\infty}}({\mathbb{T}}).$$ The above proposition yields in particular that $$\begin{aligned} \label{def:L} L_rf:= \sum_{k\neq 0}|k|^{-r}\hat f(k)e^{ik(\cdot)} , \qquad r>1,\end{aligned}$$ defines a bounded operator from $\mathcal{C}_0^s({\mathbb{T}})$ to $\mathcal{C}_0^{s+r}({\mathbb{T}})$ for any $s>0$; thereby it is a smoothing operator of order $-r$. We are interested in the existence and regularity properties of solutions of $$\label{Equation} -\mu \phi + L_r\phi + \frac{1}{2}\phi^2-\frac{1}{2}\widehat{\phi^2}(0)=0,\qquad r>1.$$ The operator $L_r$ is defined as the inverse Fourier representation $$L_r f(x)= \mathcal{F}^{-1}(m_r\hat f)(x),$$ where $m_r(k)=|k|^{-r}$ for $k\neq 0$ and $m_r(0)=0$. In view of the convolution theorem, we define the integral kernel $$\begin{aligned} \label{def:K} K_r(x):= 2\sum_{k=1}^\infty |k|^{-r}\cos\left(xk\right), \qquad x\in {\mathbb{T}},\end{aligned}$$ so that the action of $L_r$ is described by the convolution $$\label{eq:convolution_kernel} L_rf=K_r*f.$$ One can then express equation as $$\label{eq:RO} -\mu \phi +K_r*\phi + \frac{1}{2}\phi^2-\frac{1}{2}\widehat{\phi^2}(0)=0, \qquad K_r:= \mathcal{F}^{-1}( m_r).$$ In what follows we examine the kernel $K_r$. We start by recalling some general theory on [completely monotonic]{} sequences taken from [@Guo; @Widder]. A sequence $(\mu_k)_{k\in{\mathbb{N}}_0}$ of real numbers is called *completely monotonic* if its elements are nonnegative and $$(-1)^n\Delta^n\mu_k \geq 0\qquad \mbox{for any}\quad n,k\in{\mathbb{N}}_0,$$ where $\Delta^0\mu_k=\mu_k$ and $\Delta^{n+1}\mu_k=\Delta^n\mu_{k+1}-\Delta^n \mu_k$. A function $f:[0,\infty)\to{\mathbb{R}}$ is called *completely monotone* if it is continuous on $[0,\infty)$, smooth on the open set $(0,\infty)$, and satisfies $$(-1)^n f^{(n)}(x)\geq 0\qquad\mbox{for any}\quad x>0.$$ For completely monotonic sequences we have the following theorem, which can be considered as the discrete analog of Bernstein’s theorem on completely monotonic functions. \[thm:B\] A sequence $(\mu_k)_{k\in{\mathbb{N}}_0}$ of real numbers is completely monotonic if and only if $$\mu_k=\int_0^1 t^k d\sigma(t),$$ where $\sigma$ is nondecreasing and bounded for $t\in[0,1]$. There exists a close relationship between completely monotonic sequences and completely monotonic functions. \[lem:CM\] Suppose that $f:[0,\infty)\to {\mathbb{R}}$ is completely monotone, then for any $a\geq 0$ the sequence $(f(an))_{n\in{\mathbb{N}}_0}$ is completely monotonic. We are going to use the theory on completely monotonic sequences to prove the following theorem, which summarizes some properties of the kernel $K_r$. \[thm:P\] Let $r>1$. The kernel $K_r$ defined in has the following properties: - $K_r$ is even, continuous, and has zero mean. - $K_r$ is smooth on ${\mathbb{T}}\setminus\{0\}$ and decreasing on $(0,\pi)$. - $K_r \in W^{r-{\varepsilon},1}({\mathbb{T}})$ for any ${\varepsilon}\in (0,1)$. In particular, $K_r^\prime$ is integrable and $K_r$ is $\alpha$-Hölder continuous with $\alpha \in (0,r-1)$ if $r\in (1,2]$, and continuously differentiable if $r> 2$. Claim a) follows directly from the definition of $K_r$ and $r>1$. Now we want to prove part b).
Set $$\mu_k:=(k+1)^{-r}\qquad \mbox{for}\quad k\in{\mathbb{N}}_0.$$ Clearly $x\mapsto (x+1)^{-r}$ is completely monotone on $(0,\infty)$. Thus, Lemma \[lem:CM\] guarantees that $(\mu_k)_{k\in {\mathbb{N}}_0}$ is a completely monotonic sequence. By Theorem \[thm:B\], there exists a nondecreasing and bounded function $\sigma_r:[0,1]\to{\mathbb{R}}$ such that $$(k+1)^{-r}=\int_0^1 t^{k}\, d\sigma_r(t)\qquad\mbox{for any}\quad k\geq 0.$$ In particular $$|k|^{-r}=\int_0^1 t^{|k|-1}\, d\sigma_r(t)\qquad\mbox{for any}\quad k\neq 0.$$ The coefficients $t^{|k|-1}$ can be written as $$t^{|k|-1}=\int_{\mathbb{T}}f(t,x)e^{-ixk}\,dx\qquad \mbox{for}\quad k\neq 0,$$ where $$f(t,x)=\sum_{k\neq 0}t^{|k|-1}e^{ixk}+a_0(t)$$ for some bounded function $a_0:(0,1)\to {\mathbb{R}}$. Thereby, $$\begin{aligned} |k|^{-r}= \int_{\mathbb{T}}\int_0^1 f(t,x)\,d\sigma_r(t)e^{-ixk}\,dx\qquad\mbox{for any}\quad k\neq 0. \end{aligned}$$ In particular, we deduce that $$\int_0^1 f(t,x)\,d\sigma_r(t)=\sum_{k\neq 0} |k|^{-r}e^{ixk}=K_r(x).$$ Notice that we can compute $f$ explicitly as $$\begin{aligned} f(t,x)-a_0(t)&= \sum_{k\neq 0}t^{|k|-1}e^{ixk}=2\sum_{k=1}^\infty t^{k-1}\cos(xk)=2\sum_{k=0}t^{k}\cos(x(k+1))\\ &=2\operatorname{Re}\left(e^{ix}\sum_{k=0}t^{k}e^{ixk}\right)=2\operatorname{Re}\left(e^{ix}\sum_{k=0}^\infty \left(te^{ix}\right)^k \right). \end{aligned}$$ Thus, for $x\in (0,\pi)$, we have that $$f(t,x)=2\operatorname{Re}\left(e^{ix}\frac{1}{1-te^{ix}}\right)+a_0(t)=\frac{2(\cos(x)-t)}{1-t^2\cos(x)+t^4}+a_0(t).$$ Consequently, on the interval $(0,\pi)$, the kernel $K_r$ is represented by $$\label{eq:rep} K_r(x)=\int_0^1 \left(\frac{2(\cos(x)-t)}{1-t\cos(x)+t^2}+a_0(t)\right)\,d\sigma_r(t).$$ From here it is easy to deduce that $K_r$ is smooth on ${\mathbb{T}}\setminus\{0\}$ and decreasing on $(0,\pi)$, which completes the proof of b). Regarding the regularity of $K_r$ claimed in c), let ${\varepsilon}\in (0,1)$ be arbitrary. On the subset of zero mean functions of $W^{r-{\varepsilon},1}({\mathbb{T}})$ an equivalent norm is given by $$\|K_r\|_{W^{r-{\varepsilon},1}_0}\eqsim \|\mathcal{F}^{-1}\left(|\cdot|^{r-{\varepsilon}}\hat K_r\right)\|_{L^1}.$$ Thereby, $K_r$ is in $W^{r-{\varepsilon},1}_0({\mathbb{T}})$ if and only if the function $$x\mapsto \mathcal{F}^{-1}(|\cdot|^{r-{\varepsilon}}\hat K_r)(x)= 2\sum_{k=1}^\infty |k|^{r-{\varepsilon}-r}\cos(xk)=2\sum_{k=1}^\infty |k|^{-{\varepsilon}}\cos(xk)$$ is integrable over ${\mathbb{T}}$. Now, this follows by a classical theorem on the integrability of trigonometric transformations (cf. [@Boas Theorem 2] ), and we deduce the claimed regularity and integrability of $K_r^\prime$. The continuity properties are a direct consequence of Sobolev embedding theorems, see [@Demengel Theorem 4.57]. \[rem:ker\] \[lem:touch\] Let $r>1$. The operator $L_r$ is parity preserving on $L^\infty_0({\mathbb{T}})$. Moreover, if $f,g \in L^\infty_0({\mathbb{T}})$ are odd functions satisfying $f(x)\geq g(x)$ on $[0,\pi]$, then either $$L_rf(x)> L_rg(x)\qquad \mbox{for all}\quad x\in (0,\pi),$$ or $f=g$ on ${\mathbb{T}}$. The fact that $L_r$ is parity preserving is an immediate consequence of the evenness of the convolution kernel. In order to prove the second assertion, assume that $f,g\in L^\infty_0({\mathbb{T}})$ are odd, satisfying $f(x)\geq g(x)$ on $[0,\pi]$ and that there exists $x_0\in (0,\pi)$ such that $f(x_0)=g(x_0)$. 
Using the zero mean property of $f$ and $g$, we obtain that $$L_rf(x_0)-L_rg(x_0)=\int_{-\pi}^\pi (K_r(x_0-y)-\min K_r)\left( f(y)-g(y)\right)\,dy>0,$$ where $\min K_r$ denotes the minimum of $K_r$ on ${\mathbb{T}}$. In view of $K_r$ being nonconstant and $K_r(y)-\min K_r\geq 0$ for all $y\in {\mathbb{T}}$, we conclude that $$L_rf(x_0)-L_rg(x_0)>0,$$ which is a contradiction unless $f=g$ on ${\mathbb{T}}$. A priori properties of periodic traveling-wave solutions {#S:Properties} ======================================================== In the sequel, let $r>1$ be fixed. We consider $2\pi$-periodic solutions of $$\label{eq:gRO} -\mu\phi +L_r\phi+\frac{1}{2}\phi^2-\frac{1}{2}\widehat{\phi^2}(0)=0.$$ The existence of solutions is subject of Section \[S:Global\], where we use analytic bifurcation theory to first construct small amplitude solutions and then extend this bifurcation curve to a global continuum terminating in a highest, traveling wave. Aim of this section is to provide a priori properties of traveling-wave solutions $\phi\leq\mu$. In particular, we show that any nontrivial, even solution $\phi \leq \mu$, which is nondecreasing on the half period $(-\pi,0)$ and attaining its maximum at $\phi(0)=\mu$ is precisely Lipschitz continuous. This holds true for any $r>1$, see Theorem \[thm:reg\]. *We would like to point out that the subsequent analysis can be carried out in the very same manner for $2P$-periodic solutions, where $P\in (0,\infty)$ is the length of a finite half period.* Let us start with a short observation. \[lem:distance\] If $\phi \in C_0 ({\mathbb{T}})$ is a nontrivial solution of , then $$\phi(x_M)+ \phi(x_m)\geq 2 \left( \mu -\|K_r\|_{L^1}\right),$$ where $\phi(x_M)=\max_{x\in{\mathbb{T}}}\phi(x)$ and $\phi(x_m)=\min_{x\in{\mathbb{T}}}\phi(x)$. If $\phi\in C_0({\mathbb{T}})$ is a nontrivial solution of , then $\phi(x_M)>0>\phi(x_m)$ and $$\begin{aligned} \mu (\phi(x_M)-\phi(x_m))&=K_r*\phi(x_M)-K_r*\phi(x_m)+\frac{1}{2}\left( \phi^2(x_M)- \phi^2 (x_m) \right)\\ & \leq \|K_r\|_{L^1}(\phi(x_M)-\phi(x_m)) + \frac{1}{2}\left( \phi(x_M) - \phi(x_m)\right)\left( \phi(x_M) + \phi(x_m)\right),\end{aligned}$$ which proves the statement. In what follows it is going to be convenient to write as $$\label{trav:eqn} \frac{1}{2}(\mu - \phi)^{2} = \frac{1}{2}\mu^{2} - L_r\phi+\frac{1}{2}\widehat{\phi^2}(0).$$ In the next two lemmata we establish a priori properties of periodic solutions of requiring solely boundedness. \[lem:c1\] Let $\phi \in L_0^\infty({\mathbb{T}})$ be a solution of , then $\left(\mu-\phi \right)^2 \in C^1({\mathbb{T}})$ and $$\left\|\frac{d}{dx}\left(\mu-\phi \right)^2\right\|_\infty \leq 2\left\|K_r^\prime\right\|_{L^1}\|\phi\|_\infty \quad \mbox{for all}\quad x \in {\mathbb{T}}.$$ We can read of from , that the derivative of $(\mu-\phi)^2$ is given by $$\frac{d}{dx}(\mu-\phi)^2(x)=-2 K_r^\prime*\phi(x).$$ Since $K_r^\prime$ and $\phi$ are integrable over ${\mathbb{T}}$ (cf. Theorem \[thm:P\]), the convolution on the right hand side is continuous and the claimed estimate follows. \[lem:uniform\_bound\] Let $\phi\in L_0^\infty({\mathbb{T}})$ be a solution of , then $$\|\phi\|_\infty \leq 2 \left(\mu + \|K_r\|_{L^1}\right) +2\pi\|K_r^\prime\|_{L^1}.$$ If $\phi=0$, there is nothing to prove. Therefore it is enough to assume that $\phi$ is a nontrivial solution. From Lemma \[lem:c1\] we know that $(\mu-\phi)^2$ is a continuously differentiable function. 
In view of $\phi$ being a function of zero mean and $(\mu-\phi)^2$ being continuous, we deduce the existence of $x_0\in {\mathbb{T}}$ such that $$(\mu-\phi)^2(x_0)=\mu^2.$$ By the mean value theorem, we obtain that $$(\mu-\phi)^2(x)= \left[(\mu - \phi)^2\right]^\prime(\xi)(x-x_0)+\mu^2$$ for some $\xi \in {\mathbb{T}}$ and $$\begin{aligned} \widehat{\phi^2} (0)&= \frac{1}{2\pi}\int_{-\pi}^\pi \phi^2(x)\,dx =\frac{1}{2\pi}\int_{-\pi}^\pi\left[(\mu - \phi)^2\right]^\prime(\xi)(x-x_0)\,dx,\end{aligned}$$ where we used that $\phi$ has zero mean. Again by Lemma \[lem:c1\] we can estimate the term above generously by $$\widehat{\phi^2} (0) \leq 2\pi\|K_r^\prime\|_{L^1}\|\phi\|_\infty.$$ Using that $\phi$ solves , we obtain $$\begin{aligned} \|\phi\|^2_\infty \leq 2 (\mu + \|K_r\|_{L^1})\|\phi\|_\infty + 2\pi\|K_r^\prime\|_{L^1}\|\phi\|_\infty.\end{aligned}$$ Dividing by $\|\phi\|_\infty$ yields the statement. From now on we restrict our considerations on periodic solutions of , which are even and nondecreasing on the half period $[-\pi,0]$. \[lem:nod\] Any nontrivial, even solution $\phi \in C_0^1({\mathbb{T}})$ of which is nondecreasing on $(-\pi,0)$ satisfies $$\phi^\prime(x)>0 \qquad \mbox{and}\qquad \phi(x)<\mu \qquad \mbox{on}\quad (-\pi,0).$$ Moreover, if $\phi\in C_0^2({\mathbb{T}})$, then $\phi^{\prime \prime}(0)<0$. Assuming that $\phi\in C_0^1({\mathbb{T}})$ we can take the derivative of and obtain that $$(\mu-\phi)\phi^\prime(x)=L_r\phi^\prime (x).$$ Due to the assumption that $\phi^\prime\geq 0$ on $(-\pi,0)$ it is sufficient to show that $$\label{eq:ineq1} L_r\phi^\prime(x)> 0 \qquad \mbox{on}\quad (-\pi,0)$$ to prove the statement. In view of $\phi^\prime$ being odd with $\phi^\prime(x)\geq 0$ on $[-\pi,0]$, the desired inequality follows from Lemma \[lem:touch\]. In order to prove the second statement, let us assume that $\phi\in C_0^2({\mathbb{T}})$. Differentiating twice yields $$(\mu-\phi)\phi^{\prime\prime}(x)=L_r\phi^{\prime\prime} (x)+(\phi^\prime)^2(x).$$ In particular, we have that $$(\mu-\phi)\phi^{\prime\prime}(0)=L_r\phi^{\prime\prime} (0).$$ We are going to show that $L_r\phi^{\prime\prime} (0)<0$, which then (together with the first part) proves the statement. Using the evenness of $K_r$ and $\phi^{\prime\prime}$, we compute $$\begin{aligned} \frac{1}{2}L_r\phi^{\prime\prime} (0) &= \frac{1}{2}\int_{-\pi}^\pi K_r(y)\phi^{\prime\prime}(y)\,dy \\ &= \int_{0}^\pi K_r(y)\phi^{\prime\prime}(y)\,dy \\ &= \int_{0}^{\varepsilon}K_r(y)\phi^{\prime\prime}(y)\,dy+\int_{{\varepsilon}}^\pi K_r(y)\phi^{\prime\prime}(y)\,dy\\ &=\int_{0}^{\varepsilon}K_r(y)\phi^{\prime\prime}(y)\,dy + K_r({\varepsilon})\phi^\prime({\varepsilon})- \int_{\varepsilon}^\pi K_r^\prime(y)\phi^\prime(y)\,dy.\end{aligned}$$ Notice that the first integral on the right hand side tends to zero if ${\varepsilon}\to 0$, so does the second term in view of $\phi$ being differentiable and $K_r$ continuous on ${\mathbb{T}}$. Concerning the last integral, we observe that $$\frac{1}{2}L_r\phi^{\prime\prime} (0)=- \lim_{{\varepsilon}\to 0^+}\int_{\varepsilon}^\pi K_r^\prime(y)\phi^\prime(y)\,dy<0,$$ since $K_r^\prime$ and $\phi^\prime$ are negative on $(0,\pi)$. We continue by showing that any bounded solution $\phi$ of that satisfies $\phi<\mu$ is smooth. \[th:phi:prop\] Let $\phi\le \mu$ be a bounded solution of . Then: - If $\phi<\mu$ uniformly on ${\mathbb{T}}$, then $\phi\in C^{\infty}({\mathbb{R}})$. - Considering $\phi$ as a periodic function on ${\mathbb{R}}$ it is smooth on any open set where $\phi<\mu$. 
Let $\phi<\mu$ uniformly on ${\mathbb{T}}$. Recalling Proposition \[prop:FM\], we know that the operator $L_r$ maps ${B^s_{\infty,\infty}}_0({\mathbb{T}})$ into ${B^{s+r}_{\infty,\infty}}_0({\mathbb{T}})$ for any $s\in{\mathbb{R}}$. Moreover, if $s>0$ then the Nemytskii operator $$\begin{aligned} f\mapsto \mu - \sqrt{\frac{1}{2}\mu^{2}- f}\end{aligned}$$ maps $ {B^s_{\infty,\infty}}_0({\mathbb{T}})$ into itself for $f<\frac{1}{2}\mu^2$. From we see that for any solution $\phi<\mu$ we have $$L\phi-\frac{1}{2}\widehat \phi^2(0)<\frac{1}{2}\mu^2.$$ Thus, $$\begin{aligned} \label{maps:reg} \begin{split} \left[ L_r\phi\mapsto \sqrt{\frac{1}{2}\mu^2- L\phi+\frac{1}{2}\hat \phi^2(0)}\right]\circ \left[ \phi\mapsto L_r\phi-\frac{1}{2}\widehat \phi^2(0)\right]: {B^s_{\infty,\infty}}_0({\mathbb{T}}) \to {B^{s+r}_{\infty,\infty}}_0({\mathbb{T}}), \end{split}\end{aligned}$$ for all $s\geq 0$. Eventually, gives rise to $$\begin{aligned} \phi = \mu-\sqrt{\mu^{2} - 2L_r\phi+ \hat{\phi}^{2}(0) }.\end{aligned}$$ Hence, an iteration argument in $s$ guarantees that $\phi\in C^{\infty}({\mathbb{T}})$. In order to prove the statement on the real line, recall that any Fourier multiplier commutes with the translation operator. Thus, if $\phi$ is a periodic solution of , then so is $\phi_h:=\phi(\cdot +h)$ for any $h\in {\mathbb{R}}$. The previous argument implies that $\phi_h \in C^\infty({\mathbb{T}})$ for any $h\in{\mathbb{R}}$, which proves statement (i). In order to prove part (ii) let $U\subset {\mathbb{R}}$ be an open subset of ${\mathbb{R}}$ on which $\phi <\mu$. Then, we can find an open cover $U=\cup_{i\in I}U_i$, where for any $i\in I$ we have that $U_i$ is connected and satisfies $|U_i|<2\pi$. Due to the translation invariance of and part (i), we obtain that $\phi$ is smooth on $U_i$ for any $i\in I$. Since $U$ is the union of open sets, the assertion follows. \[thm:regularity\] Let $\phi\leq \mu$ be an even solution of , which is nondecreasing on $[-\pi, 0]$. If $\phi$ attains its maximum at $\phi(0)=\mu$, then $\phi$ cannot belong to the class $C^1({\mathbb{T}})$. Assuming that $\phi \in C^1({\mathbb{T}})$, the same argument as in Lemma \[lem:c1\] implies that the function $(\mu-\phi)^2$ is twice continuously differentiable and its Taylor expansion in a neighborhood of $x=0$ is given by $$\begin{aligned} \label{eq:taylor:ser} (\mu-\phi)^2(x)=[(\mu-\phi)^2]^{\prime}(0)x+\frac{1}{2}[(\mu-\phi)^2]^{\prime\prime}(\xi)x^2\end{aligned}$$ for some $|\xi|\in (0,|x|)$ where $|x|\ll 1$. Since $\phi$ attains a local maximum at $x=0$, its first derivative above vanishes at the origin whereas the second derivative is given by $$\frac{1}{2}[(\mu-\phi)^2]^{\prime\prime}(\xi)=-K_r^{\prime}*\phi^\prime(\xi).$$ We aim to show that in a small neighborhood of zero the right hand side is strictly bounded away from zero. Set $f(\xi):=-K_r^\prime*\phi^\prime(\xi)$. Using that $K_r$ and $\phi$ are even functions with $K_r^\prime$ and $\phi^\prime$ being negative on $(0,\pi)$, we find that $$f(0)=-K_r^\prime*\phi^\prime(0)=2\int_0^\pi K_r^\prime(y)\phi^\prime(y)\,dy=c>0$$ for some constant $c>0$. Since $f$ is even (cf. 
Lemma \[lem:touch\]) and continuous, there exists $|x_0|\ll 1$ and a constant $c_0>0$ such that $$\frac{1}{2}[(\mu-\phi)^2]^{\prime\prime}(\xi)=f(\xi)\geq c_0,\qquad \mbox{for all}\quad \xi\in (0,|x_0|).$$ Thus, considering the Taylor series  in a neighborhood of zero, we have that $$(\mu-\phi)^2(x)\gtrsim x^2\qquad \mbox{for}\quad |x|\ll 1,$$ which in particular implies that $$\label{eq:Lip} \frac{\mu-\phi(x)}{|x|}\gtrsim 1\qquad \mbox{for}\quad |x|\ll 1.$$ Passing to the limit $x\to 0$, we obtain a contradiction to $\phi^\prime(0)=0$. We are now investigating the precise regularity of a solution $\phi$, which attains its maximum at $\phi(0)=\mu$. \[thm:reg\] Let $\phi\leq \mu$ be an even solution of , which is nondecreasing on $[-\pi, 0]$. If $\phi$ attains its maximum at $\phi(0)=\mu$, then the following holds: - $\phi\in C^\infty({\mathbb{T}}\setminus \{0\})$ and $\phi$ is strictly increasing on $(-\pi,0)$. - $\phi\in C_0^{1-}({\mathbb{T}})$, that is $\phi$ is Lipschitz continuous. - $\phi$ is precisely Lipschitz continuous at $x=0$, that is $$\begin{aligned} \label{itm:3} \mu - \phi(x) \simeq |x| \qquad \mbox{for}\; |x|\ll 1.\end{aligned}$$ <!-- --> - Assume that $\phi\leq \mu$ is a solution which is even and nondecreasing on $(-\pi,0)$. Let $x\in (-\pi,0)$ and $h\in (0,\pi)$. Notice that by periodicity and evenness of $\phi$ and the kernel $K_r$, we have that $$\begin{aligned} K_r*\phi(x+h)&-K_r*\phi(x-h)\\ &=\int_{-\pi}^{0} \left(K_r(x-y)-K_r(x+y)\right)\left(\phi(y+h)-\phi(y-h)\right)\,dy.\end{aligned}$$ The integrand is nonnegative, since $K_r(x-y)-K_r(x+y)> 0$ for $x,y\in (-\pi,0)$ and $\phi(y+h)-\phi(y-h)\geq 0$ for $y\in (-\pi,0)$ and $h\in (0,\pi)$ by assumption that $\phi$ is even and nondecreasing on $(-\pi,0)$. Since $\phi$ is a nontrivial solution and $K_r$ is not constant, we deduce that $$\label{eq:mon} K_r*\phi(x+h)-K_r*\phi(x-h)>0$$ for any $h\in (0,\pi)$. Moreover, we have that $$\frac{1}{2}\left( 2\mu-\phi(x)-\phi(y) \right)\left(\phi(y)-\phi(x)\right)=K_r*\phi(x)-K_r*\phi(y)$$ for any $x,y\in {\mathbb{T}}$. Hence $K_r*\phi(x)=K_r*\phi(y)$ if and only if $\phi(x)=\phi(y)$. In view of , we obtain that $$\phi(x+h)\neq \phi(x-h)\qquad\mbox{for any}\quad h\in (0,\pi).$$ Thereby, $\phi$ is strictly increasing on $(-\pi,0)$. In view of Theorem \[th:phi:prop\], $\phi$ is smooth on ${\mathbb{T}}\setminus\{0\}$. - In order to prove the Lipschitz regularity at the crest, we make use of a simple *bootstrap argument*. We would like to emphasize that the following argument strongly relies on the fact that we are dealing with a smoothing operator of order $-r$, where $r>1$. Let us assume that $\phi$ is *not* Lipschitz continuous and prove a contradiction. If $\phi\leq \mu$ is merely a bounded function, the regularization property of $L_r$ implies immediately the $\phi$ is a priori $\frac{1}{2}$-Hölder continuous. To see this, recall that $$\frac{1}{2}\left( 2\mu-\phi(x)-\phi(y) \right)\left(\phi(y)-\phi(x)\right)=L_r\phi(x)-L_r\phi(y).$$ Using $\phi\leq \mu$, we deduce that $$\frac{1}{2}\left(\phi(x)-\phi(y)\right)^2\leq |L_r\phi(x)-L_r\phi(y)|.$$ Since $L_r:L_0^\infty({\mathbb{T}})\to\mathcal{C}_0^r({\mathbb{T}})$, where $r>1$, the right hand side can be estimated by a constant multiple of $|x-y|$. An immediate consequence is the $\frac{1}{2}$-Hölder continuity of $\phi$. 
Since $\phi$ is smooth in ${\mathbb{T}}\setminus\{0\}$, we can differentiate the equality $$\frac{1}{2}(\mu-\phi)^2(x)=K_r*\phi(0)-K_r*\phi(x)$$ for $x\in (-\pi,0)$ and obtain that $$\label{eq:BA} (\mu-\phi)\phi^\prime(x)=\left(K_r*\phi\right)^\prime(x)-\left(K_r*\phi\right)^\prime(0),$$ where we are using that $\left(K_r*\phi\right)^\prime(0)=0$. If $\phi$ is $\frac{1}{2}$-Hölder continuous, then $K_r*\phi\in \mathcal{C}_0^{\frac{1}{2}+r}({\mathbb{T}})$. In view of $r>1$, we gain at least some Hölder regularity for $(K_r*\phi)^\prime$. Thereby, $$\label{eq:BE} (\mu-\phi)\phi^\prime(x)\lesssim |x|^{a}$$ for some $a\in (\frac{1}{2},1]$. By assumption that $\phi$ is not Lipschitz continuous at $x=0$, the above estimate guarantees that $\phi$ is at least $a$-Hölder continuous, where $a>\frac{1}{2}$. We aim to bootstrap this argument to obtain Lipschitz regularity of $\phi$ at $x=0$. If above $\frac{1}{2}+r>2$, we use that $K_r*\phi\in \mathcal{C}_0^{\frac{1}{2}+r}({\mathbb{T}})\subset C_0^2(T)$, which guarantees that its derivative is at least Lipschitz continuous ($a=1$ in ) and we are done. If $\frac{1}{2}+r\leq 2$, we merely obtain an improved $a$-Hölder regularity of $\phi$. However, repeating the argument finitely many times, yields that $\phi$ is indeed Lipschitz continuous at $x=0$, that is $$\label{eq:upper} \mu-\phi(x)\lesssim |x|,\qquad\mbox{for}\quad |x|\ll 1.$$ - In view of the upper bound we are left to establish an according lower bound for $|x|\ll 1$ to prove the claim . To achieve this, we show that the derivative is positive and bounded away from zero on $(-\pi,0)$. Let $\xi \in (-\pi,0)$, then $$\begin{aligned} (\mu-\phi)\phi^\prime(\xi)=K_r^\prime * \phi(\xi) =\int_{-\pi}^0 \left(K_r(\xi-y)-K_r(\xi+y) \right)\phi^\prime(y)\,dy.\end{aligned}$$ Using the upper bound established in , we divide the above equation by $(\mu-\phi)(\xi)$ and obtain that $$\begin{aligned} \phi^\prime(\xi)\gtrsim \int_{-\pi}^0 \frac{K_r(\xi-y)-K_r(\xi+y)}{|\xi|} \phi^\prime(y)\,dy.\end{aligned}$$ Our aim is to show that $\liminf_{\xi \to -0}\phi^\prime(\xi)$ is strictly bounded away from zero. We have that $$\begin{aligned} \lim_{\xi \to -0}&\frac{K_r(\xi-y)-K_r(\xi+y)}{|\xi|}\\&= \lim_{\xi \to -0}\left(\frac{K_r(y-\xi)-K_r(y)}{\xi}+\frac{K_r(y)-K_r(y+\xi)}{\xi}\right)\frac{\xi}{|\xi|}=2K_r^\prime(y) \end{aligned}$$ for any $y\in (-\pi,0)$ (keep in mind that $\xi<0$). The integrability of $K_r^\prime$ allows us to estimate $$\label{eq:upperbound} \liminf_{\xi \to 0}\phi^\prime(\xi)\gtrsim 2\int_{-\pi}^{0} K_r^\prime(y)\phi^\prime(y)\, dy =c$$ for some constant $c>0$, since $\phi$ as well as $K_r$ are strictly increasing on $(-\pi,0)$. Let $x<0$ with $|x|\ll 1$. Applying the mean value theorem for $\phi$ on the interval $(x,0)$ yields that $$\begin{aligned} \frac{\phi(0) - \phi(x)}{|x|} = \phi'(\xi) \qquad \text{for some}\quad |\xi|\ll 1.\end{aligned}$$ In accordance with , we conclude that $$\mu - \phi (x) \simeq |x|\qquad \mbox{for} \; |x|\ll 1.$$ *The above theorem implies in particular that *any* periodic solution $\phi \leq \mu$ of , which is monotone on a half period, is at least Lipschitz continuous. Thereby, the existence of corresponding cusped traveling-wave solutions satisfying $\phi \leq \mu$ is a priori excluded.* \[lem:lowerbound:aux\] Let $\phi \leq \mu$ be an even solution of , which is nondecreasing on $[-\pi,0]$. 
Then there exists a constant $\lambda=\lambda(r)>0$, depending only on the kernel $K_r$, such that $$\mu-\phi(\pi)\geq \lambda \pi.$$ Let us pick $x\in [-\frac{3}{4}\pi,-\frac{1}{4}\pi]$. Then, $$\begin{aligned} \label{eq:estimateA} \begin{split} (\mu-\phi(\pi))\phi^\prime(x)&\geq (\mu-\phi(x))\phi^\prime(x)\\ &=\int_{-\pi}^{0}\left(K_r(x-y)-K_r(x+y)\right)\phi^\prime(y)\,dy\\ &\geq \int_{-\frac{3}{4}\pi}^{-\frac{1}{4}\pi}\left(K_r(x-y)-K_r(x+y)\right)\phi^\prime(y)\,dy, \end{split} \end{aligned}$$ using the evenness of the kernel $K_r$, implying that $K_r(x-y)-K_r(x+y)>0$ for $x,y\in (-\pi,0)$. We observe that there exists a constant $\lambda=\lambda(r)>0$, depending only on the kernel $K_r$, such that $$K_r(x-y)-K_r(x+y)\geq 2\lambda \qquad \mbox{for all}\quad x,y \in \left(-\frac{3}{4}\pi,-\frac{1}{4}\pi\right).$$ Thus, integrating with respect to $x$ over $\left(-\frac{3}{4}\pi,-\frac{1}{4}\pi\right)$ yields $$\begin{aligned} (\mu-\phi(\pi))\left(\phi\left(-\frac{1}{4}\pi\right)-\phi\left(-\frac{3}{4}\pi\right)\right)&\geq\int_{-\frac{3}{4}\pi}^{-\frac{1}{4}\pi}\left(\int_{-\frac{3}{4}\pi}^{-\frac{1}{4}\pi} K_r(x-y)-K_r(x+y)\, dx\right)\phi^\prime(y)\,dy\\ &\geq \lambda \pi \left(\phi\left(-\frac{1}{4}\pi\right)-\phi\left(-\frac{3}{4}\pi\right)\right). \end{aligned}$$ In view of $\phi$ being strictly increasing on $(-\pi,0)$ (cf. Theorem \[thm:reg\] (i)), we can divide the above inequality by the positive number $\left(\phi\left(-\frac{1}{4}\pi\right)-\phi\left(-\frac{3}{4}\pi\right)\right)$ and thereby affirm the claim. We close this section by proving that there is a natural bound on $\mu$ above which there do not exist any nontrivial, continuous solutions, which satisfying the uniform bound $\phi \leq \mu$. This is going to be used to exclude certain alternatives in the analysis of the global bifurcation curve in Section \[S:Global\]. \[lem:bound\_mu\] If $\mu\geq 2 \|K_r\|_{L^1}$, then there exist no nontrivial continuous solution $\phi\leq\mu$ of . Assume that $\phi\leq \mu$ is a nontrivial continuous solution of . The statement is a direct consequence of Lemma \[lem:distance\]: Since $\phi$ is continuous and has zero mean, we have that $$2(\mu-\|K_r\|_{L^1})\leq \phi(x_M)+\phi(x_m) <\phi(x_M) \leq \mu,$$ where $\phi(x_M)= \max_{x\in{\mathbb{T}}} \phi(x)$ and $\phi(x_m)=\min_{x\in{\mathbb{T}}} \phi(x)<0$. Then $$\mu < 2 \|K_r\|_{L^1}.$$ Global bifurcation and conclusion of the main theorem {#S:Global} ===================================================== This section is devoted to the existence of nontrivial, even, periodic solutions of . After constructing small amplitude solutions via local bifurcation theory, we extend the local bifurcation branch globally and characterize the end of the global bifurcation curve. By excluding certain alternatives, based on a priori bounds on the wave speed (cf. Lemma \[lem:bound\_mu\] and Lemma \[lem:lowerbound\] below), we prove that the global bifurcation curve reaches a limiting highest wave $\phi$, which is even, strictly monotone on its open half periods and with maximum at $\phi(0)=\mu$. By Theorem \[thm:reg\] then, the highest wave is a peaked traveling-wave solution of $$u_t + L_ru_x + uu_x=0\qquad\mbox{for}\quad r>1.$$ We use the subscript $X_{\rm even}$ for the restriction of a Banach space $X$ to its subset of even functions. 
Let $\alpha \in (1,2)$ and set $$F:C^{\alpha}_{0,\rm even}({\mathbb{T}})\times \mathbb{R}^{+}\rightarrow C^{\alpha}_{0,\rm even}({\mathbb{T}}),$$ where $$\label{oper:F} F(\phi,\mu):= \mu\phi - L_r\phi - \phi^{2}/2+\widehat{\phi^2}(0)/2, \qquad (\phi, \mu) \in {{C}}^{\alpha}_{0,\rm even}({\mathbb{T}})\times \mathbb{R}_{+}.$$ Then, $F(\phi,\mu)=0$ if and only if $\phi$ is an even $C^\alpha_0({\mathbb{T}})$-solution of corresponding to the wave speed $\mu\in {\mathbb{R}}_+$. Clearly. $F(0,\mu)=0$ for any $\mu\in {\mathbb{R}}_+$. We are looking for $2\pi$-periodic, even, nontrivial solutions bifurcating from the line $\{(0,\mu)\mid \mu\in{\mathbb{R}}\}$ of trivial solutions. The wave speed $\mu>0$ shall be the bifurcation parameter. The linearization of $F$ around the trivial solution $\phi=0$ is given by $$\label{eq:Fderivative} D_\phi F(0,\mu): {{C}}^{\alpha}_{0,\rm even}({\mathbb{T}}) \to {{C}}^{\alpha}_{0,\rm even}({\mathbb{T}}), \qquad \phi\mapsto \left(\mu \,{\rm id}-L_r \right) \phi.$$ Recall that $L_r:{{C}}^{\alpha}_{0,\rm even}({\mathbb{T}}) \to {\mathcal{C}}^{\alpha+r}_{0,\rm even}({\mathbb{T}})$ is parity preserving and a smoothing operator, which implies that it is compact on ${{C}}^{\alpha}_{0,\rm even}({\mathbb{T}})$. Hence, $D_\phi F(0,\mu)$ is a compact perturbation of an isomorphism, and therefore constitutes a Fredholm operator of index zero. The nontrivial kernel of $D_\phi F(0,\mu)$ is given by those functions $\psi\in {{C}}^\alpha_{0,\rm even}({\mathbb{T}})$ satisfying $$\begin{aligned} \widehat{\psi}(k)\left( \mu - |k|^{-r}\right) = 0,\ \ \ k\neq 0.\end{aligned}$$ For $\mu\in (0,1]$, we see that $\operatorname{supp}\psi\subseteq \{\pm \mu^{-\frac{1}{r}}\}$. Therefore, the kernel of $D_\phi F(0,\mu)$ is one-dimensional if and only if $\mu=|k|^{-r}$ for some $k\in {\mathbb{Z}}$, in which case it is given by $$\begin{aligned} \label{ker:form} \ker D_\phi(0,\mu)= \mbox{span} \{\phi^*_k\} \qquad \mbox{with}\quad \phi^*_k(x):=\cos \left(xk \right).\end{aligned}$$ The above discussion allows us to apply the Crandall–Rabionwitz theorem, where the transversality condition is trivially satisfied since we bifurcate from a simple eigenvalue (cf. [@buffoni2003 Chapter 8.4]). \[cor:lcl:bfr\] For each integer $k\ge 1$, the point $(0,\mu_k^*)$, where $\mu_k^*=k^{-r}$ is a bifurcation point. More precisely, there exists ${\varepsilon}_0>0$ and an analytic curve through $(0,\mu^*_{k})$, $$\begin{aligned} \{ (\phi_{k}({\varepsilon}), \mu_{k}({\varepsilon}))\mid |{\varepsilon}|<{\varepsilon}_0\} \subset {{C}}^{\alpha}_{0,\rm even}({\mathbb{T}})\times {\mathbb{R}}_+,\end{aligned}$$ of nontrivial, $\frac{2\pi}{k}$-periodic, even solutions of with $\mu_k(0)=\mu^*_k$ and $$D_{{\varepsilon}}\phi_{k}(0) =\phi^*_{k}(x)= \cos\left(xk\right).$$ In a neighborhood of the bifurcation point $(0,\mu^*_{k})$ these are all the nontrivial solutions of $F(\phi,\mu)=0$ in ${{C}}^{\alpha}_{0,\rm even}({\mathbb{T}})\times {\mathbb{R}}_+$. We aim to extend the local bifurcation branch found in Theorem \[cor:lcl:bfr\] to a global continuum of solutions of $F(\phi, \mu)=0$. Set $$\begin{aligned} S:= \{(\phi,\mu)\in U: F(\phi,\mu)=0 \},\end{aligned}$$ where $$U:= \{(\phi,\mu)\in {C}^{\alpha}_{0,\rm even}({\mathbb{T}})\times {\mathbb{R}}_+\mid \ \phi<\mu\}.$$ \[lem:glb:ind\] The Frechét derivative $D_{\phi}F(\phi,\mu)$ is a Fredholm operator of index $0$ for all $(\phi,\mu)\in U$. 
If $(\phi,\mu)\in U$, then $\phi<\mu$ and $$D_{\phi}F(\phi,\mu)=(\mu-\phi){\rm id}-L_r,$$ constitutes a compact perturbation of an isomorphism. Thereby, it is a Fredholm operator of index zero. Let us recall that all bounded solutions $\phi$ of , that is all bounded solutions $\phi$ satisfying $F(\phi, \mu)=0$, are uniformly bounded by $$\label{eq:uniform_bound} \|\phi\|_\infty \leq 2(\mu + \|K_r\|_{L^1}+ 2\pi \|K_r^\prime\|_{L^1}),$$ as shown in Lemma \[lem:uniform\_bound\]. \[lem:cpt\] Any bounded and closed set of $S$ is compact in ${C}^{\alpha}_{0,\rm even}(\mathbb{T})\times {\mathbb{R}}_+$. If $(\phi,\mu)\in S$, then in particular $\phi$ is smooth and $$\begin{aligned} \label{def:til:F} \phi=\mu-\sqrt{\mu^{2} + \hat{\phi^2}(0)-2L_r\phi}=:\tilde{F}(\phi,\mu).\end{aligned}$$ Since the function $\tilde F$ maps $U$ into $\mathcal{C}^{\alpha +r}_{0, \rm even}({\mathbb{T}})$, the latter being compactly embedded into $C^\alpha_{0, \rm even}({\mathbb{T}})$, we obtain that $\tilde F$ maps bounded sets in $U$ into relatively compact sets in $C^{\alpha}_{0, \rm even}({\mathbb{T}})$. Let $A\subset S\subset U$ be a bounded and closed set. Then $\tilde F (A)=\{\phi \mid (\phi, \mu)\in A\}$ is relatively compact in $C^{\alpha}_{0, \rm even}({\mathbb{T}})$. In view of $A$ being closed, any sequence $\{(\phi_n,\mu_n)\}_{n\in {\mathbb{N}}}\subset A$ has a subsequence converging in $A$. We conclude that $A$ is compact in $C^{\alpha}_{0, \rm even}({\mathbb{T}})\times {\mathbb{R}}_+$. Using Lemmas \[lem:glb:ind\] and \[lem:cpt\] we can extend the local branches found in Theorem \[cor:lcl:bfr\] to global curves. The result follows from [@buffoni2003 Theorem 9.1.1] once we show that $\mu({\varepsilon})$ is not identically constant for $0<{\varepsilon}\ll 1$. The latter claim however is an immediate consequence of Theorem \[thm:Biformulas\] below. The proof essentially follows the lines in [@EK Section 4]. \[thm:glb:bfr\] The local bifurcation curve $s\mapsto (\phi_{k}(s),\mu_{k}(s))$ from Theorem \[cor:lcl:bfr\] of solutions of extends to a global continuous curve of solutions ${\mathbb{R}}_+\to S$ and one of the following alternatives holds:

-   \(i\) $\|(\phi_{k}(s), \mu_{k}(s))\|_{C^{\alpha}(\mathbb{T})\times {\mathbb{R}}_+}$ is unbounded as $s \to \infty$.

-   \(ii\) The pair $(\phi_{k}(s),\mu_{k}(s))$ approaches the boundary of $S$ as $s\to \infty$.

-   \(iii\) The function $s\mapsto(\phi_{k}(s),\mu_{k}(s))$ is (finitely) periodic.

(Figure: sketch of the bifurcation diagram, plotting $\max \phi$ against the wave speed $\mu$; the branch emanates from the bifurcation point on the $\mu$-axis and approaches the line $\max\phi=\mu$.)

We apply the Lyapunov–Schmidt reduction in order to establish the bifurcation formulas. Let $k\in {\mathbb{N}}$ be a fixed number and set $$M:= \mbox{span}\left\{\cos\left(xl\right)\mid l\neq k\right\},\qquad N:=\ker D_\phi F(0,\mu^*_{k})= \mbox{span}\{\phi^*_{k}\}.$$ Then, $C^\alpha_{0, \rm even}({\mathbb{T}})=M \oplus N$ and a continuous projection onto the one-dimensional space $N$ is given by $$\Pi \phi = \left<\phi, \phi^*_{k} \right>_{L_2}\phi^*_{k}$$ where $\left< \cdot, \cdot \right>_{L_2}$ denotes the inner product in $L_2({\mathbb{T}})$.
Let us recall the Lyapunov–Schmidt reduction theorem from [@Kielhoefer Theorem I.2.3]: There exists a neighborhood $\mathcal{O}\times Y \subset U$ of $(0,\mu^*_{k})$ such that the problem $$\label{eq:infinite} F(\phi, \mu)=0 \quad \mbox{for}\quad (\phi, \mu)\in \mathcal{O} \times Y$$ is equivalent to the finite-dimensional problem $$\label{eq:finite} \Phi({\varepsilon}\phi^*_{k} , \mu):= \Pi F ({\varepsilon}\phi^*_{k} + \psi({\varepsilon}\phi^*_{k} , \mu), \mu)=0$$ for functions $\psi \in C^\infty(\mathcal{O}_N \times Y, M)$ and $\mathcal{O}_N \subset N$ an open neighborhood of the zero function in $N$. One has that $\Phi(0, \mu^*_{k})=0$, $\psi(0,\mu^*_{k})=0$, $D_\phi \psi(0,\mu^*_{k})=0$, and solving problem provides a solution $$\phi= {\varepsilon}\phi^*_{k}+\psi ({\varepsilon}\phi^*_{k}, \mu)$$ of the infinite-dimensional problem . \[thm:Biformulas\] The bifurcation curve found in Theorem \[thm:glb:bfr\] satisfies $$\label{eq:biformula1} \phi_k({\varepsilon})={\varepsilon}\phi^*_k(x)- \frac{{\varepsilon}^2}{2}k^r\left( 1+\frac{1}{1-2^{-r}}\cos \left( 2kx\right)\right)+O({\varepsilon}^3)$$ and $$\label{eq:biformula2} \mu_{k}({\varepsilon})=\mu^*_{k}+{\varepsilon}^2k^r\frac{3-2^{1-r}}{8(1-2^{-r})}+O({\varepsilon}^3)$$ in $C^\alpha_{0, \rm even}({\mathbb{T}})\times {\mathbb{R}}_+$ as ${\varepsilon}\to 0$. In particular, $\ddot{\mu}_{k}(0)>0$ for any $k\geq 1$, that is, Theorem \[cor:lcl:bfr\] describes a supercritical pitchfork bifurcation. Let us prove the bifurcation formula for $\mu_{k}$ first. The value $\dot \mu_{k}(0)$ can be explicitly computed using the bifurcation formula $$\dot \mu_{k} (0)=-\frac{1}{2}\frac{\left< D_{\phi \phi}^2 F(0,\mu^*_{k})[\phi^*_{k}, \phi^*_{k}], \phi^*_{k} \right>_{L^2}}{\left< D_{\phi \mu}^2 F(0,\mu^*_{k})\phi^*_{k},\phi^*_{k} \right>_{L^2}},$$ cf. [@Kielhoefer Section I.6]. We have $$\begin{aligned} D_{\phi \phi}^2 F[0,\mu^*_{k}](\phi^*_{k},\phi^*_{k})&=(\phi^*_{k})^2,\\ D_{\phi,\mu}^2 F[0,\mu^*_{k}]\phi^*_{k}&=-\phi^*_{k}.\end{aligned}$$ In view of $\int_{{\mathbb{T}}}(\phi^*_{k})^3(x)\,dx=0$, the first derivative of $\mu^*_{k}$ vanishes in zero. In this case the second derivative is given by $$\label{eq:2derivative} \ddot \mu_{k}(0)=-\frac{1}{3}\frac{\left< D_{\phi\phi\phi}^3 \Phi(0,\mu^*_{k})[\phi^*_{k},\phi^*_{k},\phi^*_{k}],\phi^*_{k}\right>_{L_2}}{\left< D_{\phi \mu}^2 F(0,\mu^*_{k})\phi^*_{k},\phi^*_{k} \right>_{L^2}},$$ where $\Phi \in C^\infty (\mathcal{O}_N \times Y, N)$ is the function defined in . We have that $$\begin{aligned} &D_\phi \Phi(\phi, \mu)\phi^*_{k}=\Pi D_\phi F(\phi+ \psi(\phi, \mu), \mu) \left[\phi^*_{k} + D_\phi \psi (\phi, \mu)\phi^*_{k} \right], \\ &D_{\phi\phi} \Phi (\phi, \mu)[\phi^*_{k},\phi^*_{k}] \\ &\quad=\Pi D_{\phi\phi}^2F(\phi + \psi(\phi, \mu), \mu)\left[\phi^*_{k} + D_\phi \psi(\phi,\mu)\phi^*_{k}, \phi^*_{k} + D_\phi \psi(\phi, \mu)\phi^*_{k} \right]\\ &\qquad + \Pi D_{\phi}F(\phi + \psi(\phi, \mu), \mu)D_{\phi \phi}^2\psi(\phi, \mu)[\phi^*_{k},\phi^*_{k}],\\ &D_{\phi\phi\phi}^3\Phi(\phi,\mu)[\phi^*_{k},\phi^*_{k},\phi^*_{k}]= \Pi D_{\phi}F(\phi + \psi(\phi, \mu), \mu)D_{\phi\phi\phi}^3 \psi(\phi, \mu)[\phi^*_{k},\phi^*_{k},\phi^*_{k}]\\ &\qquad +3\Pi D_{\phi\phi}^2F(\phi+ \psi(\phi, \mu), \mu)[\phi^*_{k}+D_\phi\psi(\phi, \mu)\phi^*_{k},D^2_{\phi\phi}\psi(\phi,\mu)[\phi^*_{k},\phi^*_{k}]],\end{aligned}$$ in view of $F$ being quadratic in $\phi$ and therefore $D_{\phi\phi\phi}^3F(\phi,\mu)=0$. 
Using that $\psi(0,\mu^*_{k})=D_\phi \psi(0,\mu^*_{k})\phi^*_{k}=0$ we obtain that $$\begin{aligned} D_{\phi\phi\phi}^3\Phi(0, \mu^*_{k})[\phi^*_{k},\phi^*_{k},\phi^*_{k}]&=\Pi D_\phi F(0,\mu^*_{k})D_{\phi\phi\phi}^3 \psi(0,\mu^*_{k})[\phi^*_{k},\phi^*_{k},\phi^*_{k}]\\ &\quad+3 \Pi D_{\phi\phi}^2 F(0,\mu^*_{k})[\phi^*_{k},D_{\phi\phi}^2 \psi(0,\mu^*_{k})[\phi^*_{k},\phi^*_{k}]].\end{aligned}$$ Since $N= \ker D_\phi F(0,\mu^*_{k})$ and $\Pi$ is the projection onto $N$, the above derivative reduces to $$\begin{aligned} D_{\phi\phi\phi}^3\Phi(0, \mu^*_{k})[\phi^*_{k},\phi^*_{k},\phi^*_{k}]= 3\Pi\phi^*_{k}D_{\phi\phi}^2 \psi(0,\mu^*_{k})[\phi^*_{k},\phi^*_{k}].\end{aligned}$$ As in [@Kielhoefer Section 1.6] we use that $D_\phi F(0,\mu^*_{k})$ is an isomorphism on $M$ to write $$\begin{aligned} \label{eq:Dphiphi} \begin{split} D_{\phi\phi}^2 \psi(0,\mu^*_{k})[\phi^*_{k},\phi^*_{k}]&=- (D_\phi F(0,\mu^*_{k}))^{-1}(1-\Pi)D_{\phi\phi}^2F(0,\mu^*_{k})[\phi^*_{k},\phi^*_{k}]\\ &=- (D_\phi F(0,\mu^*_{k}))^{-1}(1-\Pi)(\phi^*_{k})^2\\ &=-\frac{1}{2}(D_\phi F(0,\mu^*_{k}))^{-1}\left( 1+\cos\left(2xk\right)\right)\\ &=-\frac{1}{2}\left(\frac{1}{\mu^*_{k}}+\frac{\cos\left(2xk \right)}{\mu^*_{k}-(2k)^{-r}}\right). \end{split}\end{aligned}$$ We conclude that $$D_{\phi\phi\phi}^3\Phi(0, \mu^*_{k})[\phi^*_{k},\phi^*_{k},\phi^*_{k}]=-\frac{3}{2}\phi^*_{k} \left( \frac{1}{\mu^*_{k}}+\frac{1}{2(\mu^*_{k}-(2k)^{-r})} \right).$$ In view of the dominator in being $-1$, the second derivative of $\mu_{k}$ at zero is given by $$\label{eq:2D} \ddot \mu_{k}(0)=\frac{1}{2}\left(\frac{1}{\mu^*_{k}}+\frac{1}{2(\mu^*_{k}-(2k)^{-r})} \right)= k^{r}\frac{3-2^{1-r}}{4(1-2^{-r})} >0,\ \ \text{for all}\ r>1.$$ The formula is now a direct consequence of a Maclaurin series expansion and $\dot \mu_{k}(0)=0$. Since $\ddot \mu_{k}(0)>0$, we conclude that the bifurcation curve describes a supercritical pitchfork bifurcation. Keeping in mind that $\phi_k(0)=0$ and $\dot \phi_k(0)=\phi^*_k$, we are left to compute $\ddot \phi_k (0)$ in order to establish . We use that $$\phi_k({\varepsilon})={\varepsilon}\phi^*_k+\psi({\varepsilon}\phi^*_k, \mu_k({\varepsilon})),$$ cf. [@Kielhoefer Chapter I.5]. It follows that $$\begin{aligned} \ddot \phi_k(0)=&D^2_{\phi\phi}\psi(0,\mu^*_{k})[\phi^*_k,\phi^*_k]+2D^2_{\phi\mu}\psi(0,\mu^*_{k})[\phi^*_k,\dot \mu_{k}(0)] + D^2_{\mu\mu}\psi(0,\mu^*_{k})[\dot \mu_k(0),\dot \mu_{k}(0)]\\ &+D_\mu\psi(0,\mu^*_{k})\dot \mu_{k}(0).\end{aligned}$$ Since $D_\mu \psi(0,\mu^*_{k})=0$ and $\dot \mu_{k}(0)=0$, we obtain that $$\ddot \phi_k(0)=D^2_{\phi\phi}\psi(0,\mu^*_{k})[\phi^*_k,\phi^*_k].$$ Thus, the claim follows from . \[lem:con\] Any sequence of solutions $(\phi_{n},\mu_{n})_{n\in{\mathbb{N}}}\subset S$ to with $(\mu_n)_{n\in {\mathbb{N}}}$ bounded has a subsequence which converges uniformly to a solution $\phi$. In view of the boundedness of $(\mu_n)_{n\in {\mathbb{N}}}$ implies that also $(\phi_n)_{n\in {\mathbb{N}}}$ is uniformly bounded in $C({\mathbb{T}})$. In order to show that $(\phi_n)_{n\in {\mathbb{N}}}$ has a convergent subsequence, we prove that $(\phi_n)_{n\in {\mathbb{N}}}$ is actually uniformly Hölder continuous. By compactness, it then has a convergent subsequence in $C({\mathbb{T}})$. From Theorem \[thm:P\] it is known that $K_r$ is $\alpha$-Hölder continuous for some $\alpha\in (0,1]$. Since $(\phi_n)_{n\in {\mathbb{N}}}$ is uniformly bounded, we have that $(K_r*\phi_n)_{n\in{\mathbb{N}}}$ is uniformly $\alpha$-Hölder continuous. 
Recalling that $$\frac{1}{2}(\phi_n(x)-\phi_n(y))^2\leq |K_r*\phi_n(x)-K_r*\phi_n(y)|$$ whenever $\phi_n\leq \mu$, we deduce that $(\phi_n)_{n\in{\mathbb{N}}}$ is uniformly $\frac{\alpha}{2}$-Hölder continuous. Thus, we can choose a subsequence of $(\phi_n,\mu_n)_{n\in {\mathbb{N}}}$ along which $\phi_n$ converges uniformly to a solution of . The remainder of the section is devoted to excluding alternative (iii) in Theorem \[thm:glb:bfr\] and to proving that alternatives (i) and (ii) occur simultaneously, which in particular implies that the highest wave is reached as a limit of the global bifurcation curve. Let $$\mathcal{K}_k:= \{ \phi\in {C}^{\alpha}_{0,\rm even}({\mathbb{T}}):\ \phi\ \text{is $2\pi/k$-periodic and nondecreasing in}\ (-\pi/k,0)\},$$ a closed cone in ${C}^{\alpha}_0(\mathbb{T})$. \[prop:A3\] The solutions $\phi_{k}(s)$, $s>0$, on the global bifurcation curve belong to $\mathcal{K}_k\setminus \{0\}$ and alternative (iii) in Theorem \[thm:glb:bfr\] does not occur. In particular, the bifurcation curve $(\phi_{k}(s),\mu_{k}(s))$ has no intersection with the trivial solution line for any $s>0$. Due to [@buffoni2003 Theorem 9.2.2] the statement holds true if the following conditions are satisfied:

-   \(a\) $\mathcal{K}_k$ is a cone in a real Banach space.

-   \(b\) $(\phi_{k}({\varepsilon}),\mu_{k}({\varepsilon}))\subset \mathcal{K}_k\times {\mathbb{R}}$ provided ${\varepsilon}$ is small enough.

-   \(c\) If $\mu \in {\mathbb{R}}$ and $\phi\in \ker D_\phi F(0,\mu)\cap \mathcal{K}_k$, then $\phi=\alpha \phi^*$ for $\alpha \geq 0$ and $\mu=\mu^*_{k}$.

-   \(d\) Each nontrivial point on the bifurcation curve which also belongs to $\mathcal{K}_k\times {\mathbb{R}}$ is an interior point of $\mathcal{K}_k\times {\mathbb{R}}$ in $S$.

In view of the local bifurcation result in Theorem \[cor:lcl:bfr\], we are left to verify condition (d). Let $(\phi,\mu)\in \mathcal{K}_k\times {\mathbb{R}}$ be a nontrivial solution on the bifurcation curve found in Theorem \[thm:glb:bfr\]. By Theorem \[th:phi:prop\], $\phi$ is smooth and, together with Lemma \[lem:nod\], we have that $\phi'>0$ on $(-\pi,0)$ and $\phi''(0)<0$. Choose a solution $\varphi$ lying within a small enough neighborhood of $\phi$ in $C^\alpha_0({\mathbb{T}})$, so that $\varphi < \mu$ and $\|\phi-\varphi\|_{C^{\alpha}}<\delta$ for some $\delta \ll 1$. In view of , an iteration process on the regularity index yields that $\|\phi-\varphi\|_{C^{2}}<\tilde{\delta}$, where $\tilde{\delta}>0$ depends on $\delta$ and can be made arbitrarily small by choosing $\delta$ small enough. It follows that, for $\delta$ small enough, $\varphi<\mu$ is a smooth, even solution, nondecreasing on $(-\frac{\pi}{k},0)$, and hence $(\phi,\mu)$ belongs to the interior of $\mathcal{K}_k\times {\mathbb{R}}$ in $S$, which concludes the proof. \[lem:lowerbound\] Along the bifurcation curve in Theorem \[thm:glb:bfr\] we have that $$\mu(s)\gtrsim 1$$ uniformly for all $s\geq0$. Let us assume for a contradiction that there exists a sequence $(s_n)_{n\in {\mathbb{N}}}\in {\mathbb{R}}_+$ with $\lim_{n\to \infty}s_n=\infty$ such that $\mu(s_n)\to 0$ as $n\to \infty$ along the bifurcation curve found in Theorem \[thm:glb:bfr\]. In view of Lemma \[lem:con\], there exists a subsequence of $(s_n)_{n\in {\mathbb{N}}}$ (not relabeled) such that $\phi(s_n)$ converges to a solution $\phi_0$ of . Along the bifurcation curve we have that $\phi(s_n)<\mu(s_n)$. Taking into account the zero mean property of solutions of , it follows that $\phi_0=0$ is the trivial solution.
But then Lemma \[lem:lowerbound:aux\] yields the contradiction $$0=\lim_{n\to \infty}\left(\mu(s_n)-\phi(s_n)(\pi)\right)\geq \lambda \pi>0.$$ \[thm:A12\] In Theorem \[thm:glb:bfr\], alternatives (i) and (ii) both occur. Let $(\phi_{k}(s),\mu_{k}(s))$, $s\in{\mathbb{R}}$, be the bifurcation curve found in Theorem \[thm:glb:bfr\]. In view of Proposition \[prop:A3\] we know that any solution along the bifurcation curve is even and nondecreasing on $(-\frac{\pi}{k},0)$. Moreover, alternative (iii) in Theorem \[thm:glb:bfr\] is excluded. That is, either alternative (i) or alternative (ii) in Theorem \[thm:glb:bfr\] occurs. Let us assume first that alternative (i) occurs, that is, either $\|\phi_{k}(s)\|_{C^\alpha}\to \infty$ for some $\alpha\in (1,2)$ or $|\mu_{k}(s)|\to \infty$ as $s\to \infty$. The former case implies alternative (ii) in view of Theorem \[th:phi:prop\]. Since $\phi_{k}(s)$ has zero mean and keeping in mind Lemma \[lem:bound\_mu\], it is clear that the second option $\lim_{s\to \infty}|\mu_{k}(s)|=\infty$ cannot happen unless we reach the trivial solution line, which is excluded by Proposition \[prop:A3\]. Suppose now that alternative (ii) occurs, but not alternative (i). Then there exists a sequence $(\phi_{k}(s_n),\mu_{k}(s_n))_{n\in {\mathbb{N}}}$ in $S$ satisfying $\phi_{k}(s_n)<\mu$ and $\lim_{n\to \infty}\max \phi_{k}(s_n)=\mu$, while $\phi_{k}(s_n)$ remains uniformly bounded in $C^\alpha({\mathbb{T}})$ for $\alpha\in (1,2)$ and $\mu\gtrsim 1$ by Lemma \[lem:lowerbound\]. But this is clearly a contradiction to Theorem \[thm:reg\]. We deduce that both alternative (i) and alternative (ii) occur simultaneously. Now, we are at the end of our analysis and conclude our main result: Let $(\phi_{k}(s),\mu_{k}(s))$ be the global bifurcation curve found in Theorem \[thm:glb:bfr\] and let $(s_n)_{n\in {\mathbb{N}}}$ be a sequence in ${\mathbb{R}}_+$ tending to infinity. Due to our previous analysis (Lemma \[lem:bound\_mu\] and Proposition \[prop:A3\]), we know that $(\mu_{k}(s_n))_{n\in {\mathbb{N}}}$ is bounded and bounded away from zero. In view of the $\mu_{k}$-dependent bound on $\phi_{k}$, we obtain that also $(\phi_{k}(s_n))_{n\in {\mathbb{N}}}$ is bounded, whence Lemma \[lem:con\] implies the existence of a converging subsequence (not relabeled) of $(\phi_{k}(s_n),\mu_{k}(s_n))_{n\in{\mathbb{N}}}$. Let us denote the limit by $(\bar \phi, \bar \mu)$. By Theorem \[thm:A12\] and Theorem \[thm:reg\] we conclude that $\bar \phi (0)=\bar \mu$ with $\bar \phi$ admitting precisely Lipschitz regularity at each crest, which proves the main assertion of Theorem \[thm:main\].

Application to the reduced Ostrovsky equation {#S:RO}
=============================================

In this section we show that our approach can be applied to traveling-waves of the reduced Ostrovsky equation, which is given by $$\label{eq:GRO} \left[u_t +uu_x \right]_x-u=0$$ and arises in the context of long surface and internal gravity waves in a rotating fluid [@Ostrovsky1978]. We are looking for $2\pi$-periodic traveling-wave solutions $u(t,x)=\phi(x-\mu t)$, where $\mu>0$ denotes the speed of the right-propagating wave. In this context equation reduces to $$\label{eq:T} \left[ \frac{1}{2}\phi^{2}-\mu\phi\right]_{xx}-\phi=0.$$ Let us emphasize that the existence of periodic traveling wave solutions of is well-known.
Furthermore, there exists an explicit example of a $2\pi$-periodic traveling-wave with wave speed $\mu=\frac{\pi^2}{9}$ of the form $$\label{eq:formula} \phi_p(x)=\frac{3x^2-\pi^2}{18},$$ which satisfies pointwise on $(-\pi,\pi)$; a direct verification is given below. It is easy to check that $\phi_p$ is precisely Lipschitz continuous at its crest points located at $\pi(2{\mathbb{Z}}+1)$ and smooth elsewhere.

(Figure: graph of the peaked periodic traveling wave $\phi_p$; the parabolic profile on $(-\pi,\pi)$ is extended periodically, with crests of height $\frac{\pi^2}{9}$ at odd multiples of $\pi$.)

Recall that any periodic solution of has necessarily zero mean. Therefore, working in suitable spaces restricted to their zero mean functions, the pseudo differential operator $\partial_x^{-2}$ can be defined uniquely in terms of a Fourier multiplier. We show in Lemma \[lem:relation\] that the steady reduced Ostrovsky equation can be reformulated in nonlocal form as $$\label{eq:Reduced_Ostrovsky} -\mu \phi + L\phi + \frac{1}{2}\left( \phi^{2}-\widehat{\phi^2}(0)\right)=0.$$ Here $L$ denotes the Fourier multiplier with symbol $m(k)=k^{-2}$ for $k\neq 0$ and $m(0)=0$. Recall that any function $f\in {C}^\alpha({\mathbb{T}})$ for $\alpha>\frac{1}{2}$ has an absolutely convergent Fourier series, that is $$\sum_{k\in {\mathbb{Z}}}|\hat f(k)|<\infty,$$ and the Fourier representation of $f$ is given by $$f(x)=\sum_{k\in{\mathbb{Z}}}\hat f\left(k\right)e^{ixk}.$$ \[lem:relation\] Let $\alpha>\frac{1}{2}$. A function $\phi\in {C}_0^\alpha ({\mathbb{T}})$ is a solution of if and only if $\phi$ solves $$-\mu \phi + L\phi + \frac{1}{2}\left( \phi^{2}-\widehat{\phi^2}(0)\right)=0,$$ where $$L\phi(x) :=\sum_{k\neq 0}k^{-2}\hat \phi(k)e^{ixk}.$$ Notice that $\phi\in \mathcal{C}_0^\alpha({\mathbb{T}})$ is a solution of if and only if $$\int_{-\pi}^\pi \left[ \frac{1}{2}\phi^{2}(x)-\mu\phi(x)\right]\psi_{xx}(x)\,dx = \int_{-\pi}^\pi\phi(x) \psi(x)\,dx$$ for all $\psi \in C_c^\infty(-\pi,\pi)$, which is equivalent to $$\mathcal{F}\left( \left[ \frac{1}{2}\phi^{2}-\mu\phi\right]\psi_{xx}\right)(0)= \mathcal{F}\left(\phi \psi\right)(0).$$ Using the property that the Fourier transformation turns products into convolutions, we can write $$\mathcal{F}\left( \frac{1}{2}\phi^{2}-\mu\phi\right) * \mathcal{F}\left(\psi_{xx}\right)(0)=\hat \phi * \hat \psi(0).$$ In view of $\phi$ having zero mean and therefore $\hat \phi (0)=0$, we deduce that $\phi\in {C}_0^\alpha ({\mathbb{T}})$ is a solution to if and only if $$-\sum_{k\neq 0}\mathcal{F}\left( \frac{1}{2}\phi^{2}-\mu\phi\right)(-k)k^2\hat \psi (k)=\sum_{k\neq 0 }\hat \phi(-k)\hat \psi (k)$$ for all $\psi \in C_c^\infty(-\pi,\pi)$. In particular, $$\frac{1}{2}\widehat{\phi^{2}}(k)-\mu\hat \phi(k)+k^{-2}\hat\phi (k)=0 \qquad \mbox{for all}\quad k \neq 0,$$ which is equivalent to $$\sum_{k\neq 0} \left( \frac{1}{2}\widehat{\phi^{2}}(k)-\mu\hat \phi(k)+k^{-2}\hat\phi (k)\right)e^{ixk}=0.$$ Due to the fact that $\phi$ has zero mean, the above equation can be rewritten as $$-\mu \phi + L\phi + \frac{1}{2}\left( \phi^{2}-\widehat{\phi^2}(0)\right)=0,$$ which proves the statement. We proved in Theorem \[thm:reg\] that *any* even, periodic, bounded solution $\phi\leq \mu$, which is monotone on a half period, is Lipschitz continuous on ${\mathbb{R}}$, which guarantees by Lemma \[lem:relation\] that all solutions of we consider here are indeed solutions of the reduced Ostrovsky equation.
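Before turning to the corollary, let us record the elementary verification announced above. Writing the steady equation pointwise on $(-\pi,\pi)$ as $(\phi^\prime)^2+(\phi-\mu)\phi^{\prime\prime}-\phi=0$, we have for $\phi=\phi_p$ and $\mu=\frac{\pi^2}{9}$ $$\phi_p(x)=\frac{x^2}{6}-\frac{\pi^2}{18},\qquad \phi_p^\prime(x)=\frac{x}{3},\qquad \phi_p^{\prime\prime}(x)=\frac{1}{3},$$ and therefore $$(\phi_p^\prime)^2+(\phi_p-\mu)\phi_p^{\prime\prime}-\phi_p=\frac{x^2}{9}+\frac{1}{3}\left(\frac{x^2}{6}-\frac{\pi^2}{18}-\frac{\pi^2}{9}\right)-\frac{x^2}{6}+\frac{\pi^2}{18}=0,$$ since the coefficients of $x^2$ and of $\pi^2$ both vanish. Moreover, $\phi_p(\pm\pi)=\frac{\pi^2}{9}=\mu$, so the crest height coincides with the wave speed, and $\int_{-\pi}^{\pi}\phi_p(x)\,dx=0$, consistent with the zero mean property recalled above.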
As a consequence of our main result Theorem \[thm:main\], we obtain the following corollary: \[cor:RO\] For each integer $k\geq 1$ there exists a global bifurcation branch $$s\mapsto (\phi_{k}(s),\mu_{k}(s)),\qquad s>0,$$ of nontrivial, $\frac{2\pi}{k}$-periodic, smooth, even solutions to the steady reduced Ostrovsky equation emerging from the bifurcation point $(0,k^{-2})$. Moreover, given any unbounded sequence $(s_n)_{n\in{\mathbb{N}}}$ of positive numbers $s_n$, there exists a subsequence of $(\phi_{k}(s_n))_{n\in {\mathbb{N}}}$, which converges uniformly to a limiting traveling-wave solution $(\bar \phi_{k},\bar\mu_{k})$ that solves and satisfies $$\bar \phi_{k}(0)=\bar \mu_{k}.$$ The limiting wave is strictly increasing on $(-\frac{\pi}{k},0)$ and is exactly Lipschitz at $x\in \frac{2\pi}{k}{\mathbb{Z}}$. In case of the reduced Ostrovsky equation, we know even more about the bifurcation diagram. Using methods from dynamical systems, the authors of [@GP; @GP2] are able to prove that the peaked, periodic traveling-wave for the reduced Ostrovsky equation is the *unique* nonsmooth $2\pi$-periodic traveling-wave solution ([@GP2 Lemma 2]). Moreover, from [@GP Lemma 3] we obtain the following a priori bound on the wave speed for nontrivial, $2\pi$-periodic traveling-wave solutions of : \[lem:optimal\] If $\phi$ is a nontrivial, smooth, $\frac{2\pi}{k}$-periodic, traveling-wave solution of the reduced Ostrovsky equation, then the wave speed $\mu$ satisfies the bound $$\mu\in k^{-2}\left(1,\frac{\pi^2}{9}\right).$$

(Figure: bifurcation diagram for the reduced Ostrovsky equation, plotting $\max\phi$ against $\mu$; the branch of $2\pi$-periodic solutions is confined to the wave-speed window between the bifurcation point and the highest peaked wave.)

*Notice that in the class of $2\pi$-periodic solutions, the range for the wave speed $\mu$ supporting nontrivial traveling-wave solutions of the reduced Ostrovsky equation is given by $(1,\frac{\pi^2}{9})$, where $\mu=1$ is the wave speed from which nontrivial, $2\pi$-periodic solutions bifurcate and $\mu=\frac{\pi^2}{9}$ is exactly the wave speed corresponding to the highest peaked wave in .* *Regarding the $2\pi$-periodic, nontrivial traveling-wave solutions of on the global bifurcation branch from Corollary \[cor:RO\], we have that Lemma \[lem:bound\_mu\] and Lemma \[lem:lowerbound\], proved in the previous sections, guarantee that the wave speed is a priori bounded by $$\mu \in \left(M, \frac{4\pi^3}{9\sqrt{3}}\right)\qquad\mbox{for some}\quad M\in (0,1].$$ Certainly this bound (numerically, $\frac{4\pi^3}{9\sqrt{3}}\approx 7.96$, compared with $\frac{\pi^2}{9}\approx 1.10$) is far from the optimal bound provided by [@GP] in Lemma \[lem:optimal\]. Thus, there is still room for improvement in our estimates.*

Acknowledgments {#acknowlegments .unnumbered}
--------------

The author G.B. would like to express her gratitude to Mats Ehrnström for many valuable discussions. Moreover, G.B. gratefully acknowledges financial support by the Deutsche Forschungsgemeinschaft (DFG) through CRC 1173. Part of this research was carried out while G.B. was supported by grant no. 250070 from the Research Council of Norway.\ The author R.N.D. acknowledges the support during the tenure of an ERCIM ‘Alain Bensoussan’ Fellowship Program and was supported by grant nos. 250070 & 231668 from the Research Council of Norway. Moreover, R.N.D.
would also like to thank the Fields Institute for Research in Mathematical Sciences for its support to attend the Focus Program on *Nonlinear Dispersive Partial Differential Equations and Inverse Scattering* (July 31 to August 23, 2017). The contents of this paper are solely the responsibility of the authors and do not necessarily represent the official views of the [Fields Institute](www.fields.utoronto.ca).

[^1]: One can show that the above definition is independent of the particular choice of $(\varphi)_{j\geq 0}$.
---
author:
- |
    Tatiane F. N. Melo\
    [*Institute of Mathematics and Statistics, Federal University of Goiás, Brazil*]{}\
    [email: [tmelo@ufg.br]{}]{}\
    \
    Silvia L. P. Ferrari\
    [*Department of Statistics, University of São Paulo, Brazil*]{}\
    [email: [silviaferrari@usp.br]{}]{}\
    \
    Alexandre G. Patriota\
    [*Department of Statistics, University of São Paulo, Brazil*]{}\
    [email: [patriota@ime.usp.br]{}]{}\
title: Improved hypothesis testing in a general multivariate elliptical model
---

[**Abstract:**]{} This paper investigates improved testing inferences under a general multivariate elliptical regression model. The model is very flexible in terms of the specification of the mean vector and the dispersion matrix, and of the choice of the error distribution. The error terms are allowed to follow a multivariate distribution in the class of elliptical distributions, which has the multivariate normal and Student-t distributions as special cases. We obtain Skovgaard’s adjusted likelihood ratio statistics and Barndorff-Nielsen’s adjusted signed likelihood ratio statistics and we conduct a simulation study. The simulations suggest that the proposed tests display superior finite sample behavior as compared to the standard tests. Two applications are presented in order to illustrate the methods.

[**Keywords**]{}: Elliptical model; General parameterization; Modified likelihood ratio statistic; Modified signed likelihood ratio statistic; Multivariate normal distribution; Multivariate Student $t$ distribution.

Introduction {#sec1}
============

Likelihood inference is usually based on first-order asymptotic theory, which can lead to inaccurate inferences when the sample is small. This is the case, for example, of the signed likelihood ratio test, whose statistic has an asymptotic standard normal distribution under the null hypothesis, with an error of order $n^{-1/2}$, where $n$ is the sample size. In order to improve this approach, Barndorff-Nielsen (1986) proposed a new test statistic that, under the null hypothesis, is asymptotically standard normal with error of order $n^{-3/2}$. Barndorff-Nielsen’s adjustment is applied when the parameter of interest is scalar. Skovgaard (2001) developed an extension of this adjustment for the multidimensional case. These adjustments require a suitable ancillary statistic that, in conjunction with the maximum likelihood estimator, constitutes a sufficient statistic for the model. It is difficult or even impossible to find an appropriate ancillary for some statistical models (Peña et al., 1992). In this paper, we obtain Barndorff–Nielsen’s and Skovgaard’s adjustments in a general multivariate elliptical model using an approximate ancillary statistic. We perform a simulation study which suggests that the modified tests have type I error probabilities closer to the nominal level than the original tests in small and moderate-sized samples. A general multivariate elliptical model was introduced by Lemonte and Patriota (2011). It considers that the mean vector and the dispersion matrix are indexed by the same vector of parameters. Multiple linear regressions, multivariate nonlinear regressions, mixed-effects models (Verbeke and Molenberghs, 2000), errors-in-variables models (Cheng and Van Ness, 1999) and log-symmetric regression models (Vanegas and Paula, 2015) are special cases of this general multivariate elliptical model.
The elliptical family of distributions includes the multivariate normal as well as many other important distributions such as the multivariate Student $t$, power exponential, contaminated normal, Pearson II, Pearson VII, and logistic distributions, with heavier or lighter tails than the multivariate normal distribution. The random vector ${\bm Y}$ ($q\times 1$) has a multivariate elliptical distribution with location parameter ${\bm \mu}$ ($q\times 1$) and a positive definite scatter matrix $\Sigma$ ($q\times q$) if its density function exists and is given by $$\label{dens} f_{{{\bm Y}}}({{\bm y}}) = |{\Sigma}|^{-1/2} g\bigl(({{\bm y}} - {{\bm \mu}})^{\top}{\Sigma}^{-1}({{\bm y}} - {{\bm \mu}})\bigr),$$ where $g:[0,\infty)\to(0,\infty)$ is called the density generating function and it is such that $\int_{0}^{\infty}u^{\frac{q}{2}-1} g(u) du < \infty$. We will denote ${{\bm Y}}\sim El_{q}({{\bm \mu}},{\Sigma}, g) \equiv El_{q}({{\bm \mu}},{\Sigma})$. The characteristic function is $\psi({\bm t}) = \mbox{E}(\exp(i {\bm t}^\top {\bm Y}))=\exp(i {\bm t}^\top {\bm \mu})\varphi({\bm t}^\top \Sigma {\bm t})$, where ${\bm t} \in \mathbb{R}^q$ and $\varphi: [0,\infty) \to \mathbb{R}$. Then, if $\varphi$ is twice differentiable at zero, we have that $\mbox{E}({\bm Y}) = {\bm \mu}$ and $\mbox{Var}({\bm Y}) = \xi \Sigma$, where $\xi = \varphi'(0)$. We assume that the density generating function $g$ does not have any unknown parameter, which implies that $\xi$ is a known constant. In this case, when ${{\bm \mu}} = {\bm 0}$ and ${\Sigma} = {I}_{q}$, where ${I}_{q}$ is the $q\times q$ identity matrix, we obtain the spherical family of densities; for more details see Fang et al. (1990). This paper is organized as follows. Section \[sec2\] introduces the general elliptical model. Section \[sec3\] contains our main results, namely explicit formulas for Barndorff-Nielsen’s and Skovgaard’s adjustments in the general elliptical model. Section \[sec4\] presents a simulation study on the finite sample behavior of the standard and signed likelihood ratio tests and their modified counterparts. Our simulation results show that the unmodified tests tend to be liberal and their modified versions developed in this paper are much less size-distorted. Section \[sec5\] presents two real data applications. Finally, Section \[sec6\] concludes paper. Technical details are compiled in the Appendix. The model {#sec2} ========= The general multivariate elliptical model defined by Lemonte and Patriota (2011) considers $n$ independent random vectors ${\bm Y}_{1}, {\bm Y}_{2},\ldots, {\bm Y}_{n}$ modeled by the following equation: $$\label{MainModel1} {\bm Y}_{i} = {\bm \mu}_{i}({\bm \theta}) + {\bm e}_{i},\quad i = 1,\ldots,n,$$ with ${\bm e}_{i} \stackrel{ind}{\sim} El_{q_{i}}({\bm 0},{\Sigma}_i({\bm \theta}))$, where “$ \stackrel{ind}{\sim}$” means “independently distributed as”, ${\bm \mu}_{i}({\bm \theta}) = {\bm \mu}_{i}$ is the location parameter and ${\Sigma}_i({\bm \theta}) = {\Sigma}_i$ is the positive definite scatter matrix[^1] We can write $$\label{MainModel} {\bm Y}_{i} \stackrel{ind}{\sim}El_{q_{i}}({\bm \mu}_{i}, {\Sigma}_i),\quad i = 1,\ldots,n.$$ Both ${\bm \mu}_{i}$ and ${\Sigma}_i$ have known functional forms. Additionally, ${\bm \theta}$ is a vector of unknown parameters, with ${\bm \theta} \in {\bm \Theta} \subseteq \mathbb{R}^p$ (where $p<n$ is fixed). For the ${q}$-variate normal distribution, $N_{q}({\bm{\mu}}, {\Sigma})$, the density generating function is $g(u) = e^{-u/2}/(\sqrt{2 \pi})^{q}$. 
For the ${q}$-variate Student $t$ distribution with $\nu$ degrees of freedom, $t_{q}({\bm{\mu}}, {\Sigma}, \nu)$, we have $g(u) = \Gamma\left((\nu + q)/2\right) \pi^{-q/2} \nu^{-q/2} (1 + u/\nu)^{-(\nu + q)/2} / \Gamma(\nu/2) $. Additionally, for the $q$-variate power exponential, $PE_{q}({\bm{\mu}}_i, {\Sigma}, \lambda)$, with shape parameter $\lambda>0$, we have $g(u) = \lambda \Gamma(q/2) 2^{-q/(2 \lambda)} \pi^{-q/2} e^{-u^{\lambda}/2} /$ $\Gamma(q / (2 \lambda))$. For this general model, Lemonte and Patriota (2011) proposed diagnostic tools and Melo et al. (2015) obtained the second-order bias of the maximum likelihood estimator and conducted some simulation studies, which indicate that the proposed bias correction is effective. The log-likelihood function associated with (\[MainModel\]) is given by $$\label{log-likelihood} \ell({\bm \theta}) = \sum_{i=1}^n \ell_{i}({\bm \theta}),$$ where $\ell_{i}({\bm \theta}) = -\frac{1}{2} \log{|{\Sigma}_i|} + \log g(u_i)$, $u_{i} = {\bm z}_i^{\top}{\Sigma}_i^{-1}{\bm z}_i$ and ${\bm z}_i = {\bm Y}_i - {\bm \mu}_i$. The dependence of $\ell({\bm \theta})$ on ${\bm \theta}$ enters through ${\bm{\mu}}_i=\mu_i({\bm \theta})$ and $\Sigma_i=\Sigma_i({\bm \theta})$. We assume regularity conditions for the asymptotic theory of maximum likelihood estimation and likelihood ratio tests; see Severini (2000, § 3.4). The model must be identifiable and it must be guaranteed that the first four derivatives of $(1/n)\ell(\bm{\theta})$ with respect to $\bm{\theta}$ exist, are bounded by integrable functions, and converge almost surely for all $\bm{\theta}$. The conditions of the Lindeberg-Feller Theorem (or the Liapounov Theorem) must be valid for the score function to converge in distribution to a normal distribution (Sen and Singer, 1993, p.108). These conditions impose restrictions on the sequences $\{\bm{\mu}_i\}_{i\geq 1}$ and $\{\Sigma_i\}_{i\geq 1}$ that will not be detailed here. Maximum likelihood estimation of the parameters can be carried out by numerically maximizing the log-likelihood function (\[log-likelihood\]) through an iterative algorithm such as the Newton–Raphson, Fisher scoring, EM or BFGS. Our numerical results were obtained using the library function [MaxBFGS]{} in the [Ox]{} matrix programming language (Doornik, 2013). Modified likelihood ratio tests {#sec3} =============================== Consider the vector of unknown parameters ${\bm \theta} = ({\bm \psi}^\top, {\bm \omega}^\top)^\top\in \mathbb{R}^p$, with ${\bm \psi}\in \mathbb{R}^q$ being the vector of parameters of interest and ${\bm \omega}\in \mathbb{R}^{p-q}$ being the vector of nuisance parameters. The null and alternative hypotheses of interest are, respectively: ${\cal H}_{0}: {{\bm \psi}} = {{\bm \psi}}^{(0)}$ and ${\cal H}_{1}: {{\bm \psi}} \neq {{\bm \psi}}^{(0)}$, where ${{\bm \psi}}^{(0)}$ is a known $q$-vector. The maximum likelihood estimator of ${\bm \theta}$ is denoted by $\bm{\widehat{\theta}} = (\bm{\widehat{\psi}}, \bm{\widehat{\omega}}^\top)^\top$ and the maximum likelihood estimator of ${\bm \theta}$ under the null hypothesis, by $\bm{\widetilde\theta} = ({\bm{\psi}}^{(0)}, \bm{\widetilde\omega} ^\top)^\top$. We use “ $\widehat{}$ " and “ $\widetilde{}$ " for matrices and vectors to indicate that they are computed at $\widehat{\bm{\theta}}$ and $\widetilde{\bm{\theta}}$, respectively. 
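To fix ideas before the test statistics are introduced, the following minimal sketch (written in Python with `numpy`/`scipy` purely for illustration; the numerical results in this paper were obtained in `Ox`) fits a univariate nonlinear Student-$t$ regression of the kind used later in the simulations by direct BFGS maximization of the log-likelihood, both without restrictions and with one regression coefficient fixed under the null hypothesis. All names, starting values and the simulated data are illustrative assumptions, not part of the original study.

```python
# Illustrative sketch: unrestricted and null-restricted maximum likelihood fits
# for a univariate nonlinear Student-t regression (q_i = 1, known nu).
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(123)
n, nu = 25, 3
x1, x2 = rng.uniform(size=n), rng.uniform(size=n)

def mu(beta):                      # illustrative nonlinear mean function
    b0, b1, b2, b3 = beta
    return 1.0 / (1.0 + b0 + b1 * x1 + b2 * x2 + b3 * x2 ** 2)

def loglik(theta, y):              # l(theta) = sum_i {-0.5*log(sigma2) + log g(u_i)}
    beta, log_s2 = theta[:4], theta[4]
    u = (y - mu(beta)) ** 2 / np.exp(log_s2)
    log_g = (gammaln((nu + 1) / 2) - gammaln(nu / 2)
             - 0.5 * np.log(np.pi * nu) - 0.5 * (nu + 1) * np.log1p(u / nu))
    return np.sum(-0.5 * log_s2 + log_g)

# data simulated under the null hypothesis beta3 = 0
beta_true, s2_true = np.array([0.5, 0.2, 0.0, 0.0]), 0.005
y = mu(beta_true) + np.sqrt(s2_true) * rng.standard_t(nu, size=n)

start = np.array([0.4, 0.1, 0.0, 0.0, np.log(0.01)])
fit_hat = minimize(lambda th: -loglik(th, y), start, method="BFGS")      # theta_hat
fit_tilde = minimize(lambda th: -loglik(np.insert(th, 3, 0.0), y),       # theta_tilde,
                     np.delete(start, 3), method="BFGS")                 # beta3 fixed at 0

ell_hat, ell_tilde = -fit_hat.fun, -fit_tilde.fun   # maximized log-likelihoods
```

The difference between the two maximized log-likelihoods obtained from these fits is precisely the ingredient of the statistic $LR$ defined next.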
The standard likelihood ratio statistic for testing ${\mathcal{H}}_{0}$ against ${\mathcal{H}}_1$ is $$\begin{aligned} \label{E.4.11} LR = 2\:\left\{\ell(\bm{\widehat{\theta}}) - \ell(\bm{\widetilde{\theta}})\right\}.\end{aligned}$$ Under regularity conditions (Severini, 2000, Section 3.4), $LR$ converges in distribution to $\chi^2_q$ when ${\cal H}_{0}: {{\bm \psi}} = {{\bm \psi}}^{(0)}$ holds. When the parameter of interest, $\psi$, is one-dimensional, i.e., $q=1$, the signed likelihood ratio statistic, $$\begin{aligned} \label{rS} r = {\rm sgn}\left(\widehat{\psi} - {\psi}^{(0)} \:\right) \sqrt{2\: \left(\ell(\widehat{\bm{\theta}}) - \ell(\widetilde{\bm{\theta}})\right)},\end{aligned}$$ may be employed. Under regularity conditions, $r$ converges in distribution to a standard normal distribution when ${\cal H}_{0}: \psi=\psi^{(0)}$ is true. In addition to two-sided tests (${\mathcal{H}}_{0}: {\psi} = {\psi}^{(0)} \:\:\: {\rm against} \:\:\: {\mathcal{H}}_{1}: {\psi} \neq {\psi}^{(0)}$), one may use the signed likelihood ratio statistic to test one-sided hypotheses such as ${\mathcal{H}}_{0}: {\psi} \geq {\psi}^{(0)} \:\:\: {\rm against} \:\:\: {\mathcal{H}}_{1}: {\psi} < {\psi}^{(0)}$ and ${\mathcal{H}}_{0}: {\psi} \leq {\psi}^{(0)} \:\:\: {\rm against} \:\:\: {\mathcal{H}}_{1}: {\psi} > {\psi}^{(0)}$. Barndorff-Nielsen (1986) proposes a modified version of $r$ that intends to better approximate the signed likelihood ratio statistic by the standard normal distribution. It can be difficult to obtain the modified statistic because one needs to obtain an appropriate ancillary statistic and derivatives of the log-likelihood function with respect to the data. By “ancillary statistic" we mean a statistic, say ${\bm a}$, whose distribution does not depend on the unknown parameter $\bm{\theta}$, and such that $(\bm{\widehat{\theta}},{\bm a})$ is a minimal sufficient statistic for the model. If $(\bm{\widehat{\theta}},{\bm a})$ is sufficient, but not minimal sufficient, Barndorff-Nielsen’s results still hold; see, Severini (2000, § 6.5). Sufficiency implies that the log-likelihood function depends on the data only through $(\bm{\widehat{\theta}}, {\bm a})$, and we then write $\ell(\bm{\theta};\bm{\widehat \theta},{\bm a})$. The derivatives of $\ell(\bm{\theta};\bm{\widehat \theta},{\bm a})$ with respect to the data and the parameter vector are $$\begin{aligned} \label{E.17} \bm{\ell}'(\bm{\theta};\bm{\widehat \theta},{\bm a}) = \frac{\partial \ell(\bm{\theta};\bm{\widehat \theta},{\bm a})} {\partial \bm{\widehat \theta}}, \ \ U'(\bm{\theta};\bm{\widehat \theta},{\bm a}) = \frac{\partial^2 \ell(\bm{\theta};\bm{\widehat \theta}, {\bm a})}{\partial \bm{\widehat \theta} \partial{\bm{\theta}}^\top}, \ \ {\rm and} \ \ J(\bm{\theta};\bm{\widehat \theta},{\bm a}) = -\frac{\partial^2 \ell(\bm{\theta};\bm{\widehat \theta}, {\bm a})}{\partial \bm{\theta} \partial{\bm{\theta}}^\top}. \end{aligned}$$ The modified signed likelihood ratio statistic is $$\begin{aligned} \label{rstar} r^* = r - \frac{1}{r} \log \gamma,\end{aligned}$$ with $$\label{E.19} \gamma = |\widehat{J}\:|^{1/2} |{\widetilde U}'|^{-1} |{\widetilde J}_{\bm{\omega\omega}}|^{1/2} \frac{r}{[({\widehat {\bm{\ell}}}'- {\widetilde {\bm{\ell}}}')^\top ({\widetilde U}')^{-1}]_{\psi}},$$ where $J = J(\bm{\theta};\bm{\widehat \theta},{\bm a})$ is the observed information matrix and $J_{\bm{\omega\omega}}$ is the lower right submatrix of $J$ corresponding to the nuisance parameter $\bm{\omega}$. 
Here, $[\bm{v}]_{\psi}$ denotes the element of the vector $\bm{v}$ that corresponds to the parameter of interest $\psi$. The quantities ${\widehat {\bm{\ell}}}'={\bm{\ell}}'(\bm{\widehat \theta};\bm{\widehat \theta},{\bm a})$, ${\widetilde {\bm{\ell}}}'={\bm{\ell}}'(\bm{\widetilde \theta};\bm{\widehat \theta},{\bm a})$ and ${\widetilde U}'= U'(\bm{\widetilde \theta};\bm{\widehat \theta},{\bm a})$ are computed as described above. Barndorff-Nielsen’s $r^*$ statistic is only useful when testing a one-dimensional hypothesis. However, in practical applications it is often the case that the null hypothesis involves several parameters. As an example, we mention the test for treatment effects in linear mixed models; see Verbeke and Molenberghs (2000, § 6.2). The seminal work of Skovgaard (2001) extended Barndorff-Nielsen’s (1986) results to the multiparameter test situation. He proposed two modified, asymptotically equivalent, likelihood ratio statistics, which can be seen as multiparameter versions of $r^*$. The modified statistics are derived from Barndorff-Nielsen’s work and share similar properties with $r^*$; see Skovgaard (2001, § 5, and eq. (8)-(10)). Skovgaard’s modified likelihood ratio statistics are given by $$\begin{aligned} \label{E.4.1} LR^* = LR \left(1 - \frac{1}{LR} \log\rho \right)^2\end{aligned}$$ and $$\begin{aligned} \label{E.4.16} LR^{**} = LR - 2\log\rho,\end{aligned}$$ with $$\begin{aligned} \label{E.4.17} \rho = |\widehat{J}\:|^{1/2} |{\widetilde U}'|^{-1} | {\widetilde J}_{\bm{\omega\omega}}|^{1/2} | \:{{\widetilde{\!\!\widetilde{J}}}}_{\bm{\omega\omega}}|^{-1/2} |\:{{\widetilde{\!\!\widetilde{J}}}}\:|^{1/2} \frac{\{{\widetilde {\bm{U}}}^{\top} {{\widetilde{\!\!\widetilde{J}}}}^{\: -1} {\widetilde {\bm{U}}}\}^{p/2}} {LR^{q/2 - 1} ({\widehat {\bm{\ell}}}'- {\widetilde {\bm{\ell}}}')^{\top} ({\widetilde U}')^{-1} {\widetilde{\bm{U}}}},\end{aligned}$$ where ${\bm{U}}$ is the score vector and $\:\:{{\widetilde{\!\!\widetilde{J}}}} = J(\bm{\widetilde{\theta}}; \bm{\widetilde{\theta}}, {\bm a})$, and ${\:\:{{\widetilde{\!\!\widetilde{J}}}}}_{\bm{\omega \omega}}$ is the lower-right sub-matrix of ${\:\:{{\widetilde{\!\!\widetilde{J}}}}}$ related to the nuisance parameters $\bm{\omega}$. Although the statistic $LR^*$ is non-negative and reduces to ${r^*}^2$ when $q=1$, the second version, $LR^{**}$, seems to be numerically more stable and is naturally attained from theoretical developments. These statistics approximately follow the asymptotic reference distribution (${\cal X}^2_q$ distribution) with high accuracy under the null hypothesis (Skovgaard, 2001). In fact, recent simulation studies suggest that Barndorff-Nielsen’s and Skovgaard’s statistics considerably improve small-sample inference; see, for example, Brazzale and Davison (2008), Lemonte and Ferrari (2011), Ferrari and Pinheiro (2014), Guolo (2012) and Cribari-Neto and Queiroz (2014). We now turn to the general elliptical model. Let ${\bm a} = ({\bm a}_1^\top, {\bm a}_2^\top$, $\ldots, {\bm a}_n^\top)^\top$, with $$\label{E.20} {\bm a}_i = {\widehat P}_i^{-1}\left({\bm Y}_i - {\bm{\widehat \mu}_i}\right),$$ where $P_i\equiv P_i(\bm\theta)$ is a lower triangular matrix such that $P_i P_i^\top = \Sigma_i$ is the Cholesky decomposition of $\Sigma_i$ for all $i=1, \ldots, n$. From Slutsky’s theorem, it follows that $\bm{a}_i$ converges in distribution to $El_{q_{i}}(\bm{0}, I_{q_i})$, since ${\widehat P}_i$ and $\widehat{\bm{\mu}}_i$ converge in probability to $P_i$ and $\bm{\mu}_i$, respectively. 
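Computationally, the components of $\bm a$ only require the Cholesky factor of each fitted scatter matrix. The following minimal sketch (an illustration in Python rather than the `Ox` code used for our results, with `Y`, `mu_hat` and `Sigma_hat` assumed to hold the fitted quantities) makes the construction explicit.

```python
# Sketch: approximate ancillary components a_i = P_hat_i^{-1} (Y_i - mu_hat_i),
# where P_hat_i is the lower-triangular Cholesky factor of Sigma_hat_i.
import numpy as np
from scipy.linalg import solve_triangular

def ancillary_components(Y, mu_hat, Sigma_hat):
    """Y and mu_hat: lists of (q_i,) arrays; Sigma_hat: list of (q_i, q_i) arrays."""
    a = []
    for y_i, m_i, S_i in zip(Y, mu_hat, Sigma_hat):
        P_i = np.linalg.cholesky(S_i)                      # P_i P_i^T = Sigma_hat_i
        a.append(solve_triangular(P_i, y_i - m_i, lower=True))
    return a                                               # components of a = (a_1^T, ..., a_n^T)^T
```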
Additionally, it can be shown that any fixed number of the $\bm{a}_i$’s are asymptotically independent, and hence their joint asymptotic distribution is free of unknown parameters. Also, it follows from Neyman’s Factorization Theorem that $(\widehat{\bm{\theta}},\bm{a})$ is a sufficient statistic, since the log-likelihood function can be written as $$\label{E.ell} \ell(\bm{\theta};\widehat{\bm{\theta}}, \bm{a}) = \sum_{i=1}^n \left\{ - \frac{1}{2} \log |\Sigma_i | + \log g[({\widehat P}_i \bm{a}_i + \bm{\widehat \mu}_i - \bm{\mu}_i)^\top \Sigma_i^{-1} ({\widehat P}_i \bm{a}_i + {\bm{\widehat \mu}_i - \bm{\mu}_i}) ] \right\},$$ where the dependence[^2] on $\bm{\theta}$ is through $\bm{\mu}_i$ and $\Sigma_i$. Hence, we will use $\bm{a}$ as an approximate ancillary statistic. The use of an approximate ancillary statistic in connection with Barndorff-Nielsen’s $r^{*}$ statistic can be found, for example, in Fraser, Reid and Wu (1999); see also Severini (2000, § 7.5.3). Formulas (\[E.19\]) and (\[E.4.17\]), and the modified statistics $r^{*}$, $LR^{*}$, and $LR^{**}$ can be computed from (\[E.ell\]). Details are presented in the Appendix. Let $${\bm d}_{i(r)} = \frac{\partial {\bm \mu}_i}{\partial \theta_{r}}, \quad {\bm d}_{i(sr)} = \frac{\partial^2 {\bm \mu}_i}{\partial \theta_{s} \partial\theta_{r}}, \quad {C}_{i(r)} = \frac{\partial {\Sigma}_i}{\partial \theta_{r}}, \quad {C}_{i(sr)} = \frac{\partial^2 {\Sigma}_i}{\partial \theta_{s} \partial\theta_{r}},$$ and$${A}_{i(r)} = -{\Sigma}_i^{-1} {C}_{i(r)}{\Sigma}_i^{-1},$$ for $r, s = 1, \ldots, p$. The score vector and the observed information matrix for ${\bm \theta}$ can be written as $$\label{ScoreFisher2} {\bm{U}} = {F}^{\top}{H}{\bm s}, \quad J = T^{\top} \Sigma^{-1} D + G,$$ respectively, with ${F} = \left({F}_1^\top, \ldots, {F}_n^\top\right)^{\top}$, $H = \mbox{block-diag}\left\{H_1, \ldots, H_n\right\}$, ${\bm s} = ({\bm s}_1^\top, \ldots, {\bm s}_n^\top )^{\top}$, $T = \left({T}_1^\top, \ldots, {T}_n^\top\right)^{\top}$, $\Sigma^{-1} = \mbox{block-diag}\left\{ \Sigma_1^{-1},\ldots, \Sigma_n^{-1} \right\}$, ${D} = \left({D}_1^\top, \ldots, {D}_n^\top\right)^{\top}$, wherein $$\label{matrix-score} F_i = \begin{pmatrix} {D}_i\\ {V}_i\\ \end{pmatrix},\quad {H}_i = \begin{bmatrix} {\Sigma}_i & {0}\\ {0} & 2{\Sigma}_i\otimes {\Sigma}_i \end{bmatrix}^{-1}, \quad {\bm s}_i = \begin{bmatrix} v_{i}{\bm z}_i\\ -{\textrm{vec}}({\Sigma}_i - v_{i} {\bm z}_i {\bm z}_i^{\top}) \end{bmatrix},$$ where the “vec" operator transforms a matrix into a vector by stacking the columns of the matrix, ${D}_i = ({\bm d}_{i(1)}, \ldots, {\bm d}_{i(p)})$, ${V}_i = ({\textrm{vec}}({C}_{i(1)}),\ldots, {\textrm{vec}}({C}_{i(p)}))$, ${T}_i = ({\bm T}_{i(1)}, \ldots, {\bm T}_{i(p)})$, $v_{i} = -2 W_{g}(u_{i})$ and $W_g(u) = \mbox{d} \log g(u)/\mbox{d} u$. The $(r,s)$-th elements of $G$ and $E_i$ are given by $G_{rs}$ and $E_{i(rs)}$, respectively. The quantities ${\bm T}_{i(r)}$, $G_{rs}$, and $E_{i(rs)}$ are given in the Appendix. The symbol “$\otimes$” indicates the Kronecker product. 
We have $$\begin{aligned} \label{Deriv} \widehat{\bm{\ell}}' = \widehat{R}^{\top} \widehat{\Sigma}^{-1} \widehat{{\bm z}}^*, \ \ \widetilde{\bm{\ell}}' = \widehat{R}^{\top} \widetilde{\Sigma}^{-1} \widetilde{{\bm z}}^*, \ \ \widetilde{U}' = \widetilde{Q}^{\top} \widetilde{\Sigma}^{-1} \widehat{R}, \ \ {\widetilde{\!\!\widetilde{J}}}= \ {\widetilde{\!\!\widetilde{{T}}}}^{\top} \widetilde{\Sigma}^{-1} \widetilde{{D}} + \ {\widetilde{\!\!\widetilde{G}}},\end{aligned}$$ where $\widehat{R}$, $\widehat{{\bm z}}^*$, $\widetilde{{\bm z}}^*$, $\widetilde{Q}$, $\ {\widetilde{\!\!\widetilde{{T}}}}$, and $\ {\widetilde{\!\!\widetilde{G}}}$ are given in the Appendix. By inserting $\widehat J$, ${\widetilde J}_{\bm{\omega\omega}}$, ${\:{{\widetilde{\!\!\widetilde{J}}}}}_{\bm{\omega\omega}}$, ${\:{{\widetilde{\!\!\widetilde{J}}}}}$, $\widetilde{\bm{U}}$, ${\widetilde U}'$, and ${\widehat{\bm{\ell}}}'- {\widetilde{\bm{\ell}}}'$ into (\[E.19\]) and (\[E.4.17\]), one obtains the required quantities $\gamma$ and $\rho$ for Barndorff-Nielsen’s and Skovgaard’s adjustments. Now, one is able to compute the modified statistics $r^*$, $LR^{*}$, and $LR^{**}$. Computer packages that perform simple operations on matrices and vectors can be used to calculate $\gamma$ and $\rho$. Note that $\gamma$ and $\rho$ depend on the model through ${\bm \mu}_i$, $P_i$, $\Sigma_i$ and $\Sigma_i^{-1}$. The dependence on the specific distribution of ${\bm Y}$ in the class of elliptical distributions occurs through $W_{g}$. Simulation study {#sec4} ================ In this section, we present the results of Monte Carlo simulation experiments in which we evaluate the finite sample performances of the signed likelihood ratio test ($r$) and the standard likelihood ratio test ($LR$) and their corrected versions $r^*$, $LR^*$, and $LR^{**}$. The simulations are based on the univariate nonlinear model and the multivariate mixed linear model when ${\bm Y}_i$ follows a normal distribution, a Student $t$ distribution with $\nu = 3$ degrees of freedom, or a power exponential distribution with shape parameter $\lambda = 0.9$. All simulations are performed using the `Ox` matrix programming language (Doornik, 2013). The number of Monte Carlo replications is 10,000 (ten thousand). The tests are carried out at the following nominal levels: $\alpha = 1\%, 5\%, 10\%$. First consider the nonlinear model defined in (\[MainModel1\]) with $$\label{nonlinear-model} {\mu}_{i} = \frac{1}{1 + \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \beta_3 x_{i2}^2}, \quad i = 1,\ldots, n$$ (model 1). We test ${\mathcal H}_{0}: \beta_3 \geq 0$ against ${\mathcal H}_{1}: \beta_3 < 0$ and ${\mathcal H}_{0}: (\beta_2, \beta_3)^{\top} = (0, 0)^{\top}$ against ${\mathcal H}_{1}: (\beta_2, \beta_3)^{\top} \neq (0, 0)^{\top}$. The values of the covariates ${x}_{i1}$ and ${x}_{i2}$ are taken as random draws from the standard uniform distribution ${\mathcal U}(0,1)$ and $n = 15, 25, 35, 50$. The parameter values are set at $\beta_0 = 0.5$, $\beta_1 = 0.2$, $\beta_2 = 0$, $\beta_3 = 0$, and $\sigma^2 = 0.005$. For this parameter setting, ${\mu}_i\in (1/(1+0.5+0.2), 1/(1+0.5)) \approx (0.59, 0.67)$ because $x_{ij}\in (0,1)$. This implies that $\sigma^2$ must be very small for the response variable not to be dominated by the random noise. 
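The estimated type I error probabilities reported next are obtained by straightforward Monte Carlo simulation under the null hypothesis. A minimal sketch of the scheme (in Python, with `simulate_model1` and `test_statistic` standing in for the data-generating mechanism and any of the statistics above) is:

```python
# Sketch of the Monte Carlo estimation of null rejection rates: simulate under H0,
# compute a statistic with an asymptotic chi-square(q) null distribution, and
# compare it with the corresponding critical value.
import numpy as np
from scipy.stats import chi2

def null_rejection_rate(n_rep, alpha, q, simulate_model1, test_statistic, rng):
    crit = chi2.ppf(1.0 - alpha, q)
    rejections = 0
    for _ in range(n_rep):
        y = simulate_model1(rng)                  # data generated under H0
        rejections += test_statistic(y) > crit
    p_hat = rejections / n_rep
    se = np.sqrt(p_hat * (1.0 - p_hat) / n_rep)   # standard error of the estimate
    return p_hat, se
```

For the one-sided tests based on $r$ and $r^*$, the comparison is with standard normal quantiles instead of chi-square critical values.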
The null rejection rates (say $\widehat p$) of the tests are estimates of the true type I error probabilities with standard error ${\rm se}=\sqrt{\widehat p (1-\widehat p)/10000}$, and 95% confidence intervals are given by $\widehat p \pm 1.96 \sqrt{\widehat p (1-\widehat p)/10000}$. The null rejection rates and the corresponding standard errors are displayed in Tables $1$ (one-sided tests) and $2$ (two-sided tests) for different sample sizes. The simulation results show that the test based on the modified signed likelihood ratio statistic, $r^*$, presents rejection rates closer to the nominal levels than the original version, $r$, in small samples. For instance, for the normal distribution, $n=25$, and $\alpha = 1\%$, the rejection rates are $2.1\%$ $(r)$ and $1.4\%$ $(r^*)$. For $n = 15$, Student $t$ distribution and $\alpha = 5\%$, the rejection rates are $11.9\%$ $(r)$ and $6.5\%$ $(r^*)$. Additionally, we observe that the test based on the standard likelihood ratio statistic, $LR$, is considerably liberal when the sample size is small, i.e. the rejection rates are much larger than the nominal levels. For instance, for the Student $t$ distribution and $n = 15$, the rejection rates for the $LR$ test equal $5.7\%$ $(\alpha = 1\%)$, $15.7\%$ $(\alpha = 5\%)$, and $25.2\%$ $(\alpha = 10\%)$ (Table 2). The tests based on the modified versions, $LR^{*}$ and $LR^{**}$, present rejection rates much closer to the nominal levels than the original version, $LR$. For example, for the Student $t$ distribution, $n=15$ and $\alpha=1\%$, the rejection rates are $5.7\%$ $(LR)$, $1.1\%$ $(LR^{*})$, and $0.8\%$ $(LR^{**})$. The above-mentioned findings are corroborated by comparing the 95% confidence intervals of the true type I error probabilities of the different tests.

Table 1: Null rejection rates (%) of the one-sided tests based on $r$ and $r^*$ in model 1; standard errors are given in parentheses.

| Distribution      | $n$ | $r$ ($\alpha=1\%$) | $r^*$ ($\alpha=1\%$) | $r$ ($\alpha=5\%$) | $r^*$ ($\alpha=5\%$) | $r$ ($\alpha=10\%$) | $r^*$ ($\alpha=10\%$) |
|-------------------|-----|--------------------|----------------------|--------------------|----------------------|---------------------|-----------------------|
| Normal            | 15  | 2.8 (0.2)          | 1.5 (0.1)            | 8.8 (0.3)          | 6.1 (0.2)            | 14.4 (0.4)          | 11.0 (0.3)            |
| Normal            | 25  | 2.1 (0.1)          | 1.4 (0.1)            | 7.6 (0.3)          | 6.2 (0.2)            | 12.8 (0.3)          | 11.1 (0.3)            |
| Normal            | 35  | 1.7 (0.1)          | 1.5 (0.1)            | 6.5 (0.3)          | 5.7 (0.2)            | 12.0 (0.3)          | 10.8 (0.3)            |
| Normal            | 50  | 1.3 (0.1)          | 1.1 (0.1)            | 5.6 (0.2)          | 5.1 (0.2)            | 11.1 (0.3)          | 10.3 (0.3)            |
| Student $t$       | 15  | 4.3 (0.2)          | 1.8 (0.1)            | 11.9 (0.3)         | 6.5 (0.3)            | 18.1 (0.4)          | 12.2 (0.3)            |
| Student $t$       | 25  | 2.3 (0.2)          | 1.7 (0.1)            | 7.9 (0.3)          | 6.2 (0.2)            | 13.6 (0.3)          | 11.5 (0.3)            |
| Student $t$       | 35  | 1.9 (0.1)          | 1.8 (0.1)            | 7.1 (0.3)          | 6.2 (0.2)            | 12.4 (0.3)          | 11.5 (0.3)            |
| Student $t$       | 50  | 1.6 (0.1)          | 1.4 (0.1)            | 6.0 (0.2)          | 5.4 (0.2)            | 11.4 (0.3)          | 10.6 (0.3)            |
| Power exponential | 15  | 2.9 (0.2)          | 1.9 (0.1)            | 9.2 (0.3)          | 6.5 (0.3)            | 15.1 (0.4)          | 11.9 (0.3)            |
| Power exponential | 25  | 1.6 (0.1)          | 1.4 (0.1)            | 7.2 (0.3)          | 5.8 (0.2)            | 12.7 (0.3)          | 11.5 (0.3)            |
| Power exponential | 35  | 1.6 (0.1)          | 1.6 (0.1)            | 6.7 (0.3)          | 6.2 (0.2)            | 12.2 (0.3)          | 11.5 (0.3)            |
| Power exponential | 50  | 1.4 (0.1)          | 1.2 (0.1)            | 6.1 (0.2)          | 5.7 (0.2)            | 11.5 (0.3)          | 11.0 (0.3)            |

\[tab.1\] \[tab.2\]

Figures 1 and 2 depict curves of relative $p$-value discrepancies [*versus*]{} the corresponding asymptotic $p$-values for the tests that use $r$ and $r^*$ (Figure 1), and $LR$, $LR^{*}$, and $LR^{**}$ (Figure 2) for $n = 15$ under normal, Student $t$ and power exponential distributions. The relative $p$-value discrepancy is defined by the difference between the exact and the asymptotic $p$-values divided by the asymptotic $p$-value. The closer the curve is to zero, the better the asymptotic approximation. The figures clearly suggest that the modified statistics are much better approximated by the respective asymptotic distributions than the unmodified ones.
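Discrepancy curves of this kind can be produced along the following lines (again a Python illustration with assumed inputs): the "exact" $p$-values are estimated from statistics simulated under the null hypothesis and compared with the asymptotic $\chi^2$ $p$-values.

```python
# Sketch: relative p-value discrepancy, (exact - asymptotic) / asymptotic,
# with the exact p-value estimated from statistics simulated under H0.
import numpy as np
from scipy.stats import chi2

def relative_discrepancy(simulated_stats, q, asymptotic_p):
    """simulated_stats: array of statistics under H0; asymptotic_p: grid in (0, 1)."""
    cutoffs = chi2.ppf(1.0 - asymptotic_p, q)          # quantiles matching the asymptotic p-values
    exact_p = np.array([(simulated_stats > c).mean() for c in cutoffs])
    return (exact_p - asymptotic_p) / asymptotic_p     # zero means perfect agreement
```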
We conclude that the modified versions of the signed likelihood ratio test and standard likelihood ratio test have better performance than the original tests for small and moderate sample sizes. Although the tests based on the modified statistics, $LR^*$ and $LR^{**}$, present similar results, the test based on $LR^{**}$ had better performance (in the majority of the cases) than the one based on $LR^*$ in our simulations. However, a comparison based on 95% confidence intervals is not able to distinguish between the two modified tests in most of the cases. We now consider the mixed linear model $$\label{mixed-model} \bm{Y}_i = X_i {\bm{\beta}} + Z_i \bm{b}_i + \bm{e}_i, \quad i = 1,\ldots, n,$$ where $\bm{Y}_i$ is the $q_i \times 1$ response vector with $q_i$ randomly chosen from $\{1, 2, 3, 4, 5\}$, $X_i = [{\bm 1} \ \ {\bm x}_{i1} \ \ {\bm x}_{i2} \ \ {\bm x}_{i3} \ \ {\bm x}_{i4}]$ is a matrix of nonstochastic covariates $(q_i \times 5)$, and ${Z}_i = [{\bm 1} \ \ {\bm x}_{i1}]$ is a matrix of known constants $(q_i \times 2)$ (model 2). The vector ${\bm x}_{i1}$ is composed by the first $q_i$ elements of $(5, 10, 15, 30, 60)^\top$; ${\bm x}_{i2}, {\bm x}_{i3}$, and ${\bm x}_{i4}$ are vectors of dummy variables. The vector of the fixed effects parameters is $\bm{\beta} = (\beta_0, \beta_1, \beta_2, \beta_3, \beta_4)^\top$. Assume that $(\bm{e}_i, {\bm b}_i)^\top \sim El_{q_i}(\bm{0}, S_i)$, where $$\label{MatrizD} S_i = \left[\begin{array}{cc} \sigma^2 I_{q_i} & 0_{q_i \times 2} \\ 0_{2 \times q_i} & \Delta({\bm{\gamma}}) \\ \end{array}\right], \ \ \Delta({\bm{\gamma}}) = \left[\begin{array}{cc} \gamma_1 & \gamma_2 \\ \gamma_2 & \gamma_3 \\ \end{array}\right],$$ and ${\bm{\gamma}} = (\gamma_1, \gamma_2, \gamma_3)^\top$. Here, $0_{q_i \times 2}$ is null matrix of dimension $q_i \times 2$. Therefore, the marginal distribution of the observed vector is $\bm{Y}_i \sim El_{q_i}\left({\bm \mu}_{i}; \Sigma_i\right)$, where ${\bm \mu}_{i} = X_i {\bm{\beta}}$ and ${\Sigma}_i = Z_i \Delta({\bm{\gamma}}) Z_i^{\top} + \sigma^2 I_{q_i}$. Note that model (\[mixed-model\]) is a special case of (\[MainModel\]). Here, the vector of unknown parameters is ${\bm{\theta}} = ({\bm{\beta}}^\top, {\bm{\gamma}}^\top, \sigma^2)^\top$. The sample sizes considered are $n = 16, 24, 32, 40$, and $48$. We test ${\mathcal H}_0: {\bm \psi} = {\bm 0}$ against ${\mathcal H}_1: {\bm \psi} \neq {\bm 0}$, where ${\bm \psi} = (\beta_2, \beta_3, \beta_4)^\top$. The parameter values are $\beta_0 = 0.7$, $\beta_1 = 0.5$, $\beta_2 = \beta_3 = \beta_4 = 0$, $\gamma_1 = 500$, $\gamma_2 = 2$, $\gamma_3 = 200$, and $\sigma^2 = 5$. The null rejection rates of the tests are displayed in Table 3. We note that the likelihood ratio test is liberal. For instance, when ${\bm Y}_i$ follows a Student $t$ distribution, $n = 16$, and $\alpha = 10\%$, its rejection rate exceeds $26\%$. The tests based on the modified statistic, $LR^{*}$ and $LR^{**}$, present rejection rates closer to the nominal levels than the original version, $LR$. For example, the rejection rates when ${\bm Y}_i$ follows a Student $t$ distribution, $n = 16$, and $\alpha = 1\%$, are $6.3\%$ ($LR$), $1.1\%$ ($LR^{*}$), and $0.9\%$ ($LR^{**}$). Figure 3 presents curves of relative $p$-values discrepancies [*versus*]{} the corresponding asymptotic $p$-values for the statistics $LR$, $LR^{*}$, and $LR^{**}$ for $n = 16$ under the distributions considered. 
It is evident that the modified statistics are much better approximated by the reference distributions than the original statistics. In conclusion, the simulations suggest that the modified versions of the standard and signed likelihood ratio tests perform better than the original tests for small and moderate sample sizes. \[tab.3\]

Applications {#sec5}
============

Fluorescent lamp data {#lamp}
---------------------

The data used in this section were presented by Rosillo et al. (2010, Table 5). The authors analyzed the lifetime of $n=14$ fluorescent lamps in photovoltaic systems using an analytical model whose goal is to assist in improving ballast design and extending the lifetime of fluorescent lamps. We consider the nonlinear model (\[nonlinear-model\]) where the response variable is the observed lifetime/advertised lifetime ($Y$), and the covariates correspond to a measure of gas discharge ($x_1$) and the observed voltage/advertised voltage (a measure of the performance of the lamp and ballast – $x_2$); see Rosillo et al. (2010, eq(15)). The errors are assumed to follow a Student $t$ distribution with $\nu = 4$ degrees of freedom. This model provides a suitable fit for the data (Melo et al, 2015). The maximum likelihood estimates of the parameters (standard errors are given in parentheses) are $\widehat\beta_0 = 33.519 \ (5.082)$, $\widehat\beta_1 = 9.592 \ (4.417)$, $\widehat\beta_2 = -63.501 \ (9.789)$, $\widehat\beta_3 = 29.777 \ (4.710)$, and $\widehat\sigma^2 = 0.006 \ (0.003)$. The signed likelihood ratio statistic for the one-sided test of ${\mathcal H}_0: \beta_1 \leq 0$ against ${\mathcal H}_1: \beta_1 > 0$ is $r = 2.374$ ($p$-value: $0.009$), and the corrected statistic is $r^{*} = 1.878$ ($p$-value: $0.030$). The unmodified test rejects the null hypothesis at the 1% significance level, unlike the modified test. In addition, the standard likelihood ratio statistic for the two-sided test of ${\mathcal H}_0: \beta_1 = 0$ against ${\mathcal H}_1: \beta_1 \neq 0$ is $LR = 5.634$ ($p$-value: $0.018$) whereas the corrected statistics are $LR^{*} = 3.528$ ($p$-value: $0.060$) and $LR^{**} = 3.282$ ($p$-value: $0.070$). Note the considerable increase in the $p$-values when the modified statistics are employed. At the 5% significance level, the null hypothesis is rejected by the standard likelihood ratio test but not by the tests based on the modified statistics $LR^{*}$ and $LR^{**}$.

Blood pressure data {#blood}
-------------------

We consider a randomly selected sub-sample of the data presented by Crepeau et al. (1985). Heart attacks were induced in rats exposed to four different low concentrations of halothane; group 1: 0% (control), group 2: 0.25%, group 3: 0.50% and group 4: 1.0%. Our sample consists of 35 rats. The blood pressure of each rat (in mm Hg) is recorded at different points in time, from 1 to 9 recordings, after the induced heart attack. The goal is to investigate the effect of halothane on the blood pressure. We consider the mixed linear model (\[mixed-model\]), where the $j$th element of the response variable ${\bm Y}_i$ is the blood pressure of the $i$th rat at time $j$ with $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, q_i$ and $q_i \in \{1, 2, 3, 4, 5, 6, 7, 8, 9\}$. The values of ${\bm x}_{i1}$ are obtained from the vector of time points (in minutes) at which the $i$th rat's blood pressure was recorded. This vector is given by $(5, 10, 15, 30, 60, 120, 180, 240)^\top$.
Furthermore, ${\bm x}_{i2}$ is a dummy variable that equals 1 if the $i$th rat belongs to group 2 and 0 otherwise. Also, ${\bm x}_{i3}$ and ${\bm x}_{i4}$ equal 1 for groups 3 and 4, respectively. As in Crepeau et al. (1985), we assume a normal distribution for ${\bm Y}_i$. The hypothesis ${\mathcal H}_0: {\bm \psi} = {\bm 0}$ is to be tested against ${\mathcal H}_1: {\bm \psi} \neq {\bm 0}$, where ${\bm \psi} = (\beta_2, \beta_3, \beta_4)^\top$. The maximum likelihood estimates of the parameters (standard errors are given in parentheses) are $\widehat\beta_0 = 100.60 \ (6.74)$, $\widehat\beta_1 = 0.011 \ (0.012)$, $\widehat\beta_2 = 4.493 \ (9.333)$, $\widehat\beta_3 = -11.032 \ (9.081)$, $\widehat\beta_4 = -23.022 \ (8.913)$, $\widehat\sigma^2 = 97.761 \ (10.660)$, $\widehat\gamma_1 = 483.550 \ (122.770)$, $\widehat\gamma_2 = -0.700 \ (0.297)$, and $\widehat\gamma_3 = 0.002 \ (0.001)$. The likelihood ratio statistic and its modified versions for testing ${\cal H}_0: {\bm \psi} = {\bm 0}$ are: $LR = 7.954$ ($p$-value: $0.047$), $LR^{*} = 6.883$ ($p$-value: $0.075$), and $LR^{**} = 6.844$ ($p$-value: $0.077$). We notice that the null hypothesis is rejected at the 5% nominal level when one uses the original statistic, but ${\cal H}_0$ is not rejected when the modified likelihood ratio tests are employed. That is, when using the modified tests (at the 5% nominal level) one concludes that there is not enough evidence that the blood pressure is affected by the administration of halothane at the concentrations considered in the experiment. This conclusion agrees with the analysis of Crepeau et al. (1985, p.510) and the test of ${\cal H}_0$ based on the full sample with $43$ rats: $LR = 7.162$ ($p$-value: $0.067$), $LR^{*} = 6.613$ ($p$-value: $0.085$), and $LR^{**} = 6.602$ ($p$-value: $0.086$). Concluding remarks {#sec6} ================== We studied the issue of testing two-sided and one-sided hypotheses in a general multivariate elliptical model. Some special cases of this model are errors-in-variables models, nonlinear mixed-effects models, heteroscedastic nonlinear models, among others. In any of these models, the vector of the errors may have any multivariate elliptical distribution. In small and moderate-sized samples the distributions of the standard and signed likelihood ratio statistics may be far from the respective reference distributions. As a consequence, the tests may be considerably liberal. We derived modified versions of these statistics. Our simulations suggest that the modified statistics closely follow the reference distributions in finite samples. The modifications obtained in this paper attenuate the liberal behavior of the original tests. Acknowledgement {#acknowledgement .unnumbered} =============== We gratefully acknowledge the financial support from CNPq and FAPESP. The authors thank the anonymous referee and the Associate Editor for helpful comments and suggestions. Appendix. 
The observed information matrix and derivatives with respect to the data {#secA .unnumbered} ================================================================================== The $(r,s)$th element of the observed information matrix $J$ is given by $$\sum_{i=1}^{n}\Big\{\bm{T}_{i(r)}^{\top} \Sigma_i^{-1} \bm{d}_{i(s)} + {\mbox{tr}}(B_{i(r)} A_{i(s)}) + E_{i(rs)}\Big\},$$ where $$\begin{split} \bm{T}_{i(r)}^{\top} &= - \dot{v}_i \left(\bm{z}_i^{\top} A_{i(r)} \bm{z}_i\right) \bm{z}_i^{\top} + 2 \dot{v}_i \left(\bm{d}_{i(r)}^{\top} \Sigma_i^{-1} \bm{z}_i\right) \bm{z}_i^{\top} + v_i \bm{d}_{i(r)}^{\top} + v_i \bm{z}_i^{\top} \Sigma_i^{-1} C_{i(r)}, \\ B_{i(r)} &= - \dot{v}_i \left(\bm{d}_{i(r)}^{\top} \Sigma_i^{-1} \bm{z}_i\right) \bm{z}_i \bm{z}_i^{\top} + \frac{1}{2}\dot{v}_i \left(\bm{z}_i^{\top} A_{i(r)} \bm{z}_i\right) \bm{z}_i \bm{z}_i^{\top} - v_i \bm{z}_i \bm{d}_{i(r)}^{\top} - \frac{1}{2} C_{i(r)},\\ E_{i(rs)} &= - \frac{1}{2} {\mbox{tr}}\left[A_{i(sr)} \left(\Sigma_i - v_i \bm{z}_i \bm{z}_i^{\top}\right)\right] - v_i \bm{z}_i^{\top} \Sigma_i^{-1} \bm{d}_{i(sr)},\\ \end{split}$$ with $\bm{z}_i = {\bm Y}_i - \bm{\mu}_i = \widehat{P}_i \bm{a}_i + \widehat{\bm{\mu}}_i - \bm{\mu}_i$, $\dot{v}_i = - 2 W_g'(u_i)$, $W_g'(u) = d W_g(u)/du$, and $A_{i(sr)} = \partial A_{i(s)}/\partial\theta_r = - 2 A_{i(r)} C_{i(s)} \Sigma_i^{-1} - \Sigma_i^{-1} (\partial C_{i(s)}/\partial \theta_r)\Sigma_i^{-1}$. For the $N_{q_i}({{\bm \mu}_i}, {\Sigma_i})$ distribution, $v_i = 1$ and $\dot{v}_i = 0$. For the $t_{q_i}({{\bm \mu}_i}, {\Sigma_i}, \nu)$ distribution, we have $v_i = (\nu + q_i)/(\nu + u_i)$ and $\dot{v}_i = -(\nu + q_i)/(\nu + u_i)^2$. Additionally, for the $PE_{q_i}({{\bm \mu}_i}, {\Sigma_i}, \lambda)$ distribution, we have $v_i = \lambda u_i^{\lambda - 1}$ and $\dot{v}_i = \lambda (\lambda - 1) u_i^{\lambda - 2}$. Hence, the observed information matrix can be written as in (\[ScoreFisher2\]), where $$G_{rs} = \sum_{i=1}^{n}\left[{\mbox{tr}}(B_{i(r)} A_{i(s)}) + E_{i(rs)}\right].$$ We now turn to the derivatives with respect to the sample space required to compute Barndorff-Nielsen’s and Skovgaard’s adjustments. From (\[E.ell\]), the $r$th element of the vector ${\bm{\ell}}'$ is $$\label{C.2} \begin{split} \ell_r' &= \sum_{i=1}^{n} \left(\bm{a}_i^{\top} \widehat{P}_{i(r)}^{\top} + \widehat{\bm{d}}_{i(r)}^{\top}\right) \Sigma_i^{-1} \left(-v_i \bm{z}_i\right), \end{split}$$ where $\widehat{\bm{d}}_{i(r)} = \partial {\widehat{\bm{\mu}}}_{i}/\partial{\widehat{\theta}}_r$ and ${\widehat P}_{i(r)} = \partial {\widehat P}_{i}/\partial{\widehat{\theta}}_r$. The derivatives of $P_i$ with respect to the parameters may be obtained by using the algorithm proposed by Smith (1995). 
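For completeness, the weight functions $v_i$ and $\dot{v}_i$ listed above are straightforward to code. The sketch below implements the three special cases exactly as written, with $u_i$ the squared Mahalanobis distance of unit $i$; the default values of $\nu$ and $\lambda$ are placeholders only.

```python
def weights(u, q, family="normal", nu=4.0, lam=0.5):
    """Weight functions v_i and dot(v)_i entering the observed information matrix."""
    if family == "normal":                 # N_{q_i}(mu_i, Sigma_i)
        return 1.0, 0.0
    if family == "t":                      # t_{q_i}(mu_i, Sigma_i, nu)
        return (nu + q) / (nu + u), -(nu + q) / (nu + u) ** 2
    if family == "power-exponential":      # PE_{q_i}(mu_i, Sigma_i, lambda)
        return lam * u ** (lam - 1.0), lam * (lam - 1.0) * u ** (lam - 2.0)
    raise ValueError("unknown family")

# Example: a unit with dimension q_i = 3 and u_i = 2.5 under a Student t with nu = 4
print(weights(2.5, 3, family="t"))         # (1.0769..., -0.1657...)
```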
The $(r,s)$th element of the matrix $U'$ is $$U_{rs}' = \sum_{i=1}^{n} \left(\bm{a}_i^{\top} \widehat{P}_{i(s)}^{\top} + \widehat{\bm{d}}_{i(s)}^{\top}\right) \Sigma_i^{-1} \Big[2 \dot{v}_i \bm{z}_i \bm{d}_{i(r)}^{\top} \Sigma_i^{-1} \bm{z}_i + v_i \bm{d}_{i(r)} - \dot{v}_i \bm{z}_i \bm{z}_i^{\top} A_{i(r)} \bm{z}_i + v_i C_{i(r)} \Sigma_i^{-1} \bm{z}_i\Big].$$ In matrix notation ${\bm{\ell}}'$ and $U'$ can be written as $${\bm{\ell}}' = \widehat{R}^\top \Sigma^{-1} {\bm z}^*, \ \ U' = Q^\top \Sigma^{-1} \widehat{R},$$ where $\widehat{R}^\top = (\widehat{R}_1^{\top}, \ldots, \widehat{R}_n^{\top})$, ${{\bm z}}^* = (- {v}_1 {{\bm z}}_1^{{\top}}, \ldots, - {v}_n {{\bm z}}_n^{{\top}})^{\top}$, $Q = (Q_1^{\top}, \ldots, Q_n^{\top})^\top$, $\widehat{R}_i = (\widehat{\bm R}_{i(1)}, \ldots, \break \widehat{\bm R}_{i(p)})$, and $Q_i = (\bm{Q}_{i(1)}, \ldots, \bm{Q}_{i(p)})$, with $\widehat{\bm{R}}_{i(r)} = \widehat{P}_{i(r)} {\bm a}_i + \widehat{\bm{d}}_{i(r)}$ and $$\bm{Q}_{i(r)} = 2 \dot{v}_i \bm{z}_i \bm{d}_{i(r)}^{\top} \Sigma_i^{-1} \bm{z}_i + v_i \bm{d}_{i(r)} - \dot{v}_i \bm{z}_i \bm{z}_i^{\top} A_{i(r)} \bm{z}_i + v_i C_{i(r)} \Sigma_i^{-1} \bm{z}_i. \\$$ Therefore, $\widehat{\bm{\ell}}'$, $\widetilde{\bm{\ell}}'$ and $\widetilde{U}'$ can be written as in (\[Deriv\]), where $\widetilde{v}_i = -2 W_{g}(\widetilde{u}_{i})$, $\widetilde{\dot{v}}_i = -2 W_{g}'(\widetilde{u}_{i})$, $\widehat{{\bm z}}_i = {\widehat P}_i {\bm a}_i$, $\widetilde{{\bm z}}_i = {\widehat P}_i {\bm a}_i + {\widehat{\bm \mu}}_i - \widetilde{{\bm \mu}}_i$, and $\widetilde{u}_i = \left({\widehat P}_i {\bm a}_i + {\widehat{\bm \mu}}_i - \widetilde{{\bm \mu}}_i\right)^{\top} \widetilde{\Sigma}_i^{-1} \left({\widehat P}_i {\bm a}_i + {\widehat{\bm \mu}}_i - \widetilde{{\bm \mu}}_i\right)$. 
Finally, the matrix $\:\:{{\widetilde{\!\!\widetilde{J}}}}$ required to compute Skovgaard’s adjustment is defined in (\[Deriv\]), where   ${\widetilde{\!\!\widetilde{T}}} = \left(\ \ {\widetilde{\!\!\widetilde{T}}}_1^\top, \ldots, \ \ {\widetilde{\!\!\widetilde{T}}}_n^\top\right)^{\top}$, $\ {\widetilde{\!\!\widetilde{T}}}_i = \left(\ \ {\widetilde{\!\!\widetilde{\bm T}}}_{i(1)}, \ldots, \ \ {\widetilde{\!\!\widetilde{\bm T}}}_{i(p)}\right)$, and the $(r,s)$th element of $\ {\widetilde{\!\!\widetilde{G}}}$ is given by $${\widetilde{\!\!\widetilde{G}}}_{rs} = \sum_{i=1}^{n}\left[{\mbox{tr}}\left(\ {\widetilde{\!\!\widetilde{B}}}_{i(r)} \widetilde{A}_{i(s)}\right) + \ {\widetilde{\!\!\widetilde{E}}}_{i(rs)}\right]\:,$$ with $$\begin{split} \ {\widetilde{\!\!\widetilde{\bm T}}}_{i(r)} &= - \ {\widetilde{\!\!\widetilde{\dot{v}}}}_{i} \left({\bm a}_i^{\top} {\widetilde P}_i^{\top} \widetilde{A}_{i(r)} {\widetilde P}_i {\bm a}_i\right) {\bm a}_i^{\top} {\widetilde P}_i^{\top} + 2 \ {\widetilde{\!\!\widetilde{\dot{v}}}}_{i} \left(\widetilde{\bm{d}}_{i(r)}^{\top} \widetilde{\Sigma}_i^{-1} {\widetilde P}_i {\bm a}_i \right) {\bm a}_i^{\top} {\widetilde P}_i^{\top} + \ {\widetilde{\!\!\widetilde{v}}}_{i} \widetilde{\bm{d}}_{i(r)}^{\top} + \ {\widetilde{\!\!\widetilde{v}}}_{i} {\bm a}_i^{\top} {\widetilde P}_i^{\top} \widetilde{\Sigma}_i^{-1} \widetilde{C}_{i(r)},\\ \\ \ {\widetilde{\!\!\widetilde{B}}}_{i(r)} &= - \ {\widetilde{\!\!\widetilde{\dot{v}}}}_{i} \left(\widetilde{\bm{d}}_{i(r)}^{\top} \widetilde{\Sigma}_i^{-1} {\widetilde P}_i {\bm a}_i \right) {\widetilde P}_i {\bm a}_i {\bm a}_i^{\top} {\widetilde P}_i^{\top} + \frac{1}{2} \ {\widetilde{\!\!\widetilde{\dot{v}}}}_{i} \left({\bm a}_i^{\top} {\widetilde P}_i^{\top} \widetilde{A}_{i(r)} {\widetilde P}_i {\bm a}_i\right) {\widetilde P}_i {\bm a}_i {\bm a}_i^{\top} {\widetilde P}_i^{\top} - \ {\widetilde{\!\!\widetilde{v}}}_{i} {\widetilde P}_i {\bm a}_i \widetilde{\bm{d}}_{i(r)}^{\top} \\ & \ \ \ - \frac{1}{2} \widetilde{C}_{i(r)}, \\ \\ \ {\widetilde{\!\!\widetilde{E}}}_{i(rs)} &= - \frac{1}{2} {\mbox{tr}}\left[\widetilde{A}_{i(sr)} \left(\widetilde{\Sigma}_i - \ {\widetilde{\!\!\widetilde{v}}}_{i} {\widetilde P}_i {\bm a}_i {\bm a}_i^{\top} {\widetilde P}_i^{\top}\right)\right] - \ {\widetilde{\!\!\widetilde{v}}}_{i} {\bm a}_i^{\top} {\widetilde P}_i^{\top} \widetilde{\Sigma}_i^{-1} \widetilde{\bm{d}}_{i(sr)}, \\ \end{split}$$ where $${\widetilde{\!\!\widetilde{v}}}_{i} = -2 W_{g}(\ {\widetilde{\!\!\widetilde{u}}}_{i}), \ \ {\widetilde{\!\!\widetilde{\dot{v}}}}_{i} = -2 W_{g}'(\ {\widetilde{\!\!\widetilde{u}}}_{i}), \ {\rm and} \ \ {\widetilde{\!\!\widetilde{u}}}_{i} = {\bm a}_i^{\top} {\widetilde P}_i^{\top} \widetilde{\Sigma}_i^{-1} {\widetilde P}_i {\bm a}_i.$$ [3]{} Barndorff-Nielsen, O.E. (1986). Inference on full or partial parameters, based on the standardized signed log likelihood ratio. [*Biometrika*]{}, 73, 307–322. Brazzale, A.R. and Davison, A.C. (2008). Accurate parametric inference for small samples. [*Statistical Science*]{}, 23, 465–484. Cheng, C.L. and Van Ness, J.W. (1999). [*Statistical Regression with Measurement Error*]{}. Oxford University Press, London. Crepeau, H., Koziol, J., Reid, N. and Yuh, Y.S. (1985). Analysis of incomplete multivariate data from repeated measurement experiments. [*Biometrics*]{}, 41, 505–514. Cribari-Neto, F. and Queiroz, M.P.F. (2014). On testing inference in beta regressions. [*Journal of Statistical Computation and Simulation*]{}, 84, 186–203. Doornik, J.A. (2013). 
[*Object-Oriented Matrix Programming using Ox*]{}. London: Timberlake Consultants Press (ISBN 978-0-9571708-1-0). Fang, K.T., Kotz, S. and Ng, K.W. (1990). [*Symmetric Multivariate and Related Distributions*]{}. London: Chapman and Hall. Ferrari, S.L.P. and Pinheiro, E.C. (2014). Small-sample likelihood inference in extreme-value regression models. [*Journal of Statistical Computation and Simulation*]{}, 84, 582–595. Fraser, D.A.S., Reid, N. and Wu, J. (1999). A simple general formula for tail probabilities for frequentist and bayesian inference. [*Biometrika*]{}, 86, 249–264. Guolo, A. (2012). Higher-order likelihood inference in meta-analysis and meta-regression. [*Statistics in Medicine*]{}, 31, 313–327. Lemonte, A. J. and Ferrari, S.L.P. (2011). Signed likelihood ratio tests in the Birnbaum-Saunders regression model. [*Journal of Statistical Planning and Inference*]{}, 141, 1031–1040. Lemonte, A.J. and Patriota, A.G. (2011). Multivariate elliptical models with general parameterization. [*Statistical Methodology*]{}, 8, 389–400. Melo, T.F.N., Ferrari, S.L.P. and Patriota, A.G. (2015). Improved estimation in a general multivariate elliptical model. [*arXiv:1508.05994*]{}. Peña, E.A., Rohatgi, V.K. and Szekely, G.J. (1992). On the non-existence of ancillary statistics. [*Statistics and Probability Letters*]{}, 15, 357–360. Rosillo, F.G., Martín, N. and Egido, M.A. (2010). Comparison of conventional and accelerated lifetime testing of fluorescent lamps. [*Lighting Research and Technology*]{}, 42, 243–259. Sen, P.K. and Singer, J.M. (1993). [*Large Sample Methods in Statistics. An Introduction with Applications*]{}. Chapman & Hall, New York. Severini, T.A. (2000). [*Likelihood Methods in Statistics*]{}. Oxford University Press. Skovgaard, I.M. (2001). Likelihood asymptotics. [*Scandinavian Journal of Statistics*]{}, 28, 3–32. Smith, S.P. (1995). Differentiation of the Cholesky algorithm. [*Journal of Computational and Graphical Statistics*]{}, 4, 134–147. Vanegas, L.H. and Paula, G.A. (2015). A semiparametric approach for joint modeling of median and skewness. [*Test*]{}, 24, 110–135. Verbeke, G. and Molenberghs, G. (2000). [*Linear Mixed Models for Longitudinal Data*]{}. Springer, New York. [^1]: Note that ${\bm \mu}_{i}$ and ${\Sigma}_i$ may depend on covariates associated with the $i$th observed response ${\bm Y}_{i}$. The covariates may have components in common. [^2]: The dependence on $\bm{\theta}$ is omitted here and in the sequel for the sake of readability.
--- author: - Mattias Blennow - Enrique Fernandez Martinez - Olga Mena - Javier Redondo - Paolo Serra title: Asymmetric Dark Matter and Dark Radiation --- Introduction ============ By now we have overwhelming evidence for the presence of an extra non-baryonic Dark Matter (DM) component in the Universe from a variety of different independent sources (see [*e.g.*]{}, [Ref.]{} [@Bertone:2004pz]). These include rotation curves of galaxies, gravitational lensing, structure formation and global fits of cosmological data such as the cosmic microwave background (CMB) anisotropies measured by the WMAP satellite. However, all this evidence comes exclusively from gravitational effects and we remain sadly ignorant of the properties characterizing the new particle species constituting the DM, such as their masses or the types and strengths of their interactions with the Standard Model (SM) particles. Even in the case where DM is composed by a single stable species, it is a reasonable expectation that this will be only one ingredient of a richer sector with more particles and interactions, given the complexity of the SM sector. The model-building possibilities are therefore limitless, given the drought of information on the dark sector (DS) we currently suffer. In this context, most of the interest over the last years has been focused in obtaining a DM candidate within theories that try to address other shortcomings of the SM. Notable examples are the axion [@Peccei:1977hh], introduced to solve the strong CP problem, and the popular weakly interacting massive particle (WIMP) [@Ellis:1983ew], with a $\sim 100$ GeV mass, which can be easily accommodated in models addressing the electroweak hierarchy problem. Indeed, the masses of the extra degrees of freedom required to stabilize the Higgs mass under radiative corrections cannot be much further away from the electroweak scale if the fine tuning problem is to be addressed. The fact that the relic abundance of a weakly interacting particle with $\sim 100$ GeV mass gives the observed DM energy density is an additional bonus for these theories and is usually dubbed the “WIMP miracle”. Lately, a paradigm shift in the study of DM models is taking place with the goal of exploring the phenomenological consequences of, not only the standard WIMP and axion DM models, but also those of other plausible dark sectors. After all, while the electroweak hierarchy and the strong CP problems are well-motivated theoretical puzzles on their own, the only solid experimental evidence we have so far for physics beyond the SM is DM —along with neutrino masses and mixings— and it is fully justified to develop SM extensions with only the aim of addressing the existence and nature of DM. This is an important avenue to pursue in order to prevent our experimental efforts from becoming too focused on the leading DM paradigms, possibly missing relevant phenomenological signals. 
In this context, the old idea of asymmetric dark matter (ADM) [@Nussinov:1985xr; @Barr:1990ca; @Barr:1991qn; @Kaplan:1991ah; @Kuzmin:1996he; @Kusenko:1998yi] is becoming increasingly popular [@Farrar:2004qy; @Hooper:2004dc; @Kitano:2004sv; @Agashe:2004bm; @Kitano:2005ge; @Cosme:2005sb; @Farrar:2005zd; @Suematsu:2005kp; @Tytgat:2006wy; @Banks:2006xr; @Page:2007sh; @Kitano:2008tk; @Nardi:2008ix; @Kaplan:2009ag; @Kribs:2009fy; @Cohen:2009fz; @Cai:2009ia; @An:2009vq; @Frandsen:2010yj; @An:2010kc; @Cohen:2010kn; @Taoso:2010tg; @Shelton:2010ta; @Davoudiasl:2010am; @Haba:2010bm; @Belyaev:2010kp; @Chun:2010hz; @Buckley:2010ui; @Gu:2010ft; @Blennow:2010qp; @Hall:2010jx; @Dutta:2010va; @Falkowski:2011xh; @Haba:2011uz; @Chun:2011cc; @Heckman:2011sw; @Graesser:2011wi; @Frandsen:2011kt; @McDermott:2011jp; @Buckley:2011kk; @Iminniyaz:2011yp; @Bell:2011tn; @Cheung:2011if; @Davoudiasl:2011fj; @MarchRussell:2011fi; @Cui:2011qe; @Arina:2011cu; @Buckley:2011ye; @Barr:2011cz; @Cirelli:2011ac; @Lin:2011gj; @Petraki:2011mv; @vonHarling:2012yn; @Kamada:2012ht; @Iocco:2012wk]. ADM offers a DM paradigm in which the origin and properties of DM are much more closely related to those of baryonic matter. This seems appealing, since both abundances are observed to be close to each other $\Omega_{DM} \approx 5 \Omega_{b}$. Indeed, if the origin and mass of the DM candidate are similar to the baryons, this coincidence is less striking than within the “WIMP miracle”, where the production mechanism and masses in the dark and visible sectors are completely different. ADM models postulate that the stability of the DM population stems from a new conserved quantum number, $X$. The relic density is then associated to a particle-antiparticle asymmetry, in complete analogy to the baryonic sector and baryon number. A common origin for both the baryon and DM asymmetries is usually assumed, which typically implies similar abundances and, therefore, a constraint on the DM mass close to 5 GeV so as to reproduce the correct observed energy density. This precise prediction turns out to be quite general if the mechanism linking the DM and baryon asymmetries conserves a combination of $B-L$ and $X$, say $Q$. In this case, the DM mass turns out to be $5-7/Q_{\rm DM}$, with $Q_{\rm DM}$ the $Q$-charge of the dark matter particle [@Ibe:2011hq]. The main challenge of successful ADM models is to provide sufficient annihilation of the thermally produced symmetric component with such a light DM candidate without violating collider or direct search constraints [@Buckley:2011kk; @MarchRussell:2012hi]. The most common solution involves a lighter mediator with $\sim 10-100$ MeV mass [@Lin:2011gj] in the DS, into which DM can efficiently annihilate and which subsequently decays into SM particles. This lighter mediator would be the DS analogue of the pion, which leads to efficient annihilation of the symmetric component in the baryon sector. In this work we will take the analogy between the dark and visible sectors one step further and assume that the DS, in addition to the DM and the mediator, contains very light degrees of freedom which, like photons and neutrinos in the SM sector, contribute to the radiation content of the Universe. We will dub this content “dark radiation” (DR). Since the symmetric component of DM and any mediators can now annihilate or decay into DR and not into SM particles, the connection between the DS and the SM weakens, alleviating the constraints stemming from collider, DM direct and indirect detection experiments[^1]. 
Also, in models where the symmetry related to the DM number $X$ is explicitly violated by a small Majorana mass term, the DM interactions with the DR bath could play an important role in DM$\leftrightarrow$anti-DM oscillations, which can normally challenge the survival of the asymmetry [@Buckley:2011ye; @Cirelli:2011ac; @Tulin:2012re]. This scenario offers two new probes into the structure of the DS, which are complementary to direct, indirect and collider DM searches. On one hand, after the SM and DS decouple, the thermal symmetric populations of DS particles end up annihilating via the light mediator into DR, which becomes heated with respect to the photon thermal bath. Thus, constraints on the allowed amount of extra radiation can lead to constraints on the DS degrees of freedom, providing valuable information for model building. The amount of energy in dark radiation is traditionally expressed in terms of the extra effective number of neutrinos $\Delta {N_{\rm eff}}$ that can be probed by its effect on the cosmic microwave background (CMB) and the outcome of big bang nucleosynthesis (BBN). Interestingly, these estimates of $\Delta {N_{\rm eff}}$ presently show a trend towards a non-zero DR component [@Mangano:2006ur; @Hamann:2007pi; @Reid:2009nq; @Dunkley:2010ge; @Komatsu:2010fb; @Hamann:2010pw; @Keisler:2011aw; @Archidiacono:2011gq; @Smith:2011ab; @Hamann:2011hu; @Nollett:2011aa; @Izotov:2010ca; @Aver:2011bw; @Aver:2010wq] which shall be either confirmed or excluded by the great improvement in sensitivity expected in the near future from the CMB data of the Planck satellite. On the other hand, since some interaction between the ADM population and the DR must exist via the lighter mediator, this interaction can be bounded through constraints on the matter power spectrum. Indeed, for significant DM-DR interactions, DM and DR can form a coupled fluid which, in a way analogous to the photon-baryon plasma, is not pressureless and can propagate acoustic oscillations. We will discuss the constraints that galaxy surveys can set on this DM-DR interactions. The present study on the presence of extra relativistic components in the DS that interact at some level with the DM population has been inspired by the asymmetric DM paradigm, in which complex dark sectors with extra light degrees of freedom are generally required. However, we will try to keep our study as model independent as possible and hence our results can also apply to different DM models, not necessarily based in a particle-antiparticle asymmetry, in which DM interacts with a DR component. Indeed, the constraints on $\Delta {N_{\rm eff}}$ stemming from BBN and the CMB analysis that we will discuss can also apply to other models of DM. In principle this is also true for the bounds on DM-DR interactions that we will derive from the matter power spectrum. However, as we will see, the size of the interactions required to lead to any observable effect rules out that DM is a thermal relic. Indeed, the annihilation of DM to DR would be too large to reproduce the observed DM abundance. The ADM paradigm, on the other hand, decouples the DM abundance, determined by a particle-antiparticle asymmetry, from its annihilation cross section and could thus lead to the signals we will constrain. Indeed, large annihilation cross sections are particularly desirable in ADM models so as to efficiently remove the thermal symmetric DM component that can otherwise dominate over the asymmetry and spoil its relation to the baryon abundance. 
This paper is organized as follows: In [Sec.]{} \[sec:DR\], we derive the constraints that present and near future measurements of $\Delta {N_{\rm eff}}$ imply on the degrees of freedom present in the dark sector as a function of the decoupling temperature. Section \[sec:DR-DM\] is dedicated to reviewing the bounds that can be derived on the interactions between the DM and DR populations from the galaxy power spectrum. Finally, in [Sec.]{} \[sec:summary\], we summarize our results and give our conclusions. Dark Radiation {#sec:DR} ============== If the thermal component of DM ends up annihilating into DR, an extra contribution to the energy density in relativistic degrees of freedom will be present, $\rho_{\rm DR}$. This contribution is generally parametrized through $\Delta {N_{\rm eff}}$, the number of extra effective neutrino species by normalizing the contribution of the extra radiation to that of a neutrino field $$\Delta {N_{\rm eff}}= \frac{\rho_{\rm DR}}{2\frac{7}{8}\frac{\pi^2}{30}T_\nu^4} ,$$ where $T_\nu$ is the temperature of neutrinos at the specific moment of interest. Many different possibilities can be envisioned for the thermal histories of the SM and the DS depending on the details of particular DM realizations, the structure of the DS, and its interactions with the SM. A common feature of ADM models is that they usually contain significantly more structure than the light stable field constituting the DM. Furthermore, given the flavour structure observed in the SM, it seems naive to assume that the DS, which amounts to a five times larger fraction of the energy content of the Universe, would actually be only composed by a single field. In this context, models with flavoured DM are becoming increasingly popular [@Ibanez:1983kw; @Hagelin:1984wv; @Freese:1985qw; @Falk:1994es; @Servant:2002aq; @Blennow:2010qp; @Kile:2011mn; @Batell:2011tc; @Cui:2011qe; @Kamenik:2011nb; @Agrawal:2011ze]. In this work, we will adopt an approach as model independent as possible. We parametrize the potential complexity of the DS through the number of relativistic degrees of freedom $g_h+g_\ell$ present in the DS at the time of decoupling from the SM, characterized by a temperature $T_d$. While $g_\ell$ corresponds to the number of degrees of freedom present in the field(s) that ultimately constitute the DR, the $g_h$ degrees of freedom correspond to relatively heavy degrees of freedom that are going to turn non-relativistic and heat the DR with respect to the SM. This general approach implies that the results of this section can apply, not only to ADM models by which we were inspired, but also to any DS that contains light degrees of freedom which constitute DR apart from the DM candidate. Two extreme examples of our parametrization are the following: If only the heaviest fields in the DS interact with the SM, $T_d$ will be very high and most of the DS degrees of freedom will be relativistic at decoupling, allowing arbitrarily high values of $g_h$ depending on the complexity of the model. The opposite scenario would be realized when the DR fields have interactions with the SM which keep them in thermal equilibrium until very late times. In this case, $g_h = 0$ and the DR will not be heated again with respect to the photon bath and will have a final lower temperature depending on the moment of decoupling. This last example can be realized if DR is made up of sterile neutrinos that are mixed with the SM ones and decouple almost at the same time. 
The SM and DS share a common temperature until $T_d$ and from that point onwards they evolve independently. In this case, the comoving entropies of the two sectors are conserved separately. We can use this fact to track the temperature changes in each sector, which we need in order to evaluate $\Delta {N_{\rm eff}}$. The comoving entropy in a thermalized sector is defined as $$S= \frac{2 \pi^2}{45}g^s_{*} T^3 a^3,$$ where $T$ is the common temperature of the sector, $a$ is the scale factor, and $g^s_{*}$ is the *effective number of entropy degrees of freedom*. The latter can be conveniently expressed as a sum over species $$g^s_{*}(T)= \sum_{i=\rm bosons} g_i f^-_i + \frac{7}{8}\sum_{j=\rm fermions} g_j f^+_j$$ where $g_i$ is the number of internal degrees of freedom of species $i$ and $$f_i^\pm = \frac{45}{4 \pi^4}\left(\frac{8}{7}\right)^{\frac{1\pm 1}{2}} z_i^4\int_1^\infty \frac{y\sqrt{y^2-1}}{\exp (y z_i)\pm 1}\frac{4 y^2-1}{3 y}dy$$ are functions of the particle mass $m_i$ through the ratio $z_i=m_i/T$, which changes smoothly from $f^\pm_i=1$ when the particle is relativistic ($z_i\ll 1$) to $f^\pm_i=0$ when it becomes non-relativistic ($z_i\gg1$). In practice, they act as a filter that allows only relativistic species to contribute to $g^s_*$. For the SM $g^s_*(T)$ we have used the approximate fitted expression of Appendix A of Ref. [@Wantz:2009it], which takes into account its non-trivial behavior around the QCD phase transition (which cannot be reproduced with the simple formulas above). From now on, we will use no subindex for SM quantities ($g^s_*$, $T$) and the subindex DS for DS quantities ($g^s_{*,{ {\scriptscriptstyle \rm D\hspace{-0.07cm}S} }},T_{{ {\scriptscriptstyle \rm D\hspace{-0.07cm}S} }}$). At this stage, it is convenient to derive a general formula for the relative temperature of two decoupled sectors. After decoupling, the comoving entropies are constant during the expansion, and so is their ratio, $S_1/S_2=g^s_{*,1}T_1^3/g^s_{*,2} T_2^3= {\rm constant}$, where all four quantities can depend on time through the temperatures. The constant can be evaluated at the time of decoupling, where $T_1=T_2=T_d$, to be $S_1/S_2=g^s_{*,1}(T_d)/g^s_{*,2}(T_d)$. The ratio of the two temperatures at any later time is therefore $$\label{T1T2} \frac{T_1}{T_2} = \left(\frac{g^s_{*,1}(T_d)}{g^s_{*,1}} \frac{g^s_{*,2}}{g^s_{*,2}(T_d)}\right)^{1/3} ,$$ where the $g^s_{*,i}$ and $T_i$ are understood to be evaluated at the same time. We will now apply this formula to compute $\Delta {N_{\rm eff}}$ as a function of the DS degrees of freedom. It is convenient to distinguish two different cases. High DS decoupling temperature, $T_d\gtrsim$ MeV {#high-ds-decoupling-temperature-t_dgtrsim-mev .unnumbered} ------------------------------------------------ The simplest and probably most realistic case occurs when the decoupling temperature of the hidden sector is higher than the neutrino-electron decoupling temperature, i.e. $T_d \gtrsim$ MeV. In terms of the light and heavy DS degrees of freedom introduced before, the relative temperature between DR and photons at the CMB epoch is then given by:\ $$\left.\frac{T_{ {\scriptscriptstyle \rm D\hspace{-0.07cm}S} }}{T_\gamma}\right|_{\rm CMB} = \left( \frac{(g_h+g_l)}{g_l}\frac{g^s_{*\rm CMB}}{g^s_*(T_d)} \right)^{1/3},$$ with $g^s_{*\rm CMB}$ being the SM effective number of entropy degrees of freedom at the CMB epoch.
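As a numerical cross-check of the filter functions $f^\pm_i$ introduced above, the short sketch below evaluates them by quadrature (after the change of variable $x = z_i y$) and verifies the two limits quoted in the text. It is a minimal illustration and does not reproduce the fitted SM $g^s_*(T)$ of Ref. [@Wantz:2009it].

```python
import numpy as np
from scipy.integrate import quad

def f_entropy(z, fermion=False):
    """Filter f_i^{+/-} for a species with z = m_i/T (same integral, written with x = z*y)."""
    sign = 1.0 if fermion else -1.0                    # '+' for fermions, '-' for bosons
    pref = 8.0 / 7.0 if fermion else 1.0
    def integrand(x):
        ex = np.exp(min(x, 700.0))                     # avoid overflow deep in the tail
        return np.sqrt(x**2 - z**2) * (4.0 * x**2 - z**2) / (3.0 * (ex + sign))
    val, _ = quad(integrand, z, np.inf, limit=200)
    return 45.0 / (4.0 * np.pi**4) * pref * val

print(round(f_entropy(1e-3), 4))                       # ~1: relativistic boson
print(round(f_entropy(1e-3, fermion=True), 4))         # ~1: relativistic fermion
print(f"{f_entropy(20.0):.1e}")                        # ~0: non-relativistic species
```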
The quantity $g^s_{*\rm CMB}$ receives contributions from photons and neutrinos, but since the latter decouple before the electron/positron annihilation they do not share the corresponding entropy injection and have a smaller temperature. The neutrino/photon temperature ratio follows from Eq. \[T1T2\] by encompassing photons and electrons/positrons in sector 2, $T_\nu/T_\gamma=(4/11)^{1/3}$. It follows that $g^s_{*\rm CMB}=2 \left( 1 + \frac{7}{8}\frac{4}{11}3 \right) \simeq 3.909$. [It is worth noting that when electrons and positrons annihilate, neutrinos are not fully decoupled and electroweak reactions are able to pump a bit of energy into the neutrino sector. This yields the standard value $N_{\rm eff}\simeq 3.046$ [@Mangano:2005cc]. This $\sim 2\%$ correction is too small to be significantly detected even by the best prospects of Planck; it shows, however, that incomplete thermalization or decoupling of species in the DS can lead to non-integer values of $g_h,g_l$.]{} If we assume that all the DS light degrees of freedom have the same final temperature, the energy density is $\rho_{\rm DR}=\pi^2 g_l T_{ {\scriptscriptstyle \rm D\hspace{-0.07cm}S} }^4/30$, and we find $$\left.\Delta {N_{\rm eff}}\right|_{\rm CMB} = \frac{g_l }{2\frac{7}{8} \left(\frac{4}{11} \right)^{4/3}} \left(\left.\frac{T_{ {\scriptscriptstyle \rm D\hspace{-0.07cm}S} }}{T_\gamma}\right)^4\right|_{\rm CMB}= \frac{13.56}{ g^s_*(T_d)^{4/3}} \frac{(g_l+g_h)^{4/3}}{g_l^{1/3}}. \label{eq:dof}$$ It is easy to prove that in this scenario the value of $\Delta {N_{\rm eff}}$ measured through BBN, $\left.\Delta {N_{\rm eff}}\right|_{\rm BBN}$, is equal to or smaller than $\left.\Delta {N_{\rm eff}}\right|_{\rm CMB}$. Before showing this, we recall that the definition of $\left.\Delta {N_{\rm eff}}\right|_{\rm BBN}$ differs from that of $\left.\Delta {N_{\rm eff}}\right|_{\rm CMB}$. The time interval in which $\left.\Delta {N_{\rm eff}}\right|_{\rm BBN}$ affects BBN spans from the decoupling of the beta reactions that keep protons and neutrons in chemical equilibrium at high temperatures ($T\sim$ MeV) to proper BBN times (end of the deuterium bottleneck, $T\sim 70$ keV). As we already mentioned, between these two boundaries the electron/positron annihilation heats photons but not neutrinos, changing the neutrino-to-photon temperature ratio and therefore the definition of $\Delta {N_{\rm eff}}$. It is customary to define $\left.\Delta {N_{\rm eff}}\right|_{\rm BBN}$ at the highest temperature, $T\sim$ MeV, and this is the definition we use in this work. At this epoch photons, electrons and neutrinos have the same temperature and $g^s_{*\rm BBN}=10.75$; following the same steps as before we obtain $$\left.\Delta {N_{\rm eff}}\right|_{\rm BBN} = \frac{g_l^{\rm BBN}}{2\frac{7}{8}} \left(\left.\frac{T_{ {\scriptscriptstyle \rm D\hspace{-0.07cm}S} }}{T_\gamma}\right)^4\right|_{\rm BBN}= \frac{13.56}{ g^s_*(T_d)^{4/3}} \frac{(g_l+g_h)^{4/3}}{(g_l^{\rm BBN})^{1/3}},$$ where $g_l^{\rm BBN}$ is the number of relativistic degrees of freedom in the DS at the time of BBN, which may take on values between $g_l$ and $g_l + g_h$. If $g_l^{\rm BBN}=g_l$, then $\Delta {N_{\rm eff}}$ has the same value for CMB and BBN physics, but if it is larger, the BBN value is smaller. It is interesting to note that by fixing $g_l+g_h$ and $T_d$ we are *fixing* the comoving entropy of the DS, but the final number of degrees of freedom $g_l$ (or $g_l^{\rm BBN}$ for BBN) still has an impact on $\Delta {N_{\rm eff}}$.
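Equation (\[eq:dof\]) is simple to evaluate numerically. The sketch below reproduces the 13.56 prefactor from $g^s_{*\rm CMB}\simeq 3.909$ and evaluates $\left.\Delta {N_{\rm eff}}\right|_{\rm CMB}$ for two illustrative dark sectors; the chosen values of $(g_l, g_h, g^s_*(T_d))$ are examples, not cases singled out in the text.

```python
# Delta N_eff at the CMB epoch, Eq. (eq:dof)
gs_cmb = 2.0 * (1.0 + 7.0 / 8.0 * 4.0 / 11.0 * 3.0)              # ~3.909
prefactor = gs_cmb ** (4.0 / 3.0) / (7.0 / 4.0 * (4.0 / 11.0) ** (4.0 / 3.0))
print(round(prefactor, 2))                                        # 13.56

def dneff_cmb(gl, gh, gs_dec):
    """Extra effective neutrino species at CMB for a DS that decoupled when g*_s(T_d) = gs_dec."""
    return prefactor / gs_dec ** (4.0 / 3.0) * (gl + gh) ** (4.0 / 3.0) / gl ** (1.0 / 3.0)

print(round(dneff_cmb(gl=2.0, gh=0.0, gs_dec=10.75), 2))   # two light d.o.f., T_d of a few MeV -> ~1.1
print(round(dneff_cmb(gl=2.0, gh=4.0, gs_dec=61.75), 2))   # earlier decoupling with some heavy d.o.f. -> ~0.5
```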
The dependence $\Delta {N_{\rm eff}}\propto (g_l)^{-1/3}$ noted above is quite easy to understand. The reason is that, for a fixed entropy density $s\propto g T^3$, the energy density $\rho \propto g T^4\propto s^{4/3}/g^{1/3}$ is larger for higher temperatures and smaller numbers of degrees of freedom. This implies that, of two sectors with the same number of degrees of freedom at decoupling, the one with the smaller (non-zero) number of light species will be more visible to $\Delta {N_{\rm eff}}$ probes. Finally, note that in this scenario we can be sensitive to $g_l^{\rm BBN}-g_l$ by measuring and comparing $\left.\Delta {N_{\rm eff}}\right|_{\rm BBN}$ and $\left.\Delta {N_{\rm eff}}\right|_{\rm CMB}$. Unfortunately, the foreseeable accuracy of $\left.\Delta {N_{\rm eff}}\right|_{\rm BBN}$ makes this comparison challenging. Low DS decoupling temperature, $T_d\lesssim$ MeV {#low-ds-decoupling-temperature-t_dlesssim-mev .unnumbered} ------------------------------------------------ In this case, the DS has to couple either to the electromagnetic plasma (electrons, baryons and photons) or to neutrinos, which at these low temperatures are decoupled from each other. Here we do not consider the case in which the DS mediates interactions between neutrinos and the electromagnetic sector. Having a DS in thermal contact with the SM below the MeV scale requires very strong interactions between the two, so the reader should keep in mind that many of the models that we can constrain in this section can actually be excluded by laboratory experiments or astrophysical arguments. In this case, the DS is still coupled during BBN, so computing $\left.\Delta {N_{\rm eff}}\right|_{\rm BBN}$ amounts to just counting the DS degrees of freedom present at $T\sim$ MeV and normalizing them to the neutrino energy density $$\left.\Delta {N_{\rm eff}}\right|_{\rm BBN}=\frac{4}{7}(g_H+g_h+g_l) ,$$ where $g_H$ is the number of degrees of freedom that become non-relativistic between BBN and the DS decoupling. We are forced to introduce this new parameter in order to maintain the meaning of $g_h$ and $g_l$ as in the case discussed above. Let us now compute $\left.\Delta {N_{\rm eff}}\right|_{\rm CMB}$. In the case in which the DS remains coupled to neutrinos until $T_d$ we have $$\left. \frac{T_\nu}{T_\gamma}\right|_{T_d}= \left. \frac{T_{ {\scriptscriptstyle \rm D\hspace{-0.07cm}S} }}{T_\gamma}\right|_{T_d}= \left(\frac{3\times 7/4+g_H+g_h+g_l}{3\times 7/4+g_h+g_l}\frac{2}{2+7/2}\right)^{1/3} ,$$ where $21/4$ counts the neutrino degrees of freedom, and we have again assumed $T_d<m_e$. When the DS decouples from neutrinos, it can still get heated with respect to them by the usual factor $T_{ {\scriptscriptstyle \rm D\hspace{-0.07cm}S} }/T_\nu=((g_h+g_l)/g_l)^{1/3}$. In this case we find $$\left.{N_{\rm eff}}\right|_{\rm CMB} = \left(3+\frac{4}{7}\frac{(g_h+g_l)^{4/3}}{(g_l)^{1/3}}\right) \left(\frac{3\times 7/4+g_H+g_h+g_l}{3\times 7/4+g_h+g_l}\right)^{4/3} . $$ If, on the other hand, the DS couples preferentially to the electromagnetic sector (ES) we have $T_{ {\scriptscriptstyle \rm D\hspace{-0.07cm}S} }=T_\gamma=T_d$ at decoupling. Neutrinos are decoupled from $T\sim$ MeV, where the ES degrees of freedom are $2+7/2$ for photons and $e^\pm$. Thus at $T_d$ they have a temperature $$\left. \frac{T_\nu}{T_\gamma}\right|_{T_d}= \left.
\frac{T_\nu}{T_{ {\scriptscriptstyle \rm D\hspace{-0.07cm}S} }}\right|_{T_d}=\left(\frac{2+g_h+g_l}{2+7/2+g_H+g_h+g_l}\right)^{1/3},$$ where we have assumed $T_d\ll m_e\sim$ MeV so that only photons are present at the DS decoupling. We have also used that $g^s_{*,\nu}($MeV$)=g^s_{*,\nu}(T_d)$. Below $T_d$ the DS also decouples and later reduces its degrees of freedom from $g_h+g_l$ to $g_l$ before the CMB epoch. In this period the DS temperature increases with respect to the photon one by a factor $T_{ {\scriptscriptstyle \rm D\hspace{-0.07cm}S} }/T_\gamma=((g_h+g_l)/g_l)^{1/3}$. Since, in this case, the neutrinos do not have their standard temperature ratio to photons, it is more convenient to quote the number of effective neutrino species, and not only the *extra* component ($\Delta {N_{\rm eff}}={N_{\rm eff}}-3$) $$\left.{N_{\rm eff}}\right|_{\rm CMB} = 3\left(\frac{11}{4}\frac{2+g_h+g_l}{2+7/2+g_H+g_h+g_l}\right)^{4/3}+g_l\frac{4}{7}\left(\frac{11}{4}\frac{g_h+g_l}{g_l}\right)^{4/3} . $$ Let us note that this scenario suffers from an additional constraint. The entropy contained in the $H$ species is dumped into the ES between the BBN and CMB epochs. This entropy ends up in photons at the CMB epoch, but is absent during BBN, and therefore the baryon-to-photon ratio $\eta$, the parameter that governs the output of BBN, will differ between these two epochs, $$\frac{\eta_{\rm BBN}}{\eta_{\rm CMB}} = \frac{2+g_h+g_l}{2+g_H+g_h+g_l} .$$ The relic abundance of deuterium is quite sensitive to the value of $\eta_{\rm BBN}$, and can therefore be used to constrain any mismatch between $\eta_{\rm CMB}$ and $\eta_{\rm BBN}$, see for instance [@Jaeckel:2008fi; @Cadamuro:2011fd; @Cadamuro:2010cz]. This is because the current BBN prediction using $\eta_{\rm BBN}=\eta_{\rm CMB}=(6.19\pm 0.15)\times 10^{-10}$ [@Nakamura:2010zzi] already agrees with the observations within the uncertainties [@Nakamura:2010zzi]. A nonzero $g_H$ will make $\eta_{\rm BBN}$ smaller than $\eta_{\rm CMB}$, increasing the Deuterium yield for a fixed $\eta_{\rm CMB}$. In our scenarios we also have a nonzero $\Delta {N_{\rm eff}}$, which likewise predicts an increase in D/H, so both effects go in the same direction. We have checked that these arguments indeed exclude values other than $g_H=0$. Bounds, hints and forecasts {#bounds-hints-and-forecasts .unnumbered} --------------------------- We can now use present data on $\Delta {N_{\rm eff}}$ to constrain the structure of the hidden sector, [*i.e.*]{}, $g_H, g_h$ and $g_l$. At present, constraints from CMB and BBN physics are similar but not too restrictive. There seems to be a trend in favor of a non-zero value of $\Delta {N_{\rm eff}}$, which could be hinting at the kind of more complex dark sectors that we study in this work. Let us first consider BBN, which has so far provided the strongest constraints on $\Delta{N_{\rm eff}}$ [@Iocco:2008va; @Pospelov:2010hj]. An increased value of ${N_{\rm eff}}$ leads to a faster expansion of the Universe and, as a consequence, to a larger Helium mass fraction $Y_p$ (since neutrons freeze out with a higher abundance and have less time to decay before BBN), a larger Deuterium-to-proton ratio D/H (because D-burning reactions are less effective) and smaller yields of more massive nuclei like Lithium (because they are produced from D and its products, whose reactions are slower).
At present, comparisons of the observed primordial abundances with the theoretical predictions show a trend towards $\Delta {N_{\rm eff}}>0$, with best fit values $\Delta {N_{\rm eff}}\sim 0.5-0.8$ [@Izotov:2010ca; @Mangano:2011ar; @Hamann:2010bk; @Hamann:2011ge]. Interestingly, both the Deuterium and the Helium measurements show this preference. The observed primordial abundance of Lithium is much smaller than the predictions from standard cosmology. This is the so-called *Lithium problem*, which can be somewhat alleviated by the presence of extra radiation. However, the cited works conclude that these preferences are not significant given the errors, and in particular the systematic uncertainties involved in the estimation of the primordial abundances. Notwithstanding the above, one can obtain robust upper bounds on $\Delta {N_{\rm eff}}$ like $\Delta {N_{\rm eff}}\leq 1$ [@Mangano:2011ar] and $\Delta {N_{\rm eff}}\leq 1.26$ [@Hamann:2011ge] (both at 95% C.L.). These bounds can be relaxed in non-minimal scenarios, as for instance if neutrinos have a nonzero chemical potential (up to $\Delta {N_{\rm eff}}\leq 2.56$ at 95 % C.L. according to [Ref.]{} [@Hamann:2011ge]). Thus, it seems that BBN does not contradict the presence of extra radiation, but cannot be used to assess it quantitatively nor to exclude it model independently beyond the $\Delta {N_{\rm eff}}\leq 2.56$ level. As we explain in the following, the situation can be much different for the CMB. Many claims for an excess $\Delta {N_{\rm eff}}\sim 1$ during the CMB release are present in the literature [@Mangano:2006ur; @Hamann:2007pi; @Reid:2009nq; @Dunkley:2010ge; @Komatsu:2010fb; @Hamann:2010pw; @Keisler:2011aw; @Archidiacono:2011gq; @Smith:2011ab] with different levels of significance depending on the dataset and analysis performed in the study. While it has been shown that these effects can be amplified by volume effects when analyzing data with Bayesian statistics and the effect seems to be significantly prior dependent for some datasets [@GonzalezMorales:2011ty], the preference for non-zero $\Delta {N_{\rm eff}}$ does persist for prior-independent frequentist analyses [@Hamann:2011hu]. Here we will take the latest results from the South Pole Telescope collaboration in combination with WMAP data [@Keisler:2011aw] as a reference $\Delta {N_{\rm eff}}= 0.85 \pm 0.62$. It should be noted that the presence of extra radiation during matter-radiation equality to which the CMB is sensitive does not necessarily imply its presence during BBN. For this reason, we will regard them as independent constraints into two different epochs of the early Universe and not combine them. In this work we will mainly focus on the constraints from CMB probes. Indeed, the CMB not only offers a window to a lower temperature to which DR must contribute even if it was not present at BBN, but also the forthcoming results from the Planck mission will soon provide much more stringent constraints superseding present BBN sensitivity. Note that, in order to reach its full potential, Planck data should in any case be combined with BBN results on the primordial He abundance $Y_p$ [@Hamann:2007sb]. 
![The $1 \sigma$ range for the number of heavy degrees of freedom $g_h$ required to heat the light sector in order to account for the presently preferred number of extra effective neutrino species during CMB $\Delta {N_{\rm eff}}= 0.85 \pm 0.62$ and as a function of the decoupling temperature $T_d$.[]{data-label="fig:dNeffpresent"}](gh){width=".60\textwidth"} In [Fig.]{} \[fig:dNeffpresent\] we show the constraints on $g_h$ as a function of $T_d$ for $\Delta {N_{\rm eff}}= 0.85 \pm 0.62$ at CMB [@Keisler:2011aw] and a given $g_\ell$. Below 1 MeV the results correspond to the (less constrained) scenario where the DS remains coupled to neutrinos after BBN and $g_H = 0$. For $g_H \neq 0$, similar results are obtained, but interpreting $g_h$ in the vertical axis as $g_h + g_H$. As can be seen from this figure, for very late decoupling, having extra heavy degrees of freedom is increasingly disfavored. This is due to the fact that the photon bath will not receive significant heating from the SM sector at such low temperatures and thus, any extra heating in the DS would lead to a too large contribution to $\Delta {N_{\rm eff}}$. On the other hand, if the SM decouples from the DS at a higher temperature, then the relativistic degrees of freedom in the SM will be heated, requiring heating also in the DS in order for it to contribute significantly to $\Delta {N_{\rm eff}}$. The Planck satellite mission is expected to measure the effective number of neutrino species with an excellent accuracy. We compute as a first step the CMB Fisher matrix to obtain forecasts for the Planck satellite [@:2006uk]. Our fiducial model is a $\Lambda$CDM cosmology with five parameters: the physical baryon and CDM densities, $\omega_b$ and $\omega_{\rm DM}$, the scalar spectral index, $n_{s}$, $h$ (being the Hubble constant $H_0=100\ h$ km Mpc$^{-1}$s$^{-1}$) and the dimensionless amplitude of the primordial curvature perturbations, $A_{s}$ (see Tab. \[tab:fiducial\_standard\_model\] for their values). Furthermore, we add to the $\Lambda$CDM fiducial cosmology a number of DR degrees of freedom parametrized as extra sterile neutrino species $\Delta {N_{\rm eff}}=1,2,3$. We assume that the sterile species have thermal spectra and are not coupled among themselves. For these fiducial cosmologies, our Fisher forecast analysis provides the following errors: $\Delta {N_{\rm eff}}=1 \pm 0.08$, $\Delta {N_{\rm eff}}=2 \pm 0.08$ and $\Delta {N_{\rm eff}}=3 \pm 0.1$ at $1\sigma$. We then refine this analysis and perform a Markov Chain Monte Carlo simulation of the expected Planck results when the total number of neutrinos is 3, 4, 5 or 6, which correspond to the cases $\Delta {N_{\rm eff}}=0,1,2$ and $3$ respectively. For the Monte Carlo scan we obtained good agreement with the Fisher matrix results: $\Delta {N_{\rm eff}}< 0.08$, $\Delta {N_{\rm eff}}=1 \pm 0.10$, $\Delta {N_{\rm eff}}=2 \pm 0.11$ and $\Delta {N_{\rm eff}}=3 \pm 0.14$ at $1\sigma$. Therefore, near future data from Planck will definitely be able to settle the issue of DR. If evidence for $N_\nu>3$ still persists after these new accurate CMB measurements, it will be extremely interesting to further study interacting scenarios in the DR and DM sectors. 
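The $g_h$ ranges shown in Fig. \[fig:dNeffpresent\] and Fig. \[fig:dNeff\] follow from inverting Eq. (\[eq:dof\]) for $g_h$ at fixed $g_l$ and $g^s_*(T_d)$. Below is a minimal sketch of that inversion for the reference value $\Delta {N_{\rm eff}}= 0.85 \pm 0.62$; the choices $g_l = 3$ and $g^s_*(T_d) = 106.75$ (all SM species relativistic) are illustrative inputs, since the fitted $g^s_*(T)$ function is not reproduced here.

```python
def gh_required(dneff, gl, gs_dec):
    """Heavy DS degrees of freedom needed for a given Delta N_eff at CMB (inversion of Eq. (eq:dof))."""
    total = (dneff * gs_dec ** (4.0 / 3.0) * gl ** (1.0 / 3.0) / 13.56) ** (3.0 / 4.0)
    return max(total - gl, 0.0)          # g_h cannot be negative

# 1-sigma band for Delta N_eff = 0.85 +/- 0.62, with g_l = 3 and g*_s(T_d) = 106.75 (illustrative)
for dn in (0.85 - 0.62, 0.85, 0.85 + 0.62):
    print(round(gh_required(dn, gl=3.0, gs_dec=106.75), 1))    # roughly 3.6, 14.6 and 23.6
```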
  --------------- ---------------------- --------- ------- --------------------- ------------------------
  $\Omega_bh^2$   $\Omega_{\rm DM}h^2$   $n_{s}$   $h$     $A_{s}$               $\Delta {N_{\rm eff}}$
  0.02267         0.1131                 0.96      0.705   $2.64\cdot 10^{-9}$   1-3
  --------------- ---------------------- --------- ------- --------------------- ------------------------

  : Values of the parameters in the fiducial models explored in this study.[]{data-label="tab:fiducial_standard_model"}

![The $1 \sigma$ range for the number of heavy degrees of freedom $g_h$ required to heat the light sector in order to account for a given number of extra effective neutrino species during CMB and as a function of the decoupling temperature $T_d$. The bands correspond to different Planck constraint forecasts as described in the text.[]{data-label="fig:dNeff"}](gh0 "fig:"){width=".48\textwidth"} ![The $1 \sigma$ range for the number of heavy degrees of freedom $g_h$ required to heat the light sector in order to account for a given number of extra effective neutrino species during CMB and as a function of the decoupling temperature $T_d$. The bands correspond to different Planck constraint forecasts as described in the text.[]{data-label="fig:dNeff"}](gh1 "fig:"){width=".48\textwidth"}\ ![The $1 \sigma$ range for the number of heavy degrees of freedom $g_h$ required to heat the light sector in order to account for a given number of extra effective neutrino species during CMB and as a function of the decoupling temperature $T_d$. The bands correspond to different Planck constraint forecasts as described in the text.[]{data-label="fig:dNeff"}](gh2 "fig:"){width=".48\textwidth"} ![The $1 \sigma$ range for the number of heavy degrees of freedom $g_h$ required to heat the light sector in order to account for a given number of extra effective neutrino species during CMB and as a function of the decoupling temperature $T_d$. The bands correspond to different Planck constraint forecasts as described in the text.[]{data-label="fig:dNeff"}](gh3 "fig:"){width=".48\textwidth"}\ For the different scenarios that Planck could find, we show in [Fig.]{} \[fig:dNeff\] the vast improvement that the smaller errors would imply for the constraints on $g_h$ and, thus, on the complexity of the DS. Notice that, in the extreme scenario in which Planck data would prefer 3 extra effective neutrino species, some heating from $g_h$ is required even at decoupling temperatures as low as 1 MeV and for $g_l=3$. This extreme scenario is significantly disfavoured by present BBN and CMB results, though. In the opposite limit, in which Planck finds no evidence for extra radiation and only sets an upper limit on $\Delta {N_{\rm eff}}$, no contours at the $1 \sigma$ level are allowed for $g_l=3$ at any decoupling temperature, as long as the only relative heating after decoupling comes from the SM particle content heating the SM with respect to the dark sector. Dark Matter-Dark Radiation interactions {#sec:DR-DM} ======================================= The interactions between the DR and DM components will leave an imprint on the galaxy power spectrum, see Refs. [@Mangano:2006mp; @Serra:2009uu]. In the presence of these interactions, the dark matter fluid is no longer pressureless and therefore the situation will be analogous to that of baryons and photons before the recombination era, with a series of damped oscillations similar to the baryon acoustic oscillations.
In analogy to the baryon case, DM-DR interactions modify the matter power spectrum at scales comparable to the size of the Universe at the time when the interactions become ineffective at changing the velocities of DM particles, after which DM particles begin to fall into potential wells. We denote this typical length scale by $1/k_f \sim 1/a_{f}H (a_f)$ where $H=d\log a/dt$ is the expansion rate of the universe and the scale factor at freeze-out $a_f$ is approximately given by solving $$H(a_f)= \Gamma (a_f) = \frac{\rho_{\rm DR}}{\rho_{\rm DM}}n_{\rm DM} \left< \sigma_{\rm DM-DR} v \right> = \frac{\left< E_{\rm DR} \right>}{m_{\rm DM}} n_{\rm DR} \left< \sigma_{\rm DM-DR} v \right>$$ where $\Gamma$ is the rate at which DM velocities are changed by O(1) amounts, $\rho$ and $n$ denote the energy and number density of dark matter or dark radiation, $m_{\rm DM}$ is the DM mass and $\left< E_{\rm DR} \right>$ the average energy of DR particles. It is convenient to parameterize the cross section as $$\left< \sigma_{\rm DM-DR} v \right> = Q_0 \ m_{\rm DM} ~,$$ if it is constant, or $$\left< \sigma_{\rm DM-DR} v \right> = \frac{Q_2}{a^2} \ m_{\rm DM}~,$$ if it is proportional to $T^2$ [@Mangano:2006mp]. Here $Q_0$ and $Q_2$ are constants with units cm$^2$ MeV$^{-1}$. These two cases are representative of possible DR-DM interactions and help to assess how a possible temperature dependence of the cross section affects the constraints. The typical scales of the DM-DR oscillations in these two cases are $$\label{eq:scale1} k\sim 0.5 \left(\frac{10^{-32}~{\rm cm}^2~{\rm MeV}^{-1}}{Q_0}\right)^{1/2}~h\textrm{Mpc}^{-1}~,$$ $$\label{eq:scale2} k\sim 0.6\left(\frac{10^{-41}~{\rm cm}^2~{\rm MeV}^{-1}}{Q_2}\right)^{1/2}~h\textrm{Mpc}^{-1}~,$$ where we have assumed $\Delta {N_{\rm eff}}=3$. The dependence of these scales on $\Delta {N_{\rm eff}}$ is however quite mild, $k$ increasing as the number of effective neutrino species decreases. Figure \[fig:matterpower\] (upper panel) illustrates this effect for a constant interaction cross section, $Q_0=10^{-32}$ cm$^2$ MeV$^{-1}$, for $\Delta {N_{\rm eff}}=1$ and $\Delta {N_{\rm eff}}=3$ species interacting with the DM sector. For comparison, we also show the shape of the matter power spectrum if these species were non-interacting. Note that the scale at which the damped oscillations appear is well predicted by the approximate expression given in Eq. (\[eq:scale1\]). The lower panel of Fig. \[fig:matterpower\] shows the analogous case for a cross section $\propto T^2$ and $Q_2=10^{-41}$ cm$^2$ MeV$^{-1}$. Again, the suppression scale is very well approximated by Eq. (\[eq:scale2\]). In the following, we shall exploit these suppression effects to set bounds on the interaction parameters $Q_0$ and $Q_2$ using galaxy clustering data combined with other cosmological datasets. ![Upper panel: Matter power spectrum for a $\Lambda$CDM model (thick red curve).
The blue (dotted) dotted-dashed lines depict the matter power spectrum for $\Delta {N_{\rm eff}}=1$ within a (non) interacting scenario with constant cross section and $Q_0=10^{-32}$ cm$^2$ MeV$^{-1}$. The green (long) short dashed lines depict the matter power spectrum for $\Delta {N_{\rm eff}}=3$ within a (non) interacting scenario. The lower panel shows the analogous case but for an interaction cross section $\propto 1/a^2$ and $Q_2=10^{-41}$ cm$^2$ MeV$^{-1}$.[]{data-label="fig:matterpower"}](matterpower_q0 "fig:"){width="12cm"} ![Upper panel: Matter power spectrum for a $\Lambda$CDM model (thick red curve). The blue (dotted) dotted-dashed lines depict the matter power spectrum for $\Delta {N_{\rm eff}}=1$ within a (non) interacting scenario with constant cross section and $Q_0=10^{-32}$ cm$^2$ MeV$^{-1}$. The green (long) short dashed lines depict the matter power spectrum for $\Delta {N_{\rm eff}}=3$ within a (non) interacting scenario. The lower panel shows the analogous case but for an interaction cross section $\propto 1/a^2$ and $Q_2=10^{-41}$ cm$^2$ MeV$^{-1}$.[]{data-label="fig:matterpower"}](matterpower_q2 "fig:"){width="12cm"} We have modified the Boltzmann code CAMB [@Lewis:1999bs] to incorporate the interacting scenarios and have extracted cosmological parameters from current data using a Markov Chain Monte Carlo (MCMC) analysis based on the publicly available package `cosmomc` [@Lewis:2002ah]. We consider here a flat $\Lambda$CDM universe with $\Delta{N_{\rm eff}}$ DR species interacting with the dark matter. The scenario is described by a set of cosmological parameters $$\label{parameter} \{\omega_b,\omega_{\rm DM}, H_0, n_s, A_s, Q_0 (Q_2) \}~,$$ where $\omega_b\equiv\Omega_bh^{2}$ and $\omega_{\rm DM}\equiv\Omega_{\rm{DM}}h^{2}$ are today's ratios of the physical baryon and cold dark matter densities to the critical density, $H_0$ is the current value of the Hubble parameter, $n_s$ is the scalar spectral index, $A_{s}$ is the amplitude of the primordial spectrum, and $Q_0$, $Q_2$ encode the DM-DR interactions. Our basic data set is the seven-year WMAP CMB data [@Larson:2010gs] (temperature and polarization) with the routine for computing the likelihood supplied by the WMAP team. We analyze the WMAP data together with the luminous red galaxy clustering results from SDSS II (Sloan Digital Sky Survey) [@Reid:2009xm], with a prior on the Hubble constant from HST (Hubble Space Telescope) [@Riess:2009pu], adding to these data sets the Supernova Ia Union Compilation 2 data [@Amanullah:2010vv]. Our main results are reported in Tabs. \[tab:Q0bounds\] and \[tab:Q2bounds\], where we list the constraints on the interaction cross section parameters $Q_0$ and $Q_2$ in three possible interaction scenarios, $\Delta {N_{\rm eff}}=1,2,3$.
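Coming back to the characteristic scales in Eqs. (\[eq:scale1\])--(\[eq:scale2\]), a quick evaluation shows where in the power spectrum the damping appears for a given interaction strength; the reference values below are those used in Fig. \[fig:matterpower\], and the scaling $k \propto Q^{-1/2}$ is read off directly from those equations.

```python
import math

def k_osc(Q, Q_ref, k_ref):
    """Damping scale in h/Mpc: k scales as 1/sqrt(Q) around the reference values quoted in the text."""
    return k_ref * math.sqrt(Q_ref / Q)

# Constant cross section, Eq. (eq:scale1): Q0 = 1e-32 cm^2/MeV gives k ~ 0.5 h/Mpc
print(k_osc(1e-32, 1e-32, 0.5))
print(k_osc(1e-30, 1e-32, 0.5))    # a 100x larger Q0 moves the damping to k ~ 0.05 h/Mpc

# T^2-scaling cross section, Eq. (eq:scale2): Q2 = 1e-41 cm^2/MeV gives k ~ 0.6 h/Mpc
print(k_osc(1e-43, 1e-41, 0.6))    # a 100x smaller Q2 pushes the damping to k ~ 6 h/Mpc, i.e. much smaller scales
```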
Notice that, particularly in the more common case of the cross section scaling with $T^2$, these constraints are too mild to be significant for the thermal abundance of DM. Thus, while the procedure followed is general, the effects studied are mainly relevant for ADM scenarios, in which larger than thermal cross sections are required to efficiently annihilate the symmetric component. The bounds are stronger as the number of effective neutrino species increases, since the typical scale $k$ at which damped oscillations appear in the matter power spectrum decreases with ${N_{\rm eff}}$. The maximum $k$ used in the analysis of the matter power spectrum is $\sim 0.15~h$Mpc$^{-1}$, since effects that appear at larger wavenumbers (smaller scales) cannot be easily constrained. Also, note that the constraints found in this study are milder than those found by the authors of [@Mangano:2006mp; @Serra:2009uu], since in their case the species interacting with the dark matter fluid were the active neutrinos and not extra radiation species.

There exists a large degeneracy between the number of extra radiation species and the dark matter energy density. One of the main effects of $\Delta {N_{\rm eff}}$ comes from the change of the epoch of radiation-matter equality and, consequently, from the shift of the CMB acoustic peaks. The position of the acoustic peaks is given by the so-called acoustic scale $\theta_A$, which reads $$\theta_A=\frac{r_s(z_{rec})}{r_\theta(z_{rec})}~,$$ where $r_\theta (z_{rec})$ and $r_s(z_{rec})$ are the comoving angular diameter distance to the last scattering surface and the sound horizon at the recombination epoch $z_{rec}$, respectively. Although $r_\theta (z_{rec})$ remains almost the same for different values of $\Delta {N_{\rm eff}}$, $r_s(z_{rec})$ becomes smaller when $\Delta{N_{\rm eff}}$ is increased. Thus the positions of the acoustic peaks are shifted to higher multipoles $\ell$ (smaller angular scales) by increasing the value of $\Delta {N_{\rm eff}}$. The height of the first acoustic CMB peak will also increase as $\Delta {N_{\rm eff}}$ does. Both effects can be compensated by a larger cold dark matter component. Therefore, in our analysis, the cold dark matter component is larger than in the absence of dark radiation species, and the suppression effect in the matter power spectrum shown in Fig. \[fig:matterpower\] will be less noticeable than in the case in which the interacting species are active neutrinos and extra DR species are absent.

Figures \[fig:sigma8\] and \[fig:sigma8b\] show the 1$\sigma$ and 2$\sigma$ contours in the ($\sigma_8$, $Q_0$) [^2], ($\Omega_{\rm {DM}}h^2$, $Q_0$) and (Age, $Q_0$) planes, with $Q_0$ and $Q_2$ in units of $10^{-34}$ cm$^2$ MeV$^{-1}$ and $10^{-43}$ cm$^2$ MeV$^{-1}$, respectively, and the age of the Universe in Gyrs. The contours are shown for the three possible interaction scenarios considered here, with one, two and three DR species. Notice that the scenarios with $\Delta{N_{\rm eff}}>1$ are increasingly disfavoured. Indeed, comparing the best fits of the different scenarios, we observe a difference of $\log(L_{max}(\Delta{N_{\rm eff}}=3)/L_{max}(\Delta{N_{\rm eff}}=1)) \sim 2$. Although this suggests that the scenario with $\Delta{N_{\rm eff}}=1$ is more favoured by the data, it is not sufficient to rule out a scenario with $\Delta{N_{\rm eff}}=3$. For this reason we included the three scenarios in the figures. 
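To make the degeneracy argument above concrete, the small Python sketch below estimates how much the matter density must grow to keep the epoch of matter-radiation equality fixed when $\Delta {N_{\rm eff}}=3$ extra species are added. It is a back-of-the-envelope check, not a substitute for the full MCMC fit, and it only assumes the standard values $\omega_\gamma \simeq 2.47\times 10^{-5}$ (for $T_{\rm CMB}=2.725$ K) and a contribution of $0.2271\,\omega_\gamma$ per effective neutrino species; the reference matter density used is a representative value, not a fit result.

```python
# Back-of-the-envelope illustration of the Delta N_eff / omega_DM degeneracy:
# keeping matter-radiation equality fixed when extra dark radiation is added
# requires a larger matter density.

OMEGA_GAMMA = 2.47e-5                 # omega_gamma = Omega_gamma h^2 (T_CMB = 2.725 K)

def omega_rad(delta_neff, neff_std=3.046):
    """Total radiation density omega_r for N_eff = 3.046 + Delta N_eff."""
    return OMEGA_GAMMA * (1.0 + 0.2271 * (neff_std + delta_neff))

def z_equality(omega_m, delta_neff=0.0):
    """Matter-radiation equality: 1 + z_eq = omega_m / omega_r."""
    return omega_m / omega_rad(delta_neff) - 1.0

omega_m_ref = 0.13                    # representative omega_b + omega_DM (assumed)
z_eq_ref = z_equality(omega_m_ref)

# Matter density required to leave z_eq unchanged once Delta N_eff = 3 is added:
omega_m_needed = (1.0 + z_eq_ref) * omega_rad(3.0)
print(round(z_eq_ref), round(omega_m_needed, 3))
# -> z_eq ~ 3.1e3 and omega_m about 40% larger, consistent with the larger cold
#    dark matter component found in the fits with more DR species.
```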
Notice that both the physical dark matter energy density and the $\sigma_8$ parameter increase as the number of DR species does, since the suppression induced in the matter power spectrum by the presence of the extra radiation species and also by their coupling to dark matter could in principle be alleviated by a larger amount of clustering dark matter as explained above. There exists also a small degeneracy between the interaction cross section and the $\sigma_8$ parameter. This degeneracy can be easily understood in terms of Fig. \[fig:matterpower\] in which we show that the DM-DR interaction decreases the amplitude of dark matter fluctuations at small scales. Finally, the age of the universe in models with two or three DR species is significantly smaller than in a typical pure $\Lambda$CDM universe, since the age of the universe is inversely proportional to the amount of cold dark matter in a given cosmological scenario. Finally let us comment on the difference between the scenarios with constant and $\propto T^2$ DM-DR interactions. It is evident from Figs. \[fig:sigma8\] and \[fig:sigma8b\] that the isocontours very approximately transform onto each other if we associate $$15\frac{Q_0}{10^{-34}~{\rm cm}^2~{\rm MeV}^{-1}} \leftrightarrow 5\frac{Q_2}{10^{-43}~{\rm cm}^2~{\rm MeV}^{-1}} ~.$$ This is not surprising since the DM-DR decoupling is relatively fast for both cases. The two cross sections are indeed similar for values of the scale factor $a_f\sim 2\times 10^4$, which corresponds to the epoch of matter-radiation equality. This is of course, the scale at which the fluctuations in the DM density can start to grow fast, unless impeded by the interactions with the DR, and sets the characteristic scale for the DM-DR oscillations that we can constrain. ![The left, right upper panels and the lower panel show the 1$\sigma$ and 2$\sigma$ contours in the ($\sigma_8$, $Q_0$), ($\Omega_{\rm {DM}}h^2$, $Q_0$) and (Age, $Q_0$) planes, respectively. The interacting parameter $Q_0$ is in units of $10^{-34}$ cm$^2$ MeV$^{-1}$ and the age of the universe is in Gyrs. The red, blue and green contours denote the three possible interacting scenarios explored here with one, two and three sterile neutrino species in the DR sector.[]{data-label="fig:sigma8"}](q0_sigma8){width="\linewidth"} ![The left, right upper panels and the lower panel show the 1$\sigma$ and 2$\sigma$ contours in the ($\sigma_8$, $Q_0$), ($\Omega_{\rm {DM}}h^2$, $Q_0$) and (Age, $Q_0$) planes, respectively. The interacting parameter $Q_0$ is in units of $10^{-34}$ cm$^2$ MeV$^{-1}$ and the age of the universe is in Gyrs. The red, blue and green contours denote the three possible interacting scenarios explored here with one, two and three sterile neutrino species in the DR sector.[]{data-label="fig:sigma8"}](q0_omdm){width="\linewidth"} ![The left, right upper panels and the lower panel show the 1$\sigma$ and 2$\sigma$ contours in the ($\sigma_8$, $Q_0$), ($\Omega_{\rm {DM}}h^2$, $Q_0$) and (Age, $Q_0$) planes, respectively. The interacting parameter $Q_0$ is in units of $10^{-34}$ cm$^2$ MeV$^{-1}$ and the age of the universe is in Gyrs. 
The red, blue and green contours denote the three possible interacting scenarios explored here with one, two and three sterile neutrino species in the DR sector.[]{data-label="fig:sigma8"}](q0_age){width="\linewidth"}

![The left, right upper panels and the lower panel show the 1$\sigma$ and 2$\sigma$ contours in the ($\sigma_8$, $Q_2$), ($\Omega_{\rm {DM}}h^2$, $Q_2$) and (Age, $Q_2$) planes, respectively. The interacting parameter $Q_2$ is in units of $10^{-43}$ cm$^2$ MeV$^{-1}$ and the age of the universe is in Gyrs. The red, blue and green contours denote the three possible interacting scenarios explored here with one, two and three sterile neutrino species in the DR sector.[]{data-label="fig:sigma8b"}](q2_sigma8){width="\linewidth"}

![The left, right upper panels and the lower panel show the 1$\sigma$ and 2$\sigma$ contours in the ($\sigma_8$, $Q_2$), ($\Omega_{\rm {DM}}h^2$, $Q_2$) and (Age, $Q_2$) planes, respectively. The interacting parameter $Q_2$ is in units of $10^{-43}$ cm$^2$ MeV$^{-1}$ and the age of the universe is in Gyrs. The red, blue and green contours denote the three possible interacting scenarios explored here with one, two and three sterile neutrino species in the DR sector.[]{data-label="fig:sigma8b"}](q2_omegadm){width="\linewidth"}

![The left, right upper panels and the lower panel show the 1$\sigma$ and 2$\sigma$ contours in the ($\sigma_8$, $Q_2$), ($\Omega_{\rm {DM}}h^2$, $Q_2$) and (Age, $Q_2$) planes, respectively. The interacting parameter $Q_2$ is in units of $10^{-43}$ cm$^2$ MeV$^{-1}$ and the age of the universe is in Gyrs. The red, blue and green contours denote the three possible interacting scenarios explored here with one, two and three sterile neutrino species in the DR sector.[]{data-label="fig:sigma8b"}](q2_age){width="\linewidth"}

  $\Delta {N_{\rm eff}}$   ($68 \%$ c.l.)   ($95 \%$ c.l.)
  ------------------------ ---------------- ----------------
  1                        $Q_0\le 5.6$     $Q_0\le 11.8$
  2                        $Q_0\le 2.6$     $Q_0\le 6.2$
  3                        $Q_0\le 1.6$     $Q_0\le 4.6$

  : Upper limits on $Q_0$ (in units of $10^{-34}$ cm$^2$ MeV$^{-1}$) in different DR scenarios.[]{data-label="tab:Q0bounds"}

  $\Delta {N_{\rm eff}}$   ($68 \%$ c.l.)   ($95 \%$ c.l.)
  ------------------------ ---------------- ----------------
  1                        $Q_2\le 1.8$     $Q_2\le 3.9$
  2                        $Q_2\le 1.0$     $Q_2\le 2.7$
  3                        $Q_2\le 0.8$     $Q_2\le 1.9$

  : Upper limits on $Q_2$ (in units of $10^{-43}$ cm$^2$ MeV$^{-1}$) in different DR scenarios.[]{data-label="tab:Q2bounds"}

Conclusions {#sec:summary}
===========

We have studied the phenomenological implications of models of dark matter (DM) in which the dark sector (DS) also contains additional lighter fields that interact with the DM component and constitute extra dark radiation (DR). We have been mainly inspired by models of asymmetric dark matter (ADM), which generally require extra light DS components into which the symmetric thermal DM abundance can efficiently annihilate. However, we tried to keep our analysis as model independent as possible, and the constraints we derived can also be applied to other models with DM-DR interactions regardless of the asymmetric nature of the DM component. Our description involves the splitting of the DS into light and heavy degrees of freedom ($g_\ell$ and $g_h$, respectively). 
While $g_\ell$ correspond to the DS degrees of freedom that will ultimately constitute the DR component, $g_h$ parametrizes the DS degrees of freedom that were relativistic at the temperature $T_d$, where the DS decoupled from the Standard Model, but that became non relativistic and heated the DR at a later time. The focus of our study has been put on how cosmological probes on the amount of radiation, such as measurements of the cosmic microwave background (CMB), can constrain the model. This is particularly interesting given the current preference for additional degrees of freedom in radiation displayed by the CMB, as well as the major improvement expected in this measurement with the forthcoming release of the Planck results. This provides a probe of the DS which is very complementary to more conventional searches, such as direct and indirect detection experiments, which would be challenging or absent in this type of models. We have found that the DS composition is, at present, a relatively open question with up to $\sim 20$ extra heavy degrees of freedom $g_h$ for $T_d>10$ GeV. This number is significantly suppressed once $T_d$ falls below $\sim 0.1$ GeV due to the strong reduction of SM degrees of freedom that heat the photon bath with respect to DR. Studying the impact of the forecasted Planck sensitivity we find that, if the hint for non-zero number of extra light degrees of freedom persists, the composition of the DS will be very constrained and this could provide important input for DM model building. Furthermore, if the Planck results simply put an upper limit, this limit will be strong enough to severely constrain the possible existence of light degrees of freedom in this type of models. In addition to the effects in the very early Universe, we have also studied the possible impact of the interaction between DR and DM on structure formation. If this interaction is strong enough, the two would couple allowing the propagation of pressure waves analogous to the baryon acoustic oscillations in the baryon/photon plasma and therefore influence the galaxy power spectrum. We have studied the bounds on the interactions within the DS (in the form of DM-DR scattering) and its correlation with the number of light degrees of freedom. While the applicability of this bound is also independent of the asymmetric nature of DM, the size of the interactions required for observable effects implies too strong an annihilation of the thermal DM component to explain in this way the observed abundance. Thus, this analysis is mainly interesting for ADM scenarios in which large annihilation cross sections are desirable while the DM abundance is instead controlled by the particle-antiparticle asymmetry, as in the baryon sector. We have studied two possible forms of cross sections within the dark sector, constant and $T^2$ dependent. Using Markov Chain Monte Carlo methods, we have derived upper bounds on the strength of such interactions, which are summarized in [Tabs.]{} \[tab:Q0bounds\] and \[tab:Q2bounds\], respectively. We have also seen that cosmological parameters, such as the age of the Universe and $\Omega_{\rm DM}h^2$, have a significant dependence on the number of interacting particles that constitute DR in these scatterings. For example, the best fit for the age of the Universe is generally lower in these scenarios as compared to a pure $\Lambda$CDM model. We acknowledge the help of E. 
Giusarma with Matlab figures and [partial support from the European Union FP7 ITN INVISIBLES (Marie Curie Actions, PITN-GA-2011-289442). ]{} O.M. is supported by AYA2008-03531 and the Consolider Ingenio project CSD2007-00060. [100]{} G. Bertone, D. Hooper, and J. Silk, “[Particle dark matter: Evidence, candidates and constraints]{},” [*Phys.Rept.*]{} [**405**]{} (2005) 279–390, [[hep-ph/0404175]{}](http://arXiv.org/abs/hep-ph/0404175). R. Peccei and H. R. Quinn, “[CP Conservation in the Presence of Instantons]{},” [*Phys.Rev.Lett.*]{} [**38**]{} (1977) 1440–1443. J. R. Ellis, J. Hagelin, D. V. Nanopoulos, K. A. Olive, and M. Srednicki, “[Supersymmetric Relics from the Big Bang]{},” [*Nucl.Phys.*]{} [**B238**]{} (1984) 453–476. S. Nussinov, “[TECHNOCOSMOLOGY: COULD A TECHNIBARYON EXCESS PROVIDE A ’NATURAL’ MISSING MASS CANDIDATE?]{},” [*Phys.Lett.*]{} [**B165**]{} (1985) 55. S. M. Barr, R. S. Chivukula, and E. Farhi, “[ELECTROWEAK FERMION NUMBER VIOLATION AND THE PRODUCTION OF STABLE PARTICLES IN THE EARLY UNIVERSE]{},” [*Phys. Lett.*]{} [**B241**]{} (1990) 387–391. S. M. Barr, “[Baryogenesis, sphalerons and the cogeneration of dark matter]{},” [*Phys. Rev.*]{} [**D44**]{} (1991) 3062–3066. D. B. Kaplan, “[A Single explanation for both the baryon and dark matter densities]{},” [*Phys. Rev. Lett.*]{} [**68**]{} (1992) 741–743. V. A. Kuzmin, “[Simultaneous solution to baryogenesis and dark-matter problems]{},” [*Phys. Part. Nucl.*]{} [**29**]{} (1998) 257–265, [[hep-ph/9701269]{}](http://arXiv.org/abs/hep-ph/9701269). A. Kusenko, “[Dark matter from Affleck-Dine baryogenesis]{},” [[hep-ph/9901353]{}](http://arXiv.org/abs/hep-ph/9901353). G. R. Farrar and G. Zaharijas, “[Dark matter and the baryon asymmetry of the universe]{},” [[hep-ph/0406281]{}](http://arXiv.org/abs/hep-ph/0406281). D. Hooper, J. March-Russell, and S. M. West, “[Asymmetric sneutrino dark matter and the Omega(b)/Omega(DM) puzzle]{},” [*Phys. Lett.*]{} [**B605**]{} (2005) 228–236, [[hep-ph/0410114]{}](http://arXiv.org/abs/hep-ph/0410114). R. Kitano and I. Low, “[Dark matter from baryon asymmetry]{},” [*Phys. Rev.*]{} [**D71**]{} (2005) 023510, [[hep-ph/0411133]{}](http://arXiv.org/abs/hep-ph/0411133). K. Agashe and G. Servant, “[Baryon number in warped GUTs: Model building and (dark matter related) phenomenology]{},” [*JCAP*]{} [**0502**]{} (2005) 002, [[hep-ph/0411254]{}](http://arXiv.org/abs/hep-ph/0411254). R. Kitano and I. Low, “[Grand unification, dark matter, baryon asymmetry, and the small scale structure of the universe]{},” [[hep-ph/0503112]{}](http://arXiv.org/abs/hep-ph/0503112). N. Cosme, L. Lopez Honorez, and M. H. G. Tytgat, “[Leptogenesis and dark matter related?]{},” [*Phys. Rev.*]{} [**D72**]{} (2005) 043505, [[hep-ph/0506320]{}](http://arXiv.org/abs/hep-ph/0506320). G. R. Farrar and G. Zaharijas, “[Dark matter and the baryon asymmetry]{},” [ *Phys. Rev. Lett.*]{} [**96**]{} (2006) 041302, [[hep-ph/0510079]{}](http://arXiv.org/abs/hep-ph/0510079). D. Suematsu, “[Nonthermal production of baryon and dark matter]{},” [ *Astropart. Phys.*]{} [**24**]{} (2006) 511–519, [[hep-ph/0510251]{}](http://arXiv.org/abs/hep-ph/0510251). M. H. G. Tytgat, “[Relating leptogenesis and dark matter]{},” [[hep-ph/0606140]{}](http://arXiv.org/abs/hep-ph/0606140). T. Banks, S. Echols, and J. L. Jones, “[Baryogenesis, dark matter and the pentagon]{},” [*JHEP*]{} [**11**]{} (2006) 046, [[hep-ph/0608104]{}](http://arXiv.org/abs/hep-ph/0608104). V. 
Page, “[Non-thermal right-handed sneutrino dark matter and the $\Omega_DM/\Omega_b$ problem]{},” [*JHEP*]{} [**04**]{} (2007) 021, [[hep-ph/0701266]{}](http://arXiv.org/abs/hep-ph/0701266). R. Kitano, H. Murayama, and M. Ratz, “[Unified origin of baryons and dark matter]{},” [*Phys. Lett.*]{} [**B669**]{} (2008) 145–149, [[arXiv:0807.4313]{}](http://arXiv.org/abs/arXiv:0807.4313). E. Nardi, F. Sannino, and A. Strumia, “[Decaying Dark Matter can explain the electron/positron excesses]{},” [*JCAP*]{} [**0901**]{} (2009) 043, [[arXiv:0811.4153]{}](http://arXiv.org/abs/arXiv:0811.4153). D. E. Kaplan, M. A. Luty, and K. M. Zurek, “[Asymmetric Dark Matter]{},” [ *Phys. Rev.*]{} [**D79**]{} (2009) 115016, [[arXiv:0901.4117]{}](http://arXiv.org/abs/arXiv:0901.4117). G. D. Kribs, T. S. Roy, J. Terning, and K. M. Zurek, “[Quirky Composite Dark Matter]{},” [*Phys. Rev.*]{} [**D81**]{} (2010) 095001, [[arXiv:0909.2034]{}](http://arXiv.org/abs/arXiv:0909.2034). T. Cohen and K. M. Zurek, “[Leptophilic Dark Matter from the Lepton Asymmetry]{},” [*Phys. Rev. Lett.*]{} [**104**]{} (2010) 101301, [[arXiv:0909.2035]{}](http://arXiv.org/abs/arXiv:0909.2035). Y. Cai, M. A. Luty, and D. E. Kaplan, “[Leptonic Indirect Detection Signals from Strongly Interacting Asymmetric Dark Matter]{},” [[arXiv:0909.5499]{}](http://arXiv.org/abs/arXiv:0909.5499). H. An, S.-L. Chen, R. N. Mohapatra, and Y. Zhang, “[Leptogenesis as a Common Origin for Matter and Dark Matter]{},” [*JHEP*]{} [**03**]{} (2010) 124, [[arXiv:0911.4463]{}](http://arXiv.org/abs/arXiv:0911.4463). M. T. Frandsen and S. Sarkar, “[Asymmetric dark matter and the Sun]{},” [ *Phys. Rev. Lett.*]{} [**105**]{} (2010) 011301, [[arXiv:1003.4505]{}](http://arXiv.org/abs/arXiv:1003.4505). H. An [*et al.*]{}, “[Energy Dependence of Direct Detection Cross Section for Asymmetric Mirror Dark Matter]{},” [*Phys. Rev.*]{} [**D82**]{} (2010) 023533, [[arXiv:1004.3296]{}](http://arXiv.org/abs/arXiv:1004.3296). T. Cohen, D. J. Phalen, A. Pierce, and K. M. Zurek, “[Asymmetric Dark Matter from a GeV Hidden Sector]{},” [*Phys. Rev.*]{} [**D82**]{} (2010) 056001, [[arXiv:1005.1655]{}](http://arXiv.org/abs/arXiv:1005.1655). M. Taoso [*et al.*]{}, “[Effect of low mass dark matter particles on the Sun]{},” [[arXiv:1005.5711]{}](http://arXiv.org/abs/arXiv:1005.5711). J. Shelton and K. M. Zurek, “[Darkogenesis]{},” [[arXiv:1008.1997]{}](http://arXiv.org/abs/arXiv:1008.1997). H. Davoudiasl, D. E. Morrissey, K. Sigurdson, and S. Tulin, “[Hylogenesis: A Unified Origin for Baryonic Visible Matter and Antibaryonic Dark Matter]{},” [[arXiv:1008.2399]{}](http://arXiv.org/abs/arXiv:1008.2399). N. Haba and S. Matsumoto, “[Baryogenesis from Dark Sector]{},” [[arXiv:1008.2487]{}](http://arXiv.org/abs/arXiv:1008.2487). A. Belyaev, M. T. Frandsen, S. Sarkar, and F. Sannino, “[Mixed dark matter from technicolor]{},” [[arXiv:1007.4839]{}](http://arXiv.org/abs/arXiv:1007.4839). E. J. Chun, “[Leptogenesis origin of Dirac gaugino dark matter]{},” [[arXiv:1009.0983]{}](http://arXiv.org/abs/arXiv:1009.0983). M. R. Buckley and L. Randall, “[Xogenesis]{},” [[arXiv:1009.0270]{}](http://arXiv.org/abs/arXiv:1009.0270). P.-H. Gu, M. Lindner, U. Sarkar, and X. Zhang, “[WIMP Dark Matter and Baryogenesis]{},” [[arXiv:1009.2690]{}](http://arXiv.org/abs/arXiv:1009.2690). M. Blennow, B. Dasgupta, E. Fernandez-Martinez, and N. Rius, “[Aidnogenesis via Leptogenesis and Dark Sphalerons]{},” [*JHEP*]{} [**1103**]{} (2011) 014, [[1009.3159]{}](http://arXiv.org/abs/1009.3159). L. J. Hall, J. March-Russell, and S. M. 
West, “[A Unified Theory of Matter Genesis: Asymmetric Freeze-In]{},” [[1010.0245]{}](http://arXiv.org/abs/1010.0245). B. Dutta and J. Kumar, “[Asymmetric Dark Matter from Hidden Sector Baryogenesis]{},” [*Phys.Lett.*]{} [**B699**]{} (2011) 364–367, [[1012.1341]{}](http://arXiv.org/abs/1012.1341). A. Falkowski, J. T. Ruderman, and T. Volansky, “[Asymmetric Dark Matter from Leptogenesis]{},” [*JHEP*]{} [**1105**]{} (2011) 106, [[1101.4936]{}](http://arXiv.org/abs/1101.4936). N. Haba, S. Matsumoto, and R. Sato, “[Sneutrino Inflation with Asymmetric Dark Matter]{},” [*Phys.Rev.*]{} [**D84**]{} (2011) 055016, [[1101.5679]{}](http://arXiv.org/abs/1101.5679). E. J. Chun, “[Minimal Dark Matter and Leptogenesis]{},” [*JHEP*]{} [**1103**]{} (2011) 098, [[1102.3455]{}](http://arXiv.org/abs/1102.3455). J. J. Heckman and S.-J. Rey, “[Baryon and Dark Matter Genesis from Strongly Coupled Strings]{},” [*JHEP*]{} [**1106**]{} (2011) 120, [[1102.5346]{}](http://arXiv.org/abs/1102.5346). M. L. Graesser, I. M. Shoemaker, and L. Vecchi, “[Asymmetric WIMP dark matter]{},” [*JHEP*]{} [**1110**]{} (2011) 110, [[1103.2771]{}](http://arXiv.org/abs/1103.2771). M. T. Frandsen, S. Sarkar, and K. Schmidt-Hoberg, “[Light asymmetric dark matter from new strong dynamics]{},” [*Phys.Rev.*]{} [**D84**]{} (2011) 051703, [[1103.4350]{}](http://arXiv.org/abs/1103.4350). S. D. McDermott, H.-B. Yu, and K. M. Zurek, “[Constraints on Scalar Asymmetric Dark Matter from Black Hole Formation in Neutron Stars]{},” [[1103.5472]{}](http://arXiv.org/abs/1103.5472). M. R. Buckley, “[Asymmetric Dark Matter and Effective Operators]{},” [ *Phys.Rev.*]{} [**D84**]{} (2011) 043510, [[1104.1429]{}](http://arXiv.org/abs/1104.1429). H. Iminniyaz, M. Drees, and X. Chen, “[Relic Abundance of Asymmetric Dark Matter]{},” [*JCAP*]{} [**1107**]{} (2011) 003, [[1104.5548]{}](http://arXiv.org/abs/1104.5548). N. F. Bell, K. Petraki, I. M. Shoemaker, and R. R. Volkas, “[Pangenesis in a Baryon-Symmetric Universe: Dark and Visible Matter via the Affleck-Dine Mechanism]{},” [*Phys.Rev.*]{} [**D84**]{} (2011) 123505, [[1105.3730]{}](http://arXiv.org/abs/1105.3730). C. Cheung and K. M. Zurek, “[Affleck-Dine Cogenesis]{},” [*Phys.Rev.*]{} [ **D84**]{} (2011) 035007, [[1105.4612]{}](http://arXiv.org/abs/1105.4612). H. Davoudiasl, D. E. Morrissey, K. Sigurdson, and S. Tulin, “[Baryon Destruction by Asymmetric Dark Matter]{},” [[1106.4320]{}](http://arXiv.org/abs/1106.4320). J. March-Russell and M. McCullough, “[Asymmetric Dark Matter via Spontaneous Co-Genesis]{},” [[1106.4319]{}](http://arXiv.org/abs/1106.4319). Y. Cui, L. Randall, and B. Shuve, “[Emergent Dark Matter, Baryon, and Lepton Numbers]{},” [*JHEP*]{} [**1108**]{} (2011) 073, [[1106.4834]{}](http://arXiv.org/abs/1106.4834). C. Arina and N. Sahu, “[Asymmetric Inelastic Inert Doublet Dark Matter from Triplet Scalar Leptogenesis]{},” [*Nucl.Phys.*]{} [**B854**]{} (2012) 666–699, [[1108.3967]{}](http://arXiv.org/abs/1108.3967). M. R. Buckley and S. Profumo, “[Regenerating a Symmetry in Asymmetric Dark Matter]{},” [*Phys.Rev.Lett.*]{} [**108**]{} (2012) 011301, [[1109.2164]{}](http://arXiv.org/abs/1109.2164). S. Barr, “[The Unification and Cogeneration of Dark Matter and Baryonic Matter]{},” [[1109.2562]{}](http://arXiv.org/abs/1109.2562). M. Cirelli, P. Panci, G. Servant, and G. Zaharijas, “[Consequences of DM/antiDM Oscillations for Asymmetric WIMP Dark Matter]{},” [[1110.3809]{}](http://arXiv.org/abs/1110.3809). T. Lin, H.-B. Yu, and K. M. 
Zurek, “[On Symmetric and Asymmetric Light Dark Matter]{},” [[1111.0293]{}](http://arXiv.org/abs/1111.0293). K. Petraki, M. Trodden, and R. R. Volkas, “[Visible and dark matter from a first-order phase transition in a baryon-symmetric universe]{},” [[1111.4786]{}](http://arXiv.org/abs/1111.4786). B. von Harling, K. Petraki, and R. R. Volkas, “[Affleck-Dine dynamics and the dark sector of pangenesis]{},” [[1201.2200]{}](http://arXiv.org/abs/1201.2200). K. Kamada and M. Yamaguchi, “[Asymmetric Dark Matter from Spontaneous Cogenesis in the Supersymmetric Standard Model]{},” [[1201.2636]{}](http://arXiv.org/abs/1201.2636). F. Iocco, M. Taoso, F. Leclercq, and G. Meynet, “[Main sequence stars with asymmetric dark matter]{},” [*Phys.Rev.Lett.*]{} [**108**]{} (2012) 061301, [[1201.5387]{}](http://arXiv.org/abs/1201.5387). M. Ibe, S. Matsumoto, and T. T. Yanagida, “[The GeV-scale dark matter with B-L asymmetry]{},” [[1110.5452]{}](http://arXiv.org/abs/1110.5452). J. March-Russell, J. Unwin, and S. M. West, “[Closing in on Asymmetric Dark Matter I: Model independent limits for interactions with quarks]{},” [[1203.4854]{}](http://arXiv.org/abs/1203.4854). L. Ackerman, M. R. Buckley, S. M. Carroll, and M. Kamionkowski, “[Dark Matter and Dark Radiation]{},” [*Phys.Rev.*]{} [**D79**]{} (2009) 023519, [[0810.5126]{}](http://arXiv.org/abs/0810.5126). J. L. Feng, H. Tu, and H.-B. Yu, “[Thermal Relics in Hidden Sectors]{},” [ *JCAP*]{} [**0810**]{} (2008) 043, [[0808.2318]{}](http://arXiv.org/abs/0808.2318). J. L. Feng, M. Kaplinghat, H. Tu, and H.-B. Yu, “[Hidden Charged Dark Matter]{},” [*JCAP*]{} [**0907**]{} (2009) 004, [[0905.3039]{}](http://arXiv.org/abs/0905.3039). S. Tulin, H.-B. Yu, and K. M. Zurek, “[Oscillating Asymmetric Dark Matter]{},” [[1202.0283]{}](http://arXiv.org/abs/1202.0283). G. Mangano, A. Melchiorri, O. Mena, G. Miele, and A. Slosar, “[Present bounds on the relativistic energy density in the Universe from cosmological observables]{},” [*JCAP*]{} [**0703**]{} (2007) 006, [[astro-ph/0612150]{}](http://arXiv.org/abs/astro-ph/0612150). J. Hamann, S. Hannestad, G. Raffelt, and Y. Y. Wong, “[Observational bounds on the cosmic radiation density]{},” [*JCAP*]{} [**0708**]{} (2007) 021, [[0705.0440]{}](http://arXiv.org/abs/0705.0440). B. A. Reid, L. Verde, R. Jimenez, and O. Mena, “[Robust Neutrino Constraints by Combining Low Redshift Observations with the CMB]{},” [*JCAP*]{} [**1001**]{} (2010) 003, [[0910.0008]{}](http://arXiv.org/abs/0910.0008). J. Dunkley [*et al.*]{}, “[The Atacama Cosmology Telescope: Cosmological Parameters from the 2008 Power Spectra]{},” [*Astrophys. J.*]{} [**739**]{} (2011) 52, [[1009.0866]{}](http://arXiv.org/abs/1009.0866). E. Komatsu [*et al.*]{}, “[Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Interpretation]{},” [[arXiv:1001.4538]{}](http://arXiv.org/abs/arXiv:1001.4538). J. Hamann, S. Hannestad, J. Lesgourgues, C. Rampf, and Y. Y. Wong, “[Cosmological parameters from large scale structure - geometric versus shape information]{},” [*JCAP*]{} [**1007**]{} (2010) 022, [[1003.3999]{}](http://arXiv.org/abs/1003.3999). R. Keisler, C. Reichardt, K. Aird, B. Benson, L. Bleem, [*et al.*]{}, “[A Measurement of the Damping Tail of the Cosmic Microwave Background Power Spectrum with the South Pole Telescope]{},” [*Astrophys.J.*]{} [**743**]{} (2011) 28, [[1105.3182]{}](http://arXiv.org/abs/1105.3182). M. Archidiacono, E. Calabrese, and A. Melchiorri, “[The Case for Dark Radiation]{},” [[1109.2767]{}](http://arXiv.org/abs/1109.2767). 
A. Smith, M. Archidiacono, A. Cooray, F. De Bernardis, A. Melchiorri, [*et al.*]{}, “[The Impact of Assuming Flatness in the Determination of Neutrino Properties from Cosmological Data]{},” [[1112.3006]{}](http://arXiv.org/abs/1112.3006). J. Hamann, “[Evidence for extra radiation? Profile likelihood versus Bayesian posterior]{},” [[1110.4271]{}](http://arXiv.org/abs/1110.4271). K. M. Nollett and G. P. Holder, “[An analysis of constraints on relativistic species from primordial nucleosynthesis and the cosmic microwave background]{},” [[1112.2683]{}](http://arXiv.org/abs/1112.2683). Y. Izotov and T. Thuan, “[The primordial abundance of 4He: evidence for non-standard big bang nucleosynthesis]{},” [*Astrophys.J.*]{} [**710**]{} (2010) L67–L71, [[1001.4440]{}](http://arXiv.org/abs/1001.4440). E. Aver, K. A. Olive, and E. D. Skillman, “[An MCMC determination of the primordial helium abundance]{},” [[1112.3713]{}](http://arXiv.org/abs/1112.3713). E. Aver, K. A. Olive, and E. D. Skillman, “[A New Approach to Systematic Uncertainties and Self-Consistency in Helium Abundance Determinations]{},” [*JCAP*]{} [**1005**]{} (2010) 003, [[1001.5218]{}](http://arXiv.org/abs/1001.5218). L. E. Ibanez, “[The Scalar Neutrinos as the Lightest Supersymmetric Particles and Cosmology]{},” [*Phys.Lett.*]{} [**B137**]{} (1984) 160. J. S. Hagelin, G. L. Kane, and S. Raby, “[Perhaps Scalar Neutrinos Are the Lightest Supersymmetric Partners]{},” [*Nucl.Phys.*]{} [**B241**]{} (1984) 638. K. Freese, “[Can Scalar Neutrinos Or Massive Dirac Neutrinos Be the Missing Mass?]{},” [*Phys.Lett.*]{} [**B167**]{} (1986) 295. T. Falk, K. A. Olive, and M. Srednicki, “[Heavy sneutrinos as dark matter]{},” [*Phys.Lett.*]{} [**B339**]{} (1994) 248–251, [[hep-ph/9409270]{}](http://arXiv.org/abs/hep-ph/9409270). G. Servant and T. M. P. Tait, “[Is the lightest Kaluza-Klein particle a viable dark matter candidate?]{},” [*Nucl. Phys.*]{} [**B650**]{} (2003) 391–419, [[hep-ph/0206071]{}](http://arXiv.org/abs/hep-ph/0206071). J. Kile and A. Soni, “[Flavored Dark Matter in Direct Detection Experiments and at LHC]{},” [*Phys.Rev.*]{} [**D84**]{} (2011) 035016, [[1104.5239]{}](http://arXiv.org/abs/1104.5239). B. Batell, J. Pradler, and M. Spannowsky, “[Dark Matter from Minimal Flavor Violation]{},” [*JHEP*]{} [**1108**]{} (2011) 038, [[1105.1781]{}](http://arXiv.org/abs/1105.1781). J. F. Kamenik and J. Zupan, “[Discovering Dark Matter Through Flavor Violation at the LHC]{},” [[1107.0623]{}](http://arXiv.org/abs/1107.0623). P. Agrawal, S. Blanchet, Z. Chacko, and C. Kilic, “[Flavored Dark Matter, and Its Implications for Direct Detection and Colliders]{},” [[1109.3516]{}](http://arXiv.org/abs/1109.3516). O. Wantz and E. Shellard, “[Axion Cosmology Revisited]{},” [*Phys.Rev.*]{} [ **D82**]{} (2010) 123508, [[0910.1066]{}](http://arXiv.org/abs/0910.1066). G. Mangano, G. Miele, S. Pastor, T. Pinto, O. Pisanti and P. D. Serpico, “Relic neutrino decoupling including flavor oscillations,” [*Nucl.Phys.B*]{} [**729**]{} (2005) 221 \[hep-ph/0506164\]. J. Jaeckel, J. Redondo, and A. Ringwald, “[Signatures of a hidden cosmic microwave background]{},” [*Phys.Rev.Lett.*]{} [**101**]{} (2008) 131801, [[0804.4157]{}](http://arXiv.org/abs/0804.4157). D. Cadamuro and J. Redondo, “[Cosmological bounds on pseudo Nambu-Goldstone bosons]{},” [*JCAP*]{} [**1202**]{} (2012) 032, [[1110.2895]{}](http://arXiv.org/abs/1110.2895). D. Cadamuro, S. Hannestad, G. Raffelt, and J. 
Redondo, “[Cosmological bounds on sub-MeV mass axions]{},” [*JCAP*]{} [**1102**]{} (2011) 003, [[1011.3694]{}](http://arXiv.org/abs/1011.3694). Collaboration, K. Nakamura [*et al.*]{}, “[Review of particle physics]{},” [*J.Phys.G*]{} [**G37**]{} (2010) 075021. F. Iocco, G. Mangano, G. Miele, O. Pisanti, and P. D. Serpico, “[Primordial Nucleosynthesis: from precision cosmology to fundamental physics]{},” [ *Phys.Rept.*]{} [**472**]{} (2009) 1–76, [[0809.0631]{}](http://arXiv.org/abs/0809.0631). M. Pospelov and J. Pradler, “[Big Bang Nucleosynthesis as a Probe of New Physics]{},” [*Ann.Rev.Nucl.Part.Sci.*]{} [**60**]{} (2010) 539–568, [[1011.1054]{}](http://arXiv.org/abs/1011.1054). G. Mangano and P. D. Serpico, “[A robust upper limit on $N_{\rm eff}$ from BBN, circa 2011]{},” [*Phys.Lett.*]{} [**B701**]{} (2011) 296–299, [[1103.1261]{}](http://arXiv.org/abs/1103.1261). J. Hamann, S. Hannestad, G. G. Raffelt, I. Tamborra, and Y. Y. Wong, “[Cosmology seeking friendship with sterile neutrinos]{},” [ *Phys.Rev.Lett.*]{} [**105**]{} (2010) 181301, [[1006.5276]{}](http://arXiv.org/abs/1006.5276). J. Hamann, S. Hannestad, G. G. Raffelt, and Y. Y. Wong, “[Sterile neutrinos with eV masses in cosmology: How disfavoured exactly?]{},” [*JCAP*]{} [ **1109**]{} (2011) 034, [[1108.4136]{}](http://arXiv.org/abs/1108.4136). A. X. Gonzalez-Morales, R. Poltis, B. D. Sherwin, and L. Verde, “[Are priors responsible for cosmology favoring additional neutrino species?]{},” [[1106.5052]{}](http://arXiv.org/abs/1106.5052). J. Hamann, J. Lesgourgues, and G. Mangano, “[Using BBN in cosmological parameter extraction from CMB: A Forecast for PLANCK]{},” [*JCAP*]{} [ **0803**]{} (2008) 004, [[0712.2826]{}](http://arXiv.org/abs/0712.2826). Collaboration, “[The Scientific programme of planck]{},” [[ astro-ph/0604069]{}](http://arXiv.org/abs/astro-ph/0604069). G. Mangano, A. Melchiorri, P. Serra, A. Cooray, and M. Kamionkowski, “[Cosmological bounds on dark matter-neutrino interactions]{},” [ *Phys.Rev.*]{} [**D74**]{} (2006) 043517, [[astro-ph/0606190]{}](http://arXiv.org/abs/astro-ph/0606190). P. Serra, F. Zalamea, A. Cooray, G. Mangano, and A. Melchiorri, “[Constraints on neutrino – dark matter interactions from cosmic microwave background and large scale structure data]{},” [*Phys.Rev.*]{} [**D81**]{} (2010) 043507, [[0911.4411]{}](http://arXiv.org/abs/0911.4411). A. Lewis, A. Challinor, and A. Lasenby, “[Efficient computation of CMB anisotropies in closed FRW models]{},” [*Astrophys.J.*]{} [**538**]{} (2000) 473–476, [[ astro-ph/9911177]{}](http://arXiv.org/abs/astro-ph/9911177). A. Lewis and S. Bridle, “[Cosmological parameters from CMB and other data: A Monte Carlo approach]{},” [*Phys.Rev.*]{} [**D66**]{} (2002) 103511, [[astro-ph/0205436]{}](http://arXiv.org/abs/astro-ph/0205436). D. Larson, J. Dunkley, G. Hinshaw, E. Komatsu, M. Nolta, [*et al.*]{}, “[Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Power Spectra and WMAP-Derived Parameters]{},” [*Astrophys.J.Suppl.*]{} [**192**]{} (2011) 16, [[1001.4635]{}](http://arXiv.org/abs/1001.4635). B. A. Reid, W. J. Percival, D. J. Eisenstein, L. Verde, D. N. Spergel, [*et al.*]{}, “[Cosmological Constraints from the Clustering of the Sloan Digital Sky Survey DR7 Luminous Red Galaxies]{},” [*Mon.Not.Roy.Astron.Soc.*]{} [ **404**]{} (2010) 60–85, [[0907.1659]{}](http://arXiv.org/abs/0907.1659). A. G. Riess, L. Macri, S. Casertano, M. Sosey, H. 
Lampeitl, [*et al.*]{}, “[A Redetermination of the Hubble Constant with the Hubble Space Telescope from a Differential Distance Ladder]{},” [*Astrophys.J.*]{} [**699**]{} (2009) 539–563, [[0905.0695]{}](http://arXiv.org/abs/0905.0695). R. Amanullah, C. Lidman, D. Rubin, G. Aldering, P. Astier, [*et al.*]{}, “[Spectra and Light Curves of Six Type Ia Supernovae at 0.511 $\langle z \rangle$ 1.12 and the Union2 Compilation]{},” [*Astrophys.J.*]{} [**716**]{} (2010) 712–738, [[1004.1711]{}](http://arXiv.org/abs/1004.1711). [^1]: However, if the mediator itself constitutes the DR, the long range DM-DM interactions implied are strongly constrained through structure formation and can be ruled out in many scenarios [@Lin:2011gj]. Similar constraints have been studied in the context of non-ADM models [@Ackerman:2008gi; @Feng:2008mu; @Feng:2009mn], such as scenarios with a hidden sector photon mediating DM-DM interactions [@Ackerman:2008gi]. [^2]: $\sigma_8$ is defined as the rms matter density fluctuations in spheres of $8$ Mpc.
--- abstract: | In the last few years, cosmological simulations of structures and galaxies formations have assumed a fundamental role in the study of the origin, formation and evolution of the universe. These studies improved enormously with the use of supercomputers and parallel systems, allowing more accurate simulations, in comparison with traditional serial systems. The code we describe, called FLY, is a newly written code (using the tree N-body method), for three-dimensional self-gravitating collisionless systems evolution.\ FLY is a fully parallel code based on the tree Barnes-Hut algorithm and periodical boundary conditions are implemented by means of the Ewald summation technique. We use FLY to run simulations of the large scale structure of the universe and of cluster of galaxies, but it could be usefully adopted to run evolutions of systems based on a tree N-body algorithm. FLY is based on the one-side communication paradigm to share data among the processors, that access to remote private data avoiding any kind of synchronism. The code was originally developed on CRAY T3E system using the logically SHared MEMory access routines ([*SHMEM*]{}) but it runs also on SGI ORIGIN systems and on IBM SP by using the Low-Level Application Programming Interface routines ([*LAPI*]{}).\ This new code is the evolution of preliminary codes (WDSH-PT and WD99) for cosmological simulations we implemented in the last years, and it reaches very high performance in all systems where it has been well-tested. This performance allows us today to consider the code FLY among the most powerful parallel codes for tree N-body simulations. The performance that FLY reaches is discussed and reported, and a comparison with other similar codes is preliminary considered. The FLY version 1.1 is freely available on http://www.ct.astro.it/fly/ and it will be maintained and upgraded with new releases. address: | Osservatorio Astrofisico di Catania, Città Universitaria, Via S. Sofia, 78 – I-95125 Catania - Italy\ e-mail: ube@sunct.ct.astro.it van@sunct.ct.astro.it author: - 'U. BECCIANI' - 'V. ANTONUCCIO-DELOGU' title: 'Are You Ready to FLY in the Universe ? A Multi-platform N-body Tree Code for Parallel Supercomputers' --- and **Introduction** ================ Numerical simulations are very important tools to study the origin and the evolution of the universe, the cluster of galaxies and galaxy formations. They play a fundamental role in testing and verifying several cosmological theories, adopting different initial conditions and models of the expansion of the universe that affect the formation and the evolution of the large scale structures (hereafter LSS) [@bar86] [@her87] [@dub88] and the matter property and distribution. This class of numerical simulations is referred to as the N-body problem class.\ Observations covering the entire span of electro-magnetic spectrum allow us to estimate the amount of [*visible*]{} matter. But only a small fraction of the matter of the universe is visible, and it could be that up to ninety-five per cent of the matter is [*dark*]{} and does not emit any form of electro-magnetic radiation. It is the dark matter that is governing the dynamics of the universe. The dark matter is modelled as a self-gravitating collisionless fluid, described by the collisionless Boltzmann equation.\ The method we adopt to run LSS simulations, is a tree based algorithm that places the particles in hierarchical groups. 
The fundamental idea of the tree codes consists in the approximation of the force component for a particle. Considering a region $\gamma$, the force component on an [*i-th*]{} particle may be computed as $$\sum_{j\in\gamma} - \frac{Gm_j {\bf d}_{ij}}{\mid d_{ij}\mid^3} \approx \frac{GM {\bf d}_{i,cm}} {\mid d_{i,cm}\mid^3} + \; \hbox{higher order multipoles terms }\; \label{eq1}$$ where $M = \sum_{j\in\gamma}m_j$ and $cm$ is the center of mass of $\gamma$.\ In Eq. (\[eq1\]) the multipole expansion is carried out up to the quadrupole order when a [*far*]{} group is considered. The tree method, having no geometrical constraints, adapts dynamically the tree structure to the particles distribution and to the clusters, without loss of accuracy. This method scales as $O(NlogN)$.\ The most popular tree algorithm for cosmological simulations is the algorithm proposed by Barnes and Hut in 1986 [@barh86], including three main phases. In the first phase, the system is first surrounded by a single cubic region, encompassing all the particles, that forms the [*root-cell*]{} of the tree. The next tree levels are formed by using the Orthogonal Recursive Bisection (ORB). During the force compute phase (hereafter FC) an [*interaction list*]{} (hereafter IL) is formed for each particle. Starting from the root-cell, and analysing the tree level by level, the ratio $C_{ellsize}/d_{i-cell}$ is compared with an opening angle parameter $\theta$ (generally ranging from 0.5 to 1.0), being $d_{i-cell}$ the distance between the particle and the center of mass of the cell. If the ratio is smaller than $\theta$ the cell is [*closed*]{}, added to the IL and it is considered as a [*far*]{} region. The sub-cells of a closed cell will not be investigated in the next tree level any longer. Otherwise, the cell is [*opened*]{} and, in the next tree level analysis, the sub-cells will be checked using the same criterion. This procedure will be repeated until all the tree levels are considered. All the particles found during the tree analysis are always added to the IL. In the last phase all the particles positions are updated, before starting a new cycle. Each particle evolves following the laws of Newtonian physics. Generally, in a very large simulation, the differential equations are integrated using the numerical Leapfrog integration scheme: $$\frac{{\bf x}^{n+1} - {\bf x}^{n}}{\Delta T} = {\bf v}^{n+1/2} \label{eq2}$$ $${\bf v}^{n+1/2} - {\bf v}^{n-1/2} = \frac{{\bf F}^n \Delta T}{m} \label{eq3}$$ where $\Delta T$ is the discrete time-step and the superscript $n$ refer to the time instant $t=n \Delta T$.\ The parallel code FLY ([**F**]{}ast [**L**]{}evel-based N-bod[**Y**]{} code) we describe, is a tree algorithm code to run simulations of collisionless gravitational systems with a larger number of particles ($ N \geq 10^7$). FLY incorporates fully periodic boundary conditions using the Ewald method, without the use of the Fast Fourier Transform techniques [@her91] and it is designed for MPP/SMP systems using the one-side communication paradigm.\ Section 2 show an overview of the FLY design. We will discuss the four main characteristics of FLY in Sections 3, 4, 5 and 6 respectively (domain decomposition, grouping, dynamic load balance and data buffering). Section 7 shows the results of our tests and some comparisons with other codes, Section 8 presents our conclusions. 
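As an illustration of the force-computation phase just described, the following Python sketch implements the opening-angle test of the Barnes-Hut walk and a monopole-only version of Eq. (\[eq1\]), together with a kick-drift-kick form of the leapfrog update of Eqs. (\[eq2\])-(\[eq3\]). It is a toy serial version for exposition only: FLY keeps the multipole expansion up to quadrupole order, as stated above, and is fully parallel.

```python
# Toy Barnes-Hut force phase: opening-angle test, interaction accumulation at
# monopole order, and a leapfrog step.  Purely illustrative, not FLY's code.

import numpy as np

class Node:
    """A tree cell: a leaf holding one particle, or an internal cell with
    children, a total mass and a centre of mass."""
    def __init__(self, size, mass, com, children=(), particle=None):
        self.size = size          # side length of the cubic cell
        self.mass = mass          # total mass enclosed
        self.com = np.asarray(com, dtype=float)   # centre of mass
        self.children = children  # sub-cells (empty for leaves)
        self.particle = particle  # particle index if the cell is a leaf

def accel(pos, root, theta=0.8, G=1.0, eps=1e-3):
    """Acceleration at `pos`, walking the tree level by level."""
    a = np.zeros(3)
    stack = [root]
    while stack:
        cell = stack.pop()
        d = cell.com - pos
        r = np.sqrt(d @ d + eps * eps)        # softened distance
        if cell.particle is not None or cell.size / r < theta:
            # leaf, or cell "closed" by the opening criterion: add it to the
            # interaction list (here we accumulate its monopole directly)
            a += G * cell.mass * d / r**3
        else:
            # cell is "opened": its sub-cells are examined at the next level
            stack.extend(cell.children)
    return a

def leapfrog_step(x, v, a_func, dt):
    """One kick-drift-kick step, a common form of Eqs. (2)-(3)."""
    v_half = v + 0.5 * dt * a_func(x)
    x_new = x + dt * v_half
    v_new = v_half + 0.5 * dt * a_func(x_new)
    return x_new, v_new

if __name__ == "__main__":
    p1 = Node(size=0.0, mass=1.0, com=[0.0, 0.0, 0.0], particle=0)
    p2 = Node(size=0.0, mass=1.0, com=[1.0, 0.0, 0.0], particle=1)
    root = Node(size=2.0, mass=2.0, com=[0.5, 0.0, 0.0], children=(p1, p2))
    print(accel(np.array([0.0, 0.0, 0.0]), root))   # pull towards +x
```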
**FLY parallel code** ===================== FLY is the code we design, develop and use to run very big simulations of the LSS of the universe using parallel systems MPP and/or SMP. It is based on the tree algorithm described above and all the phases are fully parallelized. FLY uses the Leapfrog numerical integration scheme for performance reasons, and incorporates fully periodic boundary conditions using the Ewald method, without using the fast Fourier transform techniques [@her91].\ FLY is the result of a project started in 1994 in order to produce a code for collisionless cosmological simulations. We start with the design of a parallel code for a workstations cluster [@ant94], based on the locally essential tree [@sal90] [@sal97] [@dub96], and using the PVM [@pvm] library. The code did not give high performance results due to the low network bandwidth and high latency; moreover, the number of particles that was possible to simulate was small, due to the big size of the locally essential tree.\ The choice we did in 1996 was to re-design the code without using a locally essential tree and using the CRAY T3D with CRAFT [@craft]. The new code was called WDSH-PT [@bec96] (Work and data SHaring - Parallel Tree code) and a dynamically balanced version was produced in 1997 [@bec97].\ In 1999, to allow simulations with larger resolution, we first re-considered the grouping strategy as described in J. Barnes (1990) [@bar90] and applied it, with some modifications, to our WDSH-PT. This code wass called WD99 [@bec2000] and the obtained performance in terms of particles/second was very high.\ FLY is based on WD99 and is designed for MPP and SMP systems. It is written in Fortran 90 and uses the one-side communication paradigm; it has been developed on the CRAY T3E using the SHMEM library. FLY mainly uses remote GET and PUT operations and some atomic operations of global counters. The one-side communication paradigm avoids the processors synchronism during all the phases of the tree algorithm. This choice gives FLY an obvious increment of performances, moreover this paradigm allows us to port FLY in all platforms where the one-side communication is available.\ FLY is based on four main characteristics described in the following sections. It adopts a simple domain decomposition, a grouping strategy, a dynamic load balance mechanism without overhead, and a data buffering that allows us to minimize data communication.\ Data input and data output contain positions and velocities of all particles and are written without any [*control words*]{}. The data format is integrated with a package that we develop for data analysis: ASTROMD [@amd2000], a freely available software (http://www.cineca.it/astromd) for collisionless and gas-dynamical cosmological simulations.\ The FLY version 1.1, described in this paper, is freely available. It runs on CRAY T3E, SGI ORIGIN 2000 using the SHMEM library, and on IBM SP using the LAPI library. **Domain decomposition** ======================== Data distribution plays a fundamental role in obtaining a high performance of the N-Body codes designed for MPP and SMP systems, and the domain decomposition is an extremely crucial aspect. Optimal data distribution among the processor elements (hereafter PEs), must avoid any imbalance of the load among the PEs and minimize the communication on the network. 
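Before turning to the decomposition details, the one-side communication pattern on which FLY is built can be emulated with the MPI-2 remote memory access interface (the syntax a future version of FLY will adopt, as discussed below). The sketch below uses the mpi4py Python bindings purely for illustration, in place of the SHMEM and LAPI calls of the actual code: one PE reads a block of another PE's memory without any matching action on the target.

```python
# Illustrative only: MPI-2 one-sided GET via mpi4py, standing in for the
# SHMEM/LAPI remote GET used by FLY.  Run e.g. with:
#   mpirun -n 4 python one_sided_get.py

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each PE exposes its local block of (fake) particle data through an RMA window.
local_block = np.full(8, float(rank))            # stand-in for local tree data
win = MPI.Win.Create(local_block, comm=comm)     # collective window creation
comm.Barrier()                                   # make sure every window exists

# Passive-target read from the "next" PE: the target performs no matching call,
# which is precisely the point of the one-side paradigm.
target = (rank + 1) % comm.Get_size()
buf = np.empty(8, dtype='d')
win.Lock(target, MPI.LOCK_SHARED)
win.Get(buf, target)
win.Unlock(target)

print("PE", rank, "read value", buf[0], "from PE", target)
win.Free()
```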
Many kinds of codes use a domain decomposition based on splitting planes that subdivide the domain in sub-domains having the same load, taking into account the mass density distribution. The splitting planes produce a final domain decomposition with sub-domains that do not have an equal geometry; during the system evolution, to avoid a load imbalance, the domain decomposition must be repeated many times.\ FLY does not split the domain with orthogonal planes, but the domain decomposition is done by assigning an equal number of particles to each processor. The data structures of both particles and tree, are subdivided among the PEs to ensure a good initial balance of the load and to avoid any bottleneck while accessing remote data. *Particles data sorting and distribution* ----------------------------------------- The data input of FLY is the array of the fields of position and velocity. Each particle has a tag number from $1$ to $Nbodies$ (the total number of particles). FLY\_sort, an internal utility of FLY, organizes the input data by using the same tree domain decomposition of the algorithm. Fixing a level $l$ of the tree, FLY\_sort builds the tree up to this level, where there are $2^{3 \cdot l}$ cells, $l=0$ being the level of the tree root. Then FLY\_sort assigns the [*cell parent*]{} to each particle, the parent being the cell where the particle is physically located. At the end, following the tree scheme, [*cell parent by cell parent*]{} and considering in turn the nearest cells, FLY\_sort assigns the tag number and stores all the particles.\ It is important to choose the $l$ level to fit the number of PEs; the number of [*cell parents*]{} must be equal to or greater than the number of PEs: i.e. if the user uses 64 PEs he must fix $l \geq 2$ so as to have $ 2^6 = 64 $ cells. The final result is a sorted file containing the fields of position and velocity, so that particles with a near tag number are also near in the physical space.\ Data organized in this way are the input of the FLY simulation code. Each processor has the same number of particles, $Nbodies/N\_PES$, near in the space, in the reserved and/or local memory, being $N\_PES$ the number of processors. This kind of distribution, in contiguous blocks, was already studied in the WDSH\_PT code [@bec96] and it is the best data distribution in terms of measured code performance. When necessary the FLY\_sort procedure may be re-executed, to preserve these properties during the system evolution. FLY\_sort consumes a negligible CPU time compared to the CPU time for a complete simulation run. *Tree data distribution* ------------------------- The tree cells are numbered progressively from the root, which encompasses the whole system, down to the smallest cells which enclose smaller and smaller regions of space. The optimal data distribution scheme of the arrays containing the tree properties, both geometric and physical characteristics, is reached using a fine grain data distribution. The first tree levels contain cells that are typically large enough to contain many particles and, during the simulation, these cells are checked to form the IL of each particle. A fine grain size distribution prevents these cells from being located in the same PE, or in a small number of PEs. In fact in case of a coarse grain size data distribution, all the PEs will attempt to access the same PEs memory, with the typical problems of [*access to a critical resource*]{}. 
This effect would produce a bottleneck that drastically decreases the code performance. On the contrary, a tree fine grain data distribution allows, on average, all the PEs memories to be requested with the same frequency; thus each particle will have the same average access time to the tree cells avoiding the bottleneck problem.\ We do not claim that this is the optimal choice for mapping the tree onto the T3E torus or onto other SMP systems, nevertheless the tree data distribution adopted by FLY gives the best results on all those systems where FLY runs. **The grouping** ================= FLY uses the grouping strategy we adopted with our WD99 code. The basic idea is to build a single interaction list to be applied to all particles inside a [*grouping cell*]{} $C_{group}$ of the tree. This reduces the number of the tree accesses to build the ILs. We consider a hypothetical particle we call Virtual Body (hereafter VB) placed in the center of mass of the $C_{group}$.\ Using a threshold value R equal to 3 times the $C_{group}$ size to limit the errors, the interaction list $IL_{VB}$ is formed by two parts: $$IL_{VB} = IL_{far} + IL_{near} \label{eq4}$$ where $IL_{far}$ includes the elements more distant than R from VB and $IL_{near}$ includes the elements near VB. Moreover, all $p \in C_{group}$ are included in $IL_{near}$. Using the two lists in Eq. (\[eq4\]) it is possible to compute the force ${\bf F}_p$ as the sum of two components: $${\bf F}_p ={\bf F}_{far} +{\bf F}_{near} \label{eq5}$$ The component ${\bf F}_{far}$ is computed for VB, using the elements listed in $L_{far}$, and it is applied to all the particles $p \in C_{group}$, while the ${\bf F}_{near}$ component is computed separately for each particle with the elements listed in $IL_{near}$. The list $IL_{near}$ contains only a few elements compared to the $IL_{far}$ list, and we obtain a net gain in performance. The size of the $C_{group}$ is constrained by the maximum allowed value of the overall error of this method. In a 16-million-particle simulation with a box size of 50 Mpc, using a conservative level 7 for the size of the $C_{group}$ where the error is lower than 1%, the number of the computed particle/second increases by a factor of 7. In this sense the performance of FLY is level-based and the fast execution of a time-step depends on the fixed level for the grouping. **Dynamic Load Balance** ======================== FLY uses the DLB system already used by the WD99 code. Each particle or grouping cell has a PE executor (hereafter PEx) that performs the FC phase for it. At first FLY computes the FC phase for all the grouping cells, the default PEx being the processor where the greatest number of particles belonging to the $C_{group}$, are memory located. When a PE has no more $C_{group}$ cells to compute, it can start the FC phase for other $C_{group}$ cells, not yet computed by the default PEx. The one-side communication paradigm allows FLY to perform this task without synchronism or waiting states among the PEs, and to get a load balance for the FC phase.\ In a very similar way, FLY balances the load for particles that are not included in grouping cells. There is a fixed portion $N_{ass}$ of local particles that must be computed by the local PE for performance reasons, but the remaining portion $N_{free}$ does not have an assigned PEx. When each PE completes the FC phase for the $N_{ass}$ particles, it can compute the $N_{free}$ particles of all the PEs. 
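The scheme just described amounts to a work-stealing loop driven by shared counters, which FLY updates with atomic one-side operations. A minimal stand-in, with Python threads in place of PEs and a lock-protected counter in place of the atomic fetch-and-increment on a global counter, is sketched below.

```python
# Toy sketch of FLY's dynamic load balance: workers first process their own
# assigned share (N_ass), then claim globally shared work items (N_free)
# through an atomically updated counter, so no worker sits idle.

import threading

n_free_items = 1000                 # pool of un-assigned work items
counter = 0                         # shared global counter
counter_lock = threading.Lock()     # stands in for an atomic fetch-and-add
done = [0, 0, 0, 0]                 # work items completed per worker

def next_free_item():
    """Atomically claim the index of the next free work item, or None."""
    global counter
    with counter_lock:
        if counter >= n_free_items:
            return None
        idx = counter
        counter += 1
    return idx

def worker(wid, n_assigned):
    # phase 1: the locally assigned share (N_ass), no communication needed
    for _ in range(n_assigned):
        done[wid] += 1
    # phase 2: grab remaining free items until the shared pool is exhausted
    while (idx := next_free_item()) is not None:
        done[wid] += 1

threads = [threading.Thread(target=worker, args=(w, 250 * (w + 1)))
           for w in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(done, sum(done))   # all 1000 free items processed exactly once
```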
As in the grouping case above, the one-side communication paradigm avoids synchronism or waiting states among the PEs, and allows FLY to achieve a load balance also in this part of the FC phase.

The portion $N_{ass}$ is fixed by the user and, in order to obtain the best performance, it must be as large as possible: each PE can then work mainly on its local particles, but it must always be kept busy during the FC phase, avoiding load imbalance. At the start of each time-step FLY uses the load of the last time-step as a predicted value, and it automatically recomputes the $N_{ass}$ portion to guarantee the best performance. The tests we performed show that the best results are obtained when the $N_{ass}$ quantity ranges from 80% to 95% of all particles.

**Data buffering**
==================

To compute the FC phase each PEx, for each particle, must check the tree cells, level by level, starting from the root cell, to form the IL. The IL is made up of about 90% cells and, as the cells are distributed among the PEs, each PEx must execute a high number of remote accesses. Moreover, a PEx generally executes the FC phase for nearby particles, residing in the local memory and having very similar ILs, so it must often check the same tree cells for many particles.

The figures reported in Tab. 1 are measured both in uniform (redshift $Z=50$) and clustered conditions (redshift $Z=0.3$) for 2, 8 and 16 million particles, in a region of 50 Mpc with $\theta=0.8$.

                    A              B     C
  ----------------- -------------- ----- -----
  2Ml uniform       1.0 million    315   341
  2Ml clustered     1.2 million    340   376
  8Ml uniform       3.9 million    355   386
  8Ml clustered     4.8 million    390   420
  16Ml uniform      7.8 million    375   403
  16Ml clustered    10.0 million   420   456

[**Tab. 1**]{}: A column: number of internal tree cells. B column: average length of the IL. C column: average number of tree cells checked to form one IL.

In a 16-million-particle simulation, in clustered conditions, the tree has about 10.0 million cells and the average IL has 420 elements, but a PEx must check 456 tree cells to form one IL. Each tree cell access retrieves 3 position coordinates (8 bytes each); for each opened cell it is also necessary to retrieve 8 sub-pointers (4 bytes each), whereas for each closed cell it is necessary to retrieve the mass (8 bytes) and the 5 quadrupole moments (8 bytes each). Considering both opened and closed cells, there are $$(16 \cdot 10^6 \cdot (456 - 420) \cdot 2 + 16 \cdot 10^6 \cdot 420 \cdot 3 ) \cdot \frac{NPEs - 1}{NPEs} \label{eq6}$$ remote GETs of contiguous elements, where $NPEs$ is the number of PEs used to run the simulation, and $$(16 \cdot 10^6 \cdot (456 - 420) \cdot (3 \cdot 8 + 8 \cdot 4) + 16 \cdot 10^6 \cdot 420 \cdot 9 \cdot 8 ) \cdot \frac{NPEs - 1}{NPEs} \label{eq7}$$ bytes of total remote data transfer, to execute the FC phase. The number of remote GETs reported in expression (\[eq6\]), the latency time of each remote access and the bandwidth make it difficult to run a big simulation, even with powerful parallel systems having a high number of CPUs.

FLY introduces the [*data buffering*]{} to limit the number of remote GETs and the global data transfer, thus obtaining a great improvement in code performance and scalability. The data buffering uses all the free memory not allocated to store the arrays of particles and tree cell properties.

At the start, FLY statically allocates all the data structures. The memory occupancy of the code is about 5 Mbytes plus 220 bytes for each particle. 
In conservative mode, FLY allocates a tree having a number of cells equal to the number of particles. In a 16-million-particle simulation with 64 PEs, each of them holds 250,000 particles, and the local memory occupancy is about 68 Mbytes. Using a system with 256 Mbytes of local memory for each PE, there is a large quantity of memory that can be used to store remote data: FLY checks the free space and dynamically allocates arrays in order to store positions, masses, pointers and quadrupole moments of each remote tree cell that the PE investigates during the FC phase.

The data buffer is managed with the policy of a [*simulated cache*]{} in the RAM. The cache arrays have an index array, and the mapping of each element has a one-to-one correspondence: each remote element has only one line of the simulated local cache array where it can be loaded. Every time the PE has to access a remote element, it first looks in the local simulated cache and, if the element is not found, the PE executes the GET calls to download the remote element and stores it in the cache arrays. In this phase FLY computes only the acceleration components for each particle, so that no problem of data validity can arise in the simulated cache.

In a clustered 16-million-particle simulation, with 32 PEs and 256 Mbytes of local memory, the PEs execute about $2.1 \cdot 10^{10}$ remote GETs without the use of the simulated cache. This value, using the data buffering, decreases to $1.6 \cdot 10^8$ remote GETs, with an enormous advantage in terms of scalability and performance.

**FLY performance and scalability**
====================================

In this section we show the measured FLY performance in terms of particles/second and the code scalability using 2,097,152, 8,388,608 and 16,777,216 particles, both in uniform and clustered conditions, in a box region of 50 Mpc with $\theta=0.8$. We use a conservative grouping level and the data buffering. The tests are executed on all the systems where FLY runs, as described in the following sub-sections. We run two kinds of tests: the first measures the performance and the second measures the code scalability. All the measurements are executed using dedicated systems and processors.

*FLY on CRAY T3E*
-----------------

We use the CRAY T3E/1200e available at Cineca (Casalecchio di Reno - Bologna). The CRAY T3E has a physically distributed memory that is globally accessible through one-side communication (SHMEM library). The system has 256 DEC Alpha 21164A processors at 600 MHz, with a peak of 308 Gflop/second (1.2 Gflop/second for each PE). The network topology is a 3D-torus with a 600 Mbytes/second transfer rate and a latency time of the order of microseconds. The global memory is 48 GBytes. There are two sub-pools: 128 PEs with 128 Mbytes RAM, and 128 PEs with 256 Mbytes RAM. The global disk space is 200 GBytes. We use the pool with processors having 256 Mbytes to test the FLY performance and to obtain the highest gains from the data buffer.

Fig. 1 shows the code performance obtained by running simulations with 32 and 64 PEs. FLY scalability is shown in Fig. 2, considering the case of 16,777,216 particles, where a speed-up factor of 118 is reached using 128 PEs.

The higher performance obtained in the clustered configuration is a positive effect of the grouping strategy, as already discussed in WD99. The obtained results show that FLY has a very good scalability and a very high performance, and it can be used to run very big simulations. 
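Before turning to the other platforms, it may help to make the simulated-cache policy of the data-buffering section explicit: it behaves as a direct-mapped software cache. The following Python stand-in is purely illustrative (the `remote_get` function is a placeholder for the one-side SHMEM/LAPI GET, and the sizes are arbitrary), not FLY's Fortran implementation.

```python
# Illustrative direct-mapped software cache in the spirit of FLY's data
# buffering: each remote tree-cell index maps to exactly one cache line;
# on a miss the line is (re)filled by a single remote GET.

import numpy as np

N_LINES = 4096                 # cache lines carved out of the free local memory
LINE_WORDS = 9                 # e.g. 3 positions + mass + 5 quadrupole moments

cache = np.zeros((N_LINES, LINE_WORDS))
cache_tag = np.full(N_LINES, -1, dtype=np.int64)   # which cell occupies each line
misses = 0

def remote_get(cell_index):
    """Stand-in for the one-sided GET of a remote cell's properties."""
    return np.full(LINE_WORDS, float(cell_index))

def read_cell(cell_index):
    """Return the properties of `cell_index`, going remote only on a miss."""
    global misses
    line = cell_index % N_LINES          # one-to-one line mapping
    if cache_tag[line] != cell_index:    # miss: fetch and overwrite the line
        cache[line] = remote_get(cell_index)
        cache_tag[line] = cell_index
        misses += 1
    return cache[line]

# During one FC phase positions do not change, so cached data never goes stale.
for cell in [10, 10, 10 + N_LINES, 10]:  # the third access evicts cell 10
    read_cell(cell)
print("misses:", misses)                  # -> 3
```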
*FLY on SGI ORIGIN 2000* ------------------------ The system is available at Cineca. It has 32 nodes and 64 MIPS R12000 superscalar processors at 300 MHz, with a peak of more than 38 Gflop/second (600 Mflop/second for each PE). Each node has 2 PEs and 1 Gbyte of memory. The global memory is 32 Gbytes, and the global disk space is 650 Gbytes. The ORIGIN 2000 is a CC-NUMA system with globally addressable, distributed shared memory. The interconnection network has a 780 Mbytes/second bi-directional transfer rate.\ Fig. 3 shows the code performance obtained by running simulations with 16 and 32 PEs. The FLY scalability is shown in Fig. 4.\ The results show that FLY still has good performance but a lower scalability when more than 16 PEs are used. However, FLY can usefully be adopted to run big simulations on this system, with a performance comparable to that of the CRAY T3E runs. *FLY on IBM SP* --------------- The system is available at the Osservatorio Astrofisico di Catania. It is a distributed-memory computer having 3 nodes and 24 Power 3 RISC superscalar processors at 220 MHz, with a peak of more than 20 Gflop/second (880 Mflop/second for each PE). Each node has 8 PEs and 16 GBytes of memory. The global memory is 48 GBytes, and the global disk space is 250 GBytes. The network topology is based on an SPS scalable Omega switch having a 300 Mbytes/second bi-directional transfer rate.\ FLY uses the LAPI library to perform one-sided communication. The code performance obtained by running simulations with 2, 4, 8 and 12 PEs is shown in Fig. 5. FLY scalability is shown in Fig. 6. Unfortunately, when running a parallel code using LAPI, the Omega switch allows the user to use no more than 4 PEs per node. Moreover, FLY is optimized when a number of PEs equal to a power of two is used.\ The results show that FLY does not reach the performance and scalability obtained on the CRAY T3E system. Therefore, we are developing a new version of FLY that will use MPI-2, and a new algorithm to build the tree. In fact, although on the CRAY T3E system the tree formation phase takes less than 5% of each time-step, it takes more than 30% on the ORIGIN 2000 and on the IBM SP, and its scalability is not as good as that of the FC phase. *FLY comparison* ---------------- Fig. 7 shows the FLY performance on all the above systems. We run a clustered 2,097,152-particle simulation from 2 PEs up to 32 PEs, where available. The results show that the IBM SP system has lower performance than the CRAY T3E and ORIGIN 2000 systems; however, on the CRAY T3E, FLY has linear scalability up to 32 PEs (and beyond), and reaches a speed-up higher than 82 using 128 PEs. The FLY starting performance (4 PEs) with the ORIGIN 2000 system is better than with the other systems, but the scalability is not as good as with the CRAY T3E system.\ Our results can be compared with those of other similar codes. We consider GADGET [@gad2000], one of the most recently released tree-SPH codes. GADGET is a code for collisionless and gasdynamical cosmological simulations. It implements individual particle timesteps and uses only standard C and standard MPI. It is available on the CRAY T3E, on the IBM SP and on Linux PC clusters, but other platforms could run the code. With GADGET, each processor is assigned a physical spatial domain and builds a local tree. The PE can provide the force exerted by its particles at any location in space.
The force computation on a particle therefore requires the communication of the particle coordinates to all processors, which reply with the partial force components. The total force is obtained by summing the incoming contributions.\ Fig. 8 shows a comparison between FLY and data reported by the GADGET authors, considering only the gravitational section, which is the heaviest section of GADGET.\ FLY seems to have higher performance than GADGET with the same number of PEs. FLY has excellent scalability, due to the use of the data buffering. The scalability and speed-up data of the gravitational section of GADGET, for larger numbers of simulated particles and more than 16 PEs, are not given by the authors. However, the GADGET performance reported in Fig. 8 has a lower slope than FLY’s, so a lower scalability with more than 16 PEs can be expected. **Conclusion and future** ========================= The use of FLY on multiple platforms and the performance obtained enable us to run simulations with high accuracy on the most popular supercomputers. FLY is currently used by our research group to run simulations with more than 16 million particles [@ant2000]. In this sense FLY contributes to the study of the origin and evolution of the universe and allows the execution of simulations of gravitational effects even when the user has a limited budget of CPU resources. FLY version 1.1 is freely available and open to user contributions that enhance the code capabilities or port FLY to other platforms. The code is written in Fortran 90 and a C language version is already in progress. We also plan to use the MPI-2 syntax for a future version of FLY.\ FLY is used to run simulations that consider only gravitational effects. Although we are open to including the hydrodynamic part in a future version, we plan to integrate FLY with other freely available software to treat hydrodynamic effects, including other kinds of particles (the gas), in order to study star formation and other hydrodynamical processes. Acknowledgements ================ All the tests carried out on the CRAY T3E and SGI ORIGIN 2000 systems at CINECA were executed with the financial support of the Italian Consortium CNAA (Consorzio Nazionale per l’Astronomia e l’Astrofisica). We gratefully acknowledge useful discussions with Dr. G. Erbacci of CINECA, the help of Dr. C. Magherini of the Department of Mathematics, Florence, with the porting of FLY to the IBM SP, and the useful assistance of Dr. A. F. Lanza of Catania Astrophysical Observatory.
Antonuccio-Delogu, V. and Becciani, U. 1994, in Parallel Scientific Computing - PARA ’94 (Springer), 17
Antonuccio-Delogu, V., Becciani, U., Pagliaro, A., Van Kampen, E., Colafrancesco, S., Germaná, A. and Gambera, M. 2000, submitted to MNRAS, also http://babbage.sissa.it/ps/astro-ph/0009495
Becciani, U., Antonuccio-Delogu, V., Gheller, C., Calori, L., Buonomo, F., Imboden, S., 2000, submitted to IEEE CG&A, also http://babbage.sissa.it/ps/astro-ph/0006402
Barnes, J., 1986, in [*Use of Supercomputers in Stellar Dynamics*]{}, eds. P. Hut and S. McMillan (Berlin: Springer), 175
Barnes, J., 1990, J. of Comp. Physics, 87, 161
Barnes, J.E. and Hut, P., 1986, Nature, 324, 446
Becciani, U., Antonuccio-Delogu, V., Pagliaro, A., 1996, Comp. Phys. Comm., 99, 1
Becciani, U., Ansaloni, R., Antonuccio-Delogu, V., Erbacci, G., Gambera, M., Pagliaro, A., 1997, Comp. Phys. Comm., 106, 1
Becciani, U., Antonuccio-Delogu, V., 2000, J. Comput.
Phys., 163, 118
Cray Research Inc., 1994, [*“Cray MPP Fortran Reference Manual”*]{}, SR-2504 6.1
Dubinski, J., 1988, M.Sc. Thesis, University of Toronto
Dubinski, J., 1996, New Astronomy, 133, 1
Springel, V., Yoshida, N., White, S.D.M., 2000, submitted to New Astronomy, also http://babbage.sissa.it/ps/astro-ph/0003162
Hernquist, L., 1987, ApJ. Suppl., 64, 715
Hernquist, L., Bouchet, F.R., Suto, Y., 1991, ApJ. Suppl., 75, 231
Hockney, R.W., Eastwood, J.W., 1981, Computer Simulation Using Particles, McGraw-Hill International, New York
Geist, A., Beguelin, A., Dongarra, J., Jiang, W., Manchek, R. and Sunderam, V., 1994, “[*PVM 3 User’s Guide and Reference Manual*]{}”, ORNL/TM-12187
Salmon, J., 1990, PhD Thesis, California Institute of Technology
Salmon, J. and Warren, M.S., 1997, Proc. of the Eighth Conf. on Parallel Processing for Scientific Computing, SIAM, 1997
--- abstract: 'The transfer of polarized radiation in stochastic synchrotron sources is explored by means of analytic treatment and Monte Carlo simulations. We argue that the main mechanism responsible for the circular polarization properties of compact synchrotron sources is likely to be Faraday conversion and that, contrary to common expectation, a significant rate of Faraday rotation does not necessarily imply strong depolarization. The long-term persistence of the sign of circular polarization, observed in many sources, is most likely due to a small net magnetic flux generated in the central engine, carried along the jet axis and superimposed on a highly turbulent magnetic field. We show that the mean levels of circular and linear polarizations depend on the number of field reversals along the line of sight and that the gradient in Faraday rotation across turbulent regions can lead to “correlation depolarization”. Our model is potentially applicable to a wide range of synchrotron sources. In particular, we demonstrate how our model can naturally explain the excess of circular over linear polarization in the Galactic Center and the nearby spiral galaxy M81 and discuss its application to the quasar 3C 279, the intraday variable blazar PKS 1519-273 and the X-ray binary SS 433.' author: - 'Mateusz Ruszkowski and Mitchell C. Begelman' title: Circular polarization from stochastic synchrotron sources --- Introduction ============ Polarization has proven to be an important tool in AGN research. In principle, linear and particularly circular polarization observations of synchrotron radiation may permit measurements of various properties of jets such as: magnetic field strength and topology, the net magnetic flux carried by jets (and hence generated in the central engine), the energy spectrum of radiating particles, and the jet composition (i.e., whether jets are mainly composed of $e^{+}-e^{-}$ pairs or electron-proton plasma). The renewed interest in polarization of compact radio sources stems from two recent developments. First, @bow99 detected circular polarization using the Very Large Array (VLA) in the best supermassive black hole candidate, the Galactic center (Sgr A$^{*}$). This discovery was quickly confirmed by @sau99 using the Australia Telescope Compact Array (ATCA). Circular polarization was also detected in the celebrated X-ray binary system SS 433 [@fen00]. Moreover, the Very Long Baseline Array (VLBA) has now detected circular polarization in as many as 20 AGN [@war98; @hom99]. Second, it is now possible to measure circular polarization with unprecedented accuracy of 0.01% using the ATCA [@ray00]. This dramatic improvement in the observational status of polarization measurements has also brought new questions. For example, there is now growing observational evidence that the sign of circular polarization is persistent over decades [@kom84; @hom99], which indicates that it is a fundamental property of jets. Another problem, which has not been satisfactorily explained as yet, is how to reconcile the high level of circular polarization with the lower value of linear polarization in Sgr A$^{*}$ [@bow99] and M81$^{*}$ [@bru01]. Indeed, there is not even a general consensus on the mechanism responsible for the circular polarization properties of jets [@war98].\ In this paper we attempt to solve some of the theoretical puzzles. The paper is organized as follows. In the next section we summarize the most important observational facts. 
In Section 3 we briefly discuss mechanisms for producing circular polarization and argue that the most likely process is “Faraday conversion”. Section 4 presents our model for polarization and in the subsequent sections we compare Monte Carlo simulations with analytic results and discuss general implications for observations as well as specific observational cases. We summarize our conclusions in Section 7. Observational trends ==================== Compact radio sources typically show a linear polarization (LP) of a few percent of the total intensity [@jon85]. This is much less than the theoretical maximum for synchrotron sources, which can approach $70\%$ in homogeneous sources with unidirectional magnetic field. Therefore, magnetic fields in radio sources are believed to be highly inhomogeneous, although the nonvanishing linear polarization is in itself an indirect indication of a certain degree of ordering of the field. Although the precise topology of the magnetic field in jets is not known, there are other compelling reasons, both theoretical and observational, to believe that magnetic fields are indeed partially ordered. On the observational side, these conclusions are based on measurements of the orientation of linear polarization ($\mathbf E$ vector), which reveal coherent structures across jet images. This indicates that magnetic fields, which are predominantly perpendicular to the electric vectors, are also preferentially aligned, although in different sources or in different parts of the jet the magnetic fields can be mainly orthogonal [@war98] or parallel [@jon85; @rus85] to the projected jet orientation. From the theoretical point of view, ordered jet magnetic field is expected when shocks compress an initially random field [@lai80; @lai81; @mer85; @hug89; @war94] ($\mathbf B$ perpendicular to the jet axis) or when such initial fields are sheared to lie in a plane [@lai80; @lai81; @beg84] ($\mathbf B$ parallel to the jet axis).\ Circular polarization (CP) is a common feature of quasars and blazars [@ray00; @hom01], is usually characterized by an approximately flat spectrum, and is generated near synchrotron self-absorbed jet cores [@hom99]. CP is detected in about 30%-50% of these objects. Measured degrees of CP are generally lower than the levels of linear polarization and usually range between 0.1 and 0.5% [@hom99; @hom01]. As reported by @mar00 for the intraday variable source PKS 1519-273, the CP of variable components of intensity can be much higher than the overall circular polarization levels. Observations of proper motion of CP-producing regions in the quasar 3C 273 [@hom99] suggest that circular polarization is intrinsic to the source, as opposed to being due to foreground effects. Most importantly, comparisons of CP measurements made within the last 30 years [@wei83; @kom84; @hom99] with the most recent observations reveal that, despite CP variability, its sign is a persistent feature of AGN, which must therefore be related to a small net unidirectional component of magnetic field in jets. Mechanisms for producing circular polarization ============================================== The most obvious candidate for explaining circular polarization of compact radio sources is intrinsic emission [@leg68]. 
Although intrinsic CP is roughly $\pi_{c, \rm int}\sim\gamma^{-1}$ where $\gamma$ is the Lorentz factor of radiating electrons, in a realistic source it will most likely be strongly suppressed by the tangled magnetic field and possibly the emissivity from $e^{+}-e^{-}$ pairs, which do not contribute CP. Specifically, $\pi_{c, \rm int}\sim\gamma^{-1}(B_{u}/B_{\rm rms})f_{\rm pair}\ll 1\%$, where $B_{u}$ and $B_{\rm rms}$ are the unidirectional component of the magnetic field projected onto the line-of-sight and the fluctuating component of the field, respectively, and $f_{\rm pair}\equiv(n^{-}-n^{+})/(n^{-}+n^{+})\le 1$. Therefore, intrinsic CP appears to be inadequate to explain the observed polarization. Other mechanisms have also been proposed, among which the most popular ones are coherent radiation processes [@ben00], scintillation [@marm00] and Faraday conversion [@pac75; @jon77a; @jon88; @war98]. The first of these mechanisms produces polarization in a narrow frequency range which now seems to be ruled out by multiband observations. The recently proposed scintillation mechanism, in which circular polarization is stochastically produced by a birefringent screen located between the jet and the observer, fails to explain the persistent sign of circular polarization as the time-averaged CP signal is predicted to vanish. The last mechanism — Faraday conversion — seems to be the most promising one and in the next subsection we discuss it in more detail. Faraday rotation and conversion ------------------------------- The polarization of radiation changes as it propagates through any medium in which modes are characterized by different plasma speeds. In the case of cold plasma the modes are circularly polarized. The left and right circular modes have different phase velocities and therefore the linear polarization vector of the propagating radiation rotates. This effect is called Faraday rotation and it is often used to estimate magnetic field strength in the interstellar medium and to estimate pulsar distances. Note that Faraday rotation does not alter the degree of circular polarization, since any circular polarization can be decomposed into two independent linearly polarized waves. Faraday rotation is a specific example of a more general phenomenon called birefringence. In a medium whose natural modes are linearly or elliptically polarized, such as a plasma of relativistic particles, birefringence leads to the partial cyclic conversion between linearly and circularly polarized radiation as the phase relationships between the modes along the ray change with position. This effect is best visualized by means of the Poincaré sphere [@mel91; @ken98]. An arbitrary elliptical polarization can be represented by a vector $\mathbf P$ with its tip lying on the Poincaré sphere and characterized by Cartesian coordinates $(Q,U,V)/I$, where $Q,U,V$ and $I$ are the Stokes parameters (see Fig. 1). Thus, the north and south poles correspond to right and left circular polarizations and points on the equator to linear polarization. Different azimuthal positions on the sphere correspond to different orientations of the polarization ellipses. The polarization of natural modes of the medium is represented by a diagonal axis, whose polar angle measured from the vertical axis depends on whether the medium is dominated by cold $(0^{\rm o})$ or highly relativistic particles $(90^{\rm o})$. 
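This geometric picture can be made concrete with a small numerical sketch (Python; the tilt angles and rotation phase below are purely illustrative): a normalized Stokes vector $(Q,U,V)/I$ is rotated about a natural-mode axis inclined by an angle $\chi$ to the $V$-axis, and a nonzero tilt converts part of an initially linear polarization into circular polarization.

```python
import numpy as np

def rotate_about_axis(P, axis, angle):
    """Rodrigues rotation of the Poincare vector P about a unit axis."""
    axis = np.asarray(axis) / np.linalg.norm(axis)
    return (P * np.cos(angle)
            + np.cross(axis, P) * np.sin(angle)
            + axis * np.dot(axis, P) * (1.0 - np.cos(angle)))

P0 = np.array([1.0, 0.0, 0.0])       # fully linearly polarized: (Q,U,V)/I = (1,0,0)

for chi_deg in (0.0, 30.0, 80.0):    # tilt of the mode axis from the V (circular) axis
    chi = np.radians(chi_deg)
    mode_axis = np.array([np.sin(chi), 0.0, np.cos(chi)])
    P = rotate_about_axis(P0, mode_axis, angle=2.0)   # phase advance along the ray
    print(f"chi = {chi_deg:4.1f} deg -> Q/I, U/I, V/I = "
          f"{P[0]:+.2f}, {P[1]:+.2f}, {P[2]:+.2f}")
```

For $\chi=0$ (a cold plasma) the vector stays at the equator and only the plane of linear polarization rotates, i.e., pure Faraday rotation; for a tilted axis (relativistic particles) a circular component appears, which is the Faraday conversion discussed above.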
As the radiation passes through the medium, birefringence causes the tip of the polarization vector to rotate at a constant latitude around the axis of the natural plasma modes. In this picture, Faraday rotation corresponds to the case where the natural modes axis is vertical and the polarization vector $\mathbf P$ rotates around it. Note that, even if radiation initially has no circular polarization (i.e., $\mathbf P$ lies in the equatorial plane) and then encounters a medium in which the normal modes are elliptical, it will develop an elliptically polarized component. An interesting property of a relativistic birefringent plasma is that it can generate circular polarization even if it is composed almost entirely of electron-positron pairs. At first this may seem paradoxical, as one would expect electrons and positrons to contribute to CP with opposite signs. However, despite the fact that the intrinsic CP in such a case is indeed close to zero, some additional CP can be produced by conversion of linear polarization [@saz69; @noe78]. In terms of the Poincaré sphere, this situation corresponds to the normal modes axis pointing close to but not exactly in the equatorial plane. Therefore some conversion of intrinsic linear polarization may occur provided that there is some imbalance in the number of electrons and positrons. ### Strong rotativity limit Strong departures from mode circularity occur only when radiation propagates within a small angle $\sim\nu_{L}/\nu$ of the direction perpendicular to the magnetic field, where $\nu_{L}=eB/2\pi m_{e}c$. Therefore radiative transfer is often performed in the quasi-longitudinal (QL) approximation. If the normal modes are highly elliptical then the opposite, quasi-transverse (QT), limit applies [@gin61]. In a typical observational situation it is usually assumed that Faraday rotation within the source cannot be too large, as this will lead to the suppression of linear polarization. However, this constraint does not prevent rotativity from achieving large values locally as long as the mean rotativity, i.e., averaged over all directions of magnetic field along the line of sight, is indeed relatively small. Such a situation may happen in a turbulent plasma. Some effects of turbulence on polarization were discussed by @jon88 who neglected a uniform magnetic field component and by @war98, who presented results for the case of a small synchrotron depth. Technically, the strong rotativity regime is equivalent to the QL limit and in this paper we build our model on this approximation. Model for polarization ====================== We consider a highly tangled magnetic field with a very small mean component which is required to determine the sign of circular polarization. From a theoretical view-point, we would expect some net poloidal magnetic field, either originating from the central black hole or from the accretion disk, to be aligned preferentially along the jet axis. Specifically, from equipartition and flux freezing arguments applied to a conical jet [@bla79] we get $\langle B^{2}_{\|}\rangle^{1/2}\sim\langle B^{2}_{\bot}\rangle^{1/2}\sim B_{\rm rms}\propto r^{-1}$ where $r$ is the distance along the synchrotron emitting source and the symbols $\|$ and $\bot$ refer to magnetic fields parallel and perpendicular to the jet axis, respectively. 
From the flux-freezing argument applied to the small parallel bias in the magnetic field we obtain $\langle B_{\bot}\rangle\sim 0$ and $\langle B_{\|}\rangle\propto r^{-2}\propto\delta B_{\rm rms}$, where $\delta\equiv B_{u}/B_{\rm rms}\ll 1$ is the ratio of the uniform and fluctuating components of the magnetic field. Mean Stokes parameters in the presence of field reversals --------------------------------------------------------- We solve the radiative transfer of polarized radiation in a turbulent plasma by adopting transfer equations for a piecewise homogeneous medium with a weakly anisotropic dielectric tensor [@saz69; @jon77a]. Details of the transfer equations are given in the Appendix. We assume that the mean rotativity per unit synchrotron optical depth is $\langle\zeta_{v}^{*}\rangle\equiv\delta\zeta$ and that $\langle\sin2\phi\rangle = 0$ and $\langle\cos2\phi\rangle =2p-1$, where $0\leq p\leq 1$ is a parameter describing the polarization direction and degree of order in the field. We also assume that circular absorptivity $\zeta_{v}$ and circular emissivity $\epsilon_{v}$ are both negligible. ### Large synchrotron depth limit Averaging the transfer equations over orientations of the magnetic field, we obtain the following asymptotic expressions for large synchrotron optical depth:\ $$\begin{aligned} \overline{I}+(2p-1)\zeta_{q}\overline{Q} & = & J\\ \overline{Q}+\langle\zeta_{v}^{*}U\rangle +(2p-1)\zeta_{q}\overline{I} & = & (2p-1)\epsilon_{q}J\\ \overline{U}-\langle\zeta_{v}^{*}Q\rangle +(2p-1)\overline{\zeta}_{q}^{*}\overline{V} & = & 0\\ \overline{V}-(2p-1)\overline{\zeta}_{q}^{*}\overline{U} & = & 0 \; .\end{aligned}$$ Note that $\zeta_{v}^{*}$ is not statistically independent of $U$ and $Q$ due to the gradient in Faraday rotation across each cell. The correlation in eq. (2) then reads:\ $$\langle\zeta_{v}^{*}U\rangle=\delta\zeta\overline U+\langle\widetilde{\zeta}_{v}^{*}\widetilde{U}\rangle, \label{eq5}$$ where $\widetilde{\zeta}_{v}^{*}$ and $\widetilde{U}$ denote the fluctuating parts of $\zeta_{v}^{*}$ and $U$. An analogous relation holds for $Q$ in eq. (3). We neglect the term $\langle\widetilde{\zeta}_{q}^{*}\widetilde{U}\rangle$ in eq. (4) as convertibility is a much weaker function of plasma parameters than rotativity. It will be shown in Section 4.1.3 that the correlations $\langle\widetilde{\zeta}_{v}^{*}\widetilde{U}\rangle$ and $\langle\widetilde{\zeta}_{v}^{*}\widetilde{Q}\rangle$ tend to zero as the number of field reversals along the line of sight increases and that\ $$-\langle\widetilde{\zeta}_{v}^{*}\widetilde{Q}\rangle/\overline{U}= \langle\widetilde{\zeta}_{v}^{*}\widetilde{U}\rangle/\overline{Q}\equiv\xi \; .$$ This implies that the mean levels of circular and linear polarizations will also depend on the number of field reversals along the line of sight.
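The averaged system (1)-(4), closed with eq. (5) and the definition of $\xi$ above, is simply a $4\times 4$ linear system for the mean Stokes parameters. The sketch below (Python; all coefficient values are purely illustrative, not taken from any fit) solves it numerically.

```python
import numpy as np

# Solve the averaged system (1)-(4) in the large synchrotron depth limit,
# using the closure <zeta_v* U> = dz*Ubar + xi*Qbar and
# <zeta_v* Q> = dz*Qbar - xi*Ubar implied by eq. (5) and the definition of xi.
# All numerical values are illustrative only.
J = 1.0                    # source function
p = 1.0                    # 2p-1 = 1: no dispersion of the projected field
eps_q, zet_q = 0.6, 0.6    # linear emissivity and absorptivity coefficients
zq_bar = -1.5              # mean convertibility per unit synchrotron depth
dz = 3.0                   # mean rotativity, delta*zeta
xi = 0.5                   # correlation ("correlation depolarization") term

c = 2.0 * p - 1.0
A = np.array([
    [1.0,        c * zet_q,     0.0,           0.0       ],  # eq. (1)
    [c * zet_q,  1.0 + xi,      dz,            0.0       ],  # eq. (2)
    [0.0,        -dz,           1.0 + xi,      c * zq_bar],  # eq. (3)
    [0.0,        0.0,           -c * zq_bar,   1.0       ],  # eq. (4)
])
b = np.array([J, c * eps_q * J, 0.0, 0.0])

I_bar, Q_bar, U_bar, V_bar = np.linalg.solve(A, b)
print("pi_q, pi_u, pi_v =", Q_bar / I_bar, U_bar / I_bar, V_bar / I_bar)
```

The normalized ratios obtained this way can be compared directly with the closed-form expressions derived next.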
Setting $\overline{\pi}_{q}=\overline{Q}/\overline{I}$, $\overline{\pi}_{u}=\overline{U}/\overline{I}$ and $\overline{\pi}_{v}=\overline{V}/\overline{I}$, we get mean normalized Stokes parameters from equations (1)–(4):\ $$\begin{aligned} \overline{\pi}_{q}&=&\frac{(2p-1)[1-\xi +(2p-1)^{2}\overline{\zeta}_{q}^{*2}](\epsilon_{q}-\zeta_{q})}{\cal D}\\ \overline{\pi}_{u}&=&\frac{(2p-1)\delta\zeta(\epsilon_{q}-\zeta_{q})}{\cal D}\\ \overline{\pi}_{v}&=&\frac{(2p-1)^{2}\overline{\zeta}_{q}^{*2}\delta\zeta(\epsilon_{q}-\zeta_{q})}{\cal D},\end{aligned}$$ where $$\begin{aligned} {\cal D}&\equiv&1+2\xi+(2p-1)^{2}(\zeta_{q}^{*2}-\zeta_{q}\epsilon_{q})+ \delta^{2}\zeta^{2}+\nonumber\\ & &[\xi-(2p-1)^{2}\zeta_{q}\epsilon_{q}] [\xi+(2p-1)^{2}\overline{\zeta}_{q}^{*2}] \; ,\end{aligned}$$ Note that\ $$\frac{\overline{\pi}_{v}}{\overline{\pi}_{u}}=(2p-1)\overline{\zeta}^{*}_{q}$$ $$\frac{\overline{\pi}_{v}}{\overline{\pi}_{q}}= \frac{(2p-1)\overline{\zeta}^{*}_{q}\delta\zeta}{1+\xi+ (2p-1)^{2}\overline{\zeta}^{*2}_{q}} \; .$$ Therefore, circular polarization $\overline{\pi}_{c}=-\overline{\pi}_{v}$ can dominate over linear polarization $\overline{\pi}_{l}=(\overline{\pi}^{2}_{q}+\overline{\pi}^{2}_{u})^{1/2}$ if $\delta\zeta\ga (2p-1)\zeta^{*}_{q}>1$, where the last inequality holds for a large number of field reversals along the line of sight. ### Small synchrotron depth limit In the limit of small synchrotron depth and for $\delta\zeta\ga 1$, we approximately have: $$\begin{aligned} \frac{d\overline{I}}{d\tau}&=&J\\ \frac{d\overline{Q}}{d\tau}+\delta\zeta\overline{U}+\langle\widetilde{\zeta}_{v}^{*}\widetilde{U}\rangle&=&(2p-1)\epsilon_{q}J\\ \frac{d\overline{U}}{d\tau}-\delta\zeta\overline{Q}-\langle\widetilde{\zeta}_{v}^{*}\widetilde{Q}\rangle&=&0\\ \frac{d\overline{V}}{d\tau}-(2p-1)\overline{\zeta}_{q}^{*}\overline{U}&=&0 \; .\end{aligned}$$ Introducing $X\equiv \overline{Q}+i\overline{U}$, where $i=\sqrt{-1}$, we obtain: $$\frac{dX}{d\tau}+(\xi -i\delta\zeta)X=(2p-1)\epsilon_{q}J$$ Solving eq. (16) and using eq. (17) we get: $$\begin{aligned} \overline{\pi}_{l}&=&(2p-1)\epsilon_{q}\left [ \frac{1-2e^{-\tau_{\xi}}\cos\tau_{r}+e^{-2\tau_{\xi}}} {\tau_{\xi}^{2}+\tau_{r}^{2}}\right ]^{1/2}\\ \overline{\pi}_{v}&=&(2p-1)^{2}\epsilon_{q}\frac{\tau_{c}}{\tau_{\xi}^{2}+\tau_{r}^{2}}\times \nonumber \\ && \left [\tau_{r}+\frac{[2\tau_{r}\tau_{\xi}\cos\tau_{r}+(\tau_{\xi}^{2}-\tau_{r}^{2})\sin\tau_{r}]e^{-\tau_{\xi}}-2\tau_{r}\tau_{\xi}}{\tau_{\xi}^{2}+\tau_{r}^{2}}\right ],\end{aligned}$$ where $\tau_{r}=\delta\zeta\tau$, $\tau_{c}=\zeta_{q}^{*}\tau$ and $\tau_{\xi}=\xi\tau$ are the rotation, conversion and “correlation” depths, respectively. As in the synchrotron thick case, circular polarization can exceed linear polarization. For example, when $\tau_{\xi}$ is negligible, the CP/LP ratio exceeds unity if:\ $$(2p-1)\tau_{c}\ga 2\left|\sin\left(\frac{\tau_{r}}{2}\right)\right|.$$ As equations (6)–(9) for the synchrotron thick case and equations (18) and (19) for the synchrotron thin case clearly demonstrate, correlations induced by the gradient in Faraday rotation across turbulent regions have a depolarizing effect on linear and circular polarizations. ### Effect of statistical fluctuations on the mean Stokes parameters Having obtained averaged quantities, we now proceed to calculate the effect of statistical fluctuations on the mean Stokes parameters. 
We introduce the following notation for the mean and fluctuating parts of the Stokes parameters: $S=\overline{S}+\widetilde{S}$, where $\langle \widetilde{S}_{i}\rangle =0$; and for the angular distribution of the projected magnetic field: $\sin 2\phi\equiv a$, $\cos 2\phi =(2p-1)+b$, where $\langle a\rangle =\langle b\rangle =\langle ab\rangle =0$. We write the fluctuating rotativity as $\zeta_{v}^{*}\equiv\delta\zeta +\widetilde{\zeta}$, where $\langle\widetilde{\zeta}\rangle =0$ but $\widetilde{\zeta}\sim{\cal O}(\zeta)$. We constrain ourselves to the case dominated by Faraday rotation, i.e., we have $|\zeta Q|\gg |a\zeta_{q}I|$, etc. In such a case, the lowest order terms do not depend on the fluctuations in the orientation of the projected magnetic field $\phi$. Thus, retaining only leading terms, we obtain the fluctuating part of the transfer equations for $\widetilde{Q}$ and $\widetilde{U}$: $$\frac{d\widetilde{Q}}{d\tau}+\widetilde{\zeta}(\overline{U}+\widetilde{U})=0$$ $$\frac{d\widetilde{U}}{d\tau}-\widetilde{\zeta}(\overline{Q}+\widetilde{Q})=0 \; .$$ Note that when the fluctuations in $Q$ and $U$ are not dominated by rotativity terms, they are determined by variations in the orientation of the magnetic field. From equations (21) and (22) we can obtain corrections to the mean Stokes parameters due to the gradient in Faraday rotation across each cell:\ $$\langle\widetilde\zeta_{v}^{*}\widetilde{Q}\rangle= -\left\langle\frac{\widetilde{\Delta\tau}}{2}\widetilde{\zeta}^{2}(\overline{U}+\widetilde{U})\right\rangle=-\frac{1}{2}\overline{U}\langle\widetilde{\zeta}^{2}\widetilde{\Delta\tau}\rangle$$ $$\langle\widetilde\zeta_{v}^{*}\widetilde{U}\rangle= \frac{1}{2}\overline{Q}\langle\widetilde{\zeta}^{2}\widetilde{\Delta\tau}\rangle,$$ where $\widetilde{\Delta\tau}=\Delta\tau (\sin\theta)^{\alpha +3/2}$, $\theta$ is the angle between the line of sight and the direction of the magnetic field, $\Delta\tau =\tau_{\rm o}/N$, $\tau_{\rm o}$ is the maximum synchrotron depth and $N$ is the number of turbulent zones along the line of sight within this depth. Although these terms play an important role in the expressions for the mean Stokes parameters (equations \[6\]–\[10\] and \[18\]–\[19\]), they introduce only higher order corrections $\sim {\cal O}(\zeta^{2}\Delta\tau/2)$ to the fluctuation equations and therefore should not be taken into account in equations (21) and (22) for the treatment to be self-consistent (recall that we assume that $\zeta\Delta\tau <1$). Note also that both $\langle\widetilde\zeta_{v}^{*}\widetilde{Q}\rangle$ and $\langle\widetilde\zeta_{v}^{*}\widetilde{U}\rangle$ are proportional to $\tau_{\rm o}N^{-1}$ and therefore asymptotically tend to zero as the number of field reversals along the line of sight increases.\ We note that the presence of inhomogeneities in the magnetic field can, under certain circumstances, introduce additional complications into the radiative transfer due to the tracking and coupling of plasma modes. As the propagating wave goes from the QL to the QT regime (i.e. $\mathbf B$ almost perpendicular to the line of sight) and then back to the QL regime, it can adiabatically adjust itself to the shifting nature of the eigenmodes provided that $\zeta_{ov}\Delta\tau\gg 1$, where $\zeta_{ov}\equiv\zeta_{\alpha}^{*v}\gamma^{2}\ln\gamma_{i}\;/\gamma_{i}^{3}$ [@bjo90; @tho94; @fuk98], i.e., provided that the gradient in the magnetic field is sufficiently small. In the opposite limit, the propagating wave does not ‘notice’ any inhomogeneities. 
In such a case, a circularly polarized wave will preserve its helicity as it crosses the QT region. Therefore, our assumption that Stokes $I$, $Q$, $U$ and, in particular, $V$, are continuous at sharp boundaries between the turbulent cells, is consistent with our initial assumption that $\zeta_{ov}\Delta\tau <1$. A spatially varying magnetic field can also introduce mode coupling when the coupling constant ${\cal L}\sim \zeta_{oq}/(\zeta_{ov}^{2}\Delta\tau)\sim 1$, where $\zeta_{oq}\equiv 2\zeta_{\alpha}^{*q}\ln (\gamma/\gamma_{i})$ [@jon77b] even in the QL region (i.e., $\mathbf B$ not in a direction almost perpendicular to the line of sight). When ${\cal L}\ll 1$ and propagation occurs in the QL regime, coupling effects are unimportant. In the opposite limit when ${\cal L}\gg 1$, and when there is no uniform component of the magnetic field, radiation propagates as in a vacuum. Although we present no formal proof, we argue that in the latter limit radiation will be unaffected by the fluctuating component of the magnetic field and will be sensitive only to the mean bias component of the total magnetic field. In a real situation, the precise value of ${\cal L}$ will depend on the details of magnetohydrodynamical turbulence. However, since our calculations were performed assuming a piece-wise homogeneous medium, technically our results are exact. This is consistent with claims made by @jon88, who also considered the piece-wise homogeneous case and performed calculations using “standard” transfer equations and the wave equation. He obtained practically identical results from the two methods even though some of his results formally violated the non-coupling criterion. In a real situation we would expect some gradient of magnetic field across each turbulent cell but, again, in general the results will depend on the detailed treatment of the MHD turbulence which is beyond the scope of this work. Results and implications ======================== We now consider a range of specific examples aimed at demonstrating predictions of our model and the consistency of our formulae with the results of Monte Carlo simulations. As mentioned in Section 3.1.1, we focus on cases where Faraday rotation per unit synchrotron optical depth $\zeta_{v}^{*}$ is large. Specifically, we assume that the typical mean Lorentz factor of radiating electrons $\gamma\sim 10^{2}$ and that the electron energy distribution function has a power-law form $n(\gamma)\propto\gamma^{-(2\alpha +1)}$, where $\alpha$ is the spectral index of optically thin synchrotron emission. We use $\alpha=0.5$ and assume that the electron distribution is cut-off below $\gamma_{i}\sim$ a few. For example, for the maximum brightness temperature $T_{b}\sim 10^{11}K$ [@rea94] we have $\gamma\sim 3kT_{b}/m_{e}c^{2}\sim 50$, which corresponds to mean rotation and conversion per unit synchrotron optical depth of order $\sim \delta\zeta_{v}^{*}\sim 3\times 10^{3}\delta\ln\gamma_{i}/\gamma_{i}^{3}$ and $\zeta_{q}^{*}\sim -\ln(\gamma/\gamma_{i})$, respectively, for $\nu\sim\gamma^{2}eB/2\pi m_{e}c$ [^1]. 
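These order-of-magnitude estimates are easy to reproduce. The short sketch below (Python) evaluates the scalings just quoted; the brightness temperature is the representative value used in the text, while the cut-off Lorentz factors $\gamma_i$ are illustrative choices.

```python
import numpy as np

# Evaluate the representative scalings quoted in the text (cgs units).
k_B, m_e, c = 1.380649e-16, 9.1093837e-28, 2.99792458e10

T_b = 1e11                            # rest-frame brightness temperature [K]
gamma = 3 * k_B * T_b / (m_e * c**2)  # mean Lorentz factor, ~50

for gamma_i in (2, 3, 5):             # low-energy cut-off (illustrative values)
    rot = 3e3 * np.log(gamma_i) / gamma_i**3   # rotativity scale: delta*zeta_v* ~ rot*delta
    conv = -np.log(gamma / gamma_i)            # convertibility scale: zeta_q*
    print(f"gamma ~ {gamma:.0f}, gamma_i = {gamma_i}: "
          f"delta*zeta_v* ~ {rot:.0f}*delta, zeta_q* ~ {conv:.1f}")
```

For $\gamma_i$ of a few, the rotativity per unit synchrotron depth is tens to a few hundred times the bias $\delta$, so even a very small bias can yield a substantial mean Faraday rotation.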
In order to facilitate comparison of analytical and numerical results we use appropriate synchrotron depth-weighted solid angle averages of coefficients of rotativity $\zeta_{v}^{*}$ and convertibility $\zeta_{q}^{*}$ which include the effect of inclination of the uniform component of magnetic field to the line of sight: $$\overline{\zeta}_{q,v}^{*}=\frac{\langle\zeta_{q,v}^{*} (B\sin\theta)^{\alpha +3/2}\rangle_{\Omega}} {\langle (B\sin\theta)^{\alpha +3/2}\rangle_{\Omega}}.$$ Retaining only the leading terms, we get $$\overline{\zeta}_{q}^{*}=-2\zeta_{\alpha}^{*q}\ln\left(\frac{\gamma}{\gamma_{i}}\right)+\frac{3}{8}\pi \left(\frac{1}{2}-\ln 2\right)\zeta_{\alpha}^{*q}$$ $$\overline{\zeta}_{v}^{*}=\frac{3}{2}\zeta_{\alpha}^{*v}\gamma^{2}\frac{\ln\gamma_{i}}{\gamma_{i}^{3}}\delta\cos\theta_{u},$$ where $\theta_{u}$ is the angle between the line of sight and the direction of the uniform magnetic field component. Analogously, for the correlation terms (equations \[23\] and \[24\]), we use $$-\frac{\langle\widetilde{\zeta}_{v}^{*}\widetilde{Q}\rangle}{\overline{U}}= \frac{\langle\widetilde{\zeta}_{v}^{*}\widetilde{U}\rangle}{\overline{Q}}= \frac{\Delta\tau}{4}\left(\zeta_{\alpha}^{*v}\gamma^{2}\frac{\ln\gamma_{i}}{\gamma_{i}^{3}}\right)^{2}\equiv\xi .$$ Mean linear and circular polarizations -------------------------------------- Fig. 2 presents mean linear (upper panel) and circular polarizations for the case of radiation transfer through a high synchrotron depth. The uppermost lines on each panel show analytic results for “saturated” polarizations corresponding to $\xi\la 1$ for $p=1$ (i.e., no dispersion of the projected magnetic field on the sky), and filled squares connected by dashed lines denote results of numerical simulations. Small systematic differences on Figures 2 and 3 between the analytical curves and the numerical ones at higher values of $\delta$ are due to the fact that the analytical formulae do not include higher order $\delta$-terms. The general behavior of these curves can be understood in simple terms. Linear polarization gradually decreases with $\delta$ (and mean rotativity $\overline{\zeta_{v}^{*}}$) as a result of the increasing strength of Faraday depolarization. Note that, contrary to common expectation, the mean linear polarization does not vanish as the result of a highly inhomogeneous magnetic field even though the magnitude of rotation per unit synchrotron optical depth is very large. As expected, circular polarization initially becomes stronger with an increasing component of the uniform magnetic field parallel to the line-of-sight. However, as Faraday depolarization gradually eliminates linear polarization (both Stokes $Q$ and $U$ are affected by this process), there is a reduced amount of $U$ available for conversion into circular polarization (Stokes $V$). Thus circular polarization has an extremum.\ Other curves on Fig. 2 represent the case of “unsaturated” polarization when $\xi\ga 1$. In such situations, the levels of both linear and circular polarizations systematically decline as $N$ decreases. Note that the number of turbulent zones along the line of sight cannot be arbitrarily low, as this would reduce the mean polarization levels to very low values due to the increasing influence of the correlations between rotativity and Stokes $Q$ and $U$ parameters. Apart from affecting the mean polarization levels, increasing the number of field reversals along the line of sight also reduces the polarization fluctuations. 
Due to the finite size of the telescope beam, the fluctuations are further suppressed; however, the beam averaging process does not affect the mean level of circular polarization. It also does not influence the mean level of linear polarization, provided that LP fluctuations are not very large prior to the beam averaging. Assuming that the number of turbulent zones in each of the perpendicular directions across the telescope beam is very roughly comparable to $N$, we conclude that fluctuations in the mean levels of CP and LP due to the stochastic nature of the plasma should be relatively small. Indeed, even though our numerical results for the mean circular and linear polarizations were obtained for a number of “pencil beams” much smaller than $N^{2}$, none of our results exhibit strong fluctuations. Thus, by demanding that there be enough turbulent cells along the line of sight to avoid correlation depolarization, we also guarantee that the polarization variability should be dominated by changes in the mean plasma parameters (e.g., the magnitude of the uniform component of the magnetic field, dispersion of the projected random magnetic field on the sky, the synchrotron depth, etc.), rather than by statistical fluctuations in, e.g., the orientation of the local turbulent magnetic field. This also assures that the sign of circular polarization should be a persistent feature — variations of the mean plasma parameters are not likely to change the sense of circular polarization, provided that the synchrotron optical depth does not change dramatically from low to very high values (see below) and the orientation of the mean field remains the same.\ Fig. 3 illustrates the effect of relaxing the assumption that the projected magnetic field has zero dispersion on the sky. As the projected orientation of the magnetic field becomes increasingly chaotic $(p\rightarrow 0.5)$, the magnitudes of linear and circular polarizations gradually decline.\ As both Figures 2 and 3 indicate, circular polarization can exceed linear polarization at higher values of the bias parameter $\delta$. The excess of CP over LP requires significant Faraday depolarization of linear polarization. Nevertheless, even a small amount of $U$ can then be effectively converted to circular polarization, leading to CP/LP ratios in excess of unity. It is even conceivable to have a situation in which linear polarization falls below the detection threshold whereas circular polarization is still easily observable. Note that significant Faraday depolarization does not require a large bias $\delta$ (i.e., exceeding unity) because of the large values of rotativity. This is consistent with observations which do not reveal any dominant large scale unidirectional fields. Such fields would lead to very strong linear polarization and are also unexpected on theoretical grounds.\ In real sources the effective synchrotron optical depth will depend not only on the emission properties of the plasma along the line of sight but also, among other factors, on the solid angle subtended by the emitting region. In a realistic situation, the telescope will integrate over a finite-size beam with different synchrotron depths along different lines of sight. Thus, the emission from the synchrotron self-absorbed core of a jet will be weighted by a smaller solid angle than the emission from the more extended regions which have lower synchrotron depths and, therefore, lower surface brightness.
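The Monte Carlo results referred to in this section come from integrating the full transfer equation of the Appendix through a chain of randomly oriented, piecewise homogeneous cells. The sketch below is a deliberately simplified illustration of that scheme (Python; a fixed-step RK4 integrator instead of the adaptive Cash-Karp method used by the authors, constant illustrative coefficients instead of the full frequency-dependent set, and a simplified angular dependence of the rotativity):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative, frequency-independent coefficients per unit synchrotron depth.
eps_q, zet_q = 0.6, 0.6   # linear emissivity / absorptivity
zq_star = -1.5            # convertibility zeta_q*
z_rand = 30.0             # magnitude of the fluctuating rotativity
delta = 0.05              # small bias of the uniform field component
J = 1.0                   # source function
phi_max = np.pi / 6       # partially ordered projected field (p > 0.5)

def rhs(S, phi, zv_star):
    """Right-hand side of the Appendix transfer equation (zeta_v = eps_v = 0)."""
    c2, s2 = np.cos(2 * phi), np.sin(2 * phi)
    K = np.array([
        [1.0,          zet_q * c2,    -zet_q * s2,    0.0         ],
        [zet_q * c2,   1.0,            zv_star,       zq_star * s2],
        [-zet_q * s2,  -zv_star,       1.0,           zq_star * c2],
        [0.0,          -zq_star * s2,  -zq_star * c2, 1.0         ],
    ])
    e = np.array([1.0, eps_q * c2, -eps_q * s2, 0.0])
    return e * J - K @ S

def integrate(tau_total=10.0, n_cells=400, steps_per_cell=20):
    S = np.zeros(4)                           # Stokes (I, Q, U, V), no incident radiation
    h = tau_total / n_cells / steps_per_cell
    for _ in range(n_cells):                  # one homogeneous turbulent cell per pass
        phi = rng.uniform(-phi_max, phi_max)  # projected field orientation
        mu = rng.uniform(-1.0, 1.0)           # cos(theta) of the turbulent field
        zv = z_rand * (mu + delta)            # rotativity ~ cos(theta) plus small bias
        for _ in range(steps_per_cell):       # RK4 steps inside the cell
            k1 = rhs(S, phi, zv)
            k2 = rhs(S + 0.5 * h * k1, phi, zv)
            k3 = rhs(S + 0.5 * h * k2, phi, zv)
            k4 = rhs(S + h * k3, phi, zv)
            S = S + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return S

I, Q, U, V = integrate()
print("pi_l = %.3f, pi_v = %.4f" % (np.hypot(Q, U) / I, V / I))
```

With these choices the rotativity per cell satisfies $\zeta\Delta\tau<1$, consistent with the assumption made earlier, and increasing `n_cells` at fixed total depth illustrates the suppression of the fluctuations discussed above.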
Detailed calculations of the polarization properties of specific jet models are beyond the scope of this paper and will be presented in a forthcoming publication (Ruszkowski and Begelman, in preparation). Fig. 4 illustrates the effect of varying the total synchrotron depth $\tau_{\rm o}$ for ‘saturated’ polarization ($\xi\la 1$). For large $\tau_{\rm o}$ the results are similar to the ones presented in Figs. 2 and 3. In the opposite extreme, i.e., when $\tau_{\rm o}$ tends to zero, we have qualitatively similar behavior with the main difference being that the helicity of circular polarization is reversed. As in the high synchrotron depth case, the general trends in the polarization behavior can be understood in simple terms. The gradual decline of linear polarization with $\delta$ is just a result of Faraday depolarization. As radiation propagates through the plasma, Stokes $U$ is generated from Stokes $Q$ due to Faraday rotation. Therefore, circular polarization, which is produced from Stokes $U$ by Faraday conversion, initially increases. However, for large $\delta$, Faraday depolarization reduces Stokes $Q$ and $U$ and thus leads to the suppression of circular polarization. The oscillations in linear and circular polarizations are due to cyclic rotation of Stokes $Q$ into $U$ followed by conversion to $V$. Note that, for a narrow range of intermediate values of synchrotron depth, the behavior of CP resembles that of low $\tau_{\rm o}$ when $\delta$ (and the mean rotativity) is small, and that of high $\tau_{\rm o}$ when $\delta$ is larger.\ The effects of “correlation depolarization” for synchrotron depths around unity are shown in Fig. 5. This figure also demonstrates our analytic results for small synchrotron depths (dashed lines). Clearly, our analytic solutions for both very large and small synchrotron depths adequately describe the numerical results which smoothly join the two regimes. Application to specific sources =============================== The case of quasar 3C279 ------------------------ @war98 reported the discovery of circular and linear polarization in 3C 279 and attributed CP to internal Faraday conversion. Typical fractional linear and circular polarizations in 3C 279 are of order $\sim 10\%$ and $\la 1\%$, respectively. @war98 concluded that if the jet is composed of normal plasma, then the low-energy cut-off of the energy distribution of relativistic electrons must be as high as $\gamma_{i}\sim 100$ in order to avoid Faraday depolarization and overproduction of the jet kinetic power. They considered synchrotron thin models with a bias in magnetic field and a small number of field reversals along the line of sight but neglected the dispersion of the projected magnetic field on the sky. They were unable to fit their polarization models to the observational data for $\gamma_{i}\ga 20$ and thus claimed that the jet must be pair-dominated. However, the above observational constraints on CP and LP and the jet energetics can be satisfied for a variety of microscopic plasma parameters. This is due to the fact that different “microscopic” parameters, such as $\gamma_{i}$, the ratio of the cold to relativistic electron number densities, or the positron fraction, can lead to similar “macroscopic” parameters such as convertibility and rotativity. In order to illustrate this, we consider two radically different examples and show that both cases can lead to the same CP and LP.
### Electron-proton jet In this example, plasma is composed exclusively of a mixture of protons and electrons with both relativistic and cold populations being present. For instance, for a low-energy cut-off $\gamma_{i}\sim 30$ and an electron number density-weighted mean Lorentz factor $\gamma\sim 50$, we get $\langle\zeta_{v}^{*}\rangle\sim 80(n_{c}/n_{r})\delta$ and $\langle\zeta_{q}^{*}\rangle\sim -0.5$ (for $n_{c}/n_{r}\ll 10^{2}$; see Appendix for details). The above value of rest-frame $\gamma$ is consistent with self-absorbed sources having brightness temperatures in a narrow range close to $\sim 10^{11}K$ [@rea94]. The actual energy distribution of the radiating particles will not be characterized by sharp energy cut-offs but will rather be a smooth function [@shi00; @die00]. However, the detailed treatment of these subtleties is beyond the scope of this paper and we believe that the overall complexity of the problem justifies the use of our approximate treatment. For the above choice of parameters, the main contribution to rotativity comes from cold electrons as long as $n_{c}/n_{r}\ga 5\times 10^{-3}$ (cf. eq. \[A9\]). The required levels of LP and CP can be obtained, for example, when $p=1$ and the combination $(n_{c}/n_{r})\delta\sim 2.5\times 10^{-3}$, as this assures sufficiently low mean rotativity that it does not lead to Faraday depolarization. Bear in mind that the admixture of cold electrons does not have to be large to explain the data. For example, we get the right levels of LP and CP for $n_{c}/n_{r}\sim 5\times 10^{-2}$ and $\delta\sim 5\times 10^{-2}$. Interestingly, a jet with such a plasma composition could carry roughly as small a kinetic power as the pure electron-positron jet with the same emissivity, since the ratio of kinetic powers of an $e$-$p$ jet to a pure relativistic $e^{+}$-$e^{-}$ jet is $\sim 18.4(\langle\gamma\rangle_{e^{+}e^{-}}/50)^{-1}(\gamma_{i,e^{+}e^{-}}/\gamma_{i,pe})$, where we have assumed that protons are cold and have used $\alpha =0.5$ and where $e^{+}e^{-}$ and $pe$ refer to pair plasma and normal plasma, respectively. This means that, in principle, one can also explain the mean levels of circular and linear polarizations in 3C 279 for much lower values of $\gamma_{i}$ than the one used above and therefore much higher rotativity, provided that the bias parameter $\delta$ is also much lower. For example, the right levels of CP and LP can be obtained for $\gamma_{i}=3$, $\gamma =50$, $N=1875$, $\delta\sim 3\times 10^{-3}$ and $\tau=1$ (see middle panels in Fig. 5). We note that observational constraints on jet composition based on energetics should be taken with caution in any case, as kinetic luminosities derived from these methods scale with high powers of poorly determined observational quantities. ### Electron-positron jet The alternative possibility is that the jet is dominated by relativistic pair plasma. For example, for $\gamma_{i}=2$ and $\gamma =50$ we get $\langle\zeta_{q}^{*}\rangle\sim -3.1$ and $\langle\zeta_{v}^{*}\rangle\sim 1.4\times 10^{2}\delta (n_{p}/n_{e})$, where $n_{p}$ and $n_{e}$ are the number densities of protons and electrons, respectively. Agreement with the observed fractional linear and circular polarizations can be obtained for $p=1$ and $(n_{p}/n_{e})\delta\sim 2.1\times 10^{-3}$. 
Depending on the actual values of $\delta$ and the ratio of protons to electrons, the jet may be pair-dominated in the sense that $n_{e}\gg n_{p}$ while being dominated dynamically by protons; or it can be dominated by pairs both numberwise and dynamically. Recent theoretical work of @sik00 suggests that jets may be pair-dominated numberwise but still dynamically dominated by protons. The case of Sgr A$^{*}$ ----------------------- Among the most intriguing sets of polarization observations are those of the Galactic center. Observations of stellar proper motions in the vicinity of the nonthermal source in the center of the Galaxy (Sgr A$^{*}$) reveal the presence of a $\sim2.6\times 10^{6}M_{\odot}$ compact object — the most convincing candidate for a supermassive black hole [@eck96; @eck97; @ghe98]. Recently @bow99 reported the detection of circular polarization from Sgr A$^{*}$ with the VLA, which was confirmed by @sau99 using ATCA. The typical level of CP in their observations was $\sim 0.3\%$, greater than the level of linear polarization. This result may seem surprising in light of the strong limits on the ratio of CP to LP in AGN where CP/LP is usually much less than unity [@wei83]. However, as explained above, an excess of CP over LP can be explained easily in the framework of our model. Archival VLA data indicate that the mean CP was stable over ten years [@bow00]. This is also not surprising as our model naturally predicts a persistent CP sign provided that the number of field reversals, either along one line-of-sight or across the beam area, is sufficiently large and the source does not undergo dramatic changes from synchrotron thin to synchrotron thick regimes. However, short-term circular polarization variability ($\sim$ a few days) is also present. Interestingly, recent VLA observations from 1.4 to 15 GHz separated by a week revealed a CP increase at frequencies greater than 5 GHz, which coincided with an increase in the total intensity [@bow00b]. The CP spectrum was characterized by a flat to slightly positive spectral index ($\pi_{c}\propto \nu^{\beta}, \beta\ga 0$). This result can also be accounted for in our model. For example, in the framework of a self-absorbed, self-similar jet model [@bla79] but with a small bias $\delta$ we have $\langle\zeta_{v}^{*}\rangle\propto\delta\propto B\propto \nu$. Assuming that $\xi\ga\epsilon_{q}\zeta_{q}$ and $q\equiv (2p-1)^{2}\langle\zeta_{q}^{*}\rangle^{2}\gg 1$, and then demanding $|\pi_{c}|/\pi_{l}\ga 1$ and $\beta\ga 0$, we obtain the following criterion which leads to the observed behavior (cf. eq. \[7\]–\[10\]):\ $$(\xi +q)^{2}q^{-1}\la (\delta\zeta)^{2}\la 2\xi +q+\xi (\xi +q) .$$ For instance, for $\xi=20$, $\langle\zeta_{q}^{*}\rangle \sim -10$, $\delta\zeta\sim 40$ and $p=1$ we have $\beta\ga 0$, $|\pi_{c}|/\pi_{l}\sim 3$ and $|\pi_{c}|\sim 0.6\%$. Such values of the mean convertibility and rotativity could be obtained in the case of predominantly cold $e^{-}$-$p$ plasma with a small admixture of relativistic electrons. Note, however, that a certain amount of relativistic particles is always essential as synchrotron processes dominate emission and absorption in typical nonthermal sources. Specifically, the minimum and maximum size of Sgr A$^{*}$ constrain the brightness temperature to be $10^{10}\la T_{b}\la 5\times 10^{11}K$ [@mel01], which is within the range of typical AGN radio cores. Taking $T_{b}\sim 10^{11}$ as the representative rest frame value [@rea94], we get $\gamma\sim 50$. 
For minimum cut-off Lorentz factor $\gamma_{i}=2$, this gives relativistic contributions to convertibility and rotativity equal to $\langle\zeta_{q}^{*({r)}}\rangle\sim -3$ and $\langle\zeta_{v}^{*({r)}}\rangle\sim 275\delta$, respectively. Provided that dielectric suppression (Razin effect) is unimportant, the overall transfer coefficients are merely the sums of cold $(c)$ and relativistic $(r)$ contributions. Thus, for $\zeta_{v}^{*(c)}/\zeta_{v}^{*(r)}\sim 4.3 (n_{c}/n_{r})$ and $\zeta_{q}^{*(c)}/\zeta_{q}^{*(r)}\sim 0.08(n_{c}/n_{r})$ (see Appendix), we obtain $(n_{c}/n_{r})\sim 30$ and $\delta\sim 10^{-3}$, which corresponds to the required values of the mean convertibility and rotativity.\ It has recently been suggested that observations of linear polarization can be used to constrain the accretion rate in Sgr A$^{*}$ and other low-luminosity AGN [@ago00; @qua00]. These authors base their argument on the assumption that the Faraday rotation measure has to be sufficiently small in order not to suppress strong linear polarization [@ait00]. This assumption places limits on density and magnetic field strength and leads to very low accretion rates $\sim 10^{-8}$ to $10^{-9}M_{\odot}$ yr$^{-1}$. As noted by @ago00 and @qua00, this is inconsistent with an advection-dominated model for Sgr A$^{*}$, which assumes that the accretion rate is of order the canonical Bondi rate $\sim 10^{-4}$ to $10^{-5}M_{\odot}$ yr$^{-1}$. We point out that strong rotation measure does not in principle limit densities and magnetic fields provided that the field has a very small bias superimposed on it, which is required to define the sign of circular polarization, and that the dispersion of the projected magnetic field on the sky is not too large. This implies that ‘high’ accretion rates comparable to the Bondi rate cannot be excluded on these grounds. More work on this issue is required to fully exploit the information contained in polarization observations in order to constrain physical conditions in Sgr A$^{*}$. Radio galaxy M81$^{*}$ ---------------------- @bru01 detected circular polarization in the compact radio jet of the nearby spiral galaxy M81. Their estimated values of CP were $0.27\pm 0.06\pm 0.07\%$ at 4.8 GHz and $0.54\pm 0.06\pm 0.07\%$ at 8.4 GHz, where errors are separated into statistical and systematic terms. This suggests that the CP spectrum is flat or possibly inverted. They also detected no linear polarization at a level of $0.1\%$, indicating that the source has a high circular-to-linear polarization ratio. The spectral index indicates that this source is synchrotron thick and we can apply the same approach as for Sgr A$^{*}$. For example, for $\xi\sim 50$ and $\delta\zeta\sim 80$ we get a flat to slightly positive CP spectral index, CP/LP ratio $\sim 4.7$ and $\pi_{c}\sim 0.33\%$. IDV blazar PKS 1519-273 ----------------------- @mar00 reported strongly variable circular and linear polarization from the intraday variable blazar PKS 1519-273. The source was characterized by an inverted intensity spectrum. The spectrum of circular polarization was roughly consistent with being flat or inverted (see their Fig. 3c) and typical values of linear and circular polarizations were a few percent and $\sim 1\%$, respectively. These data can be explained, for example, in the limit of high synchrotron depth. In order to have a roughly flat/inverted CP spectrum, we need the bias to correspond approximately to $\delta$ near or below the extremum in circular polarization according to curves in Fig. 3. 
Levels of linear and circular polarization, in qualitative agreement with the observed ones, can be obtained for various combinations of $\xi$ and the dispersion of the projected random magnetic field on the sky ($2p-1$) (cf. equations \[6\]–\[8\] and see Figures 2 and 3). Alternatively, the data can be explained for synchrotron depth $\tau\sim 1$. In this case, the flatness of the synchrotron spectrum could be explained by means of a Blandford-Königl model in which the photospheric surface of constant, model-dependent, synchrotron depth moves outward along the jet with decreasing frequency. The required levels of CP and LP can then be obtained easily for the parameters considered in Fig. 5 (see middle and right panel). X-ray binary SS 433 ------------------- @fen00 detected circular polarization from the radio jet in the famous X-ray binary SS 433. The flux density spectra of circular polarization and Stokes $I$ were roughly of the form $V\propto I\propto\nu^{-1}$. However, multiple components in the source and a lack of high spatial resolution prevented determination of the origin of circular polarization and the spectrum of fractional polarization. They argued that the CP emission is likely to be produced in the innermost regions of the binary, and that the fractional CP of this region can be as high as $10\%$. In this case the ratio of the fractional circular polarization to the overall linear polarization ($\la 1\%$) would exceed unity. If circular polarization originates from regions characterized by synchrotron depth $\tau\gg 1$, then the underlying synchrotron spectrum from this region would be approximately flat/inverted [@par99]. Then, the fractional CP spectrum would scale roughly $\propto\nu^{-1}$, which corresponds to a bias $\delta$ (and mean rotativity) greater than that associated with the CP extremum. The highest circular polarization would then be about $3\%$. Note that in such a situation we would also expect an excess of CP over LP even in the innermost regions of SS 433. If CP originates from the inner parts of the binary but from regions of $\tau\la 1$, then the flatness of the synchrotron spectrum could be explained by means of a Blandford-Königl model. In such a scenario, the maximum fractional CP from the central regions could be as high as $\sim 20\%$ (see Fig 4) and the CP spectrum could have a negative slope $\beta$ ($\pi_{c}\propto\nu^{\beta}$; cf. eq. \[19\] for $\tau_{\xi}\ll 1$). Conclusions =========== We have considered the transfer of polarized synchrotron radiation in stochastic sources by means of an analytic approach and a set of numerical simulations, and have argued that Faraday conversion is the primary mechanism responsible for the circular polarization properties of compact radio sources. A crucial ingredient of our model is a small bias in the highly turbulent magnetic field which accounts for the persistence of the sign of circular polarization. This bias is direct evidence for the net magnetic flux carried by magnetically accelerated jets (e.g., @blap82 [@lic92]).\ Extremely large rates of Faraday rotation, i.e., Faraday rotation per unit synchrotron absorption depth, do not necessarily lead to depolarization provided that the mean rate of Faraday rotation across the source is relatively small, or in other words, that the turbulent magnetic field possesses a very small directional bias. Indeed, a large Faraday rotativity is required in order to explain the high ratio of circular to linear polarization observed in some sources. 
Constraints on jet composition or accretion rate, based on the requirement that the source does not become Faraday depolarized, may be circumvented under these conditions.\ Gradients in Faraday rotation across turbulent cells can lead to correlations between rotativity and Stokes $Q$ and $U$ parameters, which can result in “correlation depolarization”. Observed polarization levels require that the field have many reversals along the line of sight to avoid this effect. Statistical fluctuations of circular and linear polarizations are then likely to be dominated by changes in the mean parameters describing the plasma rather than by the stochastic behavior of the turbulent medium. Variations in the mean parameters are unlikely to change the helicity of circular polarization unless a source undergoes a sharp transition from very low to very high synchrotron depth.\ We have shown that our model is potentially applicable to a wide range of compact synchrotron sources. In particular, it naturally predicts an excess of circular over linear polarization when a source is strongly depolarized by the mean Faraday rotation and when a small amount of linear polarization is efficiently converted into circular polarization. This can explain the polarization properties of the Galactic Center and M81$^{*}$. We thank Roger Blandford, Avery Broderick and Marek Sikora for insightful discussions. This work was supported in part by NSF grant AST-9876887. Transfer of polarized radiation =============================== The transfer equation of polarized radiation reads [@zhe74; @jon88] [^2]: $$\left(\begin{array}{cccc} \left(\frac{d}{d\tau}+1\right) & \zeta_{q}\cos 2\phi & -\zeta_{q}\sin 2\phi &\zeta_{v}\\ \zeta_{q}\cos 2\phi & \left(\frac{d}{d\tau}+1\right) & \zeta_{v}^{*} &\zeta_{q}^{*}\sin 2\phi\\ -\zeta_{q}\sin 2\phi & -\zeta_{v}^{*}&\left(\frac{d}{d\tau}+1\right) &\zeta_{q}^{*}\cos 2\phi\\ \zeta_{v} & -\zeta_{q}^{*}\sin 2\phi & -\zeta_{q}^{*}\cos 2\phi&\left(\frac{d}{d\tau}+1\right) \end{array}\right) \left( \begin{array}{c} I\\ Q\\ U\\ V \end{array} \right) = \left( \begin{array}{c} 1\\ \epsilon_{q}\cos 2\phi\\ -\epsilon_{q}\sin 2\phi\\ \epsilon_{v} \end{array} \right) J$$ where $I$, $Q$, $U$ and $V$ are the usual Stokes parameters, $\tau\propto (\nu_{B}/\nu)^{\alpha +5/2}\nu_{B}^{-1}$ is the synchrotron optical depth, $J$ is the source function, $\phi$ is the azimuthal projection angle of the magnetic field on the sky and the coefficients of emissivity ($\epsilon_{q}$, $\epsilon_{v}$), absorptivity ($\zeta_{q}$, $\zeta_{v}$), convertibility ($\zeta_{q}^{*}$) and rotativity ($\zeta_{v}^{*}$) are given by: $$\begin{aligned} \epsilon_{q} & = & \epsilon_{\alpha}^{q}\\ \epsilon_{v} & = & 0\\ \zeta_{q} & = & \zeta_{\alpha}^{q}\\ \zeta_{v} & = & 0\\ \zeta_{q}^{*} & = & -\left(1+\frac{\zeta_{q}^{*(c)}}{\zeta_{q}^{*(r)}}\right) \zeta_{\alpha}^{*q}\left(\frac{\nu_{B}}{\nu}\right)^{\alpha -1/2}\left[1-\left(\frac{\nu_{i}}{\nu}\right)^{\alpha -1/2}\right]\left(\alpha -\frac{1}{2}\right)^{-1},\hspace{1cm}\alpha>\frac{1}{2}\\ \zeta_{v}^{*} & = &\left(1+\frac{\zeta_{v}^{*(c)}}{\zeta_{v}^{*(r)}}\right) \zeta_{\alpha}^{*v}\left(\frac{\nu}{\nu_{i}}\right)^{\alpha +1/2}\frac{\ln\gamma_{i}}{\gamma_{i}}\left(\frac{n_{r}^{-}-n_{r}^{+}} {n_{r}^{-}+n_{r}^{+}}\right)\cot\theta\end{aligned}$$ where $\zeta_{v,q}^{*(c)}/\zeta_{v,q}^{*(r)}$ are the ratios of convertibilities $(q)$ and rotativities $(v)$ of the cold $(c)$ and relativistic $(r)$ plasmas given by: $$\begin{aligned}
\frac{\zeta_{q}^{*(c)}}{\zeta_{q}^{*(r)}}&=&\frac{1}{2\alpha}\;\frac{\alpha -1/2}{1-(\nu/\nu_{i})^{\alpha -1/2}}\;\frac{1}{\gamma_{i}}\;\frac{n_{c}^{-}+n_{c}^{+}}{n_{r}^{-}+n_{r}^{+}}\\ \frac{\zeta_{v}^{*(c)}}{\zeta_{v}^{*(r)}}&=&\frac{1}{2\alpha}\;\frac{\alpha+ 1}{\alpha +3/2}\;\frac{\gamma_{i}^{2}}{\ln\gamma_{i}}\;\frac{n_{c}^{-}-n_{c}^{+}}{n_{r}^{-}-n_{r}^{+}}.\end{aligned}$$ The transfer coefficients have been generalized to include contributions from both electron and positron plasmas but they assume an isotropic pitch-angle distribution of the radiating particles. We also neglected circular emissivity $\epsilon_{v}$ and absorptivity $\zeta_{v}$ [@jon77a]. In the above equations, $n_{c,r}^{+,-}$ are number densities of cold/relativistic electrons $(-)$ or positrons $(+)$, $\nu_{B}=eB\sin\theta/2\pi m_{e}c$ is the Larmor frequency, $\gamma_{i}$ is the low-energy cut-off Lorentz factor of relativistic particles, $\nu_{i}=\gamma_{i}^{2}\nu_{B}$ is the frequency corresponding to radiating particles of energy $\sim\gamma_{i}m_{e}c^{2}$, $\alpha$ is the synchrotron-thin spectral index which also defines the slope of the relativistic particle energy distribution $n(\gamma)\propto\gamma^{-2\alpha -1}$, $\theta$ is the angle between the line of sight and the direction of the magnetic field and $\epsilon_{\alpha}^{q}$, $\zeta_{\alpha}^{q}$, $\zeta_{\alpha}^{*q}$ and $\zeta_{\alpha}^{*v}$ are the proportionality coefficients which are tabulated in @jon77a.\ We integrated the radiative transfer equations using the Cash-Karp embedded Runge-Kutta fifth-order method with an adaptive stepsize control [@pre92]. Computations were performed by integrating transfer equations in a piece-wise homogeneous medium. The physical conditions in each cell, i.e., orientation of the magnetic field vector, were chosen using the random number generation method of L’Ecuyer in the implementation provided by @pre92. This method is particularly suited for our purposes as it generates long-period sequences of random numbers and thus prevents any spurious correlations. We assumed a constant strength of the variable $\mathbf{B}$-field component but allowed its solid angle distribution to be uniform within $\theta\in (0^{\rm o}, 180^{\rm o})$ and $\phi\in (-\phi_{o},\phi_{o})$, where $\langle\cos2\phi\rangle_{(-\phi_{o},\phi_{o})}=2p-1$ is the average of $\cos 2\phi$ in the interval $(-\phi_{o},\phi_{o})$. We also superimposed a weak constant and unidirectional component $|\mathbf{B}_{u}|=\delta |\mathbf{B}|$ on the variable magnetic field. Agol, E. 2000, , 538, L121 Aitken, D.K., Greaves, J., Chrysostomou, A., Jennes, T., Holland, W., Hough, J.H., Pierce-Price, D, and Richer J. 2000, , 534, L173 Begelman, M.C., Blandford R.D., and Rees, M.J. 1984, Rev. Mod. Phys., 56, 255 Benford, G., and Tzach, D. 2000, , 317, 497 Björnsson, C.-I. 1990, , 242, 158 Blandford, R.D., and Königl, A. 1979, , 232, 34 Blandford, R.D., and Payne, D.G. 1982, , 199, 883 Bower, G.C. 2000, GCNEWS, 11, 4, http://www.mpifr-bonn.mpg.de/gcnews/index.shtml Bower, G.C., Falcke, H., and Backer, D.C. 1999, , 523, L29 Bower, G.C., Falcke, H., Sault, H., and Backer, D.C. 2000, in preparation Brunthaler, A., Bower, G.C., Falcke, H., and Mellon, R.R. 2001, ApJL, in press, astro-ph/0109170 Dieckmann, M.E., Chapman, S.C., McClements K.G, Dendy, R.O., and Drury, L. O’C. 2000, A&A, 356, 377 Eckart, A., and Genzel, R. 1996, , 383, 415 Eckart, A., and Genzel, R. 1997, , 284, 776 Fender, R., Rayner D., Norris R., Sault R.J., and Pooley, G. 
2000, , 530, L29 Fuki, A.A., Kravtsov, Yu.A., and Naida, O.N. 1998, [*Geometrical Optics of Weakly Anisotropic Media*]{} (Gordon & Breach) Ghez, A., Klein, B.L., Morris, M., and Becklin, E.E. 1998, , 509, 678 Ginzburg, V.L. 1961, [*The Propagation of Electromagnetic Waves in Plasma*]{} (New York: Gordon & Breach) Homan, D.C., and Wardle, J.F.C. 1999, , 118, 1942 Homan, D.C., Attridge, J.M., and Wardle J.F.C. 2001, , 556, 113 Hughes, P.A., Aller H.D., and Aller M.F. 1989, , 341, 54 Jones, T.W. 1988, , 332, 678 Jones, T.W., and O’Dell S.L. 1977, , 214, 522 Jones, T.W., and O’Dell S.L. 1977, , 215, 236 Jones, T.W., Rudnick, L., Aller, H.D., Aller, M.F., Hodge, P.E., and Fiedler, R.L. 1985, , 290, 627 Kennet, M., Melrose, D. 1998, PASA, 15, 211 Komesaroff, M.M., Roberts, J.A., Milne, D.K., Rayner, P.T., and Cooke, D.J. 1984, , 208, 409 Laing, R.A. 1980, , 193, 493 Laing, R.A. 1981, , 248, 87 Legg, M.P.C., and Westfold, K.C. 1968, , 154, 499 Li Z-Y., Chiueh, T., and Begelman M.C. 1992, , 394, 459 Macquart, J.-P., and Melrose, D.B. 2000, , 545, 798 Macquart, J.-P., Kedziora-Chudczer, L., Rayner, D.P., and Jauncey, D.L. 2000, , 538, 623 Melia, F., and Falcke, H. 2001, ARA&A, 39, 309 Melrose D.B., and McPredhan, R.C. 1991, [*Electromagnetic processes in dispersive media : A treatment based on the dielectric tensor*]{}, Cambridge \[England\] ; New York : Cambridge University Press Marscher, A.P., and Gear, W.K. 1985, , 298, 114 Noerdlinger, P.D. 1978, Phy. Rev. Lett, 41, 135 Pacholczyk, A.G., and Swihart, T.L. 1975, , 197, 125 Paragi, Z., Vermeulen, R.C., Fejes, I., Schilizzi, R.T., Spencer, R.E., and Stirling, A.M. 1999, A&A, 348, 910 Press, W.H., Teukolsky, S.A., Vetterling, W.T., and Flannery, B.P. 1992, [*Numerical Recipes in C: The art of scientific computing*]{}, Cambridge \[England\] ; Cambridge University Press Quataert, E., and Gruzinov, A. 2000, , 545, 842 Rayner, D.P., Norris, R.P., and Sault, R.J. 2000, , 319, 484 Readhead, A.C.S. 1994, , 426, 51 Rusk, R.E., and Seaquist, E.R. 1985, J.R.A.S. Canada, 79, 246 Sault, R.J., and Macquart, J.-P. 1999, , 526, L85 Sazonov, V.N. 1969, Sov. Phys. JETP, 29, 578 Shimada, N., and Hoshino, M. 2000, , 543, L67 Sikora M., and Madejski G. 2000, , 534, 109 Thompson, C., Blandford, R.D., Evans, C.R., and Phinney, E.S. 1994, , 422, 304 Wardle, J.F.C., and Roberts D.H. 1994, in [*Compact Extragalactic Radio Sources*]{} (eds Zensus, J.A., and Kellerman, K.I.), 217, (Workshop Proc., NRAO, Socorro, NM) Wardle, J.F.C., Homan, D.C., Ojha, R., and Roberts, D.H. 1998, , 395, 457 Weiler, K.W., and de Pater, I. 1983, , 52, 293 Zheleznyakov, V.V., Suvorov, E.V., and Shaposhnikov, V.E. 1974, [*Soviet Astr.*]{}-AJ, 18, 142 [^1]: We further comment on the choice of parameters and consider more general situations while discussing specific observational cases in Sections 6.1 and 6.2 [^2]: Our formulae are free from typographic errors found in @jon88.
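As an illustration of the integration scheme described in the Appendix, the following Python sketch propagates a single Stokes vector $(I,Q,U,V)$ through a sequence of homogeneous cells with randomly oriented field. It is not the code used for the simulations reported above: SciPy's adaptive Dormand-Prince RK45 integrator stands in for the Cash-Karp Runge-Kutta scheme of @pre92, the coefficient magnitudes only loosely follow the estimates for Sgr A$^{*}$ quoted earlier, and the random sign of the rotativity with a small bias $\delta$ is a crude stand-in for the weak unidirectional component superimposed on the turbulent field.

```python
# Sketch of piecewise-homogeneous transfer of the Stokes vector S = (I, Q, U, V).
# Not the authors' code: SciPy's adaptive RK45 (Dormand-Prince) replaces the
# Cash-Karp Runge-Kutta integrator, and all coefficients are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)

def transfer_matrix(phi, zeta_q, zeta_v, zeta_q_star, zeta_v_star):
    """4x4 matrix of absorption, conversion and rotation for projection angle phi."""
    c, s = np.cos(2.0 * phi), np.sin(2.0 * phi)
    return np.array([
        [1.0,          zeta_q * c,       -zeta_q * s,      zeta_v          ],
        [zeta_q * c,   1.0,               zeta_v_star,     zeta_q_star * s ],
        [-zeta_q * s, -zeta_v_star,       1.0,             zeta_q_star * c ],
        [zeta_v,      -zeta_q_star * s,  -zeta_q_star * c, 1.0             ]])

def propagate(n_cells=200, dtau=0.05, J=1.0, eps_q=0.7, eps_v=0.0,
              zeta_q=0.6, zeta_v=0.0, zeta_q_star=-3.0, zeta_v_star=275.0,
              delta=1e-3, phi_max=0.5 * np.pi):
    """Integrate through n_cells cells; each cell gets a random projected field angle
    phi and a random sign of the rotativity whose mean is delta (a crude stand-in
    for a weak unidirectional field superimposed on the turbulent component)."""
    S = np.zeros(4)
    for _ in range(n_cells):
        phi = rng.uniform(-phi_max, phi_max)
        sign = 1.0 if rng.random() < 0.5 * (1.0 + delta) else -1.0
        K = transfer_matrix(phi, zeta_q, zeta_v, zeta_q_star, sign * zeta_v_star)
        e = J * np.array([1.0, eps_q * np.cos(2.0 * phi),
                          -eps_q * np.sin(2.0 * phi), eps_v])
        sol = solve_ivp(lambda tau, y: e - K @ y, (0.0, dtau), S,
                        method="RK45", rtol=1e-8, atol=1e-10)
        S = sol.y[:, -1]
    return S

I, Q, U, V = propagate()
print("pi_L = %.4f, pi_C = %.4f" % (np.hypot(Q, U) / I, V / I))
```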
--- abstract: | The resonant electric quadrupole amplitude in the transition $\gamma N\leftrightarrow\Delta(1232)$ is of great interest for the understanding of baryon structure. Various dynamical models have been developed to extract it from the corresponding photoproduction multipole of pions on nucleons. It is shown that once such a model is specified, a whole class of unitarily equivalent models can be constructed, all of them providing exactly the same fit to the experimental data. However, they may predict quite different resonant amplitudes. Therefore, the extraction of the E2/M1($\gamma N\leftrightarrow\Delta$) ratio (bare or dressed) which is based on a dynamical model using a largely phenomenological $\pi N$ interaction is not unique. address: 'Institut für Kernphysik, Johannes Gutenberg-Universität, D-55099 Mainz, Germany' author: - 'P. Wilhelm, Th. Wilbois, and H. Arenhövel' title: 'Unitary ambiguity in the extraction of the E2/M1 ratio for the $\gamma N\leftrightarrow\Delta$ transition[^1]' --- The ratio $R_{EM}$ of the electric quadrupole to the magnetic dipole amplitude of the $\gamma N \leftrightarrow\Delta$(1232) transition is an important quantity for our understanding of hadronic structure. It provides a powerful test for hadron models since it indicates a deviation from spherical symmetry. For example, in constituent quark models, it is directly related to the tensor interaction between quarks. Consequently, there is considerable experimental effort in measuring the corresponding $E_{1+}$ and $M_{1+}$ isospin $3/2$ multipole amplitudes for photoproduction of pions on the nucleon [@legs; @mainz]. However, all realistic pion photoproduction models show that both multipoles, in particular $E_{1+}^{3/2}$, contain nonnegligible nonresonant background contributions. Unfortunately, their presence complicates the isolation of the resonant parts. In the literature, there are basically two different approaches in order to extract the $\gamma N \leftrightarrow\Delta$ transition amplitudes. The first one is the Effective-Lagrangian-Approach (ELA) adopted by Olsson and Osypowsky [@OlO75] and also used later on by Davidson, Mukhopadhyay and Wittman [@DaM91]. In this approach, the $\pi N$ scattering is not treated dynamically and thus unitarity can be implemented only phenomenologically using different unitarization methods (K matrix, Olsson or Noelle prescription) which introduces some model dependence. However, in view of the phenomenological character of these methods the deeper origin of this model dependence remains unclear. In the second approach, the $\pi N$ interaction is treated dynamically and thus unitarity is respected automatically. Various models of this type have been suggested in the past, e.g., Tanabe and Ohta [@TaO85], Yang [@Yan85], and Nozawa, Blankleider and Lee [@NoB90]. However, due to our limited understanding of the dynamics of the $\pi N$ system, all these models are to a large extent phenomenological. Nevertheless, the necessity of such a dynamical treatment has been stressed again by Bernstein, Nozawa and Moinester [@BeN93]. Thereby, it is implicitly assumed that the ongoing improvement of the experimental data base (for both $\pi N\rightarrow\pi N$ and $\gamma N\rightarrow\pi N$) will finally allow to favor one of the models and thus will lead to a unique $R_{EM}$. In this paper we would like to point out an inherent unitary ambiguity in the latter approach which, to our knowledge, has never been discussed before. 
Qualitatively, one may understand this unitary freedom in the following way. First of all, the separation of a resonant $\Delta$ contribution corresponds to the introduction of a $\Delta$ component into the $\pi N$ scattering state which vanishes in the asymptotic region. The explicit form of a wave function depends on the chosen representation, which can be changed by means of unitary transformations. As a consequence, the probability of a certain wave function component is not an observable, because it depends on the representation. Classical examples are the deuteron $D$ wave or isobaric components in nuclei [@friar; @amado]. Introducing a phenomenological $\pi N$ interaction model always implies the choice of a specific representation. However, its relation to other representations and in particular its relationship to hadron models remains unknown. Thus, it is not clear in principle, how the extracted resonant multipoles, which are representation dependent, can be related to the $\gamma N \leftrightarrow \Delta$ transition matrix elements calculated within, e.g., a nonrelativistic quark model. We will illustrate our arguments quantitatively by means of a simple model [@WiW96] whose main features are taken from Ref. [@TaO85]. It assumes as Hilbert space $\Delta \oplus \pi N \oplus \gamma N$ with corresponding projectors $P_\Delta$, $P_{\pi N}$, $P_{\gamma N}$, and a Hamiltonian of the form $$h = t(m_\Delta ) + v_{\pi\pi}^B + v_\pi + v_\pi^\dagger + v_{\pi\gamma}^B +v_\gamma \, , \label{eqn:h}$$ with the background $\pi N$ interaction $v_{\pi\pi}^B=P_{\pi N} (h-t) P_{\pi N}$, the $\pi N\Delta$ vertex $v_\pi=P_\Delta h P_{\pi N}$, the nonresonant $\gamma N\rightarrow \pi N$ driving term $v_{\pi\gamma}^B=P_{\pi N} h P_{\gamma N}$, and the $\gamma N\Delta$ vertex $v_\gamma =P_\Delta h P_{\gamma N}$. The kinetic energy $t$ in the $\Delta$ sector depends on the bare resonance mass $m_\Delta$ which is a model parameter. The pure hadronic sector ($v_{\pi\pi}^B$, $v_\pi$, $m_\Delta$) of our model is identical to model B of [@TaO85] and thus yields a good fit of the $\pi N$ scattering phase shift in the $P_{33}$ channel. The electromagnetic background $v_{\pi\gamma}^B$ is modeled differently in order to guarantee gauge invariance (for details see [@WiW96]). For such a dynamical model, the general structure of one of the total pion production multipoles $M$ ($M_{1+}^{3/2}$ or $E_{1+}^{3/2}$) is shown diagrammatically in Fig. \[fig:tgpi\]. It consists of three parts, namely in the notation of [@BeN93], the background $M_B$, the bare resonant multipole $M_\Delta$, and the vertex renormalization part $M_{VR}$. The sum $M_R = M_\Delta + M_{VR}$ is referred to as the dressed resonant multipole. Formally, one has $$M=\langle \pi N^{(-)}|v_\gamma+v_{\pi\gamma}^B|\gamma N\rangle_M ,$$ with the decomposition $M_\Delta = \langle \pi N^{(-)}|v_\gamma|\gamma N\rangle_M$ and $M_B+M_{VR} = \langle \pi N^{(-)}|v_{\pi\gamma}^B|\gamma N\rangle_M$, where $|\pi N^{(-)}\rangle$ denotes the $\pi N$ scattering state. The index $M$ on the r.h.s. indicates the angular momentum configuration for the magnetic dipole or the electric quadrupole absorption of the photon. Any unitary transformation can be written as $U (\alpha) = e^{i\alpha\chi}$, with a generator $\chi = \chi^\dagger$ and an arbitrary real number $\alpha$. Clearly, only generators which are nondiagonal with respect to $\Delta \oplus \pi N$ have to be considered here. 
Keeping in mind that $\chi$ has to be odd under time reversal, a prototype is given by $$\label{eqn:chi} \chi = i \,\left[ v_\pi + v_\pi^\dagger ,v_{\pi\pi}^B \right] \, .$$ It obviously mixes resonant and background $\pi N$ interactions and leaves the $\gamma N$ sector unchanged. Assuming the background interaction to be of separable form, i.e., $v_{\pi \pi}^B = \lambda |b\rangle\langle b|$ with $\langle b | b\rangle = 1$, as was actually done in [@TaO85; @Yan85; @NoB90], $U(\alpha)$ can be evaluated without a perturbative expansion. Even though the total pion production multipole $M$ remains invariant under $U(\alpha)$, its decomposition changes according to $$\begin{aligned} M_\Delta(\alpha) &=& \langle \pi N^{(-)}|U(-\alpha)P_\Delta U(\alpha) (v_\gamma+v_{\pi\gamma}^B)|\gamma N\rangle_M, \\ M_B(\alpha)+M_{VR}(\alpha) &=& \langle \pi N^{(-)}|U(-\alpha)P_{\pi N} U(\alpha)(v_\gamma+v_{\pi\gamma}^B)|\gamma N\rangle_M.\end{aligned}$$ Note that by construction $U(\alpha)$ does not modify the initial state $|\gamma N\rangle$. Actually, it would be of interest to determine the representation dependence of both the bare and the dressed resonant multipole because, as pointed out in [@BeN93], it is intuitive to compare predictions of nucleon models without and with a pion cloud to the bare and dressed resonant multipoles, respectively. In this paper we focus on the bare multipoles. One finds $$\label{eqn:malpha} M_\Delta (\alpha) = M_\Delta (0) \left[ 1- \frac{1}{2} \left( 1 - r g_M \right) \left( 1 - \cos 2\tilde \alpha \right) + \frac{1}{2} \left( r + g_M \right) \sin 2\tilde \alpha \right]$$ with $g_M = \langle b | v_{\pi\gamma}^B| \gamma N \rangle _M / \langle \Delta | v_\gamma | \gamma N \rangle _M $ and $r= \langle \pi N ^{(-)} | b\rangle / \langle \pi N ^{(-)} | \Delta \rangle$, where $|\Delta\rangle$ is the bare $\Delta$ state, i.e., $P_\Delta = |\Delta \rangle \langle \Delta |$. Moreover, we have introduced a dimensionless parameter ${\tilde \alpha}$ which is proportional to $\alpha$ (for details see [@WiW96]). Note that $M_\Delta(\alpha)$ still carries the $P_{33}$ phase shift. It is easily verified that, irrespective of the model quantities $r$ and $g_M$, Eq. (\[eqn:malpha\]) implies that $M_\Delta (\alpha )$ always goes through zero for a certain value of $\alpha$. Consequently, the ratio of the bare multipoles $R_{EM}^\Delta = E_{1+,\Delta }^{3/2}/ M_{1+, \Delta }^{3/2}$ as a function of $\alpha $ is in principle [*unbounded*]{}. The representation dependence of both bare multipoles is plotted in Fig. \[fig:multipol\]. It is already sufficient to consider only transformations close to the identity. Even then, the bare amplitudes change substantially, as can be seen by comparing the dotted ($\tilde \alpha = 10^\circ$) and dash-dotted curves ($\tilde \alpha = -10^\circ$) with the dashed one ($\tilde\alpha=0$). For positive $\tilde \alpha$, the bare electric multipole also exhibits a more pronounced resonance behavior. For negative $\tilde \alpha$, the bare multipoles, in particular $E_{1+,\Delta }^{3/2}$, come closer to those of Nozawa [*et al. *]{} (see Fig. 2 of [@BeN93]). For completeness we note that the predicted total multipoles are in satisfactory agreement with experimental results. Even though we have demonstrated the representation dependence for the bare multipoles, a similar though weaker dependence occurs also for the dressed multipoles. The ratio $R_{EM}^\Delta $ is plotted for ${\tilde \alpha} = 0^\circ , \pm 5^\circ , \pm 10^\circ$ in Fig. \[fig:rem\].
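Equation (\[eqn:malpha\]) can be explored directly. The short sketch below is purely illustrative: the values chosen for $r$ and $g_M$ are placeholders (in the model they are complex, energy-dependent matrix elements), so the numbers only reproduce the qualitative behaviour, namely that $M_\Delta(\alpha)$ passes through zero at some $\tilde\alpha$ and that moderate $|\tilde\alpha|$ already changes the bare multipole appreciably.

```python
# Numerical illustration of Eq. (eqn:malpha): representation dependence of a bare
# multipole.  The values of r and g_M are illustrative placeholders; in the actual
# model they are complex, energy-dependent quantities from the pi-N dynamics, so
# only the qualitative behaviour is meaningful here.
import numpy as np

def M_ratio(alpha_tilde, r, g_M):
    """M_Delta(alpha)/M_Delta(0) according to Eq. (eqn:malpha); alpha_tilde in radians."""
    return (1.0
            - 0.5 * (1.0 - r * g_M) * (1.0 - np.cos(2.0 * alpha_tilde))
            + 0.5 * (r + g_M) * np.sin(2.0 * alpha_tilde))

r, g_M = 0.4, -0.3                                  # illustrative only
alphas = np.deg2rad(np.linspace(-90.0, 90.0, 3601))
ratios = M_ratio(alphas, r, g_M)

# M_Delta(alpha) passes through zero for some alpha_tilde, so the bare ratio
# R_EM^Delta is unbounded as a function of the representation
i0 = np.argmin(np.abs(ratios))
print("zero crossing near alpha_tilde = %.2f deg" % np.rad2deg(alphas[i0]))
for deg in (-10.0, -5.0, 0.0, 5.0, 10.0):
    print("alpha_tilde = %+6.1f deg :  M_Delta(alpha)/M_Delta(0) = %+.3f"
          % (deg, M_ratio(np.deg2rad(deg), r, g_M)))
```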
At resonance, the ratio varies strongly between $-1.5$% and $-5$%. The ratio predicted by our original model is $-3.1\,$% [@gauge], which is identical to the result of [@NoB90]. The transformed ratios exhibit a slight energy dependence whereas the original one is energy independent, which is just a consequence of the simple ansatz for $v_\gamma$ in [@TaO85] and does not have a deeper physical origin. Moreover, the generated energy dependence is weak compared to the dependence on ${\tilde\alpha}$. Now it remains to check whether the transformed Hamiltonian $h(\alpha)=U(\alpha)\,h\,U(-\alpha)$ corresponds to a “physically reasonable” interaction model. Therefore, we write it in the following form $$\label{eqn:halpha} h(\alpha) = t(m_\Delta(\alpha)) + v_{\pi\pi}^B(\alpha) + v_\pi(\alpha) + v_\pi^\dagger(\alpha) + v_{\pi\gamma}^B(\alpha) + v_\gamma(\alpha)\, ,$$ where $m_\Delta({ \alpha}) = \langle \Delta | h({\alpha})| \Delta \rangle$. The interaction pieces are defined completely analogously to Eq. (\[eqn:h\]), e.g., $v_{\pi\pi}^B ({\alpha}) = P_{\pi N}\left( h({\alpha})-t\right) P_{\pi N}$. The explicit ${\alpha}$-dependence of the various terms is rather lengthy and will be reported elsewhere [@WiW96]. Since one deals with semiphenomenological interactions here, the only criterion for whether Eq. (\[eqn:halpha\]) is “physically reasonable” will be the shape of the transformed form factors. We will demonstrate this by considering, for example, the $\pi N\Delta$ form factor. Suppressing the isospin structure, it reads $$\label{eqn:vpi} \langle\Delta| v_\pi ( \alpha) | \vec q\, \rangle = i \vec S \!\cdot\! \vec q \, v_{\pi}( q;\tilde \alpha ) \,,$$ where $|\vec q\,\rangle $ denotes a plane wave $\pi N$ state with relative momentum $\vec q$ and $\vec S$ the $N\rightarrow \Delta$ transition spin operator. In Fig. \[fig:vpi\], we have plotted $q\, v_\pi (q;{\tilde \alpha} )$ for various values of $\tilde \alpha$. Apparently, none of them can be ruled out. Here, we just mention that the modifications of the remaining parts of the transformed Hamiltonian do not change this conclusion [@WiW96]. Incidentally, the high momentum components become more and more suppressed when going from $\tilde \alpha = +10^\circ$ to $-10^\circ$ with a corresponding increase of $R_{EM}^\Delta$ from $-5$% to $-1.5$%. In summary, we have demonstrated that any extraction of resonant (bare or dressed) and nonresonant contributions from the experimental $E_{1+}^{3/2}$ and $M_{1+}^{3/2}$ multipoles for photoproduction of pions from nucleons which is based on a dynamical treatment of a phenomenological $\pi N$ interaction model suffers from inherent ambiguities. More precisely, once a phenomenological model is specified, a whole class of completely equivalent models can be constructed by means of unitary transformations, all of them providing [*exactly*]{} the same fit to the experimental data while predicting different ratios $R_{EM}^\Delta$. Incidentally, the K-matrix residues in the $\Delta$ region extracted by Davidson and Mukhopadhyay [@dmk] are also not affected by the unitary freedom. Thus we have to conclude that even with a perfectly accurate data base one will not be able to discriminate between any of these models, which are actually merely different representations. Moreover, those representations which are sufficiently close to the original one are not at all less “physically reasonable” because they cannot be excluded by arguments based on physical intuition, say, what form factors should look like.
However, even those models close to the original one predict significantly different resonant amplitudes. With respect to our example, none of the different representations, say for $|{\tilde \alpha}|\! \leq \!10^\circ$, can be favored, although the resonant multipoles differ considerably, in particular the ratio $R_{EM}^\Delta$ varies from $-1.5$% to $-5$%. But one has to keep in mind, that this variation may be even larger if one considers other choices for $\chi$ than the one in Eq. (\[eqn:chi\]). However, $\chi$ should not contain any quantity which is completely unrelated to the original Hamiltonian. With respect to the model dependence in the ELA, mentioned in the introduction, it remains to be clarified in the future whether it can be traced back to unitary transformations relating the different unitarization procedures. The arguments presented here may also affect the separation of resonant and nonresonant amplitudes in other reactions like, e.g., the particularly interesting $S_{11}$(1535) in the photoproduction of $\eta$ mesons on nucleons. Notwithstanding this unitary ambiguity in the extraction of resonance properties, we would like to stress the urgent need for more precise data on pion photoproduction in the $\Delta$ region providing the necessary basis for an accurate multipole analysis. However, the challenge is on the theoretical side because for a clean test of any microscopic hadron model, the amplitude for the complete process $\gamma N \rightarrow \pi N$ including background contributions rather than the $\gamma N \leftrightarrow \Delta$ transition alone, has to be calculated dynamically within the same model. The LEGS Collaboration, G. Blanpied [*et al.*]{}, Phys. Rev. Lett.  [**69**]{}, 1880 (1992). R. Beck [*et al.*]{}, Mainz experiment, A2 Collaboration. M. G. Olsson, E. T. Osypowsky, Nucl. Phys. [**B87**]{}, 399 (1975); Phys. Rev. D [**17**]{}, 174 (1978). R. Davidson, N. C. Mukhopadhyay, and R. Wittman, Phys. Rev. Lett.  [**56**]{}, 804 (1986); Phys. Rev. D [**43**]{}, 71 (1991). H. Tanabe and K. Ohta, Phys. Rev. C [**31**]{}, 1876 (1985). S. N. Yang, J. Phys. G [**11**]{}, L205 (1985). S. Nozawa, B. Blankleider, and T.-S. H. Lee, Nucl. Phys. [**A513**]{}, 459 (1990). A. M. Bernstein, S. Nozawa, and M. A. Moinester, Phys. Rev. C [**47**]{}, 1274 (1993). J. L. Friar, Phys. Rev. C [**20**]{}, 325 (1979). R. D. Amado, Phys. Rev. C [**19**]{}, 1473 (1979). P. Wilhelm, Th. Wilbois, and H. Arenhövel, in preparation. This value differs considerably from $R_{EM}^\Delta = +3.7\,\%$ found in [@TaO85]. The deviation stems from the different ansatz for the off-shell behavior of $v_{\pi\gamma}^B$. Using the original ansatz of [@TaO85], we can reproduce their value. Solid triangles: F. Berends and A. Donnachie, Nucl. Phys. [**B84**]{}, 342 (1975); open triangles: R. A. Arndt [*et al.*]{}, Phys. Rev. D [**42**]{}, 1853 (1990), solution SM95 from SAID. R. Davidson and N. C. Mukhopadhyay, Phys. Rev. D [**42**]{}, 20 (1990). [^1]: Supported by the Deutsche Forschungsgemeinschaft (SFB 201)
--- abstract: 'Modelling spatio-temporal processes has become an important issue in current research. Since Gaussian processes are essentially determined by their second order structure, broad classes of covariance functions are of interest. Here, a new class is described that merges and generalizes various models presented in the literature, in particular models in Gneiting (*J. Amer. Statist. Assoc.* **97** (2002) 590–600) and Stein (Nonstationary spatial covariance functions (2005) Univ. Chicago). Furthermore, new models and a multivariate extension are introduced.' address: 'Institut für Mathematische Stochastik & Zentrum für Statistik, Universität Göttingen, Goldschmidtstr. 7, D-37077 Göttingen, Germany. ' author: - title: Some covariance models based on normal scale mixtures --- Introduction ============ Spatio-temporal modelling is an important task in many disciplines of the natural sciences, geosciences, and engineering. Hence, the development of models for spatio-temporal correlation structure is of particular interest. The lively activity in this field of research has become apparent through various recent reviews of known classes of spatio-temporal covariance functions (@GGG07, @MPG08, @ma08). To categorise these classes, different aspects have been considered. @GGG07 distinguish between the properties of covariance functions, such as motion invariance, separability, full symmetry, or conformity with Taylor’s hypothesis. Another classification is based on the construction principles (@ma08), such as spectral methods [@stein05], multiplicative mixture models [@ma02], additive models [@ma05c], turning bands upgrade [@KCHS04], derivatives and integrals [@ma05b], and Gneiting’s ([-@gneitingnonseparable]) approach, see also @stein05c and @ma03. Surprisingly, some rather different approaches to the construction of spatial and spatio-temporal covariance models can be subsumed in a unique class of normal scale mixtures, which is a generalization of Gneiting’s ([-@gneitingnonseparable]) class. As its construction is based on cross covariance functions, Section \[sec:background\] illustrates some of the properties of cross covariance functions and cross variograms. In Section \[sec:gneiting\], Gneiting’s class itself is generalized. Section \[sec:random\] introduces two new classes of spatio-temporal models. Section \[sec:multivariate\] presents an extension to multivariate models. In addition to the two-dimensional realisations illustrated below, three-dimensional realisations are available in the form of films at the following website: [www.stochastik.math.uni-goettingen.de/data/ bernoulli10/](http://www.stochastik.math.uni-goettingen.de/data/bernoulli10/). Background: Cross covariance functions {#sec:background} ====================================== Here we introduce some basic notions and properties of cross covariance functions and cross variograms. See @wackernagel for a geostatistical overview and @reisertburkhardt07 for some of the construction principles of multivariate cross covariance functions in a general framework. Let $Z(x)=(Z_1(x), \ldots, Z_m(x))$, $x \in{\mathbb{R}}^d$, be a zero mean, second order $m$-variate, complex valued random field in ${\mathbb{R}}^d$, that is, ${\operatorname{Var}}Z_j(x)$ exists and ${\mathbb{E}}Z_j(x) =0$ for all $x\in{\mathbb{R}}^d$ and $j=1,\ldots, m$. 
Then, the cross covariance function $C\dvtx{\mathbb{R}}^{2d}\rightarrow{\mathbb{C}}^{m\times m}$ is defined by $$C_{jk}(x, y) = {\operatorname{Cov}}(Z_j(x),Z_k(y)),\qquad x,y\in{\mathbb{R}}^d, j,k=1,\ldots,m .$$ Clearly $C(x,y) = \overline{C^\top(y,x)}$, but $C(x,y)= \overline{C^\top(x,y)}$ is not valid in general. A function $C\dvtx{\mathbb{R}}^{2d} \rightarrow{\mathbb{C}}^{m\times m}$ with $C(x,y) = \overline{C^\top(y,x)}$, $x,y\in{\mathbb{R}}^d$, is called positive definite if for all $n\in{\mathbb{N}}$, $x_1,\ldots, x_n\in{\mathbb{R}}^d$ and $a_1,\ldots,a_n\in{\mathbb{C}}^m$, $$\label{eq:posdef} \sum_{p=1}^n\sum_{q=1}^n a_p^\top C(x_p, x_q) \bar a_q \ge0.$$ It is called strictly positive definite if strict inequality holds in (\[eq:posdef\]) for $(a_1,\ldots,a_n)\not= 0$ and pairwise distinct $x_1,\ldots, x_n$. Accordingly, we name a Hermitian matrix $M\in{\mathbb{C}}^{m\times m}$ positive definite, if $v^\top M \bar v\ge0$ for all $v\in{\mathbb{C}}^m$, and strictly positive definite if strict inequality holds for $v\not=0$. As in the univariate case, we derive from Kolmogorov’s existence theorem that a function $C\dvtx{\mathbb{R}}^{2d} \rightarrow{\mathbb{C}}^{m\times m}$ with $C(x,y) = \overline {C^\top(y,x)}$ is a positive definite function if and only if a (Gaussian) random field exists with $C$ as cross covariance function. Further, a function $C \dvtx{\mathbb{R}}^{2d} \rightarrow{\mathbb{R}}^{m\times m}$ is a positive definite function if and only if Equation (\[eq:posdef\]) holds for any $a_1,\ldots,a_n\in{\mathbb{R}}^m$. The cross variogram $\gamma\dvtx{\mathbb{R}}^{2d}\rightarrow{\mathbb{C}}^{m\times m}$, $\gamma=(\gamma_{jk})_{j,k=1,\ldots,m}$ is defined by $$\gamma_{jk}(x, y) = \tfrac12 {\mathbb{E}}\bigl( Z_j(x)-Z_j(y)\bigr)\overline{\bigl(Z_k(x)-Z_k(y)\bigr)},\qquad x,y\in{\mathbb{R}}^d, j,k=1,\ldots,m.$$ If $Z$ has second order stationary increments, then $\gamma(x,y)$ depends only on the distance vector $h=x-y$, that is, $\gamma(x,y)=\tilde\gamma(h)$ for some function $\tilde \gamma\dvtx{\mathbb{R}}^d \rightarrow{\mathbb{C}}^{m\times m}$. If in addition $Z$ is univariate, then $\tilde\gamma$ is called a (semi-)variogram. @schoenberg38b’s ([-@schoenberg38b]) theorem states that a function $\tilde\gamma\dvtx{\mathbb{R}}^{d}\rightarrow{\mathbb{R}}$ with $\tilde\gamma (0)=0$ is a variogram if and only if $\exp(-r\tilde\gamma)$ is a covariance function for all $r>0$, see also @GSS01. Let us now discuss multivariate and non-stationary versions of this statement. To this end, we denote the componentwise multiplication of matrices by “$*$”, in particular, $$A^{*n} = (A_{jk}^n)_{jk}\qquad \hbox{for } A= (A_{jk})_{jk} .$$ Further, $f^*(A)$ denotes the componentwise function evaluation, for example, $${\exp^*}(A) = (\exp(A_{jk}))_{jk}.$$ \[thm:Mplus\] Let $C\dvtx{\mathbb{R}}^{2d}\rightarrow{\mathbb{C}}^{m\times m}$ and $E_{m\times m}$ be the $m\times m$ matrix whose components are all $1$. 1. The following three assertions are equivalent: $C$ is a cross covariance function;  $\exp^*(rC)-E_{m\times m}$ is a cross covariance function for all $r>0$; $\sinh^*(rC)$ is a cross covariance function for all $r>0$. 2. If $\exp^*(rC)$ is a cross covariance function for all $r>0$ then $$\label{eq:x0} C^{(z)}(x,y) = C(z, z) -C(x, z) -C(z, y) + C(x,y)$$ is a cross covariance function for all $z\in{\mathbb{R}}^d$. If $m=1$ and (\[eq:x0\]) holds for one $z\in{\mathbb{R}}^d$, then $\exp(rC)$ is a covariance function for all $r>0$. 
Note that the componentwise product $C_1 * C_2$ of two $m$-variate cross covariance functions $C_1$ and $C_2$ is again a cross covariance function. To see this, consider the componentwise product of two independent random fields with cross covariance functions $C_1$ and $C_2$. In particular, $C(x,y)^{*n}$ and $r C(x,y)$, $r\ge0$, are cross covariance functions. Furthermore, the sum and the pointwise limit of $m$-variate cross covariance functions are cross covariance functions. Both functions, $\exp(x)-1$ and $\sinh(x)$, have Taylor expansion on ${\mathbb{R}}$ with positive coefficients only. Hence, ${\exp^*}(rC) -E_{m\times m}$ and $\sinh^*(rC)$ are cross covariance functions if $C$ is a cross covariance function. On the other hand, since the Taylor expansions equal $x+\mathrm{o}(x)$ as $x\rightarrow0$, we have that $({\exp^*}(rC) -E_{m\times m})/r$ and $\sinh^*(rC)/r$ converge to $C$ as $r\rightarrow0$ and $C$ must be a cross covariance function. The proof follows the lines in @matheron72. Let $a_1,\ldots, a_n\in{\mathbb{C}}^m$, $x_1,\ldots, x_n\in{\mathbb{R}}^d$, $a_0= -\sum_{p=1}^n a_p$ and $x_0=z$ for some $z\in{\mathbb{R}}^d$. Then $$\begin{aligned} 0& \le& \lim_{r\rightarrow0} \sum_{p=0}^n \sum_{q=0}^n a_p^\top\frac{{\exp^*}(rC(x_p,x_q))- E_{m\times m}}{r} \bar a_q = \sum_{p=0}^n \sum_{q=0}^n a_p^\top C(x_p, x_q)\bar a_q \\ &=& \sum_{p=1}^n \sum_{q=1}^n a_p^\top [C(x_p, x_q) + C(z, z) - C(x_p, z) - C(z, x_q) ]\bar a_q.\end{aligned}$$ Conversely, assume that $m=1$ and Equation (\[eq:x0\]) holds. Since $C_0(x,y) = f(x)\overline{f(y)}$ is a covariance function for any function $f\dvtx{\mathbb{R}}^d\rightarrow{\mathbb{C}}$ (@BTA, Lemma 1) part 1 of the theorem results in $$\exp(rC(x, y)) = f(x) \overline{f(y)} \exp\bigl(rC(x, y) + rC(z, z) - rC(x, z) - rC(z, y)\bigr)$$ being a positive definite function for any $r>0$ and $f(x) = \exp(rC(x,z) - rC(z,z) /2)$. If $m=1$, $C(x,y)= -\tilde\gamma(x-y)$ and $z=0$, then $C^{(0)}$ in Equation (\[eq:x0\]) equals the covariance function of an intrinsically stationary random field $Z$ with $Z(0)=0$ almost surely, that is, part 2 of Theorem \[thm:Mplus\] yields @schoenberg38b’s ([-@schoenberg38b]) theorem. If $m>1$, the reverse statement in part 2 of Theorem \[thm:Mplus\] does not hold in general, as the following example shows. Let $M\in{\mathbb{R}}^{m\times m}$, $m\ge2$, be a symmetric, strictly positive definite matrix with identical diagonal elements, $\tilde \gamma\dvtx{\mathbb{R}}^d\rightarrow{\mathbb{R}}$ a variogram, and $C(x,y)=-M\tilde \gamma (x-y)$. Then $C^{(0)}(x,y)$ given by (\[eq:x0\]) is a cross covariance function, but ${\exp^*}(-M\tilde\gamma)$ is a positive definite function if and only if $\tilde\gamma\equiv0$. To see this, assume that ${\exp^*}(-M\tilde \gamma)$ is a positive definite function and let $m=2$, $M=(M_{jk})_{j,k=1,2}$, and $Z(x)=(Z_1(x), Z_2(x))$ be a corresponding random field. Then with $a=(1, -1, 1, -1)^\top$ we have $$\begin{aligned} {\operatorname{Var}}\bigl(Z_1(0) - Z_2(0) + Z_1(y) - Z_2(y)\bigr) &=& a^\top\pmatrix{ \exp^*(-M\tilde\gamma(0)) & \exp^*(-M\tilde\gamma(y)) \cr \exp^*(-M\tilde\gamma(y)) & \exp^*(-M\tilde\gamma(0)) } a \\&=& 2(1, -1) \exp^*(-M\tilde\gamma(y)) (1, -1)^\top \\&=& 4 \bigl(\mathrm{e}^{-M_{11} \tilde\gamma(y)} - \mathrm{e}^{-M_{12} \tilde\gamma(y)}\bigr) .\end{aligned}$$ Since $M_{11} > M_{12}$, the latter is non-negative if and only if $\tilde\gamma(y)=0$. 
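The failure of the converse for $m>1$ is easy to confirm numerically. The following sketch (an illustration only, with $d=1$, $\tilde\gamma(h)=|h|$ and an ad hoc matrix $M$ with $M_{11}=M_{22}>M_{12}>0$) assembles the putative covariance matrix of $(Z_1(0),Z_2(0),Z_1(y),Z_2(y))$ under ${\exp^*}(-M\tilde\gamma)$ and evaluates the quadratic form with $a=(1,-1,1,-1)^\top$, which is negative as soon as $\tilde\gamma(y)>0$.

```python
# Numerical check of the counterexample: for a strictly positive definite M with
# identical diagonal elements and a non-trivial variogram, exp*(-M*gamma) is not
# positive definite.  M and the variogram are ad hoc choices for illustration.
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # M_11 = M_22 > M_12 > 0, strictly positive definite

def gamma(h):
    """A valid univariate variogram (d = 1): gamma(h) = |h|, gamma(0) = 0."""
    return np.abs(h)

def exp_star(A):
    """Componentwise exponential, denoted exp^* in the text."""
    return np.exp(A)

def block_cov(y):
    """Putative covariance matrix of (Z_1(0), Z_2(0), Z_1(y), Z_2(y)) under exp*(-M*gamma)."""
    C00 = exp_star(-M * gamma(0.0))
    C0y = exp_star(-M * gamma(y))
    return np.block([[C00, C0y],
                     [C0y, C00]])

a = np.array([1.0, -1.0, 1.0, -1.0])
for y in (0.0, 0.5, 2.0):
    B = block_cov(y)
    q = a @ B @ a                       # would be Var(Z_1(0)-Z_2(0)+Z_1(y)-Z_2(y)) >= 0
    print("y = %.1f :  a^T B a = %+.4f,  min eigenvalue = %+.4f"
          % (y, q, np.linalg.eigvalsh(B).min()))
# For y > 0 the quadratic form equals 4*(exp(-2*gamma(y)) - exp(-gamma(y))) < 0.
```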
So, for an arbitrary cross variogram $\gamma\dvtx{\mathbb{R}}^{2d}\rightarrow{\mathbb{C}}^{m\times m}$ the function ${\exp^*}(-\gamma(x,y))$ is not positive definite in general. However, $$C_1(x,y) = {\exp^*}\bigl(\gamma(x, 0) + \gamma(y,0) - \gamma(x,y)\bigr)$$ and $$\begin{aligned} \label{eq:C2} C_2(x,y) &=& {\exp^*}\bigl(\gamma(x, 0) + \gamma(y,0) -D_{xy} - \gamma(x,y)\bigr), \nonumber\\[-8pt]\\[-8pt] (D_{xy})_{jk} &=& \gamma_{jj}(x,0) + \gamma_{kk}(y,0),\nonumber\end{aligned}$$ are always positive definite functions in ${\mathbb{R}}^d$, cf. Theorem 2.2 in @BCR for the univariate case. To see this, let $\gamma$ be an $m$-variate cross variogram and $Z$ a corresponding $m$-variate random field. Let $Y(x) = Z(x) - Z(0)$ and $c(x,y) ={\mathbb{E}}Y(x) Y^\top(y)$. Then $c$ and $\overline{c^\top}$ are positive definite functions and $$\begin{aligned} c_{jk}(x,y)+\overline{c_{kj}(x,y)} &=& {\mathbb{E}}\bigl(Y_j(x)\overline{Y_k(y)} +\overline{Y_k(x)}Y_j(y)\bigr) \\&=& {\mathbb{E}}\bigl[ Y_j(x)\overline{Y_k(x)} + Y_j(y)\overline{Y_k(y)} + \bigl(Y_j(x) -Y_j(y)\bigr)\bigl(\overline{Y_k(y)}-\overline{Y_k(x)}\bigr)\bigr] \\&=&\gamma_{jk}(x,0) + \gamma_{jk}(y,0) - \gamma_{jk}(x,y).\end{aligned}$$ Part 1 of Theorem 1 yields that $C_1$ is a positive definite function. Let $Z$ be a corresponding random field. Then the random field $(\mathrm{e}^{-\gamma_{11}(x,0)} Z_1(x),\ldots, \mathrm{e}^{-\gamma_{mm}(x,0)} Z_m(x))$, $x\in{\mathbb{R}}^d$, has cross covariance function $C_2$. Let $C(x_1,x_2)$ = $V D(x_1, x_2) \bar V^\top\in{\mathbb{C}}^{m\times m}$, $x_1,x_2\in{\mathbb{R}}^d$, for some unitary matrix $V\in{\mathbb{C}}^{m\times m}$. The values of the mapping $D\dvtx{\mathbb{R}}^{2d}\rightarrow {\mathbb{C}}^{m\times m}$ are diagonal matrices, $$D(x_1,x_2)={\operatorname{diag}}(D_1(x_1, x_2),\ldots, D_m(x_1,x_2)), \qquad x_1,x_2\in{\mathbb{R}}^d,$$ and the $D_j\dvtx{\mathbb{R}}^{2d}\rightarrow{\mathbb{C}}$, $j=1,\ldots, m$, are arbitrary functions. Then the $n$-fold matrix product $C^n\dvtx{\mathbb{R}}^{2d}\rightarrow{\mathbb{C}}^{m\times m}$ is a cross covariance function in ${\mathbb{R}}^d$ for any $n\in{\mathbb{N}}$ if and only if the $D_j$ are all covariance functions, and Theorem \[thm:Mplus\] remains true if $ {\exp^*}(r C(x,y) )$ is replaced by $$\exp(r C(x,y) ) = \sum_{n=0}^\infty\frac{r^n C^{n}(x,y)}{n!} ,\qquad x,y\in{\mathbb{R}}^d.$$ The subsequent proposition generalizes the results in @cressiehuang99 and Theorem 1 in @gneitingnonseparable. Denote by ${\mathcal{B}}^d$ the ensemble of Borel sets of ${\mathbb{R}}^d$. \[thm:bochner\] Let $d$ and $l$ be non-negative integers with $d+l>0$ and $C\dvtx{\mathbb{R}}^{l + 2d}\rightarrow {\mathbb{C}}^{m\times m}$ a continuous function in the first argument. Then the following two assertions are equivalent: 1. $C$ is a cross covariance function that is translation invariant in the first argument, that is, $C(h, y_1, y_2) = {\operatorname{Cov}}(Z(x+h, y_1), Z(x, y_2))$ for some second order random field $Z$ on ${\mathbb{R}}^{l+d}$ and all $x,h\in{\mathbb{R}}^l$ and $y_1,y_2\in{\mathbb{R}}^d$. 2.
$C \dvtx {\mathbb{R}}^l \times{\mathbb{R}}^{2d}\rightarrow{\mathbb{C}}^{m\times m}$ is the Fourier transform of some finite measures $F_{y_1,y_2,j,k}$, $y_1,y_2 \in{\mathbb{R}}^d$, $j,k=1,\ldots,m$, that is, $$\label{eq:Cjk} C_{jk}(h, y_1 , y_2) = \int \mathrm{e}^{-\mathrm{i} \langle h, \omega\rangle} F_{y_1,y_2,j,k}({\mathrm{d}}\omega),\qquad h\in{\mathbb{R}}^l, j,k=1,\ldots,m,$$ and $$\label{eq:CA} (C_{jk}^A(y_1,y_2))_{jk} = (F_{y_1,y_2,j,k}(A))_{jk},\qquad y_1,y_2\in{\mathbb{R}}^d,$$ is an $m$-variate cross covariance function in ${\mathbb{R}}^d$ for any $A\in{\mathcal{B}}^l$. The proof follows the lines in @gneitingnonseparable. Let us first assume that Equations (\[eq:Cjk\]) and (\[eq:CA\]) hold. Let $n\in{\mathbb{N}}$, $x_1,\ldots, x_n\in{\mathbb{R}}^l$, $y_1,\ldots, y_n\in{\mathbb{R}}^d$ and $a_1,\ldots, a_n\in{\mathbb{C}}^m$ be fixed. Then a matrix-valued function $f\dvtx{\mathbb{R}}^{l+ 2d}\rightarrow {\mathbb{C}}^{m\times m}$ and a non-negative finite measure $F$ on ${\mathbb{R}}^d$ exists, such that $$\label{eq:fF} \int_A f_{jk}(\omega, y_p, y_q) F(\mathrm{d}\omega) = F_{y_p, y_q, j, k}(A),\qquad p,q=1,\ldots, n, j,k=1,\ldots,m,$$ for any $A\in{\mathcal{B}}^l$. For instance, let $F(A) = \sum_{p=1}^n \sum_{k=1}^m F_{y_p, y_p, k, k}(A)$. Then, Equation (\[eq:CA\]) implies that the $mn\times mn$ matrix $(f_{jk}(\omega, y_p, y_q))_{j,k; p,q}$ is hermitian for $F$-almost all $\omega$. Now, $$\begin{aligned} \sum_{p=1}^n \sum_{q=1}^n a_p^\top C(x_p - x_q, y_p, y_q) \overline{a_q} &=& \int\sum_{p=1}^n \sum_{q=1}^n \mathrm{e}^{ -\mathrm{i} \langle x_p, \omega\rangle} a_p^\top f(\omega, y_p, y_q) \overline{ \mathrm{e}^{ -\mathrm{i} \langle x_q,\omega\rangle}} \overline{a_q} F(\mathrm{d}\omega) \ge0 .\end{aligned}$$ Conversely, let $C(h, y_1, y_2)\dvtx {\mathbb{R}}^{l + 2d} \rightarrow{\mathbb{C}}^{m\times m}$ be a covariance function that is stationary in its first argument. We have $$C_{jk}(h, y, y') = \int \mathrm{e}^{-\mathrm{i} \langle\omega, h\rangle} F_{y, y', j, k}({\mathrm{d}}\omega),\qquad h \in{\mathbb{R}}^l; y,y'\in{\mathbb{R}}^d, j,k=1,\ldots,m,$$ for some finite, not necessarily positive measures $F_{y, y', j, k}$ (@yaglomII, page 115). It now remains to demonstrate that equality (\[eq:CA\]) holds. Fix $n\in{\mathbb{N}}$, $y_1,\ldots, y_n\in{\mathbb{R}}^d$, and $a_1,\ldots, a_n\in{\mathbb{C}}^m$. Then a non-negative finite measure $F$ and a function $f\dvtx{\mathbb{R}}^{l+ 2d}\rightarrow {\mathbb{C}}^{m\times m}$ exist, such that Equation (\[eq:fF\]) holds. By assumption, $ \sum_{p=1}^n \sum_{q=1}^n a_p^\top C(\cdot, y_p, y_q) \overline{a_q} $ is a positive definite, continuous function and its Fourier transform is non-negative. Following directly from the linearity of the Fourier transform, we have that for $F$-almost all $\omega\in{\mathbb{R}}^l$ $$\sum_{p=1}^n \sum_{q=1}^n a_p^\top f(\omega, y_p, y_q) \overline{a_q}\ge0,$$ which finally leads to Equation (\[eq:CA\]). If a covariance function is translation invariant, we will write only one argument for ease of notation, for example, $C(h)$, $h=x-y\in{\mathbb{R}}^d$, instead of $C(x,y)$, $x,y\in{\mathbb{R}}^d$. 
Generalized Gneiting’s class {#sec:gneiting} ============================ A function $C(x,y) = \varphi(\|h\|)$, $h=x-y\in{\mathbb{R}}^d$, is a motion invariant, real-valued covariance function in ${\mathbb{R}}^d$ for all $d\in{\mathbb{N}}$ if and only if $\varphi$ is a normal scale mixture, that is, $$\varphi(h) = \int_{[0,\infty)} \exp(- a h^2) {\,\mathrm{d}}F(a),\qquad h\ge0,$$ for some non-negative measure $F$ [@schoenberg38]. Examples are the stable model [@yaglomI], the generalized Cauchy model [@cauchy], $$\varphi(h) = (1+h^\alpha)^{-\beta/\alpha},\qquad h\ge0 ,$$ $\alpha\in[0,2]$, $\beta>0$, and the generalized hyperbolic model (@barndorffnielsen78, @gneitingdual). The latter includes as special case the Whittle–Matérn model [@stein], $$\varphi(h) = W_\nu(h) = 2^{1-\nu} \Gamma(\nu)^{-1} h^\nu K_\nu (h),\qquad h>0.$$ Here, $\nu>0$ and $K_\nu$ is a modified Bessel function. \[thm:main\] Assume that $m$ and $d$ are positive integers and $H\dvtx{\mathbb{R}}^d\rightarrow{\mathbb{R}}^m$. Suppose that $\varphi$ is a normal scale mixture and $G\dvtx {\mathbb{R}}^{2d} \rightarrow{\mathbb{R}}^{m\times m}$ is a cross variogram in ${\mathbb{R}}^{d}$ or $-G$ is a cross covariance function. Let $M\in{\mathbb{R}}^{m\times m}$ be positive definite, such that $M + G(x, y)$ is strictly positive definite for all $x,y\in{\mathbb{R}}^{d}$. Then $$\label{eq:main} C(x, y) = \frac{\varphi( [(H(x)-H(y)) ^\top(M + G(x, y))^{-1} (H(x)-H(y)) ]^{1/2} )} {\sqrt{|M + G(x, y)|}},\qquad x,y\in{\mathbb{R}}^d,$$ is a covariance function in ${\mathbb{R}}^d$. \[lemma:sum\] Let $\gamma\dvtx {\mathbb{R}}^{2d} \rightarrow{\mathbb{C}}^{m\times m}$ be a cross variogram (cross covariance function) in ${\mathbb{R}}^d$ and $A\in{\mathbb{C}}^{l \times m}$. Then $\gamma_0 = A \gamma\overline{A^\top}$ is an $l$-variate, cross variogram (cross covariance function) in ${\mathbb{R}}^{d}$. [Proof of Theorem \[thm:main\]]{} We follow the proof in @gneitingnonseparable but assume first that $\varphi(h) = \mathrm{e}^{-h^2}$. If $ G(x,y)$ is a cross variogram, then, according to Lemma \[lemma:sum\], $$g(x, y) = \omega^\top G( x, y) \omega$$ is a (univariate) variogram for any $\omega\in{\mathbb{R}}^m$. Equation (\[eq:C2\]) or Theorem 2.2 in @BCR implies $$\label{eq:crucial.posdef} C_\omega(x,y)= \exp(- \omega^\top G( x, y) \omega),\qquad x,y\in{\mathbb{R}}^d,$$ and hence, $$\label{eq:crucial.posdef2} \hat C(\omega, x,y)= \exp\bigl(- \omega^\top\bigl(M+G( x, y)\bigr) \omega\bigr),\qquad x,y\in{\mathbb{R}}^d,$$ are both covariance functions for any fixed $\omega\in{\mathbb{R}}^m$. With ${\,\mathrm{d}}F_{x,y,1,1}(\omega) = \hat C(\omega, x,y) {\,\mathrm{d}}\omega$, Proposition \[thm:bochner\] yields that the univariate function $$\begin{aligned} C(h, x, y) & =& c \frac{\exp(- h^\top(M +G(x,y))^{-1} h )} {\sqrt{|M+ G( x, y)|}},\qquad h \in{\mathbb{R}}^m; x,y \in{\mathbb{R}}^d\end{aligned}$$ is a covariance function in ${\mathbb{R}}^{m+d}$ for all $c\ge0$, which is translation invariant in the first argument. Now, consider a random field $Z(\zeta, x)$ on ${\mathbb{R}}^{m+d}$ corresponding to $C(h, x, y)$ with $c=1$. Define the random field $Y$ on ${\mathbb{R}}^d$ by $$Y(x) = Z(H(x), x) .$$ Then the covariance function of $Y$ is equal to the covariance function given in the theorem. For general $\varphi$, the assertion is obtained directly from the definition of normal scale mixtures. In case $-G$ is a cross covariance function, the proof runs exactly the same way. 
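As an illustration of Theorem \[thm:main\] (not part of the original construction), the following sketch evaluates Equation (\[eq:main\]) in the simplest spatio-temporal special case $d=2$, $m=1$, with $H(s,t)=s$, $M=1$, $\varphi(t)=\exp(-t^2)$ and $G((s,t),(s',t'))=|t-t'|$, and then checks positive definiteness empirically via the smallest eigenvalue of the covariance matrix on a random design; this special case coincides with Gneiting's class discussed below.

```python
# Minimal sketch of Theorem [thm:main] (not code from the paper): the special case
# d = 2 (one space + one time coordinate), m = 1, H(s, t) = s, M = 1,
# phi(t) = exp(-t^2) and G((s,t),(s',t')) = |t - t'| (a valid variogram).
import numpy as np

def phi(t):
    """Normal scale mixture: here simply the Gaussian kernel exp(-t^2)."""
    return np.exp(-t ** 2)

def gamma(u):
    """Temporal variogram gamma(u) = |u|."""
    return np.abs(u)

def C(p, q, M=1.0):
    """Covariance of Eq. (eq:main) between space-time points p = (s, t), q = (s', t')."""
    s, t = p
    s2, t2 = q
    a = M + gamma(t - t2)               # here a 1x1 'matrix' M + G(x, y)
    return phi(np.abs(s - s2) / np.sqrt(a)) / np.sqrt(a)

# empirical positive definiteness check on a random space-time design
rng = np.random.default_rng(0)
pts = rng.uniform(-3.0, 3.0, size=(60, 2))
K = np.array([[C(p, q) for q in pts] for p in pts])
print("symmetric:", np.allclose(K, K.T))
print("smallest eigenvalue: %.3e" % np.linalg.eigvalsh(K).min())   # nonnegative up to round-off
```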
\[ex:prod\] A well-known construction of a cross covariance function in ${\mathbb{R}}^d$ used in machine learning is $$\tilde G(x,y) = f(x) f(y)^\top,\qquad x,y\in{\mathbb{R}}^d,$$ for some function $f\dvtx{\mathbb{R}}^d \rightarrow{\mathbb{R}}^{m \times l}$. Assume that $M-f(x) f(y)^\top$ is strictly positive definite for all $x$ and $y$ and some positive definite matrix $M$. Then, $C$ in Equation (\[eq:main\]) is a covariance function with $G = -\tilde G$. We denote by ${\mathbf{1}}_{d\times d}\in{\mathbb{R}}^{d\times d}$ the identity matrix. @gneitingnonseparable delivers a rather general construction of non-separable models based on completely monotone functions, containing as a particular case the models developed by @cressiehuang99. Let $\varphi$ be a completely monotone function, that is, $ \varphi(t^2)$, $t\in{\mathbb{R}}$, is a normal scale mixture, and $\psi$ be a positive function with a completely monotone derivative. Then $$\label{eq:gneit} C(h, u) = \frac1{\psi(|u|^2)^{d/2}} \varphi\bigl(\|h\|^2 / \psi(|u|^2)\bigr),\qquad h \in{\mathbb{R}}^d, u \in{\mathbb{R}},$$ is a translation invariant covariance function in ${\mathbb{R}}^{d+1}$ (@gneitingnonseparable, Theorem 2). According to Bernstein’s theorem, the function $\psi(\|\cdot\|^2)-c$ is a variogram for some positive constant $c$, see also @BCR. The positive definite nature of $C$ in (\[eq:gneit\]) is also ensured by Theorem \[thm:main\] for $m=d$ and $G((x_1, x_2), (y_1,y_2)) = \psi(\|x_2 -y_2\|^2) {\mathbf{1}}_{d\times d}$, $x_1,y_1\in{\mathbb{R}}^d$, $x_2,y_2\in{\mathbb{R}}$. @gneitingnonseparable provides examples for $\psi$ and, along the way, introduces a new class of variograms, $$\gamma(h) = (\|h\|^a + 1)^b -1,\qquad a\in(0,2], b\in(0,1].$$ This class generalizes the class of variograms of fractal Brownian motion and that of multiquadric kernels [@wendland]. \[ex:cox\] In the context of modelling rainfall, @coxisham88 proposed in ${\mathbb{R}}^{d+1}$ the translation invariant covariance function $$C(h, u) = {\mathbb{E}}_V \varphi(\|h -V u\|),\qquad h \in{\mathbb{R}}^d, u \in{\mathbb{R}}.$$ Here, $\varphi(\|\cdot\|)$ is a motion invariant covariance function in ${\mathbb{R}}^d$ and $V$ is a $d$-dimensional random wind speed vector. Unfortunately, this appealing model has lacked explicit representations. Now let us assume that $V$ follows a $d$-variate normal distribution ${\mathcal{N}}(\mu, D/2)$ and $\varphi(x)=\exp(-x^2)$. Then, $$C(h, u) = \frac1{\sqrt{|{\mathbf{1}}_{d\times d}+ u^2 D|}} \varphi\bigl( [ (h - u \mu)^\top({\mathbf{1}}_{d\times d} + u^2 D)^{-1} (h-u\mu) ]^{1/2} \bigr),\qquad h\in{\mathbb{R}}^d, u\in{\mathbb{R}},$$ please refer to the appendix for a proof. Hence, $C(h,u)$ above is a covariance function for any normal scale mixture $\varphi$.
![Realizations of the Cox–Isham covariance model in ${\mathbb{R}}^2\times{\mathbb{R}}$. Left time $t=0$, right $x_2=0$. See Example \[ex:cox\] for details.[]{data-label="fig:rain"}](226f01.eps)
Figure \[fig:rain\] provides realizations of a random field with the above covariance function where $\varphi=W_1$ is the Whittle–Matérn model, $\mu=(1,1)$ and $$D= \pmatrix{ 1 & 0.5 \cr 0.5 & 1 }.$$ \[ex:stein1\] @stein05b proposes models in ${\mathbb{R}}^d$ of the form $$C(x, y) = \frac{\varphi( [(x-y)^\top(f(x) + f(y))^{-1} (x-y) ]^{1/2} )} {\sqrt{|f(x) + f(y)|}},\qquad x,y\in{\mathbb{R}}^d,$$ in which the values of $f\dvtx {\mathbb{R}}^{d} \rightarrow{\mathbb{R}}^{m\times m}$ are strictly positive definite matrices, see also @paciorek03 and @PMC09.
Here, $f(x) + f(y)$ is not a variogram in general, but the proof of Theorem \[thm:main\] is still applicable if $\hat C$ in Equation (\[eq:crucial.posdef2\]) is replaced by $$\hat C(\omega, x,y) = \exp\bigl(- \omega^\top\bigl(f(x) + f(y)\bigr) \omega\bigr) ,$$ which is a positive definite function for all $\omega\in{\mathbb{R}}^m$. The covariance model (\[eq:main\]), which is valid in ${\mathbb{R}}^d$, does not allow for negative values, hence its value is limited in some applications [@GPMS07]. To overcome this limitation, @ma05 considers differences of positive definite functions. Let $B_1,B_2,M_1,M_2\in{\mathbb{R}}^{d\times d} $ be strictly positive definite matrices. Proposition \[thm:bochner\] yields that $$\begin{aligned} C(h, x, y) &=& \frac{\exp(- [h^\top(M_1 + (x-y)^\top B_1 (x-y){\mathbf{1}}_{d\times d})^{-1} h ])}{\sqrt{|M_1 + (x-y)^\top B_1 (x-y){\mathbf{1}}_{d\times d}|}} \\&&{} + b\frac{\exp(- [h^\top (M_2 + (x-y)^\top B_2 (x-y){\mathbf{1}}_{d\times d})^{-1} h ])}{\sqrt{|M_2 + (x-y)^\top B_2 (x-y){\mathbf{1}}_{d\times d}|}} ,\qquad h,x,y\in{\mathbb{R}}^d,\end{aligned}$$ is a positive definite function in ${\mathbb{R}}^{2d}$ that is translation invariant in its first argument if and only if for all $\omega\in{\mathbb{R}}^d$, $$\begin{aligned} \hat C_\omega(x, y) &=& \exp\bigl(- \omega^\top M_1\omega - \|\omega\|^2 (x-y)^\top B_1 (x-y) \bigr) \\&&{} + b\exp\bigl(- \omega^\top M_2\omega - \|\omega\|^2 (x-y)^\top B_2 (x-y) \bigr),\qquad x,y\in{\mathbb{R}}^d,\end{aligned}$$ is a positive definite function, that is, if and only if for all $\omega ,\xi\in{\mathbb{R}}^d$, $$\begin{aligned} |B_1|^{-1/2} \exp(- \omega^\top M_1\omega - \|\omega\|^2 \xi^\top B_1^{-1} \xi ) + b |B_2|^{-1/2}\exp(- \omega^\top M_2\omega - \|\omega\|^2 \xi^\top B_2^{-1} \xi ) \ge0.\end{aligned}$$ This is true for some negative value of $b$ if and only if both $M_2 -M_1$ and $B_2^{-1}-B_1^{-1}$ are positive definite matrices. In this case, $C(h,x,y)\dvtx{\mathbb{R}}^{3d}\rightarrow{\mathbb{R}}$ is a positive definite function in ${\mathbb{R}}^{2d}$ if and only if $$b \ge- \sqrt{ |B_2| / |B_1|} .$$ Then, $C_0$ given by $ C_0(x,y) = C(x-y,x, y) $ is a stationary covariance function in ${\mathbb{R}}^d$ that may take negative values. The condition that $M+G(x,y)$ is strictly positive definite for all $x,y\in{\mathbb{R}}^d$ can be relaxed. For example, let $d=2$ and $(h,u) =x-y\in{\mathbb{R}}^2$. Then, the function $C(h,u) = |u|^{-1/2} \exp(-h^2 / |u|)$ is of the form (\[eq:main\]) and defines a covariance function of a stationary, generalized random field on ${\mathbb{R}}^2$, see Chapter 3 in @gelvandvilenkin4 and Chapter 17 in @koralovsinai. Note that, here, $\lim_{u\rightarrow0} C(0,u) = \infty$. Hence, $C$ cannot be a translation invariant covariance function in the usual sense. Model constructions based on dependent processes {#sec:random} ================================================ The idea of the subsequent two constructions is based on the following observation. Let $C(h,u)=C_0(h) C_1(u)$, $h\in{\mathbb{R}}^d$, $u\in{\mathbb{R}}$, be a translation invariant, real-valued covariance model in ${\mathbb{R}}^{d+1}$ and assume we are interested in the corresponding random field at some fixed locations $x_1,\ldots, x_n\in{\mathbb{R}}^d$ and for all $t\in{\mathbb{R}}$. Let $Y_x$, $x\in{\mathbb{R}}^d$, be i.i.d. temporal processes with covariance function $C_1$. 
Then $$Z(t) =(Z_{x_1}(t),\ldots,Z_{x_n}(t)) = \bigl(C_0(x_p-x_q)\bigr)_{p,q=1,\ldots,n}^{1/2} (Y_{x_1}(t), \ldots, Y_{x_n}(t))^\top,\qquad t\in{\mathbb{R}},$$ has the required covariance structure. Now, $Z$ can be interpreted as a finite, weighted sum over $Y_x$, $x\in{\mathbb{R}}^d$. The separability is caused by the fact that $Y$ enters into the sum only through the fixed instance $t$. Non-separable models can be obtained if the argument of $Y$ also depends on the location. Moving averages based on fields of temporal processes ----------------------------------------------------- Assume that $Y(A, t)$, $A\in{\mathcal{B}}^d$ and $t\in{\mathbb{R}}^l$, is a stationary process such that $Y(A_1, \cdot),\ldots,$ $Y(A_n, \cdot)$ are independent for any disjoint sets $A_1,\ldots,A_n\in{\mathcal{B}}^d$, $n\in {\mathbb{N}}$. In the second argument, $Y$ is a stationary, zero mean Gaussian random field on ${\mathbb{R}}^{l}$ with covariance function $|A|C_1$, $C_1\dvtx{\mathbb{R}}^l \rightarrow{\mathbb{R}}$. Then, $${\operatorname{Cov}}(Y(A,t), Y(B,s)) = |A \cap B| C_1(t -s)$$ for any $s,t \in{\mathbb{R}}^l$ and $ A, B \in{\mathcal{B}}^d$. Let $f\dvtx{\mathbb{R}}^d\rightarrow{\mathbb{R}}^l$ be continuous, $g\dvtx{\mathbb{R}}^d\rightarrow{\mathbb{R}}$ be continuous and square-integrable, and $$Z(x,t) = \int_{{\mathbb{R}}^d} g(v-x) Y\bigl({\,\mathrm{d}}v, f(v-x) - t\bigr),\qquad x\in{\mathbb{R}}^d, t \in{\mathbb{R}}^l.$$ Then $Z$ is weakly stationary on ${\mathbb{R}}^{d+l}$ with translation invariant covariance function $$\begin{aligned} C(h, u) &=& \int_{{\mathbb{R}}^d} g(v) g(v+h) C_1\bigl(f(v) - f(v+h) -u\bigr) {\,\mathrm{d}}v,\qquad h\in{\mathbb{R}}^d, u \in{\mathbb{R}}^l.\end{aligned}$$ \[ex:fields\] Let $g(v) = (2\uppi^{-1})^{d/4} \exp(- \|v\|^2)$, $v\in{\mathbb{R}}^d$, $l=1$, $C_1(u) = \exp(-u^2)$, $u\in{\mathbb{R}}$, and $f(v)= v^\top A v +z^\top v$, $v\in{\mathbb{R}}^d$, for a symmetric, not necessarily positive definite matrix $A \in{\mathbb{R}}^{d\times d}$ and $z\in{\mathbb{R}}^d$. Let us further introduce a non-negative random scale $V$, that is, $$Z(x,t) = V^{d/2} \int_{{\mathbb{R}}^d} g\bigl(\sqrt{V}(v-x)\bigr) Y\bigl( {\mathrm{d}}v, \sqrt{V}\bigl(f(v-x) - t\bigr)\bigr),\qquad x\in{\mathbb{R}}^d, t \in{\mathbb{R}}.$$ Let $B=A h h^\top A$. Then the covariance function of $Z$ equals $$\begin{aligned} \label{eq:fields} C(h, u) = |{\mathbf{1}}_{d\times d} + 2 B|^{-1/2} {\mathbb{E}}_V \mathrm{e}^{-V[ \|h\|^2/2 + (z^\top h+ u)^2 (1 - 2 h^\top A({\mathbf{1}}_{d\times d} + 2 B)^{-1}A h) ]} ,\end{aligned}$$ please refer to the appendix for a proof. Equation (\[eq:fields\]) reveals that $C$ is a potential covariance model for rainfall with frozen wind direction.
![Realizations of a moving average random field in ${\mathbb{R}}^2\times {\mathbb{R}}$. Left time $t=0$, right $x_2=0$. See Example \[ex:fields\] for the definition of the covariance structure.[]{data-label="fig:moving"}](226f02.eps)
Figure \[fig:moving\] depicts realizations of a random field with the above covariance function where ${\mathbb{E}}_V \exp(-V Q)$ is the Whittle–Matérn model $W_1(\sqrt Q)$, $Q\ge0$, $z=(2,0)$ and $ A = \left({0.5 \atop 0}\enskip {0 \atop 1}\right)$. Models based on a single temporal process ----------------------------------------- Another class of models may be obtained by considering only a single process $Y$. Although the subsequent approach might be generalized, an explicit model has currently only been found within the framework of normal scale mixtures.
For $x \in{\mathbb{R}}^d$ let $$\label{eq:single} Z(x) = (2V/\uppi)^{d/4}|S_x|^{1/4} \mathrm{e}^{- V (U-x)^\top S_x (U-x)} Y \bigl(\sqrt V \bigl(\xi_1(U-x) + \xi_2(x) \bigr) \bigr) \frac{g(V, x) }{ \sqrt{f(U)}}.$$ Here, $V$ is a positive random variable and $U$ is a $d$-dimensional random variable with strictly positive density $f$. The one-dimensional random process $Y$ is assumed to be stationary with Gaussian covariance function $C(t) = \mathrm{e}^{-t^2}$. The matrix $S_x$ is strictly positive definite for all $x\in{\mathbb{R}}^d$, $ \xi_2\dvtx{\mathbb{R}}^d \rightarrow{\mathbb{R}}$ is arbitrary, and $g$ is a positive function such that ${\mathbb{E}}_V g(V,x)^2$ is finite for all $x\in{\mathbb{R}}^d$. The function $\xi_1$ is quadratic, that is, $$\xi_1(x)= x^\top M x+ z^\top x$$ for a symmetric $d\times d$ matrix $M$ and an arbitrary vector $z\in {\mathbb{R}}^d$. Let $$\begin{aligned} c &=& -z^\top(x-y) +\xi_2(x) - \xi_2(y), \\ A&=&S_x + S_y + 4M (x-y) (x-y) ^\top M, \\ m &=&(x-y)^\top M (x-y),\end{aligned}$$ and $$Q(x,y) = c^2- m^2 + (x-y)^\top\bigl(S_x + 2 (m +c) M\bigr) A^{-1} \bigl(S_y + 2 (m - c) M\bigr) (x-y).$$ Then the covariance function of $Z$ equals $$\label{eq:single2} C(x,y) = \frac{2^{d/2} |S_x|^{1/4}|S_y|^{1/4}}{\sqrt{|A|}}\cdot {\mathbb{E}}_V g(V, x)g(V, y)\exp(-V Q(x,y) ) ,\qquad x,y\in{\mathbb{R}}^d.$$ The proof is given in the . \[ex:single\] Translation-invariant models in ${\mathbb{R}}^d$ are obtained if both $S_x$ and $g$ do not depend on $x$. Assume $S_x$ is twice the identity matrix, $g(v) = (2\sqrt{v})^{1-\nu} / \sqrt{\Gamma(\nu)}$, $v,\nu>0$, and $V$ follows the Fréchet distribution $F(v) = \mathrm{e}^{-1/(4v)}$, $v>0$. Two particular models might be of special interest, either because of their simplicity or their explicit spatio-temporal modelling. First, if $c\equiv0$ then $$C(h) = \frac{W_\nu(\|h\|)}{|{\mathbf{1}}_{d\times d} + M h h^\top M|^{1/2}}, \qquad h\in{\mathbb{R}}^d,$$ according to formula in Gradshteyn and Ryzhik ([-@GRengl]). Second, an explicit spatio-temporal model in ${\mathbb{R}}^{d+1}$ is obtained for $$\xi_2(x,t) = t,\qquad x\in{\mathbb{R}}^d, t\in{\mathbb{R}}, \quad\mbox{and}\quad M = \pmatrix{ L & 0\cr 0 & 0 }.$$ Then, with $D={\mathbf{1}}_{d\times d} + L h h^\top L$, we get $$C(h, u) = |D|^{-1/2} W_\nu\bigl(\sqrt{Q(h,u)}\bigr),\qquad h\in{\mathbb{R}}^d, u\in{\mathbb{R}},$$ where $$Q(h,u) = (u -z^\top h)^2 - (h^\top L h)^2 + h^\top\bigl(D + (u -z^\top h)L\bigr) D^{-1} \bigl(D + (u -z^\top h)L\bigr)h.$$ Let $\xi_1\equiv\xi_2\equiv0$. Then the random process $Y(t)$ is considered only at instance $t=0$ and the exponent $Q(x,y)$ simplifies to $$Q(x, y) = (x-y)^\top S_x (S_x + S_y)^{-1} S_y (x-y) = (x-y)^\top(S_x^{-1} + S_y^{-1} )^{-1}(x-y) .$$ Let $g(v,x) = (2\sqrt v)^{1-\nu(x)} / \Gamma(\nu(x))^{1/2}$, $\nu$ a positive function on ${\mathbb{R}}^d$, and $V$ a Fréchet variable with distribution function $F(v) = \mathrm{e}^{-1/(4 v)}$, $v>0$. 
Then, the first model given in @stein05b is obtained, $$C(x, y) = \frac{2^{d/2} |S_x|^{1/4}|S_y|^{1/4} \Gamma((\nu(x) + \nu(y)) /2)} {[|S_x + S_y|\Gamma(\nu(x)) \Gamma(\nu(y))]^{1/2}} W_{(\nu(x) + \nu(y)) /2}(Q(x,y)^{1/2}),\qquad x,y\in{\mathbb{R}}^d .$$ The second model given in @stein05b, a generalization of the Cauchy model, is obtained by $g(v, x) = v^{(\delta(x) -1) / 2}$ and a standard exponential random variable $V$, that is, $$C(x, y) = \frac{2^{d/2} |S_x|^{1/4}|S_y|^{1/4}} {|S_x + S_y|^{1/2}(1 + Q(x,y))^{(\delta(x) + \delta(y))/2}},\qquad x,y\in{\mathbb{R}}^d.$$ If $\nu$ and $\delta$ are constant, then the above models are special cases of Theorem \[thm:main\]. See Theorem 1 in @PMC09 for a class of models that generalizes Stein’s examples. \[ex:cyclone\] A cyclone can be mimicked if rotation matrices are included in the model, $$C(x,y) = \frac{2^{d/2}|S_x|^{1/4}|S_y|^{1/4}}{\sqrt{|S_x + S_y|}} W_\nu\bigl( \bigl( h^\top S_x (S_x + S_y)^{-1} S_y h \bigr)^{1/2} \bigr),\qquad x,y,\in{\mathbb{R}}^3,$$ where $$\begin{aligned} S_x&= &{\operatorname{diag}}( 1, 1, 1) + R(x)^\top A^\top x x^\top A R(x),\qquad A \in {\mathbb{R}}^{3 \times3}, \\ R(x)&=& \pmatrix{ \cos(\alpha x_3) & -\sin(\alpha x_3) & 0 \cr \sin(\alpha x_3) & \cos(\alpha x_3)& 0 \cr 0& 0& 1 },\qquad x = (x_1, x_2, x_3)\in{\mathbb{R}}^3, \alpha\in{\mathbb{R}},\end{aligned}$$ and $$h = x^\top R(x) - y^\top R(y).$$ The positive definiteness of the model is now ensured by both Theorem \[thm:main\] and a generalized version of $Z$ in Equation (\[eq:single\]), replacing $x$ by $x^\top R(x)$ there. Note that $x\mapsto x^\top R(x)$ is a bijection. ![Realizations of a random field in ${\mathbb{R}}^3$ that mimics a cyclone. Left time $x_3=0$, right $x_2=0$. See Example \[ex:cyclone\] for the definition of the covariance structure.[]{data-label="fig:hurricane"}](226f03.eps) Figure \[fig:hurricane\] depicts realizations of a random field with the above covariance function where $\alpha= -2 \uppi$, $ \nu=1$, and $$A= \pmatrix{ 2 & 1 & 0\cr 0 & 1 & 0\cr 0 & 0 & 0 }.$$ Multivariate spatio-temporal models {#sec:multivariate} =================================== Here, we generalize Theorem \[thm:main\] to construct multivariate cross covariance functions. Let $\underline M = (M + M^\top) /2$ for any real-valued square matrix $M$. \[thm:multi\] Assume that $l$, $m$ and $d$ are positive integers, $A_{j}\in{\mathbb{R}}^{l\times d}$ for $j=1,\ldots,m$. Suppose that $\varphi$ is a normal scale mixture and $G\dvtx {\mathbb{R}}^{2 d} \rightarrow{\mathbb{R}}^{l\times l}$ is a cross covariance function. Let $M\in{\mathbb{R}}^{d\times d}$ be a positive definite matrix such that $M - \underline{A_j^\top G(x, y)A_k}$ is strictly positive definite for all $x,y\in{\mathbb{R}}^{d}$ and $j,k=1,\ldots,d$. Then $C=(C_{jk})_{j,k=1,\ldots,m}$ is a cross covariance function in ${\mathbb{R}}^d$ for $$\begin{aligned} \label{eq:extension1} &&C_{jk}(x, y)= \frac{\varphi( [(x-y) ^\top (M - \underline{A^\top_{j}G(x, y) A_{k}})^{-1} (x-y) ]^{1/2} )} {\sqrt{|M - \underline{A^\top_{j}G(x, y) A_{k}}|}},\nonumber\\[-8pt]\\[-8pt] &&\quad x,y\in{\mathbb{R}}^d, j,k=1,\ldots,m.\nonumber\end{aligned}$$ Lemma \[lemma:sum\] yields that $$\begin{aligned} (\omega^\top\underline{A_j^\top G(x,y) A_k} \omega)_{j,k=1,\ldots,m} &=& (\omega^\top A_j^\top G(x,y) A_k \omega)_{j,k=1,\ldots,m} \\&= & (A_1\omega, \ldots, A_m\omega)^\top G(x,y) (A_1\omega, \ldots, A_m\omega)\end{aligned}$$ is a cross covariance function for all $\omega\in{\mathbb{R}}^d$. 
Part 1 of Theorem \[thm:Mplus\] yields that $C_\omega(x,y) \kern-0.5pt = \kern-0.5pt (\exp(\omega^\top\kern-2pt\underline{A_j^\top G(x,y) A_k} \omega))_{j,k=1,\ldots,m}$ is also a cross covariance function. By assumption, $M- \underline{A_j^\top G(x,y) A_k} $ is strictly positive definite. Hence, as a result of Proposition \[thm:bochner\], the Fourier transform of the function $\omega\mapsto\exp(-\omega ^\top M \omega)C_\omega(x,y)$ is a cross covariance function, which is of the form (\[eq:extension1\]). Appendix {#app .unnumbered} ======== Proof for the covariance function in Example 9 ---------------------------------------------- Let $f_{\mu, D/2} (x)$ be the multivariate normal density with expectation $\mu$ and covariance matrix $D/2$. Then we get $$\begin{aligned} &&- \log\bigl( \varphi(h-uv)f_{\mu, D/2} (v)\bigr) + \tfrac12 \log ((2\uppi)^d |D|) \\ &&\quad= h^\top h - 2u h^\top v + u^2 v^\top v + v^\top D^{-1} v - 2\mu^\top D^{-1} v + \mu^\top D^{-1} \mu \\ &&\quad= h^\top h + \mu^\top D^{-1} \mu+ (v-\xi)^\top (u^2 {\mathbf{1}}_{d\times d} + D^{-1})(v-\xi) -\xi^\top(u^2 {\mathbf{1}}_{d\times d} + D^{-1})\xi\end{aligned}$$ with $\xi= (u^2 {\mathbf{1}}_{d\times d} + D^{-1})^{-1}(u h + D^{-1} \mu)$. Hence, $$\begin{aligned} && -\log C(h,u) + \tfrac12 \log(|D|) +\tfrac12\log(|u^2 {\mathbf{1}}_{d\times d} + D^{-1}|) \\ &&\quad= h^\top h + \mu^\top D^{-1} \mu-\xi^\top(u^2 {\mathbf{1}}_{d\times d} + D^{-1})\xi \\ &&\quad= (h- u \mu)^\top({\mathbf{1}}_{d\times d} + u^2 D)^{-1} (h- u \mu)\end{aligned}$$ which yields the assertion. Proof for the covariance function in Example 13 ----------------------------------------------- We proof the formula for the covariance function in Example \[ex:fields\], but also demonstrate that a slightly more general function $g$ does not give a more general model. To this end, let $g(v) = (|2\uppi^{-1}M|)^{1/4} \exp(- v^\top M v)$, $v\in{\mathbb{R}}^d$, for a strictly positive definite matrix $M \in{\mathbb{R}}^{d\times d}$. For ease of notation we assume that $V\equiv1$. Then $$\begin{aligned} &&-\log\bigl(g(v) g(v+h) C_1\bigl(f(v) - f(v+h) -u\bigr)\bigr) - \tfrac12 \log(|2\uppi^{-1}M|) \\ &&\quad= v^\top M v + (v+h)^\top M (v + h) + (2 v^\top A h + h^\top A h +z^\top h +u)^2 \\ &&\quad= 2 v^\top M v + 4 v^\top B v + 2 v^\top(2B + M +2uA + 2Ahz^\top)h + c\end{aligned}$$ where $B = Ahh^\top A$ and $c = [h^\top Ah +z^\top h+ u ]^2 + h^\top M h$. Hence, with $D = 2B + M +2[u + z^\top h]A$, $$\begin{aligned} &&-\log\bigl(g(v) g(v+h) C_1\bigl(f(v) - f(v+h) + u\bigr)\bigr) - \tfrac12 \log(|2\uppi^{-1}M|) \\ &&\quad= \bigl(v - (2M+4B)^{-1} Dh\bigr)^\top(2M + 4B) \bigl(v - (2M+4B)^{-1} Dh\bigr)\\ &&\qquad {}- h^\top D (2M + 4B)^{-1} D h + c.\end{aligned}$$ Thus, $$\begin{aligned} C(h, u) &=& \frac{|M|^{1/2}}{|M + 2 B|^{1/2}} \exp\bigl(-c + h^\top D (2M + 4B)^{-1} D h\bigr),\qquad h\in{\mathbb{R}}^d, u\in{\mathbb{R}}.\end{aligned}$$ Let $M^{-1/2}$ be a symmetric matrix with $M^{-1/2} M M^{-1/2}={\mathbf{1}}_{d\times d}$. Replacing on the right hand side $M^{-1/2} A M^{-1/2}$ by $\tilde A$, $M^{-1/2}z$ by $\tilde z$ and $M^{1/2} h$ by $\tilde h$ shows that $M$ causes nothing but a geometrical anisotropy effect. Hence, we may assume that $M$ is the identity matrix. Then $$C(h, u) = |{\mathbf{1}}_{d\times d} + 2 B|^{-1/2} \exp\bigl(-\bigl[c - \tfrac12h^\top D ({\mathbf{1}}_{d\times d} + 2B)^{-1} D h\bigr]\bigr)$$ which yields Equation (\[eq:fields\]). Proof of Equation (13) ---------------------- Let $h=x-y$ and $w = U-x$. 
Then we have $$\begin{aligned} {\operatorname{Cov}}(Z(x), Z(y)) &=& \uppi^{-d/2}|S_x|^{1/4}|S_y|^{1/4}{\mathbb{E}}_V V^{d/2}g(V, x)g(V, y) \\ && {}\times \int \exp\bigl(-V w^\top S_x w - V (w+h)^\top S_y(w+h)\\ && \hphantom{{}\times \int \exp\bigl(} {}- V \bigl( w^\top M w - (w+h)^\top M (w+h) + c \bigr)^2 \bigr){\,\mathrm{d}}w.\end{aligned}$$ The value of the integral is at most $\int\exp(-V w^\top S_x w) {\,\mathrm{d}}w$. Hence ${\operatorname{Cov}}(Z(x), Z(y))<\infty$ if ${\mathbb{E}}_V g(V, x)g(V, y) < \infty$. Now, $$\begin{aligned} &&w^\top S_x w + (w+h)^\top S_y(w+h) + \bigl(w^\top M w - (w+h)^\top M (w+h) +c\bigr)^2 \\ &&\quad= w^\top(S_x + S_y + 4 M h h^\top M)w + 2w^\top\bigl(S_y + 2 (h^\top M h -c) M\bigr) h + h^\top S_y h +(h^\top M h -c)^2 \\ &&\quad= (w - \mu)^\top A (w- \mu) -\mu^\top A \mu+ h^\top S_y h +(h^\top M h -c)^2\end{aligned}$$ with $\mu= -A^{-1}(S_y + 2 (h^\top M h -c) M)h$. That is, $$\begin{aligned} \label{eq:13A} {\operatorname{Cov}}(Z(x) ,Z(y)) &=& |A|^{-1/2}{ |S_x|^{1/4}|S_y|^{1/4}}{}{\mathbb{E}}_V g(V, x)g(V, y)\nonumber\\[-8pt]\\[-8pt] &&{}\times \mathrm{e}^{-V [ h^\top S_y h + (h^\top M h -c)^2 - \mu^\top A \mu]}.\nonumber\end{aligned}$$ On the other hand, using the transform $w=U-y$, we get $$\begin{aligned} \label{eq:13B} &&{\operatorname{Cov}}(Z(x), Z(y))\nonumber \\ &&\quad= \uppi^{-d/2}|S_x|^{1/4}|S_y|^{1/4}{\mathbb{E}}_V V^{d/2} g(V, x)g(V, y)\nonumber\\ &&\qquad {}\times\int \exp\bigl(-V(w-h)^\top S_x (w-h) - V w^\top S_y w\\ && \qquad\hphantom{{}\times\int \exp\bigl(} {}- V \bigl( (w-h)^\top M (w-h) - w^\top M w + c \bigr)^2 \bigr) {\,\mathrm{d}}w\nonumber \\&&\quad= |A|^{-1/2} |S_x|^{1/4}|S_y|^{1/4} {\mathbb{E}}_V g(V, x)g(V, y) \mathrm{e}^{-V [ h^\top S_x h + (h^\top M h +c)^2 - \nu^\top A \nu]}\nonumber\end{aligned}$$ with $\nu=A^{-1}(S_x + 2 (h^\top M h +c) M)h$. Choosing $V\equiv1$ and $g$ a constant function we obtain that the exponents in (\[eq:13A\]) and (\[eq:13B\]) must be equal, that is, $$\begin{aligned} &&h^\top S_y h + (h^\top M h -c)^2 - \mu^\top A \mu \\ &&\quad= \tfrac12 [ h^\top S_y h + (h^\top M h -c)^2 - \mu^\top A \mu + h^\top S_x h + (h^\top M h +c)^2 - \nu^\top A \nu] \\ &&\quad= \tfrac12 [h^\top(S_y + S_x + 4Mhh^\top M) h - 2 (h^\top M h)^2 + 2 c^2 - (\mu- \nu)^\top A (\mu- \nu) - 2 \nu^\top A \mu] \\ &&\quad= c^2- (h^\top M h)^2 - \nu^\top A \mu.\end{aligned}$$ Acknowledgements {#acknowledgements .unnumbered} ================ The author is grateful to Zakhar Kabluchko, Emilio Porcu and the referees for valuable suggestions and comments. Barndorff-Nielsen, O. (1979). Hyperbolic distributions and distributions on hyperbolae. *Scand. J. Statist.* **5** 151–157. Berg, C., Christensen, J.P.R. and Ressel, P. (1984). *Harmonic Analysis on Semigroups. Theory of Positive Definite and Related Functions*. New York: Springer. Berlinet, A. and Thomas-Agnan, C. (2004). *Reproducing Kernel [H]{}ilbert Spaces in Probability and Statistics*. Boston: Kluwer. Cox, D.R. and Isham, V.S. (1988). A simple spatial-temporal model of rainfall. *Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci.* **415** 317–328. Cressie, N.A.C. and Huang, H.-C. (1999). Classes of nonseparable, spatio-temporal stationary covariance functions. *J. Amer. Statist. Assoc.* **94** 1330–1340. Gel’fand, I.M. and Vilenkin, N.Y. (1964). *Generalized Functions: Applications of Harmonic Analysis*. New York: Academic Press. Gneiting, T. (1997). Normal scale mixtures and dual probability densities. *J. Stat. Comput. Simul.* **59** 375–384. Gneiting, T. (2002).
Nonseparable, stationary covariance functions for space-time data. *J. Amer. Statist. Assoc.* **97** 590–600. Gneiting, T. and Schlather, M. (2004). Stochastic models that separate fractal dimension and the [H]{}urst effect. *SIAM Rev.* **46** 269–282. Gneiting, T., Sasvári, Z. and Schlather, M. (2001). Analogies and correspondences between variograms and covariance functions. *Adv. in Appl. Probab.* **33** 617–630. Gneiting, T., Genton, M.G. and Guttorp, P. (2007). Geostatistical space-time models, stationarity, separability and full symmetry. In *Statistical Methods for Spatio-Temporal Systems* (B. Finkenstadt, L. Held, and V. Isham, eds.) chap. 4. Boca Raton: Chapman & Hall/CRC. Gradshteyn, I.S. and Ryzhik, I.M. (2000). *Table of Integrals, Series, and Products*, 6th ed. London: Academic Press. Gregori, P., Porcu, E., Mateu, J. and Sasvári, Z. (2008). On potentially negative space time covariances obtained as sum of products of marginal ones. *Ann. Inst. Statist. Math.* **60** 865–882. Kolovos, A., Christakos, G., Hristopulos, D.T. and Serre, M.L. (2004). Methods for generating non-separable spatiotemporal covariance models with potential environmental applications. *Adv. Water. Res.* **27** 815–830. Koralov, L.B. and Sinai, Y.G. (2007). *Theory of Probability and Random Processes*, 2nd ed. Berlin: Springer. Ma, C. (2002). Spatio-temporal covariance functions generated by mixtures. *Math. Geol.* **34** 965–975. Ma, C. (2003). Families of spatio-temporal stationary covariance models. *J. Stat. Plann. Inference* **116** 489–501. Ma, C. (2005a). Semiparametric spatio-temporal covariance models with the arma temporal margin. *Ann. Inst. Statist. Math.* **57** 221–233. Ma, C. (2005b). Spatio-temporal variograms and covariance models. *Adv. in Appl. Probab.* **37** 706–725. Ma, C. (2005c). Linear combinations of space-time covariance functions and variograms. *IEEE Trans. Signal Process.* **53** 857–864. Ma, C. (2008). Recent developments on the construction of spatio-temporal covariance models. *Stoch. Environ. Res. Risk. Asses.* **22** 39–47. Mateu, J., Porcu, E. and Gregori, P. (2008). Recent advances to model anisotropic space-time data. *Stat. Methods Appl.* **17** 209–223. Matheron, G. (1972). Leçon sur les fonctions aléatoire d’ordre 2. Technical Report C-53, Ecole des Mines de Paris.\ Available at <http://cg.ensmp.fr/bibliotheque/public/MATHERON_Cours_00302.pdf>. Paciorek, C. (2003). Nonstationary Gaussian processes for regression and spatial modelling. Ph.D. thesis, Carnegie Mellon Univ., Dept. Statistics. Available at [www.biostat.harvard.edu/$\sim$paciorek/diss/paciorek-thesis.pdf](http://www.biostat.harvard.edu/~paciorek/diss/paciorek-thesis.pdf). Porcu, E., Mateu, J. and Christakos, G. (2009). Quasi-arithmetic means of covariance functions with potential applications to space-time data. *J. Multivariate Anal.* **100** 1830–1844. Reisert, M. and Burkhardt, H. (2007). Learning equivariant functions with matrix valued kernels. *J. Mach. Learn. Res.* **8** 385–408. Schoenberg, I.J. (1938a). Metric spaces and completely monotone functions. *Ann. Math.* **39** 811–841. Schoenberg, I.J. (1938b). Metric spaces and positive definite functions. *Trans. Amer. Math. Soc.* **44** 522–536. Stein, M.L. (1999). *Interpolation of [S]{}patial [D]{}ata*. Heidelberg, New York: Springer. Stein, M.L. (2005a). Space-time covariance functions. *J. Amer. Statist. Assoc.* **100** 310–321. Stein, M.L. (2005b). Nonstationary spatial covariance functions. Technical report, Univ. Chicago. 
Available at [www.stat.uchicago.edu/cises/research/cises-tr21.pdf](http://www.stat.uchicago.edu/cises/research/cises-tr21.pdf). Stein, M.L. (2005c). Statistical methods for regular monitoring data. *J. Roy. Statist. Soc. Ser. B* **67** 667–687. Wackernagel, H. (2003). *Multivariate Geostatistics*, 3rd ed. Berlin, Heidelberg: Springer. Wendland, H. (2005). *Scattered Data Approximation*. Cambridge: Cambridge Univ. Press. Yaglom, A.M. (1987a). *Correlation Theory of Stationary and Related Random Functions I, Basic Results*. New York, Berlin: Springer. Yaglom, A.M. (1987b). *Correlation Theory of Stationary and Related Random Functions II, Supplementary Notes and References*. New York, Berlin: Springer.
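As a small numerical illustration of the translation-invariant special case in Example \[ex:single\], the sketch below tabulates the model $C(h) = W_\nu(\|h\|)\,|{\mathbf{1}}_{d\times d} + M h h^\top M|^{-1/2}$ (the case $c\equiv0$) on a handful of planar locations and checks that the resulting matrix is numerically positive semi-definite. The grid, the choices $M = \operatorname{diag}(0.5, 1)$ and $\nu = 1$, the standard normalization $W_\nu(r) = 2^{1-\nu}\Gamma(\nu)^{-1} r^\nu K_\nu(r)$, and the use of NumPy/SciPy are illustrative assumptions; by the matrix determinant lemma the determinant simplifies to $1 + \|Mh\|^2$.

```python
# Illustrative sketch only: the grid, M and nu are arbitrary choices, not taken from the text above.
import numpy as np
from scipy.special import kv, gamma

def whittle_matern(r, nu):
    """Whittle-Matern correlation W_nu(r) = 2^(1-nu)/Gamma(nu) * r^nu * K_nu(r), with W_nu(0) = 1."""
    r = np.asarray(r, dtype=float)
    out = np.ones_like(r)
    pos = r > 0
    out[pos] = 2.0 ** (1.0 - nu) / gamma(nu) * r[pos] ** nu * kv(nu, r[pos])
    return out

def cov(h, M, nu):
    """C(h) = W_nu(||h||) / |I + M h h^T M|^{1/2}; the determinant equals 1 + ||M h||^2."""
    h = np.asarray(h, dtype=float)
    w = whittle_matern(np.array([np.linalg.norm(h)]), nu)[0]
    return w / np.sqrt(1.0 + float(np.dot(M @ h, M @ h)))

M = np.diag([0.5, 1.0])   # a symmetric d x d matrix, as required in the example
nu = 1.0
pts = np.array([[x, y] for x in np.linspace(-2.0, 2.0, 5) for y in np.linspace(-2.0, 2.0, 5)])
C = np.array([[cov(p - q, M, nu) for q in pts] for p in pts])
print("smallest eigenvalue:", np.linalg.eigvalsh(C).min())   # non-negative up to rounding error
```

The check succeeds because $C(h)$ is the product of the two positive definite functions $W_\nu(\|h\|)$ and $(1+\|Mh\|^2)^{-1/2}$, so its restriction to any finite set of locations is a valid covariance matrix.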
--- abstract: | Previous clustering analysis of low-power radio AGN has indicated that they preferentially live in massive groups. The X-ray surveys of the COSMOS field have achieved a sensitivity at which these groups are directly detected out to $z=1.3$. Making use of Chandra-, XMM- and VLA-COSMOS surveys we identify radio AGN members ($10^{23.6}\lesssim \mathrm{L_{1.4GHz}/(W\,Hz^{-1})}\lesssim10^{25} $) of galaxy groups ($10^{13.2}\lesssim\mathrm{M_{200}/M_\odot}\lesssim10^{14.4}$; $0.1<z<1.3$) and study i) the radio AGN – X-ray group occupation statistics as a function of group mass, and ii) the distribution of radio AGN within the groups. We find that radio AGN are preferentially associated with galaxies close to the center ($<0.2\cdot r_{200}$). Compared to our control sample of group members matched in stellar mass and color to the radio AGN host galaxies, we find a significant enhancement of radio AGN activity associated with $10^{13.6}\lesssim\mathrm{M_{200}/M_\odot}\lesssim10^{14}$ halos. We present the first direct measurement of the halo occupation distribution (HOD) for radio AGN, based on the total mass function of galaxy groups hosting radio AGN. Our results suggest a possible deviation from the usually assumed power law HOD model. We also find an overall increase of the fraction of radio AGN in galaxy groups ($<1\cdot r_{200}$), relative to that in all environments. author: - | V. Smolčić$^{1,2,3}$, A. Finoguenov$^{4,5}$, G. Zamorani$^{6}$, E. Schinnerer$^{7}$,\ \ [$\mathrm{M.~Tanaka^{8,2},\, S.~Giodini^{9}, \, N.~Scoville^{10}}$]{}\ \ $^{a}$Based on observations with the National Radio Astronomy Observatory which is a facility of the National Science Foundation operated\ under cooperative agreement by Associated Universities, Inc.; the XMM-Newton, an ESA\ science mission with instruments and contributions directly funded by ESA Member States and NASA\ $^{1}$ESO ALMA COFUND Fellow\ $^{2}$European Southern Observatory, Karl-Schwarzschild-Strasse 2, D-85748 Garching b. München, Germany\ $^{3}$Argelander Institut for Astronomy, Auf dem Hügel 71, Bonn, D-53121, Germany\ $^{4}$Max-Planck-Institut für Extraterrestrische Physik, Giessenbachstraße, 85748 Garching, Germany\ $^{5}$University of Maryland, Baltimore County, 1000 Hilltop Circle, Baltimore, MD 21250, USA\ $^{6}$INAF - Osservatorio Astronomico di Bologna, via Ranzani 1, I-40127 Bologna, Italy\ $^{7}$Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany\ $^{8}$Institute for the Physics and Mathematics of the Universe, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa-shi, Chiba 277-8583, Japan\ $^{9}$Leiden Observatory, Leiden University, PO Box 9513, 2300 RA Leiden, the Netherlands\ $^{10}$California Institute of Technology, MC 105-24, 1200 East California Boulevard, Pasadena, CA 91125 title: 'On the occupation of X-ray selected galaxy groups by radio AGN since $z=1.3^a$' --- galaxies: active, – cosmology: observations – radio continuum: galaxies Introduction {#sec:introduction} ============ The general interest in radio AGN has increased in the last years as their energy outflows both, theoretically and observationally substantiate feedback processes highly relevant for massive galaxy formation, and galaxy cluster/group physics (e.g. Croton et al. 2006, Bower et al. 2006,  et al. 2009;  2009,  & Riechers 2011, Giodini et al. 2009, 2010). Studies of low radio power (predominantly low-excitation, i.e. 
with weak or absent emission lines in their optical spectra) AGN have shown that the probability of finding a radio AGN is a strong function of stellar mass of the host galaxy (e.g. Best et al. 2005,  et al. 2009). At high stellar masses ($M_*\sim10^{12}$ ) this probability approaches unity for galaxies out to $z=1.3$ ( et al. 2009). As high stellar mass galaxies usually reside in group/cluster environments (e.g. Mandelbaum et al. 2006; Leauthaud et al. 2010), it is still not entirely clear whether it is the host galaxy and/or environmental properties that determine the nature of radio AGN. Initially, Ledlow & Owen (1996) found no differences between the bivariate radio-optical luminosity function for radio AGN in the cluster and in the field, suggesting that cluster environment does not play a major role in radio AGN triggering. However, using a sample of radio AGN drawn from the NVSS (NRAO VLA Sky Survey; @condon98) and comparing the radio luminosity functions in (ROSAT selected X-ray) clusters and in the field Lin & Mohr (2007) found a factor of 6.8 higher probability of a galaxy being a radio AGN in the clusters than in the field ($z<0.0647$.). A detailed radio AGN clustering analysis at $z\sim0.55$ has been performed by Wake et al. (2008). Constructing cross(auto)-correlation functions of $0.4<z<0.8$ 2SLAQ Luminous Red Galaxies (LRGs), NVSS radio detected LRGs and a control sample matched in redshift, color and optical luminosity to the radio AGN they found that radio AGN are significantly more clustered than the control population of radio-quiet galaxies. Assuming various models for the halo occupation distribution, they find that radio AGN typically occupy more massive halos compared to the control sample galaxies, suggesting that the probability of finding a radio AGN in a massive galaxy (at $z\sim0.55$) is influenced by the halo mass and/or cluster environment. It is important to stress that both clustering and stacking of halos, used in the above analyzes (Wake et al. 2008, Mandelbaum et al. 2009) suffer from degeneracies towards the distribution of radio AGN in halos, either based on uncertainties of halo occupation of massive galaxies (van den Bosch et al. 2004) or degeneracies of HOD modelling (e.g. Miyaji et al. 2011). Thus, optimally one would directly study the HOD by identifying the radio AGN within the galaxy groups. This can be done by using a well defined statistically significant sample of radio AGN and X-ray selected galaxy groups and clusters. In this Letter we make use of such a sample in the COSMOS field in order to directly study the halo occupation of radio AGN. We adopt $H_0=71$, $\Omega_M=0.25$, $\Omega_\Lambda=0.75$. The samples {#sec:data} =========== The Cosmic Evolution Survey (COSMOS) is a panchromatic photometric and spectroscopic survey of 2 of the sky observed by the most advanced astronomical facilities (e.g. VLA, Chandra, XMM-Newton, Subaru, CFHT; see Scoville et al. 2007). It allows a robust source identification and extended selection of various source types. The statistics on both radio AGN and selection of halos based on their X-ray emission enables a construction of a statistically significant catalog of radio AGN in massive halos, as described below. Radio AGN and red galaxy control samples {#sec:radiodata} ---------------------------------------- We make use of the COSMOS radio AGN sample defined by @smo08 [see also @smo09a]. It consists of $\sim600$ low-power ( $\lesssim10^{25}$ ) radio AGN galaxies out to a redshift of $z=1.3$. 
The sample was generated by matching the 20 cm VLA-COSMOS Large Project sources (Schinnerer et al. 2007) with the UV-NIR COSMOS photometric catalog (Capak et al. 2007). The selection required optical counterparts with $i_\mathrm{AB}\leq26$, accurate photometry, and a $\mathrm{S/N}\geq5$ (i.e. $\gtrsim50~\mathrm{\mu}$Jy) detection at 20 cm. The radio AGN hosts were identified as red galaxies based on a rest-frame optical color ($P1$ color $\geq0.15$) that mimics the standard spectroscopic classification methods (see @smo06 [@smo08] for details). The rest-frame color method efficiently selects mostly type 2 AGN (such as LINERs and Seyferts), and absorption-line AGN (with no emission lines in the optical spectrum), while type 1 AGN (i.e. quasars, $\lesssim20\%$ of the total AGN sample) are not included (see @smo08 for detailed definitions). Furthermore, from the full COSMOS photometric redshift catalog [@ilbert09], using the same optical magnitude, redshift and color selection criteria as for their radio-AGN, @smo09a generated a control galaxy sample from which the radio AGN are drawn (“control sample” hereafter). Here we make use of both of these samples, and add two additional requirements. First, we use a luminosity limited sample of radio AGN ($23.6\leq\log{L_\mathrm{1.4GHz}/\mathrm{(W\,Hz^{-1})}}\lesssim25$; $\mathrm{M_i}\leq -22.5$) in order to avoid selection effects induced by limiting in flux rather than luminosity. Second, in order to disentangle the dependence of radio emission on the host stellar mass, we select a subsample of control galaxies ($\mathrm{M_i}\leq -22.5$) that i) are not detected at 20 cm, and ii) have a stellar mass distribution equivalent to that of our radio AGN. Given these two criteria we consider this sample to be “radio-quiet” and refer to it hereafter as ”stellar mass matched control sample”. Our criteria yield $217$ galaxies in the radio AGN sample, $\sim5300$ in the control sample and $618$ in the stellar mass matched control galaxy sample. The median redshifts of the radio and stellar mass matched control galaxy samples are 0.72 and 0.85, respectively. At stellar masses exceeding $10^{12}~\mathrm{M_\odot}$, we do not find galaxies without a radio AGN. X-ray sample of galaxy groups {#sec:xdata} ----------------------------- The COSMOS field has been subject of intensive X-ray observations by XMM-Newton (Hasinger et al. 2007; Cappelluti et al. 2009) and Chandra (Elvis et al. 2009). Following the method of Finoguenov et al. (2009) a detailed subtraction of point sources was performed, which allowed us to substantially reduce the contamination level of the catalog and to improve the localization of the center of extended X-ray emission. We use the complete set of XMM-Newton observations, described in detail in Cappelluti et al. (2008). We also added the data of the Chandra-COSMOS survey (Elvis et al. 2009), after the removal of point sources. Identification of sources using the improved COSMOS photometric redshift catalog [@ilbert09] is complete at all redshifts. Furthermore, the zCOSMOS-BRIGHT (Lilly et al. 2009) program provides spectroscopic identification for a dominant fraction (67%) of galaxies with $i_\mathrm{AB}\leq22.5$ and $z<1$. Most importantly, the mass-luminosity relation for the new sample of X-ray groups has been directly calibrated by Leauthaud et al. (2010) using weak lensing. 
If the systematic uncertainties of groups hosting radio AGN are consistent with those of the entire sample, then the systematic uncertainties are within 10% (Leauthaud et al. 2010). The X-ray group catalog contains 210 systems at redshifts below 1.3 with 130 groups having at least two spectroscopic redshifts that match the redshift of the red sequence. Matching radio AGN and control samples to X-ray galaxy groups {#sec:matching} ------------------------------------------------------------- We have cross-correlated the X-ray galaxy group catalog (see ) with the i) full control sample, ii) stellar mass matched control sample, and iii) radio AGN sample (see ). The correlation was done in projected 2D space as accurate redshifts are assigned for all samples. We perform the matching for each radio AGN/control galaxy by searching for the nearest cluster within a redshift slice centered at the galaxy’s redshift $z$. We take the half-width of the redshift slice to be $\Delta z = 0.0334\cdot(1+z)$, which corresponds to $\sim3$ standard deviations of the COSMOS photometric redshift distribution for $i<24$ (see Ilbert et al. 2009 for details). Prior to imposing a limit on the distance between the radio AGN/control galaxy and the galaxy group center, below we consider the surface number density of radio AGN inside groups. Results {#sec:results} ======= Surface density profile of radio AGN in galaxy groups {#sec:sd} ----------------------------------------------------- We derive the surface density profiles of the radio AGN and stellar mass matched control samples satisfying our selection criteria ($\log{L_\mathrm{1.4GHz}}\geq23.6 \mathrm{[W/Hz]}$; $M_i\leq-22.5$; $P1\geq0.15$). The profiles have been computed as distance to the X-ray center in units of $r_{200}$[^1] averaging over all the X-ray galaxy groups in our sample (with photometrically masked regions taken into account). In  we show the surface density profiles for our radio AGN and stellar mass matched control sample galaxies, scaled by a factor of $2.85$ downward to account for the larger sample of the second relative to the first. There is a clear enhancement of occurrence of radio AGN at group centers ($<0.2\cdot r_{200}$). This is consistent with results in the local universe ($z<0.0647$; @lin07), and will be further discussed in . \[fig:sd\] Stellar mass function of radio AGN in galaxy groups: Effect of group environment on radio AGN triggering -------------------------------------------------------------------------------------------------------- In the upper panel of  we show the stellar mass function computed using the $1/\mathrm{V_{max}}$ method (see  et al. 2009 for details) of both radio AGN and control galaxies (not matched in stellar mass) that occupy all environments and galaxy groups ($\leq1\cdot \mathrm{r_{200}}$). It is noteworthy that in the highest mass bin ($M_*\sim 10^{12}$ ) essentially all red galaxies are radio AGN that tend to occupy galaxy groups. Dividing the stellar mass function of radio AGN by that of the control galaxy sample yields the volume corrected fraction of radio AGN relative to red host galaxies in all environments and in groups. This is shown in the bottom panel of . These fractions can be considered as probabilities that a massive red galaxy is a radio AGN in all and group ($\leq1\cdot \mathrm{r_{200}}$) environments, respectively. From  it is obvious that the fraction of radio AGN is enhanced in galaxy groups. 
In the highest mass bin ($10^{12.3}$ ) the fraction of radio AGN is consistent with 1 irrespective of environment. At lower stellar masses ($10^{11.8}$ and $10^{12.3}$ ) the enhancement of the fraction of radio AGN in groups is a factor of $\sim1.7$ and $1.6$ respectively. These results imply that the triggering of radio AGN is directly linked to group environment. Radio AGN in galaxy groups: Halo mass function and halo occupation distribution ------------------------------------------------------------------------------- In  we show the total mass function of all galaxy groups, and those hosting a radio AGN or a stellar mass matched control sample galaxy within $1\cdot r_{200}$. Since the radio sample is volume-limited, in calculating the mass function we used the volume calculation ($1/\mathrm{V_{max}}$) for the X-ray group sample. The shape of the total mass function of groups hosting a massive radio-quiet (i.e. stellar mass matched control sample) galaxy within $1\cdot r_{200}$ differs from that of all groups. Massive radio-quiet galaxies tend to occupy higher mass groups with increased frequency. On the other hand, groups hosting a radio AGN galaxy ( $\geq10^{23.6}$ ) show a very different distribution with a preference for high-mass halos ($\log(\mathrm{M_{tot}/M_\odot})\gtrsim13.5$), in particular in the range of $\log(\mathrm{M_{tot}/M_\odot})\sim13.5-14.1$, compared to the halo mass function of radio-quiet hosts. This enhancement in number density (relative to a stellar-mass matched sample) shows that the radio-AGN phenomenon (at least in the given $\mathrm{M_{200}}$ range) must be governed by a factor other than stellar mass. Our data-set allows us to generate the halo occupation distribution (HOD), which describes the occupancy of dark matter halos by galaxies, directly. We simply need to divide the mass function of X-ray groups hosting radio AGN within $1\cdot r_{200}$ by that of all X-ray groups (see ). As shown in , the HOD of radio AGN ($\log{\mathrm{L_\mathrm{1.4GHz}}/\mathrm{(W\,Hz^{-1})}}\geq23.6$) within $1\cdot r_{200}$ increases with total halo mass (embedded within $r_\mathrm{200}$). It is conventional to fit the HOD with a power law. The best fit to our data assuming an $\alpha ({M \over 10^{13.6} M_\odot})^\beta$ model yields $\alpha=0.16 (0.13-0.19)$ and $\beta=0.90 (0.65-1.14)$, where the 68% confidence interval is given in brackets. The increase of the HOD with halo mass implies that the occurrence of radio AGN in higher mass halos is more frequent, compared to that in less massive ones. Such a result is consistent with that found by @wake08. @wake08 have used a sample of 2SLAQ Luminous Red Galaxies (LRGs) at redshifts 0.4-0.8, combined with FIRST and NVSS 20 cm radio data to study the clustering properties of the i) full LRG sample, ii) radio AGN with $\log{L_\mathrm{1.4GHz}}\geq24.2 \mathrm{[W/Hz]}$, and iii) radio-quiet LRGs matched in color, redshift, and optical luminosity to their radio-detected AGN. They fitted halo models to the constructed auto- (full LRG sample) and cross-correlation (radio-detected and matched LRGs) functions which allowed them to generate the radio AGN HOD. In  we compare the HOD of @wake08, scaling their results to our mass definition and the value of the Hubble constant (Leauthaud et al. 2010). The black (full and dashed) lines in  show their HOD modelling obtained from the best fit halo model to their control sample clustering. 
In their fitting the radio AGN fraction was left as a free parameter (assuming a power-law shape) to match the clustering and space density of radio detected LRGs. The red curve also shown in  was constructed by Wake et al. in a different way, by simply dividing their halo fits to the radio AGN and control sample data limited in luminosity. The parameters of our HOD fit (assuming $\alpha ({M \over 10^{13.6} M_\odot})^\beta$, and using only radio AGN more luminous than $\sim1.6\times10^{24}$ ) are $\alpha=0.09 (0.06-0.11)$ and $\beta=0.86 (0.45-1.22)$, in agreement with those derived by Wake et al. (2008). The agreement in the HODs suggests no significant redshift evolution between 0.55 and 0.73 (corresponding to the median redshifts of the samples analyzed in Wake et al. and here, respectively). Furthermore, the deviant point at $\mathrm{M_{200}}\sim10^{14}$  suggests a non-power law form of the HOD as supported by the (red) curve in (adopted from Wake et al. 2008). With a lower limiting radio power ($\sim4\times10^{23}$ ) we find that the HOD overall increases by a factor of 2-2.5, with a comparable slope to that given above. Summary and discussion {#sec:discussion} ====================== Using well defined samples of low-power radio AGN (which are mainly hosted by massive early type galaxies; see e.g.  et al. 2008, 2009) and X-ray groups in the COSMOS field at redshifts $z\leq1.3$, we analyzed the occupation of galaxy groups ($10^{13.2}\lesssim\mathrm{M_{200}/M_\odot}\lesssim10^{14.4}$) by radio AGN ($10^{23.6}\lesssim \mathrm{L_{1.4GHz}/(W\,Hz^{-1})}\lesssim10^{25} $), as well as the effect of group environment on radio AGN. In order to investigate the latter, it is essential to separate the stellar mass and clustering effects. The blending of these two arises from the fact that i) the probability of a massive early type galaxy to host a radio AGN strongly rises with stellar mass (e.g. Best et al. 2005;  et al. 2009), and ii) massive red galaxies preferentially reside in clustered environments (e.g. Mandelbaum et al. 2006). To overcome this bias, we have constructed a control sample of radio-quiet galaxies, selected in the same way as our radio AGN, with a stellar mass distribution matched to that of our radio AGN. Thus, any differences in the derived distributions of these samples should arise due to effects other than stellar mass. Generating surface density profiles for our radio AGN and radio-quiet stellar-mass matched control galaxies as a function of $r_{200}$ (see ) we find a strong enhancement of radio AGN in group centers ($<0.2\cdot r_{200}$). This is consistent with the results in the local universe ($z<0.0647$; @lin07). As our stellar mass matched control sample bypasses the stellar mass/clustering bias (contrary to that in @lin07), our results directly link group environment to radio-AGN triggering. The effect of groups on radio AGN is also clearly demonstrated in , where we have shown that the fraction of radio AGN relative to red control galaxies is overall enhanced in galaxy groups. Using the halo mass functions (derived from our X-ray data; see ) we generated the halo occupation distribution of radio AGN (see ). There is a very good agreement in the HOD results between our survey and Wake et al. (2008;  $\gtrsim1.6\times10^{24}$ ), despite very different calculations. Generating auto/cross- correlation functions Wake et al. 
(2008) estimated the mass of the halos based on the observed bias (the discrepancy between the distribution of galaxies and the underlying dark matter). In our analysis the mass of the halo has been measured directly via weak lensing. Unlike the bias-mass relation, weak lensing masses do not depend on the assumed growth of structure. Moreover, our results allow us to break the degeneracy of HOD models and suggest a complex (non-power law) shape of the occupation distribution for radio AGN. As our radio data are deeper than those used in @wake08 we have studied the HOD of radio AGN to lower radio powers ( $\gtrsim4\times10^{23}$ , which corresponds to a volume limited sample out to $z=1.3$). With such a cut the HOD of radio AGN overall increases by a factor of 2-2.5. The HOD can be interpreted as the probability that a massive red galaxy within $1\cdot r_{200}$ is a radio AGN. This probability can then be viewed as the fraction of time a massive galaxy spends as a radio AGN. Thus, it gives insights into the radio AGN duty cycle. Assuming that the red galaxy population was created at $z\sim2-3$ (i.e. 11 Gyr ago), for  $\gtrsim4\times10^{23}$  this yields an average time a massive red galaxy in a galaxy group spends as a radio AGN of about 0.7 Gyr at $M_{200}\sim2\times10^{13}$  to $\sim4$ Gyr at the high mass end ($M_{200}\sim10^{14}$ ). If we restrict the elapsed cosmic time to the observed redshift range ($0<z<1.3$) we obtain an average time scale of $\sim0.6-3.5$ Gyr. If radio AGN triggering is caused by fueling of the supermassive black hole via cooling of the large-scale hot gas [@churazov02], then the increase of the HOD with halo mass implies an overall higher fueling efficiency in high mass halos. In summary, our results show that group environment enhances (on average by a factor of $\sim2$) the probability of a red massive galaxy being a radio AGN, and that radio AGN occupy higher mass halos with increased frequency. Acknowledgments {#acknowledgments .unnumbered} =============== This research was funded by the European Union’s Seventh Framework programme under grant agreement 229517 and contract PRIN-INAF 2007. The XMM–Newton project is supported by the Bundesministerium fuer Wirtschaft und Technologie/Deutsches Zentrum fuer Luft- und Raumfahrt (BMWI/DLR, FKZ 50 OX 0001) and the Max-Planck Society. Best, P. N., Kauffmann, G., Heckman, T. M., Brinchmann, J., Charlot, S., Ivezi[ć]{}, [Ž]{}., & White, S. D. M. 2005, MNRAS, 362, 25 Bower, R. G., Benson, A. J., Malbon, R., Helly, J. C., Frenk, C. S., Baugh, C. M., Cole, S., Lacey, C. G. 2006, MNRAS, 370, 645 Capak, P., et al. 2007, ApJS, 172, 99 Cappelluti, N., et al. 2009, A&A, 497, 635 Churazov E., Sunyaev R., Forman W., B[ö]{}hringer H., 2002, MNRAS, 332, 729 Condon, J. J., Cotton, W. D., Greisen, E. W., Yin, Q. F., Perley, R. A., Taylor, G. B., & Broderick, J. J. 1998, AJ, 115, 1693 Croton, D. J., et al.  2006, MNRAS, 365, 11 Elvis, M., et al. 2009, ApJS, 184, 158 Finoguenov, A., et al. 2009, ApJ, 704, 564 Giodini, S., et al.  2009, ApJ, 703, 982 Giodini, S., et al.  2010, ApJ, 714, 218 Hasinger, G., et al.  2007, ApJS, 172, 29 Ilbert, O., et al.  2009, ApJ, 690, 1236 Leauthaud, A., et al. 2010, ApJ, 709, 97 Ledlow, M. J., & Owen, F. N. 1996, AJ, 112, 9 Lilly, S. J., et al.  2009, ApJS, 184, 218 Lin, Y.-T., & Mohr, J. J. 2007, ApJS, 170, 71 Mandelbaum, R., Seljak, U., Kauffmann, G., Hirata, C. M., & Brinkmann, J. 2006, MNRAS, 368, 715 Mandelbaum, R., Li, C., Kauffmann, G., & White, S. D. M.
2009, MNRAS, 393, 377 Miyaji, T., Krumpe, M., Coil, A. L., & Aceves, H. 2011, ApJ, 726, 83 Schinnerer, E., et al. 2007, ApJS, 172, 46 Scoville, N., et al.  2007, ApJS, 172, 1 Smol[č]{}i[ć]{}, V., et al. 2006, MNRAS, 371, 121 Smol[č]{}i[ć]{}, V., et al. 2008, ApJS, 177, 14 Smol[č]{}i[ć]{}, V., et al. 2009, ApJ, 696, 24 Smol[č]{}i[ć]{}, V.  2009, ApJL, 699, L43 Smol[č]{}i[ć]{}, V., & Riechers, D. A. 2011, ApJ, 730, 64 van den Bosch, F. C., Norberg, P., Mo, H. J., & Yang, X. 2004, MNRAS, 352, 1302 Wake, D. A., Croom, S. M., Sadler, E. M., & Johnston, H. M. 2008, MNRAS, 391, 1674 [^1]: $r_{200}$ is defined as the radius within which the average density equals 200 times the critical density. Accordingly, $\mathrm{M_{200}}$ is the total mass embedded within $r_{200}$. The values for the COSMOS sample are taken from Finoguenov et al. (2009).
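As a rough numerical companion to the duty-cycle argument in the Summary, the sketch below evaluates the quoted power-law HOD fit, $N(M)=\alpha\,(M/10^{13.6}\,\mathrm{M_\odot})^{\beta}$ with $\alpha=0.16$ and $\beta=0.90$ (the $\log{L_\mathrm{1.4GHz}}\geq23.6$ sample), and multiplies it by an assumed elapsed time of 11 Gyr. The two sample halo masses and the use of the fit (rather than the measured HOD points) are illustrative assumptions, which is why the low-mass value comes out slightly above the $\approx0.7$ Gyr quoted in the text.

```python
# Illustrative only: evaluates the power-law HOD fit quoted above with an assumed 11 Gyr time span.
alpha, beta = 0.16, 0.90       # best-fit HOD parameters for the log L_1.4GHz >= 23.6 sample
t_elapsed_gyr = 11.0           # red galaxy population assumed to be in place since z ~ 2-3

def hod(m200_msun):
    """Mean number of radio AGN per group within r_200: N(M) = alpha * (M / 10^13.6 Msun)^beta."""
    return alpha * (m200_msun / 10 ** 13.6) ** beta

for m200 in (2e13, 1e14):
    n = hod(m200)              # read as the fraction of time a massive red group member is a radio AGN
    print(f"M200 = {m200:.0e} Msun:  N = {n:.2f},  duty cycle ~ {n * t_elapsed_gyr:.1f} Gyr")
# Prints roughly N = 0.09 (about 1 Gyr) at 2e13 Msun and N = 0.37 (about 4 Gyr) at 1e14 Msun.
```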
--- abstract: 'The propagation of light through a Universe of (a) isothermal mass spheres amidst (b) a homogeneous matter component, is considered. We demonstrate by an analytical proof that as long as a small light bundle passes [*through*]{} sufficient number of (a) at various impact parameters - a criterion of great importance - its average convergence will exactly compensate the divergence within (b). The net effect on the light is statistically the same as if all the matter in (a) is ‘fully homogenized’. When applying the above ideas towards understanding the angular size of the primary acoustic peaks of the microwave background, however, caution is needed. The reason is that most (by mass) of (a) are in galaxies - their full mass profiles are not sampled by passing light - at least the inner 20 kpc regions of these systems are missed by the majority of rays, while the rest of the rays would map back to unresolvable but magnified, randomly located spots to compensate for the loss in angular size. Therefore, a scanning pair of WMAP beams finds most frequently that the largest temperature difference occurs when each beam is placed at diametrically opposite points of the Dyer-Roeder collapsed sections. This is the [*mode*]{} magnification, which corresponds to the acoustic [*peaks*]{}, and is less than the mean (or the homogeneous pre-clumping angular size). Since space was seen to be Euclidean without taking the said adjustment into account, the true density of the Universe should be supercritical. Our analysis gives $\Omega_m =$ 0.278 $\pm$ 0.040 and $\Omega_{\Lambda} =$ 0.782 $\pm$ 0.040.' author: - 'Richard Lieu and Jonathan P.D. Mittaz' title: 'Are the WMAP angular magnification measurements consistent with an inhomogeneous critical density Universe?' --- [**1. Introduction**]{} The propagation of light through the inhomogeneous near Universe is an intriguing phenomeon, especially from the viewpoint of the cosmic microwave background (CMB), because the subject is sufficiently unfathomable that a large number of papers appeared in the literature (triggered by the tautological ‘flux conservation’ argument of Weinberg 1976). Some of the earlier works were cited in section 9.2 of the review of Bartelmann & Schneider (2001). The recent controversy persists over whether, in a critical density Universe, the convergence of rays by mass concentrations is balanced by the divergence in between, so that the average behavior of the light continues to reflect zero curvature space (see e.g. Holz & Wald 1998, Claudel 2000, and Rose 2001). The observational status of the near Universe is that it comprises a smooth component which harbors $\sim$ 35 % (Fukugita 2003; Fukugita, Hogan, & Peebles 1998) of the $\Omega_m =$ 0.27 total matter density (Bennett et al 2003), plus mass clumps with (to zeroth order) a limited isothermal sphere density profile $\rho \propto 1/r^2$ for $r \leq$ some cutoff radius $R$, which are the galaxies, groups, and clusters. In the present work we demonstrate that the problem concerning the mean convergence of light in a Universe of isothermal spheres placed within an otherwise homogeneous space can be solved analytically. [**2. Cross section evolution of a light bundle from the Sach’s optical equations**]{} Let us express the normalized matter density parameter for the near Universe as $\Omega_m=\Omega_h+\Omega_g$, where $\Omega_h$ represents the homogeneous component and $\Omega_g$ an ensemble of uniformly but randomly placed isothermal spheres. 
The framework of our treatment is the Friedmann-Robertson-Walker (FRW) space-time as shaped by the homogeneous component of the matter distribution. Upon this metric we envision a null geodesic directed along the backward light cone from the spatial origin at the observer, whose clock keeps the present time. The standard solution of the geodesic equation (Eq. (14.40), Peebles 1993) gives $$\dot t=-\frac{a_0}{a}=-(1+z),$$ where the dot derivative is w.r.t. the affine parameter $\lambda$, $a(t)$ is the (Hubble) expansion parameter at world time $t$, and the initial condition $\dot t=-1$ at $z=0$ was applied along with the notational shorthand $c=1$. Now consider small excursions of the actual light path from a given radial null geodesic. It is convenient to introduce transverse coordinates $\vec l=(l_\alpha)_{\alpha=1,2}$, such that $$d\vec l^2=dl_\alpha dl^\alpha=a^2r^2 (d\vartheta^2+\sin^2\vartheta\,d\varphi^2).$$ Take in particular a pair of backward null geodesics starting from the origin at the present time. Let their separation at affine distance $\lambda$ be $\delta\vec l(\lambda)$. The rate of change of this separation is governed indirectly by the Sachs scalar optical equations, which in the (reasonable) limit of vanishing Weyl tensor read: $$\begin{aligned} \dot\theta&=&-\theta^2-\frac{1}{2}w^2- \frac{1}{2}R_{\mu\nu}u^\mu u^\nu,\\ \dot w_{\alpha\beta}&=&-2\theta w_{\alpha\beta}. \end{aligned}$$ where $$u^\mu=\dot x^\mu=(\dot t,0,0,\dot r),$$ and $w^2=w_{\alpha\beta}w^{\alpha\beta}$. Here $\theta$ is the expansion and $w_{\alpha\beta}$ is the shear (the latter is a symmetric and traceless tensor) - they are quantities which determine the evolution of $\delta\vec l(\lambda)$ via the null Raychaudhuri equation $$\delta \dot l_\alpha=\theta\delta l_\alpha +w_{\alpha\beta}\delta l_\beta.$$ In Eq. (3) the Ricci tensor $R_{\mu \nu}$ is obtained from the Einstein’s field equations with the stress energy tensor $T_{\mu \nu}$ having $T_{00} = \rho$ as the only non-vanishing entry. The solution, $R_{00} = R_{33} = 4 \pi G \rho$, is well known, and may be coupled with Eqs. (5) and (1) to yield an expression for the Ricci focussing source term as $R_{\mu\nu}u^\mu u^\nu=8\pi G\rho(1+z)^2$. Hence Eq. (3) may be written as $$\dot\theta=-\theta^2-\frac{1}{2}w^2- \frac{3}{2}H_0^2\Omega_h(1+z)^5$$ For the present purpose it is only necessary to work with the two scalar variables $\theta$ and $w^2$, the evolution of the latter is according to the equation $$\frac{d}{d\lambda}(w^2)=-4\theta w^2. \label{wdot}$$ which is obtainable from Eq. (4). In the next step, we suppose that light from a source at affine distance $\lambda_s$ passes through one single isothermal sphere of mass $M$, centered at $\lambda_l$, and with an impact parameter $b$ (a physical distance measured at the lensing epoch). The angle of deflection $\psi(b)$ is given by: $$\psi(b)=\frac{4GM}{R}\left[\arccos\left(\frac{b}{R}\right)+ \frac{R-\sqrt{R^2-b^2}}{b}\right]\qquad(b \leq R),$$ and $$\psi(b)=\frac{4GM}{b}\qquad(b>R).$$ From Eq. (6) we see that the changes in $\theta$ and $w_{\alpha\beta}$ due to the presence of the lensing mass are of the form: $$\begin{aligned} \delta(\theta+w_{\rho\rho})&=&-(1+z)\frac{d\psi(b)}{db},\\ \delta(\theta+w_{\phi\phi})&=&-(1+z)\frac{\psi(b)}{b}, \end{aligned}$$ where $z$ is the redshift for the epoch of interaction. The factor of $(1+z)$ arises because of the relation between $\delta\lambda$ and the proper distance. 
It follows that $$\delta\theta = -\frac{1+z}{2} \left[\frac{\psi(b)}{b}+\frac{d\psi(b)}{db}\right].$$ Substituting Eqs. (9) and (10) into Eq. (13), we obtain $$\delta \theta = 0~~{\rm for}~b > R;~~ \delta\theta=-\frac{(1+z)}{2}\frac{4GM}{Rb}\arccos\left(\frac{b}{R}\right)~~ {\rm for}~ b \leq R.$$ For the shear $w^2$, the calculations are more complicated. Yet the quantity is easily shown to assume importance only in the strong lensing limit, i.e. in the present context we can ignore it. We have to compute the average effect of all the mass inhomogeneities. The number density of the isothermal spheres (neglecting evolution and assuming uniform distribution in [*total*]{} density FRW space) is $$n=n_0(1+z)^3,\qquad n_0=\frac{3H_0^2\Omega_g}{8\pi GM}.$$ The probability of finding a clump with center at the position $(\lambda, b)$ to within small ranges $d\lambda,db$ is $$P(\lambda,b)\,d\lambda\,db =2\pi n_0(1+z)^4\,d\lambda\,b\,db.$$ Since the expansion is [*additive*]{}, the globally averaged change of $\theta$ with $\lambda$ is, from Eqs. (14) and (16), $$\left\langle\frac{d\theta}{d\lambda}\right\rangle_g = -\frac{3H_0^2\Omega_g}{4GM}(1+z)^5 \int_{b_{\mathrm {min}}}^R \frac{2GM}{Rb}\arccos\left(\frac{b}{R}\right)\,b\,db = -\frac{3}{2}H_0^2\Omega_g(1+z)^5,$$ where at the last step the integral was evaluated with $b_{\mathrm {min}} \ll R$ in mind (the factors of $b$ cancel and $\int_0^R\arccos(b/R)\,db=R$, so the integral equals $2GM$). Putting together the effects of both smooth and clumped matter, we find for $\theta$, from Eqs. (7) and (17), the equation $$\langle\dot\theta\rangle=-\theta^2-\frac{1}{2}w^2 -\frac{3}{2}H_0^2\Omega_h(1+z)^5 -\frac{3}{2}H_0^2\Omega_g(1+z)^5.$$ The contribution from the $w^2$ (shear) term may be estimated by noting that, from Eqs. (11) and (12), $$\delta(w^2) = \frac{(1+z)^2}{2}\left[ \frac{\psi(b)}{b}-\frac{d\psi(b)}{db}\right]^2.$$ Hence, using Eqs. (9), (10) and the fact that, because of the random orientation of $\vec{l}$, $w^2$ is additive, we arrive after integration w.r.t. $\lambda$ at $$\langle w^2\rangle \sim H_0^2 \Omega_g \frac{GMx_s}{R^2},$$ where $x_s$ is the Euclidean distance to the source as measured at $z=0$. When compared with the last term of Eq. (18), however, we see that the shear from the mass concentrations remains relatively unimportant until $GMx_s/R^2 \geq$ 1, i.e. violation of the weak lensing criterion (also the value of $w^2$ for homogeneous matter is zero). As long as the lensing is weak, then, one may ignore the second term on the right side of Eq. (18). The outcome is that the expansion of a light bundle depends only on the total matter content, and [*not*]{} on the degree of homogeneity of space. If space is Euclidean the solution of Eq. (18) is $$\theta = -(1+z)H(z) + \frac{(1+z)^2}{a_0 r} = \frac{1}{ar} \frac{d(ar)} {d\lambda}.$$ This is in full agreement with the expected value of the angular diameter distance at zero curvature, viz. $a(t) r$. The chief conclusion of this section may also be obtained by first directly integrating each percentage angular magnification $\eta = \psi (x_s-x)x/[2(1+z)x_s b]$ over $dP = 2\pi n_0 [1+z(x)]^2 bdb dx$, the latter because of randomly located lenses in an inhomogeneous critical density Universe. Here $x$ and $x_s$ are respectively critical density FRW physical distances at the present epoch (the same meaning as $a(t)r$ in the line element of Eq. (2) with $t=t_0$), to a lens and the source.
We find $$\langle\eta\rangle = \int \eta dP = \frac{3}{2} \Omega_g H_0^2 \int_0^{x_f}dx\, [1+z(x)]\frac{(x_s-x)x}{x_s},$$ where $x_f$ is the distance to the furthest lens, beyond which space is smooth. Then, it has been shown (Lieu & Mittaz 2005) that, irrespective of the lensing strength, $\langle\eta\rangle$ [*is exactly equal to the demagnification due to the Dyer-Roeder (DR) beam divergence*]{} in between these encounters, Dyer & Roeder (1972). This method of proof, though no less valid, is not as elegant in that the two counteracting effects have to be calculated separately, and subtracted from each other afterwards. [**3. Interpretation of the results, flux conservation; comparison with observations**]{} We must now understand what the result of section 2 means. In particular, we need to know when the integration over a cylindrical probability element like that of Eq. (16) corresponds to observational reality. Among the isothermal spheres of different scales, viz. galaxies, groups, and clusters, the first encompasses by far the lion’s share of the matter budget at low $z$, with $\approx$ 50 % of the entire matter content clumped into this kind of large scale structures, i.e. $$\Omega_g \approx \frac{\Omega_m}{2}~{\rm for~galaxies}$$ (Fukugita 2003, and Fukugita, Hogan, & Peebles 1998). Thus, in order to secure the precarious balance between beam convergence and divergence, a light signal must pass through sufficient numbers of galaxies - sampling the full range of impact parameters - larger systems like clusters have too small an associated $\Omega$ to play a significant role. From the observed density of galaxies $$n_0 = 0.17h^{-3}~{\rm Mpc}^{-3} = 0.06 ~{\rm Mpc}^{-3}~{\rm for}~h=0.71$$ (Ramella et al 1999) one estimates that throughout the 3 Gpc distance between $z=0$ and $z=1$ a typical light ray is within $\approx$ 40 kpc from only one galaxy. If these isothermal spheres cutoff at $R \approx$ 20 kpc (which implies a circular velocity $\sim$ 250 km s$^{-1}$ using the value of $M$ deduced from Eqs. (22) and (23)), then for a separation $\Delta \geq$ 40 kpc the only ‘clump’ contribution to the beam expansion will come from the shear term $w^2$ of Eq. (19) with $R$ replaced by $\Delta \approx$ 40 kpc. This calls for an insignificant correction to $\dot{\theta}$. The conclusion is that despite the euphoria arising from Eqs. (18) and (20), most light rays experience to lowest order only the gravity of homogeneous matter. What are the ramifications? Specifically what will the appearance of features be on a large scale? Consider a sequence of small and contiguous emission pixels on the outlining contour of an emitting source. If the ray bundles connecting them and the observer miss the clumps, their expansion will evolve according to Eq. (18) without the last term and with a negligible second term. This is precisely the ‘partially loaded’ DR beam. It implies demagnification of the pixels in question. By Liouville theorem, the pixels remain adjoint, so to prevent the entire segment from shrinking they must be tangentially sheared and pushed back outwards by the clumps within. In other words, when the bulk of a [*randomly*]{} located source boundary is shrunk, it can be restored to original shape only if the enclosed foreground matter acts as a systematic gravitational lens. Yet this is clearly an absurd scenario. In fact, given that the clumps are uniformly distributed on either side of any boundary ray, there is no preferential deflection of the ray, i.e. 
concerning most of the boundary which is demagnified by the DR beam, the pixels involved should not on average be mobilized radially inwards or outwards by shear - this is consistent with the smallness of the $w^2$ term. The situation is quite unlike a more homogeneous Universe where each ray passes through enough representative matter and (from section 2) all pixels are magnified, thereby enlarging the boundary without any center of radial migration. If under the ‘Poisson regime’ of clump distribution large sections of the main boundary of an extended source shorten without distortion how may this be reconciled with the expected source flux? Since lensing conserves surface brightness, a smaller source means less detected flux, yet from section 2 we saw that on balance the effects of lensing and the DR beam cancel, i.e. the flux (or source size) should be unchanged by clumping. The answer comes from that minor fraction of the boundary rays which [*do*]{} go through clumps. From the figures given in and after Eq. (22), we found that this is $\approx$ 25 %. The segments involved here are substantially enlarged, leading to bulges on randomly located portions of the boundary, such that the perimeter now acquires sufficient total length to enclose a re-magnified area. We apply the above development to the CMB observations, which have conventionally been modeled in terms of a critical density FRW Universe, even though during the ‘last leg’ of the light propagation, the matter at low redshift is anything but smooth. In a more accurate picture, we assume that within $z=1$ some of the matter is clumped into galaxies, which have properties as given in Eqs. (15), (22) and (23), since galaxies exhibit no evidence for significant evolution up to this redshift (Ofek, Rix, & Maoz 2003). At earlier epochs, the effect of mass clumping is completely ignored, i.e. the Universe is treated as homogeneous, and the possibility of masses missed by the propagating light above $z=1$ will not be taken into account. This understates the outcome of our analysis, which is: the angular scale of temperature variation like the CMB primary acoustic peaks (hereafter PAPs in short) must in the circumstance demagnify by the percentage expected from a ‘half-loaded’ DR beam between $z=0$ and $z=1$. From the reasoning at the end of section 2, we see that the required quantity is $\langle\eta\rangle$ of Eq. (21) with $\Omega_g = \Omega_m/2$, $x_f =$ 3.3 Gpc ($z_f =$ 1), and $x_s =$ 14.02 Gpc. The percentage of shrinking is then $\langle\eta\rangle =$ 10 %. When a pair of WMAP TT cross correlation beams surveys the CMB sky to measure temperature differences at some beam separation, it most frequently finds that the largest temperature difference occurs when each beam is placed at diametrically opposite points of the DR collapsed contour sections. This is the [*mode*]{} magnification, and corresponds directly to the acoustic [*peaks*]{}. Some of the time, however, the maximum temperature difference is seen at larger angular separations, when e.g. one beam is on a point of the DR demagnified section while the other is on a bulge (or lensed section). This makes the distribution skewed. As a result, the mean magnification remains at the pre-clumping value. Yet the mean is not relevant, for it is the peak position that determines the total density of the Universe. Although each beam cannot resolve the lensed and unlensed sections of the contour (e.g. 
the former is $\approx$ 0.1 arcmin in size for a galaxy lens of $R \approx$ 20 kpc, while the beam width is $\sim$ 30 arcmin for WMAP), that does not change the conclusion. All it means is that the skewed distribution becomes blurred after convolution with the beam, the mode stays at its DR demagnified position. We determined, as is illustrated in Figure 1, the CMB parameters required to match the data from WMAP’s TT cross correlation power spectrum, after including the effect of a 10% systematic shift (towards smaller sizes relative to the pre-clumping value) in the spherical harmonic number of all structures within the harmonic range of the PAPs. Since the observed angular size is Euclidean, one now expects the best fit total density to be supercritical. They are found at $\Omega_m =$ 0.278 $\pm$ 0.040 and $\Omega_{\Lambda} =$ 0.782 $\pm$ 0.040, i.e. $\Omega =$ 1.06. Given our estimate of the bias by which the PAP positions represent the true mean density of the Universe, we also fitted the WMAP data with a smaller bias, $\langle\eta\rangle =$ 5 %. The figures so obtained are $\Omega_m =$ 0.275 $\pm$ 0.040 and $\Omega_{\Lambda} =$ 0.755 $\pm$ 0.040, i.e. $\Omega =$ 1.03. The other, perhaps even more interesting, point is that the skewness of the angular size distribution induced by galaxy lensing, which separates the peak from the mean at this $\approx$ 10 % level, is not apparent in the WMAP data, because the PAPs are symmetric gaussians. ![Re-interpreting the WMAP TT power spectrum, taking account of the fact that the angular size of large structures as determined with the cross correlation beam are underestimated by 10 % w.r.t. the homogeneous (pre-clumping) benchmark. Our approach involved using the CMBFAST code to generate a model spectrum, then shifting it by 10 % to the right (i.e. towards higher values of $\l$, or smaller size) and adjusting the parameters so that the match with the data is secured. The new parameters are $\Omega_m =$ 0.278 $\pm$ 0.040, $\Omega_{\Lambda} =$ 0.782 $\pm$ 0.040, spectral slope = 0.975 $\pm$ 0.030, and Hubble constant $h =$ 0.72 $\pm$ 0.03. Goodness of fit is $\chi^2 =$ 41.7 for 38 degrees of freedom. This model, when used to make predictions for a homogeneous Universe, is shown on the plot as the dot-dashed curve. When shifted by 10 % to account for the effect of inhomogeneities as discussed above, it becomes the dashed curve.](shifted.ps){height="3.0in"} Authors are indebted to Tom Kibble for helpful discussions. [**References**]{} Bartelmann, M. & Schneider, P., 2001, Physics Reports, 340, 291. Bennett, C.L. et al, 2003, ApJS, 148, 1-27. Claudel, C. -M., 2000, Proc. Roy. Soc. Lond., A456, 1455. Dyer, C.C., & Roeder R.C., 1972, ApJ, 174, L115. Fukugita, M., 2003, in ‘Dark matter in galaxies’, IAU Symp. 220, Sydney (astro-ph/0312517). Fukugita, M., Hogan, C.J., & Peebles, P.J.E., 1998, ApJ, 503, 518. Holz, D.E., & Wald, R.M., 1998, Phys. Rev. D, 58, 3501. Lieu, R., & Mittaz, J.P.D., 2005, ApJ in press (astro-ph/0308305). Ofek, E.O, Rix, H. -W., & Maoz, D., 2003, MNRAS, 343, 639. Peebles, P.J.E., 1993, [*Principles of Physical Cosmology*]{}, Princeton Univ. Press. Ramella, M. et al 1999, A & A, 342, 1. Rose, H.G., 2001, ApJ, 560, L15. Weinberg, S., 1976, ApJ, 208, L1. 
[**Note added in Proof**]{} (though too late to appear in the ApJL article itself): Lyman Page was among several who questioned whether the galaxy lensing bias effect we discussed could still lead to an acceptable match between the $\Omega =$ 1 standard model and the WMAP data, if we are prepared to adjust the value of the Hubble constant $H_0$. The answer is no. In fact, such an undertaking yielded a minimum $\chi^2$ of 401.8 for 28 degrees of freedom - a completely unacceptable fit. The best fit values of $H_0$ and $\sigma_8$ then become 72.6 and 0.845 respectively.
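For concreteness, the shift applied in the figure above can be illustrated with a minimal numerical sketch. Everything below is illustrative only: the toy spectrum is a stand-in for a CMBFAST model (its shape and normalization are assumptions, not the spectrum used in our fit), and the only point being made is that re-sampling a spectrum at $\ell/(1+\langle\eta\rangle)$ moves every feature to multipoles higher by $\langle\eta\rangle \approx$ 10 %, i.e. to smaller angular sizes.

```python
import numpy as np

# Toy illustration of the 10% shift described in the figure caption.
# 'toy_cl' is a placeholder spectrum, NOT a CMBFAST output; its shape is
# chosen only so that it has an acoustic-peak-like feature near l ~ 600.
ell = np.arange(2, 1501)
toy_cl = (1.0 + 0.6 * np.cos(np.pi * ell / 300.0)) * np.exp(-(ell / 900.0) ** 2)

def shift_spectrum(ell, cl, eta=0.10):
    """Re-sample C_l at l/(1+eta): every feature moves to l' = (1+eta) l,
    i.e. towards higher l (smaller angular size)."""
    return np.interp(ell / (1.0 + eta), ell, cl)

shifted_cl = shift_spectrum(ell, toy_cl)

# The toy peak near l ~ 600 moves up by roughly 10%.
mask = ell > 300
print(ell[mask][np.argmax(toy_cl[mask])], ell[mask][np.argmax(shifted_cl[mask])])
```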
--- author: - | Agostino Dovier\ Dip. di Matematica e Informatica, Univ. di Udine, Italy\ - | Vítor Santos Costa\ CRACS INESC-TEC and Dep. de Ciência de Computadores, Univ. do Porto, Portugal\ title: | Introduction to the 28th International\ Conference on Logic Programming\ Special Issue --- We are proud to introduce this special issue of the Journal of Theory and Practice of Logic Programming (TPLP), dedicated to the full papers accepted for the 28th International Conference on Logic Programming (ICLP). The ICLP meetings started in Marseille in 1982 and have since then constituted the main venue for presenting and discussing work in the area of logic programming. We contributed to ICLP for the first time in 1991. The first guest editor had a paper on logic programming with sets, and the second had two papers on the parallel implementation of the Andorra model. Since then, we have continued pursuing research in this exciting area, and ICLP has always been the major venue for our work. Thus, when the ALP EC committee kindly invited us to chair the 2012 edition, we were delighted to accept. We particularly appreciate the honor and responsibility of organising ICLP in Budapest. Hungary has had a central role both in the implementation and in the application of logic programming. Indeed, the role of Hungary in Computer Science in general is widely recognized, and organizing this meeting in the town of John von Neumann, one of the “talent-scouts” of Turing, in the centenary of the birth of the latter, is just one more reason why the fascinating Budapest is the only town to have hosted ICLP twice. Publishing the ICLP full papers as a special issue is a joint initiative taken by the Association for Logic Programming and by Cambridge University Press. The goal is to achieve fast journal publication of the highest quality papers from the logic programming community, by selecting the best ICLP submissions before the meeting. This approach benefits the authors, by facilitating journal publication, and benefits the community, by allowing researchers to access high quality journal papers on the most recent and important results in the field. Quality is ensured by a two-step refereeing process, and by an active and highly engaged program committee. The approach was first tried in 2010, and has had favorable feedback since. This year, ICLP sought contributions in all areas of logic programming, including but not restricted to the following. *Theory:* Semantic Foundations, Formalisms, Non-monotonic Reasoning, Knowledge Representation; *Implementation:* Compilation, Memory Management, Virtual Machines, Parallelism; *Environments:* Program Analysis, Transformation, Validation, Verification, Debugging, Profiling, Testing; *Language Issues:* Concurrency, Objects, Coordination, Mobility, Higher Order, Types, Modes, Assertions, Programming Techniques; *Related Paradigms:* Abductive Logic Programming, Inductive Logic Programming, Constraint Logic Programming, Answer-Set Programming; *Applications:* Databases, Data Integration and Federation, Software Engineering, Natural Language Processing, Web and Semantic Web, Agents, Artificial Intelligence, Bioinformatics. In response to the call for papers we received 102 abstracts, 90 of which remained as complete submissions. Of these, 81 were submitted as full papers and 9 as technical communications. Each paper was reviewed by at least three anonymous program committee members, selected by the program chairs. Sub-reviewers were allowed. 
After a discussion involving the whole program committee, 10 submitted papers were considered as deserving TPLP publication with minor changes. The authors of another 10 submitted papers were asked to address more serious concerns, mostly regarding presentation improvements or more complete experimental validation. A further 37 papers were judged to deserve a slot for a short presentation at the meeting and publication as a “technical communication” in Volume 17 of the Leibniz International Proceedings in Informatics (LIPIcs) series. The whole set of accepted papers includes 36 technical papers, 12 application papers, 5 system and tool papers, and 4 papers submitted directly as technical communications. The Conference program was honored to include contributions from three keynote speakers and from a tutorialist. Two invited speakers came from industry, namely Ferenc Darvas from *ThalesNano* (a Budapest company specialized in developing and providing microscale flow instruments for chemistry), and Mike Elston from *SecuritEase* (an Australian company developing stock brokering tools). Moreover, Jan Wielemaker, of the VU University Amsterdam, presented a history of the first 25 years of SWI-Prolog, one of the major (and free) Prolog releases. Tutorialist Viviana Mascardi from the University of Genova (Italy) introduced us to the hot topic of “Logic-based Agents and the Semantic Web”. The first ICLP Conference was organized 30 years ago this year, in Marseille. During those 30 years, ICLP has been a major venue in Computer Science. In order to acknowledge some of the major contributions that have been fundamental to the success of LP as a field, the ALP executive committee decided that ICLP should recognize the most influential papers presented in the ICLP and ILPS conferences (ILPS was another major meeting in logic programming, organized until 1998) that, 10 and 20 years onwards, have been shown to be a major influence on the field. As program co-chairs of ICLP 2012, we were the first to be charged with this delicate task. We included papers from ICLP 1992 and ILPS 1992, 20 years onwards, and from ICLP 2002, 10 years onwards. Our procedure was to use bibliometric information in a first stage, and to use our own personal criteria in a second stage, if necessary. Given that this was the first time the award was given, we also considered 1991 and 2001 papers. Although there is an impressive number of excellent papers from 1991 and 1992, one paper emerges with an outstanding record of roughly 600 citations. Further, the paper clearly has had a major influence on the field. The paper is:

- Michael Gelfond and Vladimir Lifschitz: Representing Actions in Extended Logic Programming. JICSLP 1992: 559-573

The 10 years onward analysis again produced a group of excellent papers (as expected, the number of citations was strictly less than for the 20-year-old papers). In this case, choosing the winner from a very short list was more difficult. Acknowledging their influence over the very active field of Web Databases and the Semantic Web, our selection went to:

- François Bry and Sebastian Schaffert: Towards a Declarative Query and Transformation Language for XML and Semistructured Data: Simulation Unification. ICLP 2002: 255-270

We therefore invited these authors to give a talk in a special session at the meeting. We would like to remark that in 2004 three papers received the “Most Influential Paper in 20 Years” Award from the Association for Logic Programming. 
The award went (again) to Gelfond-Lifschitz for their 1988 ICLP/SLP paper on stable model semantics, to Jaffar-Lassez for their POPL 1987 paper on Constraint Logic Programming, and to Saraswat, Rinard, and Panangaden for their POPL 1991 paper on Concurrent Constraint Programming. Together, the journal special issue and the LIPIcs volume of short technical communications constitute the proceedings of ICLP 2012. The list of the 20 accepted full papers appearing in this special issue follows:

- Disjunctive Datalog with Existential Quantifiers: Semantics, Decidability, and Complexity Issues, by *Mario Alviano, Wolfgang Faber, Nicola Leone, and Marco Manna*
- Towards Multi-Threaded Local Tabling Using a Common Table Space, by *Miguel Areias and Ricardo Rocha*
- Module Theorem for the General Theory of Stable Models, by *Joseph Babb and Joohyung Lee*
- Typed Answer Set Programming Lambda Calculus and Corresponding Inverse Lambda Algorithms, by *Chitta Baral, Juraj Dzifcak, Marcos Gonzalez, and Aaron Gottesman*
- D-FLAT: Declarative Problem Solving Using Tree Decompositions and Answer-Set Programming, by *Bernhard Bliem, Michael Morak, and Stefan Woltran*
- An Improved Proof-Theoretic Compilation of Logic Programs, by *Iliano Cervesato*
- Annotating Answer-Set Programs in LANA, by *Marina De Vos, Doga Gizem Kisa, Johannes Oetsch, Jörg Pührer, and Hans Tompits*
- SMCHR: Satisfiability Modulo Constraint Handling Rules, by *Gregory Duck*
- Conflict-driven ASP Solving with External Sources, by *Thomas Eiter, Michael Fink, Thomas Krennwallner, and Christoph Redl*
- Multi-threaded ASP Solving with clasp, by *Martin Gebser, Benjamin Kaufmann, and Torsten Schaub*
- Model Checking with Probabilistic Tabled Logic Programming, by *Andrey Gorlin, C. R. Ramakrishnan, and Scott Smolka*
- Diagrammatic confluence for Constraint Handling Rules, by *Rémy Haemmerlé*
- Inference in Probabilistic Logic Programs with Continuous Random Variables, by *Muhammad Islam, C.R. Ramakrishnan, and I.V. Ramakrishnan*
- Relational Theories with Null Values and Non-Herbrand Stable Models, by *Vladimir Lifschitz, Karl Pichotta, and Fangkai Yang*
- The Relative Expressiveness of Defeasible Logics, by *Michael Maher*
- Compiling Finite Domain Constraints to SAT with BEE, by *Amit Metodi and Michael Codish*
- Lightweight Compilation of (C)LP to JavaScript, by *Jose F. Morales, Rémy Haemmerlé, Manuel Carro, and Manuel Hermenegildo*
- ASP modulo CSP: The clingcon system, by *Max Ostrowski and Torsten Schaub*
- Annotation of Logic Programs for Independent AND-Parallelism by Partial Evaluation, by *German Vidal*
- Efficient Tabling of Structured Data with Enhanced Hash-Consing, by *Neng-Fa Zhou and Christian Theil Have*

The technical communications, a short paper, and the contributions to the doctoral consortium are published on-line through the Dagstuhl Research Online Publication Server (DROPS), as DOI 10.4230/LIPIcs.ICLP.2012.42. A listing of these papers is reported in the electronic appendix to this preface. We would like to take this opportunity to acknowledge and thank the other ICLP organisers. Without their work and support this event would not have been possible. We would like to start with the General chair Péter Szeredi (Budapest Univ. of Technology and Economics), and all the organizing chairs, namely the Workshop Chair Mats Carlsson (SICS, Sweden), the Doctoral Consortium Chairs Marco Gavanelli (Univ. of Ferrara) and Stefan Woltran (Vienna University of Technology), the Prolog Programming Contest Chair Tom Schrijvers (Universiteit Gent), the Publicity Chair Gergely Lukácsy (Cisco Systems Inc.), and the Web Manager János Csorba (Budapest Univ. of Technology and Economics). Thanks also to Alessandro Dal Palù for allowing us to publish his pictures of Budapest on the website. We benefited from material and advice kindly given by last year’s program chairs Michael Gelfond and John Gallagher. Thank you very much! On behalf of the whole LP community, we would like to thank all authors who have submitted a paper, the 41 members of the program committee: Elvira Albert (U.C. Madrid), Sergio Antoy (Portland State Univ.), Marcello Balduccini (Kodak Research Laboratories), Manuel Carro (Technical University of Madrid (UPM)), Michael Codish (Ben Gurion Univ.), Veronica Dahl (Simon Fraser Univ.), Marina De Vos (Univ. of Bath), Alessandro Dal Palù (Universita degli Studi di Parma), Bart Demoen (K.U. Leuven), Thomas Eiter (T.U. Wien), Esra Erdem (Sabanci University), Thom Frühwirth (Univ. of Ulm), Andrea Formisano (Univ. of Perugia), Maria Garcia de la Banda (Monash Univ.), Marco Gavanelli (University of Ferrara), Hai-Feng Guo (Univ. of Nebraska, Omaha), Gopal Gupta (Univ. of Texas, Dallas), Katsumi Inoue (National Inst. of Informatics, Japan), Angelika Kimmig (K.U. Leuven), Joohyung Lee (Arizona State University), Evelina Lamma (Univ. of Ferrara), Nicola Leone (University of Calabria), Yuliya Lierler (Univ. of Kentucky), Boon Thau Loo (Univ. of Pennsylvania), Michael Maher (R.R.I., Sydney), Alessandra Mileo (DERI Galway), Jose Morales (U.P. Madrid), Enrico Pontelli (New Mexico State Univ.), Gianfranco Rossi (Univ. of Parma), Beata Sarna-Starosta (Cambian, Vancouver), Torsten Schaub (Univ. of Potsdam), Tom Schrijvers (Universiteit Gent), Fernando Silva (Univ. of Porto), Tran Cao Son (New Mexico State University), Terrance Swift (Univ. Nova de Lisboa), Péter Szeredi (Budapest Univ. of Technology and Economics), Francesca Toni (Imperial College London), Mirek Truszczynski (University of Kentucky), Germán Vidal (U.P. of Valencia), Stefan Woltran (Vienna University of Technology), and Neng-Fa Zhou (CUNY, New York). 
A particular thanks goes to the 96 external referees, namely: Alicia Villanueva, Amira Zaki, Ana Paula Tomás, Andrea Bracciali, Antonis Bikakis, Antonis Kakas, Brian Devries, C. R. Ramakrishnan, Chiaki Sakama, Christoph Redl, Christopher Mears, Dale Miller, Daniel De Schreye, Daniela Inclezan, David Brown, Demis Ballis, Dimitar Shterionov, Dragan Ivanovic, Evgenia Ternovska, Fabio Fioravanti, Fabrizio Riguzzi, Fangkai Yang, Fausto Spoto, Feliks Kluźniak, Francesco Calimeri, Francesco Ricca, Fred Mesnard, Gianluigi Greco, Giovanni Grasso, Gregory Duck, Gregory Gelfond, Inês Dutra, Jesus M. Almendros-Jimenez, Joost Vennekens, Juan Manuel Crespo, Julio Mariño, Kyle Marple, Marco Alberti, Marco Maratea, Mario Alviano, Mário Florido, Marius Schneider, Martin Gebser, Masakazu Ishihata, Massimiliano Cattafi, Matthias Knorr, Maurice Bruynooghe, Max Ostrowski, Michael Bartholomew, Michael Hanus, Michael Morak, Minh Dao-Tran, Mutsunori Banbara, Naoki Nishida, Naoyuki Tamura, Neda Saeedloei, Nicola Capuano, Nicolas Schwind, Noson Yanofsky, Nysret Musliu, Orkunt Sabuncu, Pablo Chico De Guzmán, Paolo Torroni, Paul Tarau, Peter James Stuckey, Peter Schüller, Philipp Obermeier, Puri Arenas-Sanchez, Rémy Haemmerlé, Rafael Del Vado Virsela, Ricardo Rocha, Richard Min, Robert Craven, Roland Kaminski, Samir Genaim, Sandeep Chintabathina, Santiago Escobar, Sara Girotto, Sean Policarpio, Simona Perri, Slim Abdennadher, Sofia Gomes, Stefania Costantini, Stefano Bistarelli, Thomas Krennwallner, Thomas Ströder, Tomoya Tanjo, Torben Mogensen, Umut Oztok, Valerio Senni, Victor Marek, Victor Pablos Ceruelo, Wolfgang Dvořák, Wolfgang Faber, Yana Todorova, and Yunsong Meng. Throughout this period, we could always rely on ALP. Our gratitude goes to the ALP president Gopal Gupta, to the Conference chair Manuel (Manolo) Carro, and to all the ALP Executive committee members. We already thanked the invited speakers and the tutorialist above, but we would like to stress our thanks to them here. David Tranah (Cambridge) and Ilkka Niemelä (TPLP editor) deserve our thanks for their kindness and their precious support in all publication stages. Similarly, a thanks goes to Marc Herbstritt from Dagstuhl, Leibniz Center for Informatics, for the support in the publication of the technical communications. Our thanks also go to the sponsors of the meeting, namely the Association for Logic Programming (ALP), the Artificial Intelligence Section of the John von Neumann Computer Society, the Aquincum Institute of Technology (AIT) of Budapest, Alerant System Inc, and Google (female researchers grant). Finally, a well-deserved thank you goes to the EasyChair developers and managers. This amazing free software allowed us to save days of low-level activities. Similarly, the joint work of the two co-chairs would have been much more difficult and expensive without the Dropbox and Skype services. -------------------------------------------- September 2012 Agostino Dovier and Vítor Santos Costa Program Committee Chairs and Guest Editors -------------------------------------------- This electronic appendix reports the contents of Volume 17, Issue 1 (DOI 10.4230/LIPIcs.ICLP.2012.42) of the Leibniz International Proceedings in Informatics (LIPIcs) series, published on-line through the Dagstuhl Research Online Publication Server (DROPS), containing the position papers, technical communications, and doctoral consortium contributions to the 28th International Conference on Logic Programming (ICLP 2012). 
Position Paper {#position-paper .unnumbered}
==============

- Simulation Unification: Beyond Querying Semistructured Data, by *François Bry, Sebastian Schaffert*

Technical Communication {#technical-communication .unnumbered}
=======================

- A Logic Programming approach for Access Control over RDF, by *Nuno Lopes, Sabrina Kirrane, Antoine Zimmermann, Axel Polleres and Alessandra Mileo*
- Modeling Machine Learning and Data Mining Problems with FO($\cdot$), by *Hendrik Blockeel, Bart Bogaerts, Maurice Bruynooghe, Broes De Cat, Stef De Pooter, Marc Denecker, Anthony Labarre, Jan Ramon and Sicco Verwer*
- Paving the Way for Temporal Grounding, by *Felicidad Aguado, Pedro Cabalar, Martín Diéguez, Gilberto Pérez and Concepcion Vidal*
- Preprocessing of Complex Non-Ground Rules in Answer Set Programming, by *Michael Morak and Stefan Woltran*
- Two-Valued Logic Programs, by *Vladimir Lifschitz*
- LOG-IDEAH: ASP for Architectonic Asset Preservation, by *Marina De Vos, Julian Padget, Vivana Novelli and Dina D’Ayala*
- aspeed: ASP-based Solver Scheduling, by *Holger Hoos, Roland Kaminski, Torsten Schaub and Marius Schneider*
- Static Type Inference for the Q language using Constraint Logic Programming, by *Zsolt Zombori, János Csorba and Péter Szeredi*
- ASP at Work: An ASP Implementation of PhyloWS, by *Enrico Pontelli, Tiep Le, Hieu Nguyen and Tran Cao Son*
- Improving Quality and Efficiency in Home Health Care: an application of Constraint Logic Programming for the Ferrara NHS unit, by *Massimiliano Cattafi, Rosa Herrero, Marco Gavanelli, Maddalena Nonato and Juan José Ramos Gonzalez*
- An FLP-Style Answer-Set Semantics for Abstract-Constraint Programs with Disjunctions, by *Johannes Oetsch, Jörg Pührer and Hans Tompits*
- A Tarskian Semantics for Answer Set Programming, by *Marc Denecker, Yuliya Lierler, Mirek Truszczynski and Joost Vennekens*
- Lazy model expansion by incremental grounding, by *Broes De Cat, Marc Denecker and Peter Stuckey*
- Logic + control: An example, by *Włodek Drabent*
- Tabling for infinite probability computation, by *Taisuke Sato and Philipp Meyer*
- Improving Lazy Non-Deterministic Computations by Demand Analysis, by *Michael Hanus*
- Surviving Solver Sensitivity: An ASP Practitioner’s Guide, by *Bryan Silverthorn, Yuliya Lierler and Marius Schneider*
- On the Termination of Logic Programs with Function Symbols, by *Sergio Greco, Francesca Spezzano and Irina Trubitsyna*
- Towards Testing Concurrent Objects in CLP, by *Elvira Albert, Puri Arenas and Miguel Gomez-Zamalloa*
- Generating Event-Sequence Test Cases by Answer Set Programming with the Incidence Matrix, by *Mutsunori Banbara, Naoyuki Tamura and Katsumi Inoue*
- Answer Set Solving with Lazy Nogood Generation, by *Christian Drescher and Toby Walsh*
- Reconciling Well-Founded Semantics of DL-Programs and Aggregate Programs, by *Jia-Huai You, John Morris and Yi Bi*
- Stable Models of Formulas with Generalized Quantifiers, by *Joohyung Lee and Yunsong Meng*
- Extending C+ with Composite Actions for Robotic Task Planning, by *Xiaoping Chen, Guoqiang Jin and Fangkai Yang*
- Deriving a Fast Inverse of the Generalized Cantor N-tupling Bijection, by *Paul Tarau*
- Unsatisfiability-based optimization in clasp, by *Benjamin Andres, Benjamin Kaufmann, Oliver Matheis and Torsten Schaub*
- Visualization of Constraint Handling Rules through Source-to-Source Transformation, by *Slim Abdennadher and Nada Sharaf*
- Applying Machine Learning Techniques to ASP Solving, by *Marco Maratea, Luca Pulina and Francesco Ricca*
- Flexible Solvers for Finite Arithmetic Circuits, by *Nathaniel Filardo and Jason Eisner*
- The additional difficulties for the automatic synthesis of specifications posed by logic features in functional-logic languages, by *Giovanni Bacci, Marco Comini, Marco A. Feliú and Alicia Villanueva*
- Answering Why and How questions with respect to a frame-based knowledge base: a preliminary report, by *Shanshan Liang, Nguyen Ha Vo and Chitta Baral*
- Logic Programming in Tabular Allegories, by *Emilio Jesús Gallego Arias and James B. Lipton*
- Possibilistic Nested Logic Programs, by *Juan Carlos Nieves and Helena Lindgren*
- Using Answer Set Programming in the Development of Verified Software, by *Florian Schanda and Martin Brain*
- An Answer Set Solver for non-Herbrand Programs: Progress Report, by *Marcello Balduccini*
- CHR for Social Responsibility, by *Veronica Dahl, Bradley Coleman, Emilio Miralles and Erez Maharshak*
- A Concurrent Operational Semantics for Constraint Functional Logic Programming, by *Rafael Del Vado Vírseda, Fernando Pérez Morente and Marcos Miguel García Toledo*

Doctoral Consortium Contributions {#doctoral-consortium-contributions .unnumbered}
=================================

- Software Model Checking by Program Specialization, by *Emanuele De Angelis*
- Temporal Answer Set Programming, by *Martín Diéguez*
- A Gradual Polymorphic Type System with Subtyping for Prolog, by *Spyros Hadjichristodoulou*
- ASP modulo CSP: The clingcon system, by *Max Ostrowski*
- An ASP Approach for the Optimal Placement of the Isolation Valves in a Water Distribution System, by *Andrea Peano*
- Answer Set Programming with External Sources, by *Christoph Redl*
- Together, Is Anything Possible? A Look at Collective Commitments for Agents, by *Ben Wright*
--- abstract: 'Quarkonium production in hadron collisions at low transverse momentum $q_\perp \ll M$, with $M$ the quarkonium mass, can be used for probing transverse momentum dependent (TMD) gluon distributions. For this purpose, one needs to establish the TMD factorization for the process. We examine the factorization at the one-loop level for the production of $\eta_c$ or $\eta_b$. The perturbative coefficient in the factorization is determined at one-loop accuracy. Comparing the factorization derived at tree level with that beyond tree level, a soft factor is, in general, needed to completely cancel the soft divergences. We also discuss possible complications of the TMD factorization for p-wave quarkonium production.' --- [**Transverse Momentum Dependent Factorization for Quarkonium Production at Low Transverse Momentum** ]{} J.P. Ma$^{1,2}$, J.X. Wang$^{3}$ and S. Zhao$^{1}$\ \ Quarkonium production in hadron collisions can be used to explore the gluon content of hadrons, because the quarkonium is dominantly produced through gluon-gluon fusion. For a quarkonium produced with large transverse momentum, one can apply QCD collinear factorization for the long-distance effects of the initial hadrons. In this case, one can extract the standard gluon distributions (see, e.g., [@MNS]). If the quarkonium is produced with small transverse momentum $q_\perp$, the small $q_\perp$ can be thought of as generated at least partly by the transverse motion of gluons inside the initial hadrons. In this case, one can apply transverse momentum dependent (TMD) factorization for the initial hadrons. Therefore, production with small $q_\perp$ allows access to TMD gluon distributions. Factorizations with TMD quark distributions and fragmentation functions have been studied intensively beyond tree level in different processes in [@CS; @CSS; @JMY1; @CAM]. In comparison, the factorization with TMD gluon distributions beyond tree level has only been studied for Higgs production in hadron collisions in [@JMYG]. Recently, the TMD factorization of quarkonium production has been derived at tree level in [@BoPi], and based on it numerical predictions have been obtained. For theoretical consistency and precision, it is important to examine the TMD factorization beyond tree level. From early studies in [@CS; @CSS; @JMY1; @JMYG], it is known that a soft factor needs to be implemented in the factorization. In this work, we examine the TMD factorization of $\eta_c$ or $\eta_b$ production at the one-loop level. A quarkonium is dominantly a bound state of a heavy quark $Q$ and its antiquark $\bar Q$. Because of the heavy quark mass, the $Q\bar Q$ pair is a nonrelativistic system. To separate the nonperturbative effects related to the quarkonium in its production, one can employ nonrelativistic QCD (NRQCD) factorization [@nrqcd] by an expansion in the small relative velocity of $Q$ and $\bar Q$. The inclusive production of a quarkonium at moderate or large $q_\perp$ has been studied intensively both in theory and in experiments. In the last five years, important progress was made in the study of the next-to-leading order QCD corrections to $J/\psi$ production in hadron collisions [@NJ1] and of power corrections [@KQS]. The activities in this field can be seen in [@HQPPO]. It should be noted that in experiment it is also possible to study the inclusive production at low $q_\perp$. For example, a $J/\psi$ produced at LHCb can be measured with $q_\perp$ smaller than $1$ GeV [@LHCb]. 
Therefore, with theoretically established TMD factorization, one can extract from experimental results TMD gluon distributions. We will use the light-cone coordinate system, in which a vector $a^\mu$ is expressed as $a^\mu = (a^+, a^-, \vec a_\perp) = ((a^0+a^3)/\sqrt{2}, (a^0-a^3)/\sqrt{2}, a^1, a^2)$ and $a_\perp^2 =(a^1)^2+(a^2)^2$. We introduce two light cone vectors $ n^\mu = (0,1,0,0)$ and $l^\mu =(1,0,0,0)$ and the transverse metric $g_\perp^{\mu\nu}=g^{\mu\nu}-n^\mu l^\nu -n^\nu l^\mu$. We consider the process $$h_A (P_A) + h_B (P_B) \to \eta_Q (q) + X, \label{proc}$$ in the kinematical region $Q^2 =q^2 \gg q^2_\perp$ with $Q=M_{\eta_Q}$ as the mass of $\eta_Q$, where $\eta_Q$ stands for $\eta_c$ or $\eta_b$. The momenta of the initial hadrons and of the quarkonium are given by $$P_A^\mu \approx (P_A^+,0,0,0), \ \ \ P_B^\mu \approx (0,P_B^-, 0,0), \ \ \ q^\mu =(x P_A^+, y P_B^-, \vec q_\perp),$$ where we have neglected masses of hadrons, i.e., $P_A^- \approx 0$ and $P_B^+ \approx 0$. In the kinematic region of $q_\perp \ll Q$ TMD factorization can be applied with corrections suppressed by positive powers of $q_\perp/Q$. It is clear that in the kinematical region with $q_\perp \sim Q$ or $q_\perp \gg Q$ the TMD factorization can not be used. In these regions one can use collinear factorization as studied in [@NJ1]. For each hadron in the initial state, one can define its TMD gluon distribution. We introduce the gauge link along the direction $u^\mu = (u^+, u^-,0,0)$, $${\mathcal L}_u (z,-\infty) = P \exp \left ( -i g_s \int^0_{-\infty} d\lambda u\cdot G (\lambda u + z) \right ) ,$$ where the gluon field is in the adjoint representation. At leading twist, one can define two TMD gluon distributions through the gluon density matrix [@TMDGP], $$\begin{aligned} && \frac{1}{x P^+} \int \frac{ d\xi^- d^2 \xi_\perp}{(2\pi)^3} e^{ - i x \xi^- P^+_A + i \vec \xi_\perp \cdot \vec k_\perp} \langle h_A \vert \left ( G^{+\mu} (\xi ) {\mathcal L}_u (\xi,-\infty) \right )^a \left ( {\mathcal L}_u^\dagger (0,-\infty) G^{+\nu}(0) \right )^a \vert h_A \rangle \nonumber\\ && =-\frac{1}{2} g_\perp^{\mu\nu} f_{g/A} (x,k_\perp, \zeta^2_u,\mu) + \left (k_\perp^\mu k_\perp^\nu + \frac{1}{2}g_\perp^{\mu\nu} k_\perp^2 \right ) h_{g/A} (x,k_\perp, \zeta^2_u,\mu), \label{DEF}\end{aligned}$$ with $\xi^\mu =(0,\xi^-,\vec \xi_\perp)$. $x$ is the momentum fraction carried by the gluon inside $h_A$. The gluon has also a nonzero transverse momentum $\vec k_\perp$. The definition is given in nonsingular gauges. It is gauge invariant. In singular gauges, one needs to add gauge links along transverse direction at $\xi^-=-\infty$ [@TMDJi]. Due to the gauge links, the TMD gluon distributions also depend on the vector $u$ through the variable $$\zeta^2_u = \frac {(2u\cdot P_A)^2}{u^2} \approx \frac {2 u^-}{u^+} \left (P_A^+\right )^2. \label{Zeta}$$ In the definition, the limit $u^+ \ll u^-$ is taken in the sense that one neglects all contributions suppressed by negative powers of $\zeta_u^2$. From the definition in Eq. (\[DEF\]), there are two TMD gluon distributions. The distribution $f_{g/A}$ corresponds to the standard gluon distribution in collinear factorization. The distribution $h_{g/A}$ describes gluons with linear polarization inside $h_A$. The relevant phenomenology of $h_{g/A}$ has been only recently studied [@SecG1; @SecG2; @SecG3]. Through the process studied here, one can also obtain information about this distribution [@BoPi]. 
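As a quick numerical aside, the light-cone conventions introduced above can be verified directly. The short Python sketch below checks the identity $a\cdot a = 2 a^+ a^- - a_\perp^2$ with $a^\pm = (a^0 \pm a^3)/\sqrt{2}$ for an arbitrary sample four-vector; the numerical values are illustrative only and are not tied to the process at hand.

```python
import numpy as np

# Check of the light-cone conventions quoted above:
#   a^+ = (a^0 + a^3)/sqrt(2),  a^- = (a^0 - a^3)/sqrt(2),
# so that  a.a = 2 a^+ a^- - a_perp^2  with metric (+,-,-,-).
# The sample four-vector is arbitrary and purely illustrative.

def to_light_cone(a):
    a0, a1, a2, a3 = a
    return ((a0 + a3) / np.sqrt(2.0), (a0 - a3) / np.sqrt(2.0), a1, a2)

a = np.array([5.0, 1.0, 2.0, 3.0])            # components (a^0, a^1, a^2, a^3)
a_plus, a_minus, a_1, a_2 = to_light_cone(a)

minkowski = a[0]**2 - a[1]**2 - a[2]**2 - a[3]**2
light_cone = 2.0 * a_plus * a_minus - (a_1**2 + a_2**2)
assert np.isclose(minkowski, light_cone)      # both equal 11.0 for this vector
```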
For $h_B$, one can also define two TMD gluon distributions $f_{g/B}$ and $h_{g/B}$ similar to those in Eq. (\[DEF\]), in which the gauge links are along the direction $v^\mu=(v^+,v^-,0,0)$ instead of $u$ and the limit $v^+\gg v^-$ is taken. Therefore, the two distributions $f_{g/B}$ and $h_{g/B}$ depend on the parameter $\zeta_v$, which is defined by replacing $P_A$ with $P_B$ and $u$ with $v$ in Eq. (\[Zeta\]). To study the TMD factorization of the process in Eq. (\[proc\]), we need to study $$g(p) + g(\bar p) \to \eta_Q(q) +X , \label{gg}$$ with $p^\mu =(P^+_A,0,0,0)$ and $\bar p^\mu =(0,P_B^-,0,0)$. Since we are interested in the kinematical region of small transverse momentum, we need to study the process in the limit $q_\perp \ll Q=M_{\eta_Q}$. In reality, initial hadrons are bound states of partons. One can imagine that $\eta_Q$ can be produced through two-gluon fusion, as in Eq. (\[gg\]), in which one gluon is from the hadron $h_A$ and the other is from the hadron $h_B$. Certainly, there can be interactions or gluon exchanges between spectators in $h_A$ and those in $h_B$, and between the partons involved in Eq. (\[gg\]) and spectators. If these interactions are of short distance or if the exchanged gluons are hard, their effects in cross sections can be factorized with operators of higher twist, because the involved processes are multiparton scatterings. These effects are power suppressed and can be neglected. A factorization may not be obtained if the interactions are of long distance or if the exchanged gluons are soft. It has been shown for Drell-Yan processes [@CSS; @JCSOST] that the effects of soft-gluon exchanges are canceled or power suppressed if the sum over the unobserved states is complete. The exchanged gluons can be those collinear to the initial hadron $h_A$ or $h_B$; the effects of these collinear gluons can be factorized into the gauge links in the corresponding parton distribution functions. Since the process in Eq. (\[proc\]) is similar to Drell-Yan processes, we expect that the conclusion made in [@CSS; @JCSOST] for Drell-Yan processes also applies here. In our case, we have an observed $\eta_Q$ in the final state. In general, $\eta_Q$ is a bound state of a heavy-quark pair and possibly light partons. We will use NRQCD for $\eta_Q$. In the approximation explained later, $\eta_Q$ is effectively taken as a $Q\bar Q$ state in which the pair is in a color singlet and there is no relative momentum between $Q$ and $\bar Q$. This $Q\bar Q$ pair is effectively pointlike and cannot emit soft gluons. Hence, there are no soft interactions between $\eta_Q$ and spectators at leading power. With the arguments given above, we only need to study the process in Eq. (\[gg\]) for the factorization. The reason why we only need to study the process in Eq. (\[gg\]) at leading power for the factorization can be understood in another way: if the factorization holds or is proven, it holds for arbitrary hadrons in the initial state. In particular, it also holds if the initial states are parton states. In the case of a factorization which is not rigorously proven, one can use parton states to study and examine it, and eventually to prove it. In this work, we use the process in Eq. (\[gg\]) to study the relevant factorization beyond tree level. For long-distance effects related to $\eta_Q$, we use NRQCD factorization. We will work at the leading order of the small-velocity expansion in NRQCD. At this order, the production of $\eta_Q$ can be thought of as a two-step process. 
In the first step, a $Q\bar Q$ pair is produced in which the heavy quark $Q$ and its antiquark $\bar Q$ carry the same momentum $q/2$. The pair is in color-singlet and spin-singlet $^1 S_0$. Then, the pair is transmitted into $\eta_Q$ with the mass $Q=2m_Q=M_{\eta_Q}$. The transition is described by a NRQCD matrix element. It is noted that the considered $Q\bar Q$ pair is in color singlet and hence there is no interaction of long distance between the $Q\bar Q$ pair and spectators of initial hadrons, as discussed before. At higher orders of the small velocity expansion the $Q\bar Q$ pair can be in the color-octet state [@nrqcd]. With the color-octet $Q\bar Q$ pair it is possible that the NRQCD factorization proposed in [@nrqcd] is violated beyond the one-loop level indicated by the study in [@NQS]. At tree level, the process in Eq. (\[gg\]) is with $X$ as nothing. It is straightforward to obtain the differential cross section, $$\begin{aligned} \frac {d \sigma } { d x d y d^2 q_\perp} &=& \sigma_0 \frac{\pi}{ Q^2 }\delta (xy s -Q^2) \delta (1-x) \delta (1-y) \delta^2 (\vec q_\perp) , \nonumber\\ && \sigma_0 =\frac{(4\pi \alpha_s)^2 }{N_c (N_c^2-1) m_Q} \vert \psi (0) \vert^2, \label{tree}\end{aligned}$$ with $s = 2 p^+ \bar p^-$ and $m_Q$ being the pole mass of the heavy quark. $\psi(0)$ is the wave function of $\eta_Q$ at the origin. In fact, $ \vert \psi (0) \vert^2$ should be expressed as a NRQCD matrix element. Beyond tree level, Coulomb singularities representing long-distance effects related to $\eta_Q$ appear. These singularities are factorized into the NRQCD matrix element. At tree level, one easily finds $$f^{(0)}_{g/A} (x,k_\perp, \zeta^2_u,\mu) =f^{(0)}_{g/B} (x,k_\perp, \zeta^2_v,\mu) =\delta(1-x) \delta^2 (\vec k_\perp), \label{TMDg}$$ while $h_{g/A}$ and $h_{g/B}$ are zero. They become nonzero at order of $\alpha_s$. With these results, one can write the tree-level cross section as a factorized form $$\begin{aligned} \frac {d \sigma} { d x d y d^2 q_\perp} &=& \frac{\pi\sigma_0}{ Q^2 } \int d^2k_{a\perp} d^2 k_{b\perp} f_{g/A} (x,k_{a\perp}) f_{g/B} (y,k_{b\perp}) \delta^2 (\vec k_{a\perp} + \vec k_{b\perp} -\vec q_\perp) \delta (xy s - Q^2) {\mathcal H}, \nonumber\\ {\mathcal H} &=& 1 +{\mathcal O}(\alpha_s). \label{Tree-Fac}\end{aligned}$$ Beyond the tree level, one needs to introduce a soft factor. As we will see explicitly , all soft divergences will be factorized into the soft factor and TMD gluon distributions so that the perturbative coefficient ${\mathcal H}$ is free from soft divergence. We will then determine ${\mathcal H}$ at one-loop level. ![The one-loop corrections to the gluon TMD. The double lines represent the gauge link. The black bubble in Fig. 1a is for self-energy correction. []{data-label="P2"}](Fig1.eps){width="10cm"} To derive the factorization at one loop, we need to study the one-loop corrections to TMD gluon distributions and the differential cross section. The one-loop correction to TMD gluon distribution has been studied in [@JMYG], where the collinear divergence has been regularized with an infinitely small off-shellness of the gluon. Here, we regularize all divergences in $d=4-\epsilon$ space-time. The correction can be divided into the virtual and real corrections. The virtual correction is given by diagrams in Fig. 1. We will use the $\overline{\rm MS}$ scheme to subtract ultraviolet (UV) divergences. After the subtraction, we have the virtual correction from Fig. 
1, $$\begin{aligned} f_{g/A}^{(1)}(x,k_\perp,\zeta_u,\mu )\biggr\vert_{vir.} &=& \frac{\alpha_s}{4\pi} \delta (1-x) \delta^2 (\vec k_\perp) \left [ \left ( -\frac{2}{\epsilon_{s}} + \ln \frac{e^\gamma \mu^2}{4\pi \mu_s^2} \right ) \left ( \frac{11}{3} N_c -\frac{2}{3} N_F \right ) \right. \nonumber\\ && \left. + 2 N_c \left ( -\frac{4}{\epsilon_s^2} -\frac{2}{\epsilon_s} \ln \frac{4\pi\mu_s^2}{e^\gamma \zeta_u^2} - \frac{1}{2} \ln^2 \frac{4\pi\mu_s^2}{e^\gamma \zeta_u^2} -\frac{5\pi^2}{12} + \left ( -\frac{2}{\epsilon_s} +\ln\frac{e^\gamma\zeta_u^2}{4 \pi \mu_s^2} \right ) \right. \right. \nonumber\\ && \left. \left. + \frac{1}{2} \ln \frac{\mu^2}{\zeta_u^2} -\frac{3}{2} \right ) \right ] ,\end{aligned}$$ where the poles in $\epsilon_s =4-d$ stand for collinear or infrared divergences, i.e., soft divergences. $\mu_s$ is the scale associated with these poles. $\mu$ is the UV scale. The terms in the first line in Eq. (10) is the sum of the contributions from Fig. 1a and Fig. 1c with their conjugated diagrams. The remaining terms are from Fig. 1b and its conjugated diagram. ![The real correction at one loop to the gluon TMD. The double lines represent the gauge link. These diagrams are for real corrections. []{data-label="P2"}](Fig2.eps){width="11cm"} The corrections from Fig. 2 are real corrections. They can be found in [@JMYG] as $$\begin{aligned} f_{g/A}^{(1)}(x,k_\perp,\zeta_u,\mu )\biggr\vert_{re.} &=& \frac{\alpha_s N_c}{\pi^2 k^2_\perp} \left [ \left ( \frac{1-x}{x} + x(1-x) +\frac{x}{2} \right ) -\frac{1}{2}\delta(1-x) \right. \nonumber\\ && \left. \ \ \ \ \ \ \ \ \ + \frac{x}{(1-x)_+} -\frac{x}{2} + \frac{1}{2} \delta (1-x) \ln\frac{\zeta_u^2}{k_\perp^2} \right ] ,\end{aligned}$$ where the terms in the first line are from Fig. 2a and Fig. 2d. The total one-loop correction is then the sum of the virtual and real corrections. At one loop, $h_{g/A}$ becomes nonzero. It receives a contribution from Fig. 2a. We have $$h_{g/A}(x,k_\perp,\zeta_u,\mu) = \frac{2 \alpha_s N_c}{\pi^2 (k_\perp^2)^2}\frac{1-x}{x} + {\mathcal O}(\alpha_s^2).$$ By replacing $\zeta_u$ with $\zeta_v$ we obtain $f_{g/B}$ and $h_{g/B}$ from $f_{g/A}$ and $h_{g/A}$, respectively. ![The class of diagrams where a gluon is emitted from the initial gluon $g(p)$ and is attached to a possible place. There are six diagrams. Three of them are given here. Another three diagrams are obtained by reversing the direction of the heavy quark line. []{data-label="P1"}](Fig3.eps){width="9cm"} Now we turn to one-loop corrections of the differential cross section. The corrections can be divided into the virtual correction and the real correction. The virtual correction is the one-loop correction to the process $g(p) + g(\bar p)\to \eta_Q(q)$. We denote the total contribution from the virtual correction as $$\begin{aligned} \frac {d \sigma (gg\to \eta_Q) } { d x d y d^2 q_\perp}\biggr\vert_{vir.} = \frac{1}{2 s (2\pi)^3 } \delta ( xy s -Q^2) \delta (1-x)\delta (1-y) \delta^2(\vec q_\perp) \sigma_1.\end{aligned}$$ The contributions to $\sigma_1$ can be divided into four parts, $$\sigma_1 = \sigma_{1A} + \sigma_{1B} + \sigma_{1C} +\sigma_{1D},$$ $\sigma_{1A}$ receives contributions from diagrams in which a virtual gluon is emitted by the initial gluon $g(p)$. The diagrams for this part are given in Fig. 3. $\sigma_{1B}$ receives contributions from diagrams in which a virtual gluon is emitted by the initial gluon $g(\bar p)$. $\sigma_{1C}$ denotes the contributions from diagrams in which a virtual gluon is exchanged between heavy quark line. 
$\sigma_{1D}$ denotes the one-loop corrections of external gluon lines. This part will not contribute to ${\mathcal H}$, because the contributions to $\sigma_{1D}$ are automatically subtracted into TMD gluon distributions. Below, we will only give and discuss the results of $\sigma_{1A,1B,1C}$. In the above classification, Fig. 3a can contribute both to $\sigma_{1A}$ and $\sigma_{1B}$. We put the half of the contribution Fig. 3a into $\sigma_{1A}$ and another half into $\sigma_{1B}$. With symmetry arguments one easily finds $\sigma_{1A}=\sigma_{1B}$. We have then $$\begin{aligned} \sigma_{1A} = \sigma_{1B} = \frac{1}{2}\sigma_1\biggr\vert_{3a} +\sigma_1\biggr\vert_{3b} +\sigma_1\biggr\vert_{3c}.\end{aligned}$$ By summing contributions from each diagram we obtain the following results for the virtual corrections: $$\begin{aligned} \frac{\sigma_{1A}}{\sigma_0} &=& \frac{\alpha_sN_c }{12 \pi} \biggr [ -6 \frac{4}{\epsilon_s^2} -6 \frac{2}{\epsilon_s} \biggr ( 1 + \ln \frac{e^{-\gamma} 4\pi \mu_s^2}{Q^2} \biggr ) -3\ln ^2\frac{e^{-\gamma} 4\pi \mu_s^2}{Q^2} -6\ln \frac{e^{-\gamma} 4\pi \mu_s^2}{Q^2} \nonumber\\ && + 9 \ln\frac{\mu^2}{Q^2} -6\ln 2 + 6 +\frac{11}{4} \pi^2 \biggr ], \nonumber\\ \frac{\sigma_{1C}}{\sigma_0} &=& \frac{\alpha_s}{2 \pi} \biggr [ -N_c \ln \frac{\mu^2}{Q^2} + C_F \biggr ( -2 + 4 \ln 2 \biggr ) +\frac{1}{N_c} \biggr ( 2 \ln 2 -\frac{1}{4}\pi^2 \biggr ) \biggr ].\end{aligned}$$ In these results, the UV poles are subtracted in the ${\overline {\rm MS}}$ scheme. The on-shell scheme for the renormalization of heavy quark propagators is used so that $m_Q=Q/2$ is the pole mass of heavy quark. In $\sigma_{1A}$, the pole terms of $\epsilon_s$ are for soft divergences coming only from Fig. 3a. The contributions from Fig. 3b and Fig. 3c also contain collinear divergences and infrared divergences. The infrared divergences are canceled in the sum of the two diagrams, because the $Q\bar Q$ is in color singlet. The collinear divergences are also canceled. In calculating the diagrams for $\sigma_{1C}$ one will meet Coulomb singularity. This singularity is factorized into NRQCD matrix element. Hence, we have finite $\sigma_{1C}$. The real correction is from the tree-level process $$g(p) + g(\bar p) \to \eta_Q (q) + g(k). \label{Real}$$ For the color-single $Q\bar Q$ pair, there are 12 diagrams for the amplitude. Since we are interested in the low $q_\perp$ region, we expand the differential cross section in $q_\perp/ Q$ and only take the leading order in the expansion. At the leading order, we have only those diagrams given in Fig. 4 for the differential cross section. The result for the process in Eq. (\[Real\]) in the limit of $q_\perp\to 0$ is $$\begin{aligned} \frac{d\sigma}{dx dy d^2q_\perp} &=& \frac{\pi \sigma_0}{Q^2} \frac{N_c \alpha_s}{4 \pi^2 q^2_\perp} \delta (xy s- Q^2) \left [ \frac{2 \delta (1-y) }{ x } \biggr ( 2 -2 x + 3 x^2 -2 x^3 \biggr ) + \frac{x (1+x)}{(1-x)_+} \delta (1-y) \right. \nonumber\\ && \left. \ \ - \delta (1-x)\delta (1-y) \ln \frac{q_\perp^2}{Q^2} + ( x\leftrightarrow y ) \right ] + {\mathcal O}(q_\perp^0).\end{aligned}$$ ![The diagrams for the cross section of $g+ g\to g +\eta_Q$. In these diagrams, the gluon in the intermediate state is emitted or absorbed by gluons. The black dots denote the projection of the $Q\bar Q$ pair into the color singlet $^1S_0$ state. By reversing the quark lines, one can obtain other three diagrams from each diagram. []{data-label="P2"}](Fig4.eps){width="13cm"} The factorized result in Eq. 
(\[Tree-Fac\]) is derived at tree level. If we extend the factorization beyond tree level, with the one-loop results, in the above we will find the following: (i) The soft divergences are not factorized, i.e., ${\mathcal H}$ will contain some infrared divergences represented by poles in $\epsilon_s$. (ii) The real correction of the differential cross section is not totally generated by TMD gluon distributions. In other words, ${\mathcal H}$ will receive correction from the real correction. It results in that ${\mathcal H}$ depends on $q_\perp$. All of these have a common reason. In the one-loop corrections to the differential cross section, there is an exchange of a soft gluon between the two initial gluons $g(p)$ and $g(\bar p)$. In the virtual correction, the exchange results in infrared divergences, and in the real correction it results in contributions proportional to $\delta(1-x)\delta(1-y)$. The effects of the soft gluon exchange are not exactly generated by the corresponding soft gluon exchange in TMD gluon distributions. The effects of soft gluon exchange are of long distance. Therefore, one needs to introduce a soft factor in the factorization to completely factorize these effects from ${\mathcal H}$ determined with Eq. (\[Tree-Fac\]). The effects of soft gluon exchange between a gluon moving in the $+$ direction and a gluon moving in the $-$ direction can be described by the expectation value of a product with four gauge links. We introduce, as in [@JMYG], $$\begin{aligned} S(\vec b_\perp,\mu,\rho) = \frac{1}{N_c^2-1} \langle 0\vert {\rm Tr} \left [ {\mathcal L}^\dagger_v (\vec b_\perp,-\infty) {\mathcal L}_u (\vec b_\perp,-\infty) {\mathcal L}_u^\dagger (\vec 0,-\infty){\mathcal L}_v (\vec 0,-\infty) \right ] \vert 0\rangle. \label{SoftS}\end{aligned}$$ The gauge links are past pointing. It reflects the fact that the two gluons $g(p)$ and $g(\bar p)$ are in the initial state. The dependence on the directions of gauge links is only through the parameter $\rho^2=(2 u\cdot v)^2/(u^2 v^2) \approx u^-v^+/(u^+v^-)$. The limits $u^-\gg u^+$ and $v^+\gg v^-$ are taken similarly to that in TMD gluon distributions. The gauge links or the gauge field is in the adjoint representation. At leading order, one has $$S^{(0)}(\vec b_\perp,\mu,\rho) =1.$$ ![One-loop corrections for the soft factor. The first three diagrams plus their complex conjugated are virtual corrections. The last four diagrams are real corrections. A cut line is implied.[]{data-label="soft"}](Fig5.eps){width="11cm"} At one loop, there are corrections from Fig. 5. One can divide the corrections into a virtual and a real part. The diagrams in the first row are of the virtual part. Those in the second row are of the real part. The virtual correction reads $$\begin{aligned} S^{(1)}_{vir.} (\vec b_\perp,\mu,\rho) = \frac{\alpha_s N_c}{2\pi} \left [ -\frac{2}{\epsilon_s} + \ln\frac{e^\gamma \mu^2}{4 \pi \mu_s^2} \right ] \left ( 2 -\ln\rho^2 \right ),\end{aligned}$$ where the UV pole is subtracted. The pole in $\epsilon_s$ represents the IR divergence with the scale $\mu_s$. The real part is $$\begin{aligned} S^{(1)}_{re.} (\vec b_\perp,\mu,\rho) = -\frac{\alpha_s N_c}{2\pi^2} \left ( 2 -\ln\rho^2 \right ) \int d^2 k_\perp \frac{e^{-i\vec b_\perp\cdot \vec k_\perp}}{ k_\perp^2}.\end{aligned}$$ The total one-loop contribution is the sum of the virtual and real contributions. 
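Before the soft factor is inserted into the factorization formula, it may help to recall the transverse-momentum convolution structure of the tree-level result in Eq. (\[Tree-Fac\]); the soft factor defined next simply enters the same convolution as one additional factor. The following Python sketch illustrates that structure with toy Gaussian ansätze for $f_{g/A}$ and $f_{g/B}$; the intrinsic widths and all numbers are assumptions chosen purely for illustration, not fitted TMD gluon distributions.

```python
import numpy as np

# Toy illustration of the transverse-momentum convolution in the tree-level
# factorized form, Eq. (Tree-Fac):
#   dsigma/d^2 q_perp  ~  int d^2 k_a  f_A(k_a)  f_B(q_perp - k_a).
# Gaussian ansaetze are used for f_A and f_B; the widths below are
# assumptions for illustration only, not fitted TMD gluon distributions.

def gaussian_tmd(k2, avg_k2):
    """Transverse part of a TMD ansatz, normalized to 1 over d^2 k_perp."""
    return np.exp(-k2 / avg_k2) / (np.pi * avg_k2)

avg_k2_A, avg_k2_B = 0.5, 0.5                  # GeV^2, assumed intrinsic widths
kx = np.linspace(-5.0, 5.0, 401)               # grid for the k_{a,perp} integral
KX, KY = np.meshgrid(kx, kx)
d2k = (kx[1] - kx[0]) ** 2

def qperp_shape(qx, qy):
    fa = gaussian_tmd(KX**2 + KY**2, avg_k2_A)
    fb = gaussian_tmd((qx - KX)**2 + (qy - KY)**2, avg_k2_B)
    return np.sum(fa * fb) * d2k

# For Gaussians the convolution is again a Gaussian of width avg_k2_A + avg_k2_B.
q = 1.0
numeric = qperp_shape(q, 0.0)
analytic = np.exp(-q**2 / (avg_k2_A + avg_k2_B)) / (np.pi * (avg_k2_A + avg_k2_B))
print(numeric, analytic)                       # the two agree to high accuracy
```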
We now define our soft factor which will enter the TMD factorization as $$\begin{aligned} \tilde S(\vec\ell_\perp,\mu,\rho) &&= \int\frac{d^2 b_\perp}{(2\pi)^2} e^{ i\vec b_\perp\cdot \vec\ell_\perp} S^{-1}(\vec b_\perp,\mu,\rho) \nonumber\\ && = \delta^2(\vec\ell_\perp) -\frac{\alpha_s N_c}{2\pi} \left ( 2 -\ln\rho^2 \right ) \left [ \left ( -\frac{2}{\epsilon_s} +\ln\frac{ e^\gamma \mu^2}{4\pi \mu_s^2} \right ) \delta^2 (\vec\ell_\perp) - \frac{1}{\pi \ell^2_\perp} \right ] +{\mathcal O}(\alpha_s^2). \label{SoftS1}\end{aligned}$$ With the introduced soft factor, we propose the TMD factorization as $$\begin{aligned} \frac {d \sigma} { d x d y d^2 q_\perp} &=& \frac{\pi \sigma_0}{Q^2 } \int d^2k_{a\perp} d^2 k_{b\perp} d^2\vec \ell_\perp \delta^2 (\vec k_{a\perp} + \vec k_{b\perp}+\vec\ell_\perp -\vec q_\perp) \delta (xy s -Q^2) \nonumber \\ && \ \ \ \ \ \ \cdot f_{g/A} (x,k_{a\perp},\zeta_u, \mu) f_{g/B} (y,k_{b\perp},\zeta_v,\mu) \tilde S(\ell_\perp, \mu,\rho) {\mathcal H}(Q,\mu, \zeta_u,\zeta_v). \label{S-Fac}\end{aligned}$$ From one-loop results of the differential cross section, TMD gluon distributions, and the soft factor, we derive $$\begin{aligned} {\mathcal H}(Q,\mu, \zeta_u,\zeta_v) &=& 1 + \frac{\alpha_s N_c}{4\pi} \left [ \ln^2 \frac{\zeta_u^2}{Q^2}+\ln^2\frac{\zeta_v^2}{Q^2} -\ln\rho^2\left (1+ 2 \ln\frac{\mu^2}{Q^2} \right ) + 2 \ln\frac{\mu^2}{Q^2} + \frac{7}{2} \pi^2 \right. \nonumber\\ && \left. +\frac{2}{N_c^2} \biggr ( 1 -\frac{1}{4}\pi^2 \biggr ) \right ] + {\mathcal O}(\alpha_s^2). \label{Fac}\end{aligned}$$ It is clear that ${\mathcal H}$ is free from any soft divergence and does not depend on $q_\perp$. With the factorization the small transverse momentum, $q_\perp$ is generated by the transverse motion of gluons in the initial hadrons and by soft gluon radiation. Equations (\[S-Fac\]) and (\[Fac\]) are our main results. It should be noted that the factorization holds for arbitrary large $\zeta_u$ and $\zeta_v$. For practical applications, one may take a frame to simplify the results in Eqs. (\[S-Fac\]) and (\[Fac\]). One can take $\zeta_u^2 =\zeta_v^2 = \rho Q^2$ so that the TMD gluon distributions in Eq. (\[S-Fac\]) depend on $\rho$ and $Q^2$ and the perturbative coefficient becomes a function of $Q$, $\mu$, and $\rho$, $$\begin{aligned} {\mathcal H}(Q,\mu, \rho) = 1 + \frac{\alpha_s N_c}{2\pi} \left [ \ln^2 \rho - \ln\rho\left (1+ 2 \ln\frac{\mu^2}{Q^2} \right ) + \ln\frac{\mu^2}{Q^2} + \frac{7}{4} \pi^2 +\frac{1}{N_c^2} \biggr ( 1 -\frac{1}{4}\pi^2 \biggr ) \right ] + {\mathcal O}(\alpha_s^2). \label{Fac1}\end{aligned}$$ ![The diagrams for the cross section of $g+ g\to g+g+\eta_Q$ give the contributions factorized with the gluon TMD $g_k$. By reversing the quark lines, one can obtain other three diagrams from each diagram. []{data-label="P2"}](Fig6.eps){width="7cm"} At the considered orders, we will not find the contributions which can be factorized with $h_{g/A}$ or $h_{g/B}$ defined in Eq. (\[DEF\]). However, there is a contribution involving these distributions of linearly polarized gluons in the TMD factorization. This contribution can be found at a higher order of $\alpha_s$ from diagrams given in Fig. 6. It is straightforward to calculate these diagrams in the limit $q_\perp \to 0$. 
We find that the contribution takes the factorized form $$\begin{aligned} \frac {d \sigma} { d x d y d^2 q_\perp}\biggr\vert_{Fig.6} &=& \frac{\pi\sigma_0}{ Q^2 } \delta (xy s - Q^2) \int d^2k_{a\perp} d^2 k_{b\perp} \delta^2 (\vec k_{a\perp} + \vec k_{b\perp} -\vec q_\perp) \biggr [ f_{g/A} (x,k_{a\perp}) \biggr\vert_{2a} f_{g/B} (y,k_{b\perp}) \biggr\vert_{2a} \nonumber\\ && \ \ \ \ - \frac{1}{2} \biggr ( (\vec k_{a\perp}\cdot \vec k_{b\perp})^2 -\frac{1}{2} k^2_{a\perp} k^2_{b\perp} \biggr ) h_{g/A} (x,k_{a\perp}) h_{g/B} (y,k_{b\perp}) \biggr ].\end{aligned}$$ Our result in the last line has also been derived in [@BoPi] with a different method. The perturbative coefficient of the contribution in the last line is at order of $\alpha_s^0$. This contribution should be added to Eq. (\[S-Fac\]). In principle one can determine the perturbative coefficient of the contribution beyond the leading order of $\alpha_s$ following the same way as has been done for Eqs. (\[S-Fac\]) and (\[Fac\]). However, this will be very tedious because one needs to calculated the partonic process $g+g\to \eta_Q +X$ at the 3-loop level. We leave this for a future study. In the factorized form of the differential cross section in Eq. (\[Fac\]), the TMD gluon distributions do not depend on processes, they only depend on hadrons. The perturbative coefficient ${\mathcal H}$ does not depend on initial hadrons. The soft factor $\tilde S$ defined in Eqs. (\[SoftS\]) and (\[SoftS1\]) is a basic quantity of QCD, i.e., it depends neither on hadrons or on processes. It is noted that the same soft factor also appears in TMD factorization of Higgs production studied in [@JMYG]. This indicates that soft divergences in different processes or in a class of processes can be factorized into the same object. This implies that the soft factor is universal at certain level. In TMD factorization of Drell-Yan processes, one also needs a soft factor to take radiation of soft gluons to complete the factorization[@CSS; @JMY1]. The soft factor there is similar to that defined in Eqs. (\[SoftS\]) and (\[SoftS1\]). The only difference is that they are defined in different $SU(3)$ representations. The studied TMD factorization can be used for the region with $q_\perp\sim \Lambda_{QCD}$ for extracting TMD gluon distributions. However, its usage is not limited to this kinematic region, because the factorization holds, in general, in the region $q_\perp/Q \ll 1$. In the region $Q\gg q_\perp \gg \Lambda_{QCD}$, both TMD factorization and collinear factorization hold. In the collinear factorization, the perturbative coefficient functions in this region contain large log of $q_\perp/Q$. The results from TMD factorization can be used to resum these large logs. This leads to the well-known Collins-Soper-Sterman resummation [@CSS]. Based on our result here, one can also derive the resummation in the case of quarkonium production, similarly to that derived in [@JMYG]. We, therefore, do not discuss the details about the resummation here. We note that such a resummation has been studied very recently in [@SYY]. An early work about the resummation can be found in [@BQW], where the formation of a quarkonium from a $Q\bar Q$ pair is described with a color evaporation model instead of NRQCD factorization. At the orders we have considered, the production of a p-wave quarkonium is possible. However, the TMD factorization in this case can be complicated. 
According to the NRQCD factorization in [@nrqcd], one needs to consider not only the contribution from the production of a color-singlet p-wave $Q\bar Q$ pair, but also the contribution of a color-octet s-wave $Q\bar Q$ pair. The formation of a p-wave quarkonium from the color-singlet and the color-octet $Q\bar Q$ pair is at the same order in the small velocity expansion. In the case studied here, we only need to consider the contribution from the production of a color-singlet s-wave $Q\bar Q$ pair. At the leading power the pair decouples from soft gluons. However, in the case of p-wave quarkonia, the color-singlet p-wave and color-octet s-wave $Q\bar Q$ pair can emit soft gluons at leading power. To completely separate the effects of soft gluons, one may need a different soft factor than that introduced here. This is also the reason why we write our TMD factorization in Eq. (\[S-Fac\]) explicitly with the unsubtracted TMD gluon distributions and the soft factor. Another complication with p-wave quarkonia is that one needs a gauge link for the NRQCD matrix element of the contribution from the color-octet $Q\bar Q$ pair to establish NRQCD factorization beyond one loop, as shown in [@NQS]. We will examine the TMD factorization for p-wave quarkonium in a separate publication. Before summarizing our work, we note that one can define subtracted TMD gluon distributions, such as those used for Higgs production in [@JMYG], to factorize the differential cross section. Then our result can be factorized in the same form as in [@JMYG], the only difference being the perturbative coefficient. One may also redefine TMD gluon distributions as suggested in [@JC1] so that the differential cross section is factorized only with the redefined TMD gluon distributions. Due to this and the reason discussed for p-wave quarkonium, we only give our results factorized with the unsubtracted TMD gluon distributions as in Eq. (\[S-Fac\]). To summarize, we have studied the one-loop TMD factorization of $^1S_0$-quarkonium production in a hadron collision at low transverse momentum. We find that the differential cross section can be factorized with the TMD gluon distributions, the soft factor, and the perturbative coefficient. The TMD gluon distributions and the soft factor are consistently defined with QCD operators; the perturbative coefficient is determined here at one loop. In comparison with the factorization derived at tree level, the soft factor is needed to cancel all effects of soft gluons. Our result will be useful not only for extracting TMD gluon distributions from experimental data, but also for resumming large logs of $q_\perp$ appearing in the collinear factorization. [**Acknowledgments**]{} We would like to thank Prof. Y.-N. Gao for a discussion about the LHCb experiment and G. P. Zhang for discussions about TMD parton distributions. The work of J. P. M. is supported by the National Natural Science Foundation of the People’s Republic of China (Grants No. 10975169, 11021092, and No. 11275244). The work of J. X. W. is supported by the National Natural Science Foundation of the People’s Republic of China (Grants No. 10979056 and No. 10935012), and in part by DFG and NSFC (CRC 110). [99]{} A. D. Martin, C.-K. Ng, and W. J. Stirling, Phys. Lett. B [**191**]{}, 200 (1987). J. C. Collins and D. E. Soper, Nucl. Phys. [**B193**]{}, 381 (1981); Nucl. Phys. [**B213**]{}, 545(E) (1983); Nucl. Phys. [**B197**]{}, 446 (1982); Nucl. Phys. [**B194**]{}, 445 (1982). J. C. Collins, D. E. Soper, and G. Sterman, Nucl. Phys. 
[**B250**]{}, 199 (1985). X. D. Ji, J. P. Ma, and F. Yuan, Phys. Rev. D [**71**]{}, 034005 (2005); Phys. Lett. B [**597**]{}, 299 (2004). J. C. Collins and A. Metz, Phys. Rev. Lett. [**93**]{}, 252001 (2004). X. D. Ji, J. P. Ma, and F. Yuan, J. High Energy Phys. [**07**]{} (2005) 020. D. Boer and C. Pisano, Phys. Rev. D [**86**]{}, 094007 (2012). G. Bodwin, E. Braaten, and G. P. Lepage, Phys. Rev. D [**51**]{}, 1125 (1995); *ibid*. [**55**]{}, 5853(E) (1997). J. Campbell, F. Maltoni, F. Tramontano, Phys. Rev. Lett. [**98**]{}, 252002 (2007); B. Gong and J.-X. Wang, Phys. Rev. Lett. [**100**]{}, 232001 (2008); B. Gong, X.-Q. Li, and J.-X. Wang, Phys. Lett. B [**673**]{}, 197 (2009); Y. Q. Ma, K. Wang, and K. T. Chao, Phys. Rev. Lett. [**106**]{}, 042002 (2011); Y. Q. Ma [*et al.*]{}, Phys. Rev. Lett. [**108**]{}, 242004 (2012); M. Butenschoen and B. A. Kniehl, Phys. Rev. Lett. [**106**]{}, 022003 (2011); Phys. Rev. Lett. [**108**]{}, 172002 (2012); B. Gong, L. P. Wan, J. X. Wang, and H. F. Zhang, Phys. Rev. Lett. [**110**]{}, 042002 (2013). Z.-B. Kang, J.-W. Qiu, and G. Sterman, Phys. Rev. Lett. [**108**]{}, 102002 (2012); S. Fleming, A. K. Leibovich, T. Mehen, and I. Z. Rothstein, Phys. Rev. D [**86**]{}, 094012 (2012). N. Brambilla [*et al.*]{}, Eur. Phys. J. C [**71**]{}, 1534 (2011). R. Aaij [*et al.*]{}(LHCb Collaboration), Eur. Phys. J. C [**71**]{}, 1645 (2011). P. J. Mulders and J. Rodrigues, Phys. Rev. D [**63**]{}, 094021 (2001). X. D. Ji and F. Yuan, Phys. Lett. B [**543**]{}, 66 (2002); A. V. Belitsky, X. D. Ji, and F. Yuan, Nucl. Phys. [**B656**]{}, 165 (2003). D. Boer, W. J. den Dunnen, C. Pisano, M. Schlegel, and W. Vogelsang, Phys. Rev. Lett. [**108**]{}, 032002 (2012). F. Dominguez, J.-W. Qiu, B.-W. Xiao, and F. Yuan, Phys. Rev. D [**85**]{}, 045003 (2012), e-Print: arXiv:1109.6293 \[hep-ph\]. A. Metz and J. Zhou, Phys. Rev. D [**84**]{}, 051503 (2011). J. C. Collins, D. E. Soper, and G. T. Sterman, Nucl. Phys. [**B261**]{}, 104 (1985). G. C. Nayak, J.-W. Qiu, and G. F. Sterman, Phys. Lett. B [**613**]{}, 45 (2005), Phys. Rev. D [**74**]{}, 074007 (2006). P. Sun, C.-P. Yuan, and F. Yuan, arXiv:1210.3432. E. L. Berger, J.-W. Qiu, and Y.-L. Wang, Phys. Rev. D [**71**]{}, 034007 (2005). J. Collins, *Foundations of perturbative QCD*, (Cambridge University Press, Cambridge, 2011); Int. J. Mod. Phys. Conf. Ser. [**04**]{}, 85 (2011).
--- abstract: | Many micro-architectural attacks rely on the capability of an attacker to efficiently find small [*eviction sets*]{}: groups of virtual addresses that map to the same cache set. This capability has become a decisive primitive for cache side-channel, rowhammer, and speculative execution attacks. Despite their importance, algorithms for finding small eviction sets have not been systematically studied in the literature. In this paper, we perform such a systematic study. We begin by formalizing the problem and analyzing the probability that a set of random virtual addresses is an eviction set. We then present novel algorithms, based on ideas from threshold group testing, that reduce random eviction sets to their minimal core in linear time, improving over the quadratic state-of-the-art. We complement the theoretical analysis of our algorithms with a rigorous empirical evaluation in which we identify and isolate factors that affect their reliability in practice, such as adaptive cache replacement strategies and TLB thrashing. Our results indicate that our algorithms enable finding small eviction sets much faster than before, and under conditions where this was previously deemed impractical. author: - Pepe Vila - 'Boris K[ö]{}pf' - 'José F. Morales' bibliography: - 'biblio.bib' title: Theory and Practice of Finding Eviction Sets ---
--- abstract: 'This paper considers packet scheduling over a broadcast channel with packet erasures to multiple receivers with different messages (multiple uni-cast), each with possibly different hard deadline constraints. A novel metric is proposed and evaluated: the [*global deadline outage probability*]{}, which gives the probability that the hard communication deadline is not met for at least one of the receivers. The cut-set upper bound is derived and a scheduling policy is proposed to determine which receiver’s packets should be sent in each time slot. This policy is shown to be optimal among all scheduling policies, [*i.e.,*]{} it achieves all boundary points of cut-set upper bounds when the transmitter knows the erasure patterns for all the receivers ahead of making the scheduling decision. An expression for the global deadline outage probability is obtained for two receivers and is plotted and interpreted for various system parameters. These plots are not Monte-Carlo simulations, and hence the obtained expression may be used in the design of future downlink broadcast networks. Future extensions to per-user deadline outage probabilities as well as to scenarios with causal knowledge of the channel states are briefly discussed.' author: - bibliography: - 'referenc.bib' title: On Erasure Broadcast Channels with Hard Deadlines --- Introduction {#sec:intro} ============ Recently, applications have emerged in which latency, rather than data rate alone, is of prime importance. In particular, there exists a latency, rate and reliability tradeoff for data communications, which, in information theory, has mainly been looked at in the asymptotic sense of reliability (using probability of error as a proxy) $\rightarrow 1$, and latency (using the number of channel uses or blocklength as a proxy) $\rightarrow \infty$. In this work, we seek to characterize a different tradeoff – what (reliability, rate) pairs can be achieved for a given latency. As downlink communications are of critical interest in the last hop of wireless systems, we formulate a packet scheduling problem with hard deadline constraints over a broadcast channel with packet erasures. In particular, a base station serves a number of “impatient receivers” with hard deadline constraints, where periodically generated packets at the base station are expected to be delivered to each receiver before their respective hard deadlines. In this setup, we define a novel metric, the [*global deadline outage probability*]{}, as the probability that the hard communication deadlines are not met for at least one of the receivers. Our goal is to develop a packet scheduling policy that minimizes the global outage probability. [**Prior Work.**]{} The packet scheduling problem with hard deadline guarantees has been considered in past literature. Here, we mention those that are most relevant to our work. The work most closely related to our formulation is that of Hou  [@hou2009admission; @hou2014scheduling]. In  [@hou2009admission], clients transmit their randomly generated bits to an access point under hard deadline constraints over an erasure channel and receive ACK/NACK information through feedback. The channel state is assumed to be static, the deadline constraints are the same for all clients, and the number of packets of each client is at most one per frame, all generated within the same period. In both  [@hou2014scheduling] and  [@hou2009admission], each client $n$ has a long-term average throughput requirement of $q_n$ delivered jobs per interval. 
The author first analytically derived a condition for a set of clients to be feasible: a feasible region of a system is defined to be the set of all feasible $[q_n]$. Such a region is characterized given the deadline and link reliability of each user. He then proposed feasibility optimal scheduling policies, meaning that they can fulfill every feasible set of clients. (A set of clients is feasible if it falls into the feasible region, and is said to be fulfilled under a policy if the long-term average throughput of each client $n$ is at least $q_n$ jobs per interval.) We note that strict optimality is only shown for equal deadlines; heuristics are proposed for unequal deadlines. The problem examined here considers downlink transmission, multiple packets rather than one, and different deadlines for the different users. We will also assume – in order to obtain an upper bound on real-world performance – that the channel erasures are all known a priori, making ACK/NACK information irrelevant. We propose an optimal scheduling policy that minimizes our newly defined metric, the global deadline outage probability. Note that in  [@hou2009admission; @hou2014scheduling], the average throughput requirement (% of data that must be delivered by the deadline) is defined individually for each user and each user tolerates not receiving a percentage of its packets. However, in this paper, a unique global deadline outage probability is defined for all users and is stricter in the sense that all users should receive 100% of their packets, else the system is considered to be in outage. Problems of a similar flavor to this one have been considered in the [*multi-cast*]{} setting in  [@li2010throughput; @tran2012adaptive; @fu2014dynamic; @kim2014scheduling] in which all users want the [*same*]{} content. What we consider here is the notably more challenging [*multiple uni-cast*]{} setting, where all users want different information. In contrast to some of the work in  [@li2010throughput; @tran2012adaptive; @fu2014dynamic; @kim2014scheduling], we seek to obtain more analytical insight through explicit characterization of the [*deadline outage probability*]{}, a new metric. If the possibility of packet erasures is excluded, a similar type of problem is considered in [@liu2016spatial], where a routing and scheduling mechanism is developed over multi-hop wireless networks with deterministically-generated packets. Packet routing and scheduling decisions are made by taking into account different hard deadline constraints. As mentioned before, [@liu2016spatial] does not consider any packet losses (erasures) over links. We consider an erasure broadcast channel. If one is only interested in a single user (rather than the multi-user broadcast scenario considered here), in  [@fu2006optimal] an optimal scheduling policy that maximizes average data throughput for a fixed amount of energy and deadline is developed in the presence of a point-to-point Gaussian fading channel with hard deadline constraints and three types of channel state information: when the channel state is completely known (like here), when only the current state is known just before the transmission, and when the channel state is not known at all. 
Rate-control and scheduling problems when broadcasting [*network coded packets*]{} with similar hard deadline constraints over erasure channels are considered in [@gangammanavar2010dynamic], and a novel strategy that jointly determines the incoming flow rates and coding to maximize the weighted sum rate is developed by taking into account reliability requirements. Two linear encoding strategies that achieve the capacity and stability regions over an erasure broadcast channel with feedback are proposed in [@sagduyu2013capacity]. Since our formulation assumes the source has channel state information ahead of scheduling, coding is unnecessary and the main focus is on packet scheduling. [**Contributions.**]{} The key contributions of this work are: $\bullet$ In Section \[sec:model\] we present the system model and the problem formulation, and propose a new metric, [*the global deadline outage probability*]{}, defined as the probability that the hard communication deadline is not met for at least one of the receivers. $\bullet$ In Sections \[sec:greedy\] and \[sec:global outage\] we derive the information theoretic cut-set outer bound for each erasure pattern and propose a greedy scheduling policy to determine which receiver’s packets should be sent in each time slot. This policy is shown to be optimal in the sense that it both achieves the cut-set bound for a given erasure pattern and also minimizes the global deadline outage probability when the transmitter is assumed to have a priori knowledge of the channel erasures for the coming block. We use this to plot the tradeoffs between latency (deadlines), reliability (global deadline outage probability) and rate (supported arrival rates), noting that the plots are obtained not from Monte-Carlo simulation, but rather from our analytic expressions. $\bullet$ In Section \[sec:num\] we propose heuristics for a more practical scenario where only the current (transmission) time slot is known to the base station, and also a scenario where neither the current time slot nor any future time slots are known to the base station, for similar deadline constraints. System Model and Problem Formulation {#sec:model} ==================================== In this section we describe the channel model and state the precise problem to be solved. We consider a [*discrete memoryless broadcast channel with erasures*]{}. It consists of one transmitter (base-station) and $K$ receivers (users). At each channel use ([*slot*]{}) the transmitter sends one symbol (packet) from the input alphabet $\mathcal{X}$, assumed to be a discrete finite set. A packet either reaches a receiver error-free or it is erased (lost). Erasures are assumed to be independent and identically distributed (iid) across channel uses, that is, the channel is memoryless. A fixed number of bits $\lambda_k \log_2(|\mathcal{X}|)$ arrives at the transmitter every $T_k$ slots and must be delivered to receiver $k\in[1:K]$ before the next arrival. We refer to $T_k$ as the [*hard deadline*]{} for receiver $k\in[1:K]$. We let $T := \mathsf{LCM}(T_1, \ldots, T_K)$, where $\mathsf{LCM}$ stands for least common multiple. A group of $T$ slots is referred to as a [*frame*]{} and represents the time window over which the transmitter needs to make joint scheduling decisions. For receiver $k\in[1:K]$, each frame is composed of $T/T_k$ [*sub-frames*]{}. [*Blocks*]{} are the groups of time slots between consecutive deadlines considering both channels. 
Sub-frames and blocks will be illustrated next; blocks are of importance as we will make scheduling decisions block by block. \[ex:2\*T1=T2\] In Fig. \[fig:network\], (a) illustrates the described communication system for the case of $K=2$ receivers, (b) and (c) illustrate two possible erasure pattern realizations for a frame of length $T=6$ slots for the case of $2T_1 = T_2$ and $3T_1 = 2T_2$, respectively. Erased slots are shown with a hatch pattern, erasure-free slots with white color. $\lambda_1$ (resp. $\lambda_2$) is the amount of data intended for the first (resp. second) receiver and is generated every $T_1=3$ (resp. $T_2=6$) slots in (b) and every $T_1=2$ (resp. $T_2=3$) slots in (c). Slots forming a sub-frame are shown with dotted ovals while a group of slots shown with a horizontal parenthesis forms a block. Notice here that the sub-frames are defined for each user but blocks are common to both users. Let $\mathbf{E}_t \in [0:1]^{K\times 1}$ denote the (random) *erasure pattern* in slot $t$, where $[\mathbf{E}_t]_k=0$ means that the transmitted packet in slot $t\in[1:T]$ has not been received by receiver $k\in[1:K]$, and $[\mathbf{E}_t]_k=1$ otherwise. Denote the erasure probabilities as $\varepsilon_v := \Pr [\mathbf{E}_t = v]$, where we note that $\sum_{v \in [0:1]^{K\times 1}} \varepsilon_v = 1$. Let $\mathbf{E} := [\mathbf{E}_1,\ldots,\mathbf{E}_T] \in [0:1]^{K\times T}$ be the information available at the transmitter in order to make scheduling decisions for a frame. The [*erasure pattern probability*]{} is $$\begin{aligned} \Pr[\mathbf{E} = \mathbf{e}] = \!\!\!\! \prod_{v \in [0:1]^{K\times 1}} \left(\varepsilon_v\right)^{n_v(\mathbf{e})}, \ n_v(\mathbf{e}) := \!\!\!\! \sum_{t\in[1:T]} 1_{\{\mathbf{e}_t = v\}} \label{eq:erasure pattern probability}\end{aligned}$$ where $1_{\mathcal{A}}$ is the indicator function, which is one if the condition in $\mathcal{A}$ is true and zero otherwise, and where $\mathbf{e} = [\mathbf{e}_1,\ldots,\mathbf{e}_T]\in [0:1]^{K\times T}$. A [*scheduling policy*]{} is a mapping $$\begin{aligned} \Pi &: [0:1]^{K\times T} \to [0,1]^{K\times T} \notag \\ &: \mathbf{e} \to \Pi(\mathbf{e}) = [\Pi_1(\mathbf{e}), \ldots, \Pi_T(\mathbf{e})],\end{aligned}$$ where $[\Pi_t (\mathbf{e})]_k \in[0,1]$ is the amount of data, as a fraction of the slot capacity $\log_2(|\mathcal{X}|)$, sent to receiver $k\in[1:K]$ in slot $t\in[1:T]$ when the erasure pattern is $\mathbf{e} \in [0:1]^{K\times T}$. A feasible policy must satisfy $\sum_{k\in[1:K]} [\Pi_t(\mathbf{e})]_k \leq 1, \ \forall t\in[1:T]$, as it is not possible to transmit above the slot capacity. \[eq:policy def\] We say that [*deadlines of all $K$ receivers are met*]{} by the policy $\Pi$ for erasure pattern $\mathbf{e} = [\mathbf{e}_1,\ldots,\mathbf{e}_T]\in [0:1]^{K\times T}$ if and only if the arrival rates are within the following set $$\begin{aligned} \mathcal{E}_{\Pi}(\mathbf{e}) := \Big\{ &\lambda_k \leq \sum_{t\in[1:T_k]} [\mathbf{e}_{(m_k-1)T_k+t}]_k [\Pi_{(m_k-1)T_k+t}(\mathbf{e})]_k, \notag \\&\forall m_k\in\left[1:T/T_k\right], \forall k\in[1:K] \Big\}, \label{eq:condition for no outage}\end{aligned}$$ that is, each receiver $k\in[1:K]$ must be able to receive $\lambda_k$ packets within $T_k$ slots in every one of the $T/T_k$ sub-frames. 
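The quantities just introduced are easy to compute directly. The sketch below is a minimal Python illustration (the erasure probabilities, the pattern, the policy matrix, and the helper names `pattern_probability` and `deadlines_met` are arbitrary choices made here for concreteness, not taken from the text); it evaluates the erasure pattern probability (\[eq:erasure pattern probability\]) and tests the no-outage condition (\[eq:condition for no outage\]) for a given feasible policy.

```python
import numpy as np

def pattern_probability(e, eps):
    """Probability of erasure pattern e (K x T array of 0/1 entries),
    given a dict eps mapping column tuples v to probabilities eps_v."""
    K, T = e.shape
    prob = 1.0
    for t in range(T):
        prob *= eps[tuple(e[:, t])]
    return prob

def deadlines_met(e, pi, lam, deadlines):
    """No-outage condition: receiver k must collect lam[k] (fractions of
    slot capacity) in every sub-frame of length deadlines[k]."""
    K, T = e.shape
    for k in range(K):
        Tk = deadlines[k]
        for m in range(T // Tk):
            served = sum(e[k, m*Tk + t] * pi[k, m*Tk + t] for t in range(Tk))
            if served < lam[k] - 1e-12:
                return False
    return True

# Toy example: K = 2 receivers, frame T = 6, deadlines T1 = 3, T2 = 6.
eps = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.5}
e = np.array([[1, 1, 0, 1, 1, 0],
              [0, 1, 1, 1, 0, 1]])
pi = np.array([[1, 1, 0, 1, 1, 0],   # all capacity to receiver 1 when only it listens
               [0, 0, 1, 0, 0, 1]])  # the rest to receiver 2 (feasible, not necessarily optimal)
print(pattern_probability(e, eps))
print(deadlines_met(e, pi, lam=[2, 2], deadlines=[3, 6]))
```

In a full evaluation one would sweep such a check over all erasure patterns, weighted by their probabilities, which is exactly what the outage probability expressions below formalize.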
The objective is to find the policy in  that minimizes the [*global deadline outage probability*]{}, defined as $$\begin{aligned} P_\text{out}(\lambda_1, \ldots, \lambda_K, T_1, \ldots, T_K) := 1 - \max_{\Pi}\Pr[\mathcal{E}_{\Pi}(\mathbf{E})], \label{eq:pout def}\end{aligned}$$ where the event $\mathcal{E}_{\Pi}(\cdot)$ is defined in (\[eq:condition for no outage\]) as a function of the arrival rates $(\lambda_1, \ldots, \lambda_K)$ and the hard deadlines $(T_1, \ldots, T_K)$. \[eq:pout use\] The ability to design rate control policies is a critical advantage of having a closed form expression for the deadline outage probability: the set of arrival rates $(\lambda_1, \ldots, \lambda_K)$ can be determined for a given desired set of deadlines $(T_1, \ldots, T_K)$ and a tolerable global outage probability $p$ by solving $P_\text{out}=p$ for $P_\text{out}$ given in . Similarly, the set of deadlines by which packets can reach their destination can be predicted for a given fixed set of arrival rates and a deadline outage probability. This paper will deal with $T_2=NT_1$ and $K=2$ users, but we note that generalizations are possible to the general case $MT_1=NT_2$. We do not present this here as the notation becomes quite involved. Let $k$ denote the block index, $k \in [1:N]$, and let $$\begin{aligned} N_{k,v}(\mathbf{E}) := \sum_{t\in[1:T_1]} 1_{ \{ \mathbf{E}_{(k-1)T_1+t} = v \} }, \end{aligned}$$ be the random variable denoting the number of slots with erasure pattern $v\in [0:1]^{2\times 1}$. For notational convenience, we omit the dependence of $N_{k,v}$ on the random erasure pattern $\mathbf{E}$. A Greedy Scheduling Policy and its Optimality {#sec:greedy} ============================================= [To obtain the scheduling policy that minimizes the global outage probability in (\[eq:pout def\]) (the scheduling policy that does so will be called “optimal” but need not be unique), we fix the erasure pattern ${\bf e}$, which in turn allows us to characterize a cut-set outer bound on the pairs of arrival rates $(\lambda_1, \lambda_2)$ that can be supported by that erasure pattern (in order to meet the hard deadlines). We then define a scheduling policy that achieves that outer bound for each possible erasure pattern, and hence minimizes the probability of outage, [*i.e.,*]{} the probability that not all the packets generated at the beginning of each frame can be served by the hard deadlines. ]{} Outer Bound ----------- It is intuitive to see that an optimal policy has the following characteristics. In each block $k\in [1:N]$ of length $T_1$ slots: i) $N_{k,00}$ slots are useless; ii) all the $N_{k,10}$ slots must be allocated to receiver 1 (as they are useless to receiver 2) and similarly the $N_{k,01}$ slots to receiver 2; iii) the $N_{k,11}$ slots must be split among the two receivers. This intuition is confirmed by finding an optimal policy that reaches the cut-set bound – a well-known outer bound on the channel capacity of networks in network information theory [@ElGamalKim:book] – which we detail next for the channel in Fig. \[fig:network\] (b). ![Cut-set upper bound for the erasure pattern shown in Fig. \[fig:network\].](FIG/region.pdf){width="70mm"} \[region\] To describe the cut-set outer bound for a given erasure pattern ${\bf E} = {\bf e}$, let $n_{k,v}$, $v\in [0:1]^{2\times 1}, k\in [1:N]$ denote the realizations of random variables $N_{k,v}$ corresponding to the realized erasure pattern ${\bf e}$. 
For each block $k\in [1:N]$ define $$\begin{aligned} a_k &:= n_{k,10}+n_{k,11}, \\ b_k &:= n_{k,01}+n_{k,11}, \\ c_k &:= n_{k,10}+n_{k,01}+n_{k,11},\end{aligned}$$ where $a_k$ is the number of slots available for transmission to receiver 1 without considering receiver 2, $b_k$ is the number of slots available for transmission to receiver 2 without considering receiver 1, and finally $c_k$ is the number of slots available for transmission to receivers 1 and 2 as if a “super-receiver” had the packets received by both receivers. The (instantaneous, for a given erasure pattern realization) cut-set upper bound for the channel in Fig. \[fig:network\] ($T_2=2T_1$) can be characterized as follows. Since the transmission to receiver 2 occupies the whole frame, which is made of $N=2$ blocks, the optimal scheduler decides to send $\zeta\lambda_2$ packets to receiver 2 during the first block and the remaining $(1-\zeta)\lambda_2$ during the second block, for some optimized “rate split” parameter $\zeta \in [0,1]$. Receiver 1 is not in outage if $\lambda_1\leq a_1$ and $\lambda_1\leq a_2$; receiver 2 is not in outage if $\lambda_2\leq b_1+b_2$; the combination of both receivers is not in outage if $\lambda_1+\zeta\lambda_2\leq c_1$ and $\lambda_1+(1-\zeta)\lambda_2\leq c_2$ for some $\zeta \in [0,1]$. The cut-set upper bound can thus be expressed as $$\begin{aligned} & \bigcup_{\zeta \in [0,1]} \begin{cases} \lambda_1\leq a_1, &\lambda_1\leq a_2, \\ \zeta\lambda_2\leq b_1, &(1-\zeta)\lambda_2\leq b_2, \\ \lambda_1+\zeta\lambda_2\leq c_1, &\lambda_1+(1-\zeta)\lambda_2\leq c_2, \\ \end{cases} \label{eq:cuset 1} \\&= \begin{cases} \lambda_1 \leq \min(a_1,a_2), \\ \lambda_2 \leq v_0, & \hspace*{-2mm} v_0:= b_1+b_2, \\ \lambda_1+\lambda_2 \leq v_1, & \hspace*{-2mm} v_1:= \min(b_1+c_2,c_1+b_2), \\ 2\lambda_1+\lambda_2 \leq v_2, & \hspace*{-2mm} v_2:= c_1+ c_2, \\ \\ \end{cases} \label{eq:cuset 2} \\&= \begin{cases} \lambda_1 \leq \min(a_1,a_2), \\ \lambda_2 \leq v_0 - (\lambda_1-v_1+v_0)^+ - (\lambda_1-v_2+v_1)^+ \\ \phantom{\lambda_2} = v_0 - (\lambda_1-N_{1,10})^+ - (\lambda_1-N_{2,10})^+. \label{eq:cuset 4} \end{cases}\end{aligned}$$ because $v_1-v_0 = \min(c_1-b_1,c_2-b_2) = \min(n_{1,10}, n_{2,10})$ and $v_2-v_1 = \max(c_1-b_1,c_2-b_2) = \max(n_{1,10}, n_{2,10})$, and where $(x)^+ := \max(0,x)$. Such a region is illustrated in Fig. \[region\] for the erasure pattern shown in Fig. \[fig:network\] (b). By extending the previous reasoning to a generic integer $N$ (from $T_2=NT_1$), we see that the cut-set bound reads $$\begin{aligned} \begin{cases} \lambda_1\leq \min_{k\in [1:N]}(a_k),\\ \lambda_2 \leq v_k - k \lambda_1 = v_0 -\sum_{j=1}^{k}(\lambda_1 - v_{j} + v_{j-1})^+,\\ v_k := \min_{\mathcal{S}\subseteq [1:N] : |\mathcal{S}|=k}(\sum_{j\in\mathcal{S}} c_j+\sum_{j \not\in \mathcal{S}} b_j).\\ \end{cases} \label{eq: cutset N*T1=T2}\end{aligned}$$ or equivalently $$\begin{aligned} \mathcal{E}_{\Pi}(\mathbf{e}) := \begin{cases} \lambda_1\leq \min_{k\in [1:N]}(a_k),\\ \lambda_2 \leq v_0 - \sum_{k=1}^{N}(\lambda_1-N_{k,10})^+. \end{cases} \label{eq: final cutse}\end{aligned}$$ where $v_0=\sum_{k=1}^{N} b_k$. The generalization of  to the case $MT_1 = N T_2$ follows by the same reasoning; in this case the outer bound contains bounds of the form $k_1 \lambda_1 + k_2 \lambda_2 \leq v_{k_1,k_2}$ for some real-valued $v_{k_1,k_2}$ and integers $(k_1,k_2)\in[1:T/T_1]\times[1:T/T_2]$. 
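As a concrete illustration of the condition in (\[eq: final cutse\]), the following short sketch (a minimal example with a hypothetical erasure pattern, not one of the patterns in the figures) computes the per-block counts $a_k$, $b_k$, $n_{k,10}$ and checks whether a given rate pair is supported for one realization with $T_2 = N T_1$.

```python
import numpy as np

def cutset_supports(e, lam1, lam2, T1):
    """Check the condition in the cut-set bound (T2 = N*T1) for one
    erasure pattern e (2 x T array): lam1 <= min_k a_k and
    lam2 <= sum_k b_k - sum_k (lam1 - n_{k,10})^+."""
    T = e.shape[1]
    N = T // T1
    a, b, n10 = [], [], []
    for k in range(N):
        blk = e[:, k*T1:(k+1)*T1]
        n10_k = int(np.sum((blk[0] == 1) & (blk[1] == 0)))
        n01_k = int(np.sum((blk[0] == 0) & (blk[1] == 1)))
        n11_k = int(np.sum((blk[0] == 1) & (blk[1] == 1)))
        a.append(n10_k + n11_k)
        b.append(n01_k + n11_k)
        n10.append(n10_k)
    if lam1 > min(a):
        return False
    slack = sum(b) - sum(max(lam1 - n, 0) for n in n10)
    return lam2 <= slack

# Hypothetical pattern with T1 = 3, T2 = T = 6 (N = 2 blocks).
e = np.array([[1, 1, 0, 1, 1, 0],
              [0, 1, 1, 1, 0, 1]])
print(cutset_supports(e, lam1=2, lam2=2, T1=3))
```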
Generalization to more than two receivers also follows easily and the cut-set bound contains bounds on integer-valued linear combinations of the arrival rates. \[sec:greedy\_policy\] Greedy Policy ------------------------------------ We now demonstrate a scheduling policy that can achieve the cut-set bound in (\[eq: final cutse\]), which will be first described for an example and then in a general form. \[ex:2\*T1=T2:out\] Consider the communication system shown in Fig. \[fig:network\] (b). In the first block, slot 3 is allocated to user 1 since it is useless to user 2. The only real question is how to allocate slots which are simultaneously available to both users (slot 2). Our “greedy” policy prioritizes the user with the sooner deadline, [*i.e.,*]{} the more urgent packet. In this example, the first user’s packet has priority to be sent over slot 2 since its deadline is earlier than that of the second user’s packets. If the capacity of this slot is completely used by this user, the second user cannot use that slot. If this slot is not completely occupied by user 1, the remaining capacity can be allocated to user 2. With the same reasoning, in the second block, slots 4 and 6 are allocated to users 1 and 2, respectively. Since the deadlines of both users’ packets are the same in time slot 5, neither packet has priority to be sent over this time slot. [**Greedy policy definition:**]{} Given a priori and full knowledge of the erasure pattern of each block $k\in [1:N]$, the greedy policy $\Pi$ in each block first allocates all the $N_{k,10}$ slots to receiver 1’s packets, and all the $N_{k,01}$ slots to receiver 2’s packets. After allocating all $N_{k,10}$ and $N_{k,01}$ slots, the $N_{k,11}$ slots in the same block should be allocated to the remaining packets of each user (if any) as follows: priority is given to the user with the earlier deadline until that user’s packets are all sent; then they are allocated to the other user. If the capacity of such slots is completely used by the first receiver’s message, the second user will not be allocated any slots. Otherwise, the remaining capacity can be allocated to the second receiver’s message. This block-by-block scheduling continues until a scheduling decision is made over a frame of length $T$. Note that scheduling decisions occur block by block and that the base-station [*only needs knowledge of the erasure pattern of that block; knowledge of the erasures in the full frame is not needed!*]{} However, in order to calculate the probability of outage, we need to keep track of whether the deadlines were met within the entire frame, consisting of multiple blocks in general. Optimality of the Greedy Policy ------------------------------- The greedy policy is optimal. Our central proof idea is to show that for a given erasure pattern ${\bf e}$, the greedy policy schedules packets so as to achieve the cut-set upper bound for that erasure pattern. Thus, the greedy policy supports the largest set of arrival rates $(\lambda_1, \lambda_2)$ for a given erasure pattern, [*i.e.*]{} minimizes the probability of outage given an erasure pattern. Summing over the probability of each erasure pattern yields the minimal probability of outage. To prove that for a given erasure pattern, the greedy policy minimizes the probability of outage, as mentioned before, our greedy policy first allocates the $N_{k,10}$ and $N_{k,01}$ slots (this is why we need full CSI) and then allocates $N_{k,11}$ slots greedily for the remaining packets (if any). 
Since $N_{k,10}$ and $N_{k,01}$ are usable by only one user, it is obvious that allocating such slots to packets in the first step of our greedy policy is optimal. Here we focus on the optimality proof of using the $N_{k,11}$ slots greedily ([*i.e.,*]{} earliest deadline first until that user’s packets are fully satisfied). Suppose now that there exists an optimal scheduler that does not prioritize the user with the most urgent message, and that this optimal scheduler can find a way to send data before their deadlines while the greedy policy cannot. In the optimal non-greedy scheduler, starting from the first simultaneously erasure-free time slot, sending data in such slots and moving forward, we will reach a simultaneously erasure-free slot, $t_i$, that is not assigned to the user with the closer deadline (user 1 in this paper) while this user still has some data to receive. So its remaining data should be sent in one of the later simultaneously erasure-free slots, say $t_j$ (such a slot exists before the deadline of user 1, since the optimal scheduler is able to send the packets and all $N_{k,10}$ and $N_{k,01}$ slots have been occupied). This amount of data can also be sent in the $t_i^{th}$ slot if it is not used by the second user. Even if it is occupied by the second user, we can send the second user’s packet in the $t_j^{th}$ slot since $t_j \leq T_2$. Allocating the $t_i^{th}$ slot to the first user does not cause an outage and the greedy policy is also able to transfer the same packets as the optimal non-greedy scheduler. This contradicts our assumption that the greedy policy cannot send the packets prior to their deadlines. This reasoning is also applicable to the case $MT_1 = N T_2$. Derivation of the Global Deadline Outage Probability {#sec:global outage} ==================================================== In this section, we derive the global deadline outage probability for the discrete memoryless broadcast erasure channel for two users where $T=T_2=N T_1$. [The derivation for $MT_2=NT_1$ will appear in a journal version of this work.]{} The [*deadline outage probability*]{} can be calculated by plugging (\[eq: final cutse\]) into (\[eq:pout def\]). The greedy policy satisfies the deadlines only for erasure patterns for which (\[eq: final cutse\]) holds. Therefore, the [*deadline outage probability*]{} can be calculated as follows. Let $\{N\}_k := \{n_{k,00}, n_{k,01}, n_{k,10}, n_{k,11} : \sum_{m,n \in [0:1]} n_{k,mn} = \frac{T}{N}\}$. Then $$\begin{aligned} &P_\text{out}\left(\lambda_1, \lambda_2, \frac{T}{N}, T\right)= 1- \sum_{k=1}^N \sum_{\{N\}_k} \notag\\ &\prod_{j=1}^{N}{\frac{T}{N} \choose n_{j,00},n_{j,01},n_{j,10},n_{j,11}} \Pr[\mathbf{E} = \mathbf{e}] \ \ 1_{\{\mathcal{E}_{\pi}(\mathbf{e})\}} \label{sec:pout closed form}\end{aligned}$$ where (\[sec:pout closed form\]) uses the multinomial coefficient notation, and $\Pr[\mathbf{E} = \mathbf{e}]$ can be obtained from (\[eq:erasure pattern probability\]). Note that for notational convenience, we omit the dependence of $N_{k,v}$ (or its instances $n_{k,v}$) on the erasure pattern $\mathbf{e}$. 
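For small frames, the closed form can be checked by brute-force enumeration of all erasure patterns. The sketch below is illustrative only (the probabilities and rates are arbitrary, and the feasibility test is the cut-set condition (\[eq: final cutse\]) achieved by the greedy policy); it computes the exact global deadline outage probability with no Monte Carlo simulation.

```python
import itertools
import numpy as np

def outage_probability(lam1, lam2, T1, N, eps):
    """Exact global deadline outage probability for K = 2 and T = T2 = N*T1,
    enumerating all 4^T erasure patterns and applying the cut-set
    condition achieved by the greedy policy."""
    T = N * T1
    cols = [(0, 0), (0, 1), (1, 0), (1, 1)]
    p_no_outage = 0.0
    for pattern in itertools.product(cols, repeat=T):
        prob = np.prod([eps[c] for c in pattern])
        e = np.array(pattern).T            # 2 x T erasure pattern
        a, b, n10 = [], [], []
        for k in range(N):
            blk = e[:, k*T1:(k+1)*T1]
            n10_k = int(np.sum((blk[0] == 1) & (blk[1] == 0)))
            n01_k = int(np.sum((blk[0] == 0) & (blk[1] == 1)))
            n11_k = int(np.sum((blk[0] == 1) & (blk[1] == 1)))
            a.append(n10_k + n11_k); b.append(n01_k + n11_k); n10.append(n10_k)
        ok = (lam1 <= min(a)) and \
             (lam2 <= sum(b) - sum(max(lam1 - n, 0) for n in n10))
        if ok:
            p_no_outage += prob
    return 1.0 - p_no_outage

eps = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.5}
print(outage_probability(lam1=1, lam2=1, T1=3, N=2, eps=eps))
```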
![Probability of outage versus $\lambda$ and $T$ where $T=T_1=T_2, \, \lambda=\lambda_1=\lambda_2, \, \epsilon_{01}= \epsilon_{10}=0.2, \epsilon_{11}= 0.5, \epsilon_{00}=0.1$.[]{data-label="fig:2"}](FIG/2.pdf){width="80mm"} ![Probability of outage versus $T$ for some values of $m$ where $T=T_1= T_2$ and $\lambda_2=1, \lambda_1=m\lambda_2$, $\epsilon_{01}= \epsilon_{10}=0.2, \epsilon_{11}= 0.5, \epsilon_{00}=0.1$.[]{data-label="fig:3"}](FIG/3.pdf){width="70mm"} ![Probability of outage versus $\lambda_2$ for some values of $m$ where $T_1=T_2=12, \lambda_1=m\lambda_2,\epsilon_{01}= \epsilon_{10}=0.2, \epsilon_{11}= 0.5, \epsilon_{00}=0.1$.[]{data-label="fig:4"}](FIG/4.pdf){width="80mm"} ![Probability of outage versus $T$ and $P$ where $T_1=T_2=12, \lambda_1=\lambda_2=1$ and $P$ is the erasure probability of each user.[]{data-label="fig:6"}](FIG/6.pdf){width="80mm"} Fig. \[fig:2\] illustrates the probability of outage for some values of $T$ and $\lambda$ where $T=T_1=T_2$ and $\lambda=\lambda_1=\lambda_2$. As expected, the probability of outage is zero when $\lambda=0$ for all values of $T$ and is one when $T=0$ for all values of $\lambda$. Moreover, for a fixed value of $T$, the probability of outage is increasing in $\lambda$, and for a given value of $\lambda$ is decreasing in $T$. The probability of outage jumps to the next value when either $\lambda_1$, $\lambda_2$ or $\lambda_1+\lambda_2$ reaches the next integer value. This is because the erasures are binary while $\lambda_1$ and $\lambda_2$ are non-negative and real-valued: the probability of outage remains constant as long as the free slot is not completely occupied by both users’ traffic and jumps when it is completely used. Fig. \[fig:3\] illustrates the probability of outage versus $T$ for some values of $m$ where $T_1=T_2=T$ and $\lambda_2=1$ and $\lambda_1=m\lambda_2$. It can be seen that by increasing $T$ the probability of outage goes to zero. For small $m$, the probability of outage tends to zero more quickly. Fig. \[fig:4\] illustrates the probability of outage for some values of $\lambda_2$ where $T_1=T_2=12$ and $\lambda_1=m\lambda_2$. It can be seen that by increasing $m$ the probability of outage goes to one more quickly. The probability of outage is one when $\lambda_1+\lambda_2 > 12$ (as it would be impossible to serve more than 12 packets with a deadline of $T_1=T_2=12$). Fig. \[fig:6\] illustrates the probability of outage versus $T$ and $P$ where $T=T_1=T_2=12, \lambda_1=\lambda_2=1$ and $P$ is the erasure probability of each user (erasures are assumed to be i.i.d. across time and users). Here we see the tradeoff between the channel parameter $P$, the deadline $T$ and the probability of outage. As expected, the larger the deadline for a fixed arrival rate, the smaller the probability of outage for a given $P$. Scheduling policy without future knowledge of channel states {#sec:num} ============================================================ In this section, we consider a more practical scenario where the future channel states are not known to the base station. As a first step, we assume that the current state is known to the base station, but the future states are not available. Then, we will consider the scenario that both current and future states are unavailable. These scheduling policies are heuristics at the moment – proving optimality is left for future work. We first consider that the scheduler (base station) has the knowledge of the current channel state only. 
In particular, when we transmit at time slot $t \in [1:T]$, the scheduler knows the channel state information ${\bf E}_t$, but it does not know the channel states ${\bf E}_\tau$ s.t. $\tau > t$. For the future channel states, the scheduler only knows the erasure probabilities. Next, we discuss how a greedy algorithm is designed when the deadlines of both users are the same; [*i.e.,*]{} $T_1 = T_2 = T$. Our algorithm is summarized in Algorithm \[alg:only\_one\_slot\_known\]. Let us consider that at each time slot $t$, the number of packets that are still supposed to be transmitted is $\lambda_1^{\text{rem}}$ and $\lambda_2^{\text{rem}} $ for users 1 and 2, and the remaining time to schedule packets is $T^{\text{rem}}$. We can express the global deadline outage probability in (\[eq:pout def\]) as $P_{\text{out}}(\lambda_1^{\text{rem}}, \lambda_2^{\text{rem}}, T^{\text{rem}})$ for the packets after (and including) slot $t$ by abusing the notation (note that we abbreviate $P_{\text{out}}(\lambda_1^{\text{rem}}, \lambda_2^{\text{rem}}, T^{\text{rem}}, T^{\text{rem}} ) $ to $P_{\text{out}}(\lambda_1^{\text{rem}}, \lambda_2^{\text{rem}}, T^{\text{rem}})$). Init: $\lambda_1^{\text{rem}} = \lambda_1$, $\lambda_2^{\text{rem}} = \lambda_2$, $t=1$, $T^{\text{rem}} = T$ $P_{\text{out}}^{\text{Xmit 1}} = P_{\text{out}}(\lambda_1^{\text{rem}} - 1, \lambda_2^{\text{rem}}, T^{\text{rem}}-1)$ $P_{\text{out}}^{\text{Xmit 2}} = P_{\text{out}}(\lambda_1^{\text{rem}}, \lambda_2^{\text{rem}} - 1, T^{\text{rem}}-1)$ Transmit a packet to 1 $\lambda_1^{\text{rem}} = \max\{0,\lambda_1^{\text{rem}}-1\}$ Transmit a packet to 2 $\lambda_2^{\text{rem}} = \max\{0,\lambda_2^{\text{rem}}-1\}$ $T^{\text{rem}} = T^{\text{rem}} - 1$ $t = t + 1$ If a packet is successfully transmitted from user 1 at time slot $t$, then the outage probability after time slot $t$ will be $P_{\text{out}}^{\text{Xmit 1}} = P_{\text{out}}(\lambda_1^{\text{rem}} - 1, \lambda_2^{\text{rem}}, T^{\text{rem}}-1)$, because (i) $\lambda_1^{\text{rem}} - 1$ packets will remain to be scheduled after transmitting a packet from user 1, (ii) the number of packets that should be scheduled from user 2 will not change, [*i.e.,*]{} $\lambda_2^{\text{rem}}$ will not change, and (iii) the remaining time to schedule the packets will reduce to $T^{\text{rem}}-1$. Similarly, we can define the outage probability as $P_{\text{out}}^{\text{Xmit 2}} = P_{\text{out}}(\lambda_1^{\text{rem}}, \lambda_2^{\text{rem}} - 1, T^{\text{rem}}-1)$ after a packet from user 2 is scheduled at time slot $t$. Our scheduling policy examines the channel state at slot $t$. If $[{\bf E}_t]_1 = [{\bf E}_t]_2 = 0$, we do not schedule any packets at slot $t$. If $[{\bf E}_t]_1 = 1, [{\bf E}_t]_2 = 0 $ or $[{\bf E}_t]_1 = 0, [{\bf E}_t]_2 = 1 $, we schedule users 1 and 2, respectively. On the other hand, if $[{\bf E}_t]_1 = [{\bf E}_t]_2 = 1$, we compare $P_{\text{out}}^{\text{Xmit 1}}$ and $P_{\text{out}}^{\text{Xmit 2}}$, and schedule a packet for user 1 if $ (P_{\text{out}}^{\text{Xmit 1}} < P_{\text{out}}^{\text{Xmit 2}})$, otherwise schedule a packet for user 2. We evaluated Algorithm \[alg:only\_one\_slot\_known\] when $\lambda_1=\lambda_2=1$ and $\epsilon_{01}= \epsilon_{10}=0.2, \epsilon_{11}= 0.5, \epsilon_{00}=0.1$. Fig. \[fig:all\_algs\] shows the outage probability versus deadline. 
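A compact rendering of Algorithm \[alg:only\_one\_slot\_known\] is sketched below (illustrative only: the helper `P_out` is assumed to be available, e.g., the enumeration routine sketched earlier, and the function simply reports whether both deadlines were met for one realization of the erasure pattern).

```python
def schedule_current_csi(E, lam1, lam2, T, P_out):
    """Current-CSI heuristic for equal deadlines T1 = T2 = T and K = 2.
    E is a 2 x T array revealed one slot at a time; P_out(l1, l2, Trem)
    returns the outage probability with l1, l2 packets left and Trem slots."""
    l1, l2 = lam1, lam2
    for t in range(T):
        Trem = T - t
        e1, e2 = E[0][t], E[1][t]
        if e1 == 0 and e2 == 0:
            continue                                  # slot useless, skip it
        if e1 == 1 and e2 == 0:
            l1 = max(0, l1 - 1)                       # only user 1 can hear
        elif e1 == 0 and e2 == 1:
            l2 = max(0, l2 - 1)                       # only user 2 can hear
        else:                                         # both can hear: compare outage
            if P_out(max(0, l1 - 1), l2, Trem - 1) < P_out(l1, max(0, l2 - 1), Trem - 1):
                l1 = max(0, l1 - 1)
            else:
                l2 = max(0, l2 - 1)
    return (l1 == 0) and (l2 == 0)                    # True if both deadlines met
```

Averaging the returned indicator over many random draws of ${\bf E}$ estimates the outage probability achieved by this heuristic for comparison with the curves in Fig. \[fig:all\_algs\].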
As expected, Algorithm \[alg:only\_one\_slot\_known\] performs worse than non-causal CSI (the greedy algorithm in Section \[sec:greedy\_policy\]), but the outage-probability-versus-delay curves follow the same decay pattern. The proof of optimality for this algorithm (optimal in the sense of minimizing the global outage probability under these limited CSI constraints) is left for future work. ![$P_\text{out}$ versus $T$, when $\lambda_1=\lambda_2=1$ and $\epsilon_{01}= \epsilon_{10}=0.2, \epsilon_{11}= 0.5, \epsilon_{00}=0.1$. (i) Non-causal CSI: The greedy algorithm in Section \[sec:greedy\_policy\]. (ii) Current CSI: Algorithm \[alg:only\_one\_slot\_known\]. (iii) Past CSI.[]{data-label="fig:all_algs"}](FIG/all_algorithms_v2.pdf){width="80mm"} We now assume that in slot $t$, the channel states of the past slots up to time $t-1$ are available (perfectly or partially), but that the current and future states are unavailable. In this setup, we make scheduling decisions at the start of the slot and receive feedback on successful (or not) transmission at the end of the slot (we assume no delayed feedback). In particular, at the start of the slot $t$, we calculate the outage probability when a packet from user 1 is transmitted as $P_{\text{out}}^{\text{Xmit 1}} = P_{\text{out}}(\lambda_1^{\text{rem}} - 1, \lambda_2^{\text{rem}}, T^{\text{rem}}-1)(\epsilon_{10} + \epsilon_{11}) + P_{\text{out}}(\lambda_1^{\text{rem}} , \lambda_2^{\text{rem}}, T^{\text{rem}}-1)(\epsilon_{00} + \epsilon_{01})$. The first term, [*i.e.,*]{} $ P_{\text{out}}(\lambda_1^{\text{rem}} - 1, \lambda_2^{\text{rem}}, T^{\text{rem}}-1)(\epsilon_{10} + \epsilon_{11})$, corresponds to the successful transmission scenario, while the second term, [*i.e.,*]{} $P_{\text{out}}(\lambda_1^{\text{rem}} , \lambda_2^{\text{rem}}, T^{\text{rem}}-1)(\epsilon_{00} + \epsilon_{01})$, corresponds to the failure event. Our proposed scheduling policy compares outage probabilities, and if $P_{\text{out}}^{\text{Xmit 1}} < P_{\text{out}}^{\text{Xmit 2}}$, a packet is transmitted to user 1, otherwise to user 2. At the end of the slot, the scheduler receives perfect and immediate feedback from the users, and determines if packets are correctly delivered. Note that if a packet is not correctly delivered, the remaining packet counts, [*i.e.,*]{} $\lambda_1^{\text{rem}} $ and $\lambda_2^{\text{rem}} $, do not decrease. This means the same packet will be re-transmitted. The global deadline outage probability with past CSI is shown in Fig. \[fig:all\_algs\]. Past CSI performs worse than non-causal CSI and limited CSI. This confirms the necessity of optimizing re-transmission mechanisms, which is left for future work. Conclusion {#sec:conclusion} ========== This paper studies the scheduling problem for transmitting periodically generated flows with hard-deadline constraints over an erasure broadcast channel. We propose a ‘greedy’ scheduling policy (serving the user with the earliest deadline first) that is optimal in the sense that it minimizes the [*global deadline outage probability*]{} when the channel erasures are known ahead of transmission (only knowledge block by block is needed). We obtain a closed form expression for this global probability of outage, yielding a tradeoff between the arrival rates, the hard deadlines, and the reliability (probability of meeting those deadlines). Furthermore, two heuristics are proposed for more practical scenarios, where the channel state information is not known to the base station ahead of time. 
Future work includes extensions to per-user probabilities of outage, extensions to Gaussian broadcast channels, and extensions to the dual of this problem – the uplink multiple-access channel.
--- abstract: 'The theorems of vector analysis (divergence theorem, etc.) are typically first applied in the undergraduate physics curriculum in the context of the electromagnetic field and the differential forms of Maxwell’s equations. However, these tools are analyzed in depth several courses later, at the junior-senior level. I discuss here a “bridge" problem, using the language of vector calculus in a mechanics setting to understand Archimedes’ principle as a consequence of hydrostatic equilibrium and the superposition of the external forces. It is my hope that this treatment will help students better integrate and understand these and similar vector analysis results in contexts beyond electromagnetism.' author: - 'Galen T. Pickett' title: 'Draft: Volume Integral of the Pressure Gradient and Archimedes’ Principle' --- Introduction ============ There is a qualitative transition in the sophistication of the mathematical tools required as one moves from describing mechanical systems of a few free bodies to those dominated by the behavior of continuous fields. Second-order ordinary differential equations are indispensable in the study of point-system mechanics, particularly in a first-year course, and many curricula are organized around first understanding the properties of these equations (particularly in the case in which the equation of motion has the form $\ddot{x} = const$). In a first course in electromagnetism, the fundamental quantities to understand are the distributed electromagnetic fields, $\vec{E},\vec{B}$, with dynamics given by Maxwell’s Equations using the language of partial differential equations with appropriate boundary conditions. Even very good companion works can give the impression to undergraduates that the ideas of “divergence, gradient, and curl” have their primary (perhaps only) role in describing electricity and magnetism. [@dgcandallthat] Thus, beginning students get the impression that these mathematical tools are somehow wedded to the separate subjects of the physics canon, rather than being independent, widely applicable tools. My aim here is to present a mechanics problem which is usually dealt with in the algebra-based mechanics curriculum and recast it using the mathematical language students expect in electricity and magnetism. I have two aims. First, I will connect Archimedes’ principle (usually developed as a “just-so" statement of how a “buoyant force" works) to real forces acting on real bodies. The ideas of buoyancy, density, and volume are known areas of difficulty for science teaching candidates, K-12 students, and even quite advanced students majoring in physics. [@loverude1; @loverude2] As we shall see, the buoyant force is merely a consequence of the superposition of many forces acting on the outer edge of an object, and thus is no more mysterious than any other force in mechanics. Secondly, I wish to give an undergraduate a context in which vector analysis has a definite application in the most familiar, concrete physics a student knows, namely mechanics. Thus, the goal is to integrate these mathematical tools with the “net forces" ideas in this nexus of problems, and to place that integrated understanding firmly within the grasp of an intermediate-level student of physics. It is not my intention to merely display yet another derivation of the principle, although I will certainly do that. Rather, it is my intention to integrate this basic idea into our curriculum in a fundamental way. 
If this work is used to merely guide a new set of derivations in a lecture hall, its purpose will be unfulfilled. There is a growing consensus across the academy that integrating methods and content, and creating a unified, flexible intellectual framework supporting lifelong learning, is the eventual goal of a baccalaureate course of study — even in professional and STEM fields, perhaps [*particularly so*]{}. Too often a university education can be experienced as a list of topics or courses logically disconnected from each other. The work of integrating the education into a seamless whole can be left to a student, and may or may not occur. [@aacu_integrate] Physics is particularly blessed in this regard, in that our discipline celebrates the unification of disparate phenomena in a way that few disciplines have a history of doing. There are a number of excellent introductory texts that take this integrated point of view as opposed to a historical development. The “right" ideas are developed at the beginning of study, and the special cases amenable to analysis (exact or approximate) are labeled as such.[@mandi] In the physics program at CSU Long Beach where I am a faculty member, we have for a number of years been intentional about this very issue, stemming from our activity as a PhysTEC-supported site in 2010.[@phystec] One element of this program was to develop a Learning Assistant Program on the Colorado model, along with a course in physics pedagogy [@phys390] we use to train physics majors to teach in our lower-level laboratories as Learning Assistants. [@la_model] A part of that course, and the key to explaining to physics majors why this low-level material is so hard to learn (and teach), is discussing the excellent article by Reddish on mental models in physics learning. [@reddish] One of the most striking features of this treatment is the realization that mental structures of physics in a student’s head are incomplete and disorganized, and that even for very well-trained students (and faculty) there exist “holes” in even very basic concepts. In taking this course, students are required to be present in a science tutoring center, and as is often the case, have duties to help students with courses they themselves have never taken. Algebra-based mechanics and K-6 level physical sciences are never taken by physics students, and we assume that advanced students (with advanced mathematical and physics tools) should be able to handle the more elementary ideas in those “precursor” courses. It was a relatively simple problem using Archimedes’ principle, one that stumped my stable of learning assistants, that prompted the development below. It is my experience that anything involving fluids is inordinately tricky to explain. The basic functioning of a centrifuge can be understood at an engineering level and qualitatively well enough for the devices to work correctly in the hands of research-caliber scientists and engineers. Yet, the explanation in terms of hydrostatic equilibrium in a non-inertial frame is subtle, with tendrils reaching up into very sophisticated mechanics.[@taylor; @goldstein] Of the many treatments of the principle (starting with a good exposition of Archimedes’ words themselves [@whatsaid] and even quite interesting derivations centering on energy ideas [@apfrome]), not many cement the principle into the basic foundations of mechanics that many students understand and can already use quite well. 
Even in a static case, in which Archimedes’ principle and buoyancy should be sufficient to solve many problems, my group of junior-level physics students “knew” Archimedes’ principle, but were unable to apply it. And, embarrassingly enough, likewise for myself. Only when I had put the problem into the language of vector analysis did Archimedes’ principle actually make sense to me (in the sense that I had connected it to my own ideas of mechanics). The point of this paper is to elucidate the conditions that led me to “reconstruct” a theorem of vector calculus Archimedes himself must have discovered: [@joke] $$\varoiint d \vec{A} p = \iiint dv \vec{\nabla} p \label{at}$$ which some will recognize as the three-dimensional version of the “mean-value theorem,” but which I will simply regard as “Archimedes’ Theorem" — a fitting companion to Gauss’ law. I will first describe the elementary problem that revealed to us all that we had a buoyancy gap in our understanding. Then I will describe hydrostatic equilibrium and the new vector identity, and in the discussion describe some applications and “clicker” style questions. Elementary Problem ================== The problem that sent my physics major Learning Assistants off into a cycle of recursive algebra is a relatively straightforward problem encountered in the algebra-based mechanics sequence. Here is a variation: [*A hot air balloon of volume $V$ is in equilibrium surrounded by air of density $\rho_a$. At $t=0$ ballast of mass $m_o$ is released from the balloon, and it is observed to accelerate upwards with an acceleration $a$. What is the final mass of the balloon (in terms of $m_o, \rho_a, g, V$)?*]{} These were students, by and large, who had just taken our junior-level course in mechanics, or had completed a course in introductory modern physics and differential equations, so there was no issue with their preparation. Yet, they reported that they spent an inordinate amount of time circling around the problem without coming to a resolution, much to the consternation of the first-year students who were hoping to get some help on the problem in our tutoring center. What my students knew, and what they had connected to the stock of problems they had personally solved, were two different things. It was only the presence of our Master Teacher in Residence, Ms. Kathryn Beck of Bolsa Grande High School, that saved our collective bacon by remarking that the buoyant force is just $\rho_a V g$ both before the ballast had been dropped and after.[@tir] Given that this force is the same before and after, writing $\rho_a V g = (M + m_o) g$ for the initial equilibrium and $\rho_a V g - M g = M a$ after the ballast is released leads immediately to the observation that $$M = m_o g / a.$$ In short, my students had “heard” of Archimedes’ principle (as I had myself), but they had never had to use it in a physical context (this sort of problem has fallen out of our “calculus”-based mechanics curriculum). They were quite confident in their use of free body diagrams, but the nature of the buoyant force was not connected from the beginning to the end of the problem. And, once I had the benefit of Ms. Beck’s observation (she was clearly very, very familiar with this sort of problem, having taught it many years in many guises and having charged generations of students to construct and predict the displacement of cardboard boats), I was left with a conundrum. There is in reality no [**buoyant**]{} force; there is only the effect of the surrounding air pressure on the balloon. As I had drilled into my own mind in the course of working through my own Ph.D. 
thesis [@pickett], only gradients of the pressure field give forces. The presence of a pressure gradient is the [**only**]{} way the air can affect the balloon, not through an unsystematic “watch me pull a rabbit out of a hat” buoyant force. Pressure ======== And, here is how I connect the buoyant force to the presence of a non-uniform pressure field. ![A small cube of the ambient air at density $\rho$ is subjected to an inward force on its upper face, an upward force on its lower face, and its weight. An abstract “net force" or “free body" diagram is at right.[]{data-label="schematic"}](schematic.eps) The air around the balloon is in hydrostatic equilibrium. Thus, a small cube of air with edge lengths $dx, dy, dz$ is in equilibrium when: $$(\rho_a dx dy dz) g = P(x,y,z) dx dy - P(x,y,z+dz) dx dy$$ Here, the pressure on the top face of the box produces a downward force, and the pressure on the bottom face produces an upward force. Rearranging a bit gives a differential equation for the pressure: $$\rho_a g = - \partial_z P \label{pgrad}$$ with the familiar solution: $$P = P_o - \rho_a g z, \label{hydro}$$ where $P_o$ is the pressure at $z=0$. This treatment is elementary enough, certainly not worth writing a manuscript to share. So, now we know the pressure field around the balloon. What is the microscopic mechanism of transferring force to the balloon from this pressure field? Again, mirroring the constructions in Ref. [@dgcandallthat], I consider an infinitesimal element of the surface area (with normal directed outward) of the balloon, $d\vec{A}$. As in Figure \[at\_schematic\], the air puts a force on this area element, $d\vec{F} = -P d\vec{A}$. ![An arbitrarily shaped object is impinged by pressure forces from the surrounding medium. Given the pressure field, $P(\vec{r})$, our task is to add up all of the forces resulting from each infinitesimal directed area element, $d\vec{A}$. Also shown schematically is a “box discretization” of the sort used to motivate theorems in vector calculus.[@dgcandallthat][]{data-label="at_schematic"}](at_fig2.eps) Note that there is a pressure field inside the balloon as well, but if we are concerned with the balloon as the system, all such internal forces cancel in pairs — a distinct advantage of the Matter and Interactions curriculum[@mandi] is the clear demarcation between system and surroundings in every physical situation. Thus, the total force from the air is a sum over all of these small air forces: $$\vec{F} = \varoiint - d\vec{A} P \label{fair}$$ This looks suspiciously like the “flux” expression in Gauss’ Law, except that $\vec{F}$ is a vector quantity and not a scalar formed by the flux of a vector field through a closed surface. Here is the clever bit, which generates a new (to me) theorem of vector calculus. Let us dot both sides of Eq. \[fair\] with $\hat{z}$: $$F_z = \varoiint - d \vec{A} \cdot (P \hat{z}) = \iiint dv -\vec{\nabla}\cdot ( P \hat{z} ).$$ The second equality follows from the divergence theorem applied to the vector field $P \hat{z}$. Thus, we have $$F_z = - \iiint dv \partial_z P = - (-g) \iiint dv \rho_a = g M_a,$$ where the pressure gradient in Eq. \[pgrad\] is used in the second equality. Eureka. This gives just the $z$ component of the total force, and in this case, as the pressure has no gradients in either the $x$ or $y$ direction, this is all the analysis can tell us, although if $P$ depends on each of $x,y, z$ nontrivially, we have derived the full version of Archimedes’ theorem, Eq. \[at\]. 
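The same bookkeeping is easy to verify numerically in the spirit of the “box discretization” of Figure \[at\_schematic\]: discretize a closed surface, sum the pressure forces $-P\, d\vec{A}$, and compare with the weight of the displaced air. A minimal Python sketch follows (the shape, resolution, and numerical values are arbitrary illustrative choices, not taken from the text).

```python
import numpy as np

rho_a, g, P0 = 1.2, 9.8, 101325.0          # illustrative air density, gravity, P at z = 0
P = lambda z: P0 - rho_a * g * z           # hydrostatic field, Eq. (hydro)

# Sphere of radius R centered at height zc; sum -P dA over a fine surface mesh.
R, zc, n = 0.25, 2.0, 800
theta = (np.arange(n) + 0.5) * np.pi / n            # polar angle samples
phi = (np.arange(2 * n) + 0.5) * np.pi / n          # azimuthal angle samples
TH, PH = np.meshgrid(theta, phi, indexing="ij")
dA = R**2 * np.sin(TH) * (np.pi / n) * (np.pi / n)  # area of each surface patch
nz = np.cos(TH)                                     # z-component of the outward normal
z = zc + R * np.cos(TH)                             # height of each patch
Fz = np.sum(-P(z) * nz * dA)                        # z-component of the sum of -P dA

V = 4.0 / 3.0 * np.pi * R**3
print(Fz, rho_a * g * V)    # the surface sum reproduces the weight of displaced air
```

With a few hundred patches per angle the two printed numbers agree to well under a percent, which is the content of Eq. \[at\]: the surface sum of $-P\,d\vec{A}$ reduces to the volume integral of the pressure gradient.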
Discussion ========== The development of Archimedes’ Principle as a consequence of hydrostatic equilibrium, Eq. \[hydro\], and a generalization of the divergence theorem, Eq. \[at\], is clearly within the ability of an advanced undergraduate to use, and, in my opinion, lifts Archimedes’ Principle from an [*ad hoc*]{} statement and grounds it solidly upon the development of general equilibrium, forces, and superposition of forces. It is altogether more clear to an advanced student [*why the principle is true*]{}, which allows it to be used in different contexts. The design of numerous hand-made “bob” accelerometers is a case in point. Here, an object less dense than water is submerged completely, yet tethered to the bottom of a clear, plastic bottle. The bob reacts quickly to an applied acceleration: accelerate forward, and the bob immediately leans forward, and the magnitude of the acceleration can be measured by the angle of the tilt. I have seen many students fumbling with [**why**]{} this should occur. There is some loose talk of “water rushing backward, the bob getting out of the way,” and yet this is clearly an unsystematic treatment. Suppose the bottle is accelerated along the $x$-axis. A pressure field is immediately set up in the fluid (on the timescale it would take a shockwave to traverse the bottle) with a large pressure at the rear of the bottle, and a smaller pressure at the front. The gradient of the pressure field has to satisfy the analog of the hydrostatic condition, Eq. \[hydro\], for a fluid undergoing an acceleration $\vec{a}$: $$P = P_o + \rho_{fluid}\,(\vec{g}-\vec{a}) \cdot \vec{r}.$$ Then, an application of Eq. \[at\] yields $$\vec{F}_{fluid} = (\vec{a}-\vec{g}) M_{fluid}.$$ In particular, for a purely horizontal acceleration $\vec{a} = a\hat{x}$ this force on the submerged bob points along $a\hat{x} + g\hat{z}$, so the tethered bob leans forward from the vertical by an angle $\theta$ with $\tan\theta = a/g$. Thus, for a quiescent bottle, there is a net pressure force in the direction opposite “$\vec{g}$”, but given the equivalence principle, we should be able to replace $\vec{g}$ with an accelerating noninertial frame with acceleration equal to $-\vec{g}$ with no observable effects. And, if we moved the whole discussion off into the International Space Station, we would have $\vec{g}=0$, and to get the bob to point away from the bottom of the bottle, we would have to accelerate the bottle in the $\hat{z}$ direction with an acceleration equal in magnitude to the terrestrial $g$. A suitable probe for the reasonableness of this whole discussion is the following “clicker"-style question. *A helium-filled balloon is suspended from a thread one meter from the floor of an enclosed elevator. At the instant you cut the thread, the cable supporting the elevator is severed. While you are hurtling toward certain severe injury, you observe the balloon* - [A)]{} Rise toward the ceiling of the elevator. - [B)]{} Remain one meter from the floor of the elevator. - [C)]{} Fall toward the floor of the elevator. The correct answer is B), of course, for a very physical reason that is opaque in the usual reasoning of the principle. In the freely falling elevator, the pressure field is uniform, and hence the net force from the air on the helium balloon is zero. The balloon thus has only its weight force acting on it, and it freely falls along with the elevator. An unsophisticated reading of the problem will assume that the buoyant force is [**always**]{} acting, so the balloon will not only rise upward, it will rise upward faster and faster until it hits the roof of the elevator.
A suitable demonstration (in a very special case) is found in dropping a water-filled container with a neutrally buoyant object floating in its midsection. If the buoyant force were always acting, the ball would remain in equilibrium while the container falls, and the top of the container would fall toward the ball. Yet, in the freely falling system, the water, ball, and container all should share the same acceleration. And further, we have a solid explanation in terms of real forces in an inertial frame for the functioning of a centrifuge. For, as the centrifuge spins, there is an acceleration field of the form $ - \omega^2 \vec{r}_{\perp}$. Thus, the equivalent of the hydrostatic condition is (in cylindrical coordinates) $$\partial_r P = \omega^2 r \rho_l,$$ and the analog of Eq. \[at\] then gives a radial pressure force $$F_{fluid} = -\omega^2 \rho_l \iiint r\, dv = -\omega^2 M_l R,$$ directed toward the rotation axis, where $M_l$ is the mass of the displaced fluid and $R$ is the radial position of the geometric center of the displaced volume. This is exactly the position of the center of mass of the fluid that would exist had the object not been submerged in the fluid. The dependence of $F_{fluid}$ on $R$ provides an enhancement of the density-discriminating effect, crucial for the operation of a gas-centrifuge. But again, this is solely a function of adding up the pressure on each surface of an object, and determining the net force. From there, we can connect to a student’s early training in free body diagrams (and the Momentum Principle, in the language of Ref. [@mandi]). What is going on here is extremely clear from a computational point of view. As in Figure \[at\_schematic\], we attempt to calculate the effect of the pressure forces on the finite elements in the entire body under consideration. As the pressures are all equal in the $x$ and $y$ directions, it is just the $z$ direction that I will concentrate upon here. Each element of volume has two faces, and the pressures at neighboring surfaces give canceling forces. It is just the pressure at the top, and the pressure at the bottom of the overall column that is important, and this gradient is just adding up the increments in the weights of the fluid elements. What we have here is little more than $$F_z = dx\, dy\, (P_{bot} - P_{top}) = dx\, dy\, (P_{bot} - P_{1} + P_{1} - P_{2} + \cdots + P_{n-1} - P_{top}),$$ with each increment supplying the weight of a single slab, $$(P_{i-1} - P_{i})\, dx\, dy = (dx\, dy\, dz)\, \rho g.$$ A student asked to [**compute the effect of the pressure gradient**]{} will find Eq. \[at\] to be a trivial restatement of a massive internal cancellation of terms. Exactly this sort of exercise is well within the capability of students completing a computation-enriched curriculum.[@mandi; @taylor; @goldstein] Conclusion ========== Here, I have related an episode revealing a “hole” in the preparation of undergraduate students in physics at CSU Long Beach (and perhaps in many more institutions besides), and proposed a solution. The solution brings the mathematics of vector analysis back into the physics curriculum during the hiatus between electrodynamics courses (just when it would be very useful [*e.g.*]{} in an introduction to quantum mechanics). And, the argument provides a clear insight into how deeply Archimedes himself understood the theorems of vector calculus.[@joke] [99]{} Harry Moritz Schey, *Div, Grad, Curl, and All That*, 4th edition (W. W. Norton, 2005). Michael E. Loverude, Christian H. Kautz, and Paula R. L. Heron, Am. J. Phys. **71**, 1178 (2003). Paula R. L. Heron, Michael E. Loverude, P. S. Shaffer, and L. C. McDermott, Am. J. Phys. **71**, 1188 (2003); George D.
Kuh, *High-Impact Educational Practices: What They Are, Who Has Access to Them, and Why They Matter* (AAC and U, 2008). Ruth W. Chabay, Bruce A. Sherwood, *Matter and Interactions, vol. 1*, 3rd edition (John Wiley and Sons, 2011). David E. Meltzer and Peter S. Shaffer, eds., *Teacher Education in Physics* (American Physical Society, 2011). PHYS 390 “Exploring Physics Teaching” is taken by approximately 30 physics majors annually at CSU Long Beach, and counts as upper-division elective credit in the physics major. V. Otero, S. Pollock, N. Finkelstein, Am. J. Phys. **78**, 1218 (2010). Edward F. Redish, Am. J. Phys. **62**(9), 796 (1994). John Robert Taylor, *Classical Mechanics* (University Science Books, 2005). Herbert Goldstein, Charles P. Poole Jr., John L. Safko, *Classical Mechanics*, 3rd edition (Pearson, 2001). Erlend H. Graf, Phys. Teach. **42**, 296 (2004). V. M. Rudiak, Phys. Teach. **2**, 293 (1964). While there is evidence that Archimedes had developed ideas of infinitesimals in various geometric problems, he almost certainly had no concept of a vector field or vector calculus. It is a testament to his powers of insight that his principle has such deep roots in vector analysis. We have had four TIRs at CSU Long Beach: Rod Ziolkowski of Whitney High School in Cerritos, CA; Kathryn Beck of Bolsa Grande High School in Garden Grove, CA; Kevin Dwyer of Cypress High School in Cypress, CA; and Meredith Ashbran of Da Vinci High School in Hawthorne, CA. Interacting with these physicists has been extremely valuable for both our students and our faculty. Generally, high school physicists have an enormous wealth of techniques and a depth of pedagogical experience that I, for one, envy. Galen T. Pickett, J. Chem. Phys. **104**, 1657 (1996).
--- abstract: | We give the first $2$-approximation algorithm for the cluster vertex deletion problem. This is tight, since approximating the problem within any constant factor smaller than $2$ is UGC-hard. Our algorithm combines the previous approaches, based on the local ratio technique and the management of true twins, with a novel construction of a “good” cost function on the vertices at distance at most $2$ from any vertex of the input graph. As an additional contribution, we also study cluster vertex deletion from the polyhedral perspective, where we prove almost matching upper and lower bounds on how well linear programming relaxations can approximate the problem. address: - 'Département de Mathématique Université libre de Bruxelles Brussels, Belgium' - 'School of Mathematics Monash University Melbourne, Australia' author: - Manuel Aprile - Matthew Drescher - Samuel Fiorini - Tony Huynh bibliography: - 'references.bib' title: A Tight Approximation Algorithm for the Cluster Vertex Deletion Problem --- [^1] Introduction {#sec:intro} ============ A *cluster graph* is a graph that is a disjoint union of complete graphs. Let $G$ be any graph. A set $X \subseteq V(G)$ is called a *hitting set* if $G - X$ is a cluster graph. Given a graph $G$ and (vertex) cost function $c : V(G) \to {\mathbb{Q}}_{\geqslant 0}$, the *cluster vertex deletion* problem ([<span style="font-variant:small-caps;">Cluster-VD</span>]{}) asks to find a hitting set $X$ whose cost $c(X) := \sum_{v \in X} c(v)$ is minimum. We denote by $\operatorname{\mathrm{OPT}}(G,c)$ the minimum cost of a hitting set. If $G$ and $H$ are two graphs, we say that $G$ *contains* $H$ if some induced subgraph of $G$ is isomorphic to $H$. Otherwise, $G$ is said to be *$H$-free*. Denoting by $P_k$ the path on $k$ vertices, we easily see that a graph is a cluster graph if and only if it is $P_3$-free. Hence, $X \subseteq V(G)$ is a hitting set if and only if $X$ contains a vertex from each induced $P_3$. From what precedes, [<span style="font-variant:small-caps;">Cluster-VD</span>]{} is a hitting set problem in a $3$-uniform hypergraph, and as such has a “textbook” $3$-approximation algorithm[^2] [@Vazirani; @Williamson_Shmoys]. Moreover, the problem has an approximation-preserving reduction from [<span style="font-variant:small-caps;">Vertex Cover</span>]{}, hence obtaining a $(2-{\varepsilon})$-approximation algorithm for some ${\varepsilon}> 0$ would contradict either the Unique Games Conjecture or P $\neq$ NP. [<span style="font-variant:small-caps;">Cluster-VD</span>]{} has applications in graph modeled data clustering in which an unknown set of samples may be contaminated. An optimal solution for [<span style="font-variant:small-caps;">Cluster-VD</span>]{} can recover a clustered data model, retaining as much of the original data as possible [@HKMN10]. The first non-trivial approximation algorithm for [<span style="font-variant:small-caps;">Cluster-VD</span>]{} was a $5/2$-approximation due to You, Wang and Cao [@YWC17]. Shortly afterward, Fiorini, Joret and Schaudt gave a $7/3$-approximation [@fiorini2016improved], and subsequently a $9/4$-approximation [@FJS20]. Our contribution {#sec:ourcontribution} ---------------- In this paper, we close the gap between $2$ and $9/4 = 2.25$ and prove the following tight result. \[thm:2apx\] [<span style="font-variant:small-caps;">Cluster-VD</span>]{} has a $2$-approximation algorithm. 
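For concreteness, the “textbook” $3$-approximation mentioned above can be sketched in a few lines. This is *not* the algorithm of Theorem \[thm:2apx\], only the baseline it improves on; the sketch assumes a plain Python dict-of-sets graph representation and exact (rational) costs, and all names are illustrative.

```python
from fractions import Fraction

def find_induced_p3(adj, cost):
    """Return an induced P3 (u, v, w) -- v adjacent to u and w, u and w non-adjacent --
    whose three vertices all have positive residual cost, or None if there is none."""
    active = {v for v in adj if cost[v] > 0}
    for v in active:
        nbrs = [u for u in adj[v] if u in active]
        for i, u in enumerate(nbrs):
            for w in nbrs[i + 1:]:
                if w not in adj[u]:
                    return u, v, w
    return None

def cluster_vd_3apx(adj, cost):
    """Textbook local-ratio 3-approximation: repeatedly pay the cheapest vertex of some
    induced P3 on all three of its vertices, then return the zero-cost vertices."""
    c = {v: Fraction(cost[v]) for v in adj}        # residual costs, exact arithmetic
    while True:
        p3 = find_induced_p3(adj, c)
        if p3 is None:
            break
        eps = min(c[v] for v in p3)
        for v in p3:
            c[v] -= eps
    # Every induced P3 now contains a vertex of residual cost 0, so this is a hitting set.
    return {v for v in adj if c[v] == 0}
```

On the path $P_4$ with unit costs, for instance, the sketch zeroes out one induced $P_3$ and returns its three vertices (cost $3$) while the optimum is $1$; the rest of the paper is about replacing this factor $3$ by the tight factor $2$.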
All known approximation algorithms for [<span style="font-variant:small-caps;">Cluster-VD</span>]{} are based on the local ratio technique. See the survey of Bar-Yehuda, Bendel, Freund, and Rawitz [@bbfr2004] for background on this standard algorithmic technique. Our algorithm is no exception, see Algorithm \[algo\] below.[^3] However, it significantly differs from previous algorithms in its crucial step, namely, Step \[step:find\_good\_H\]. (Algorithm \[rulesalgo\] in Section \[sec:analysis\] gives a detailed version of Step \[step:find\_good\_H\].) Let $H$ be an induced subgraph of $G$, and let $c_H: V(H) \to {\mathbb{Q}}_{\geqslant 0}$. The weighted graph $(H,c_H)$ is said to be *$\alpha$-good in $G$* (for some factor $\alpha \geqslant 1$) if $c_H$ is not identically $0$ and $$\label{eq:alpha_good} \sum_{v\in X\cap V(H)} c_{H}(v) \leqslant \alpha \cdot \operatorname{\mathrm{OPT}}(H, c_H)$$ holds for every (inclusionwise) minimal hitting set $X$ of $G$. We overload terminology and say that an induced subgraph $H$ is *$\alpha$-good in $G$* if there exists a cost function $c_{H}$ such that $(H,c_{H})$ is $\alpha$-good in $G$. We stress that the local cost function $c_H$ is not necessarily the restriction of the global cost function $c : V(G) \to {\mathbb{Q}}_{\geqslant 0}$ to $H$. $(G,c)$ a weighted graph $X$ a minimal hitting set of $G$ \[step:lambda\_star\] We will use two methods to establish $\alpha$-goodness of weighted induced subgraphs. We say that $(H,c_H)$ is *strongly* $\alpha$-good if $c_H$ is not identically $0$ and $$\sum_{v \in V(H)} c_H(v) \leqslant \alpha \cdot \operatorname{\mathrm{OPT}}(H,c_H)\,.$$ Clearly, if $(H,c_H)$ is strongly $\alpha$-good then $(H, c_H)$ is $\alpha$-good. Moreover, unlike $\alpha$-goodness, strong $\alpha$-goodness does not depend on $G$. We say that $H$ itself is *strongly $\alpha$-good* if $(H,c_H)$ is strongly $\alpha$-good for some cost function $c_H$. If we cannot find a strongly $\alpha$-good induced subgraph in $G$, we will find an induced subgraph $H$ such that at least one vertex of $H$ has no neighbor in $G - V(H)$, and a cost function $c_H : V(H) \to {\mathbb{Z}}_{\geqslant 1}$ such that $$\sum_{v \in V(H)} c_H(v) \leqslant \alpha \cdot \operatorname{\mathrm{OPT}}(H,c_H) + 1\,.$$ Since at least one vertex of $H$ has no neighbor in $G - V(H)$, no minimal hitting set $X$ can contain *all* the vertices of $H$. Therefore, $$\sum_{v \in X \cap V(H)} c_H(v) \leqslant \sum_{v \in V(H)} c_H(v) - 1 \leqslant \alpha \cdot \operatorname{\mathrm{OPT}}(H,c_H)\,,$$ and so $(H,c_H)$ is $\alpha$-good in $G$. In order to illustrate these ideas, consider the following two examples, see Figure \[fig:C4\_P3\]. First, let $H$ be a $4$-cycle and $\mathbf{1}_H$ denote the unit cost function on $V(H)$. Then $(H,\mathbf{1}_H)$ is strongly $2$-good, since $\sum_{v \in V(H)} \mathbf{1}_H(v) = 4 = 2 \operatorname{\mathrm{OPT}}(H,\mathbf{1}_H)$. Second, let $H$ be an induced $P_3$ in $G$, starting at a degree-$1$ vertex in $G$. Then $(H,\mathbf{1}_H)$ is $2$-good in $G$, but $(H,\mathbf{1}_H)$ is not strongly $2$-good. 
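To make the local ratio step concrete, here is a rough schematic (our own, not a faithful transcription of Algorithm \[algo\]) of a recursion driven by $2$-good pairs $(H,c_H)$. The routine `find_2good_subgraph` stands in for Step \[step:find\_good\_H\] and is left abstract; costs are assumed to be positive rationals, and the graph is a dict of adjacency sets.

```python
from fractions import Fraction

def has_induced_p3(adj):
    """True iff the graph (dict of adjacency sets) contains an induced P3."""
    for v, nbrs in adj.items():
        nbrs = list(nbrs)
        for i, u in enumerate(nbrs):
            for w in nbrs[i + 1:]:
                if w not in adj[u]:
                    return True
    return False

def local_ratio_cluster_vd(adj, c, find_2good_subgraph):
    """Schematic local-ratio recursion.  `find_2good_subgraph(adj, c)` is a placeholder
    that must return (V_H, c_H) with c_H not identically zero."""
    if not has_induced_p3(adj):                      # already a cluster graph
        return set()
    V_H, c_H = find_2good_subgraph(adj, c)
    # Step [step:lambda_star]-style scaling: largest multiple of c_H fitting under c.
    lam = min(Fraction(c[v]) / c_H[v] for v in V_H if c_H[v] > 0)
    c2 = dict(c)
    for v in V_H:
        c2[v] -= lam * c_H[v]
    zero = {v for v in V_H if c2[v] == 0}            # at least one vertex reaches cost 0
    adj2 = {u: adj[u] - zero for u in adj if u not in zero}
    X = zero | local_ratio_cluster_vd(adj2, {u: c2[u] for u in adj2}, find_2good_subgraph)
    for v in list(zero):                             # reverse-delete: keep X inclusionwise minimal
        trial = {u: adj[u] - (X - {v}) for u in adj if u not in X - {v}}
        if not has_induced_p3(trial):
            X.discard(v)
    return X
```

The approximation guarantee comes entirely from the quality of the returned pairs: plugging in a routine that merely returns an arbitrary induced $P_3$ with unit costs, which is ($3$-)good, degrades the same skeleton to the textbook $3$-approximation.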
[.42]{} =\[circle,draw,thick,fill=white\] (0,0)–(4,0)–(4,4)–(0,4)–cycle; (0,0) node\[vtx\][$1$]{} (4,0) node\[vtx\][$1$]{} (4,4) node\[vtx\][$1$]{} (0,4) node\[vtx\][$1$]{}; [.42]{} =\[circle,draw,thick,fill=white\] (-4,0)–(0,0)–(4,0); (0,0) node\[vtx\][$1$]{} (-4,0) node\[vtx,fill=black!25\][$1$]{} (4,0) node\[vtx\][$1$]{}; (4,0) ellipse (6 and 4); Each time we find a $2$-good weighted induced subgraph in $G$, the local ratio technique allows us to recurse on a smaller subgraph $G'$ of $G$ in which at least one vertex of $H$ is deleted from $G$. For example, the $2$-good induced subgraphs mentioned above allow us to reduce to input graphs $G$ that are $C_4$-free and have minimum degree at least $2$. In order to facilitate the search for $\alpha$-good induced subgraphs, it greatly helps to assume that $G$ is *twin-free*. That is, $G$ has no two distinct vertices $u, u'$ such that $uu' \in E(G)$ and for all $v \in V(G-u-u')$, $uv \in E(G)$ if and only if $u'v \in E(G)$. Two such vertices $u, u'$ are called *true twins*. As in the previous algorithms [@fiorini2016improved; @FJS20], our algorithm reduces $G$ whenever it has a pair of true twins $u, u'$ (see Steps \[step:true\_twins\_start\]–\[step:true\_twins\_end\]). The idea is simply to add the cost of $u'$ to that of $u$ and delete $u'$. The crux of our algorithm is Step \[step:find\_good\_H\] (see Algorithm \[rulesalgo\] for more details), which relies entirely on the following structural result. Below, we denote by $N_{\leqslant i}[v]$ (resp. $N_i(v)$) the set of vertices at distance at most (resp. equal to) $i$ from vertex $v$, omitting the subscript if $i=1$. \[thm:localratio\] Let $G$ be a twin-free graph, let $v_0$ be any vertex of $G$, and let $H$ be the subgraph of $G$ induced by $N_{\leqslant 2}[v_0]$. Then, there exists a cost function $c_H : V(H) \to {\mathbb{Z}}_{\geqslant 0}$ such that $(H,c_H)$ is $2$-good in $G$. Moreover, $c_H$ can be constructed in polynomial time. In the second part of the paper, we study [<span style="font-variant:small-caps;">Cluster-VD</span>]{} from the polyhedral point of view. In particular we investigate how well linear programming (LP) relaxations can approximate the optimal solution of [<span style="font-variant:small-caps;">Cluster-VD</span>]{}. Following [@CLRS13; @BPZ; @bazzi2019no], we use a model of LP relaxations which, by design, allows for extended formulations. Fix a graph $G$. Let $d \in {\mathbb{Z}}_{\geqslant 0}$ be an arbitrary dimension. A system of linear inequalities $Ax \geqslant b$ in ${\mathbb{R}}^d$ defines an *LP relaxation* of [<span style="font-variant:small-caps;">Cluster-VD</span>]{} on $G$ if the following hold: - For every hitting set $X \subseteq V(G)$, we have a point $\pi^X \in {\mathbb{R}}^d$ satisfying $A\pi^X \geqslant b$. - For every cost function $c : V(G) \to {\mathbb{Q}}_{\geqslant 0}$, we have an affine function $f_c : {\mathbb{R}}^d \to {\mathbb{R}}$. - For all hitting sets $X \subseteq V(G)$ and cost functions $c : V(G) \to {\mathbb{Q}}_{\geqslant 0}$, the condition $f_c(\pi^X) = \sum_{v \in X} c(v)$ holds. The *size* of the LP relaxation $Ax \geqslant b$ is defined as the number of rows of $A$. For every cost function $c$, the quantity $\operatorname{\mathrm{LP}}(G,c) := \min \{f_c(x) \mid Ax \geqslant b\}$ gives a lower bound on $\operatorname{\mathrm{OPT}}(G,c)$. 
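The simplest such relaxation has one covering constraint per induced $P_3$ (it reappears as $P(G)$ in Section \[sec:polyhedral\]). The sketch below sets it up and solves it, assuming `scipy` is available; it is only an illustration of the LP model, with the graph again given as a dict of adjacency sets.

```python
import numpy as np
from scipy.optimize import linprog

def basic_cluster_vd_lp(adj, cost):
    """Solve  min c^T x  s.t.  x_u + x_v + x_w >= 1 for every induced P3,  0 <= x <= 1."""
    verts = sorted(adj)
    idx = {v: i for i, v in enumerate(verts)}
    rows = []
    for v in verts:                       # induced P3s u - v - w with middle vertex v
        nbrs = sorted(adj[v])
        for i, u in enumerate(nbrs):
            for w in nbrs[i + 1:]:
                if w not in adj[u]:
                    row = np.zeros(len(verts))
                    row[[idx[u], idx[v], idx[w]]] = -1.0   # -(x_u + x_v + x_w) <= -1
                    rows.append(row)
    if not rows:                          # already a cluster graph
        return 0.0, {v: 0.0 for v in verts}
    res = linprog(c=[cost[v] for v in verts],
                  A_ub=np.array(rows), b_ub=-np.ones(len(rows)),
                  bounds=[(0.0, 1.0)] * len(verts), method="highs")
    return res.fun, dict(zip(verts, res.x))

# Setting every x_v = 1/3 is always feasible, so LP(G, 1) <= n/3, while for a random
# n-vertex graph OPT(G, 1) = n - O(log^2 n): the integrality-gap-3 example in the text.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(basic_cluster_vd_lp(path, {v: 1 for v in path}))   # LP value 1.0 on P4, matching OPT
```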
The *integrality gap* of the LP relaxation $Ax \geqslant b$ is defined as $$\sup_{c} \frac{\operatorname{\mathrm{OPT}}(G,c)}{\operatorname{\mathrm{LP}}(G,c)}$$ where the supremum is taken over all cost functions $c : V(G) \to {\mathbb{Q}}_{\geqslant 0}$. It is not hard to see that the straightforward LP relaxation in ${\mathbb{R}}^{V(G)}$ that includes one constraint for every induced $P_3$ of $G$ (see Section \[sec:polyhedral\]) has worst case integrality gap $3$ (by *worst case*, we mean that we take the supremum over all graphs $G$). Indeed, for a random $n$-vertex graph, $\operatorname{\mathrm{OPT}}(G,\mathbf{1}_G) = n - O(\log^2 n)$ with high probability, while $\operatorname{\mathrm{LP}}(G,\mathbf{1}_G) \leqslant n/3$. On the positive side, we show how applying one round of the *Sherali-Adams hierarchy* [@SA1990], a standard procedure to derive strengthened LP relaxations of binary linear programming problems, gives a relaxation with integrality gap at most $5/2 = 2.5$, see Theorem \[thm:SA1CVD\_UB\]. To complement this, we prove that the worst case integrality gap of the relaxation is precisely $5/2$, see Theorem \[thm:SA1CVD\_LB\]. Then, by relying on Theorem \[thm:localratio\], we show that the integrality gap decreases to $2+{\varepsilon}$ after applying $\mathrm{poly}(1/{\varepsilon})$ rounds, see Theorem \[thm:SA\_2+eps\]. On the negative side, applying known results on [<span style="font-variant:small-caps;">Vertex Cover</span>]{} [@bazzi2019no] we observe that no polynomial-size LP relaxation of [<span style="font-variant:small-caps;">Cluster-VD</span>]{} can have worst case integrality gap better than $2$, see Proposition \[prop:EFlowerbound\]. We stress that this last result is unconditional; it relies on neither P $\neq$ NP nor the Unique Games Conjecture. Comparison to previous work. ---------------------------- We now revisit all previous approximation algorithms for [<span style="font-variant:small-caps;">Cluster-VD</span>]{} [@YWC17; @fiorini2016improved; @FJS20]. The presentation given here departs from [@YWC17; @fiorini2016improved], and explains in a unified manner what the bottleneck is in each of the algorithms. Fix $k \in \{3,4,5\}$, and let $\alpha := (2k-1)/(k-1)$. Notice that $\alpha = 5/2$ if $k = 3$, $\alpha = 7/3$ if $k = 4$ and $\alpha = 9/4$ if $k = 5$. In [@FJS20 Lemma 3], it is shown that if a twin-free graph $G$ contains a $k$-clique, then one can find an induced subgraph $H$ containing the $k$-clique and a cost function $c_H$ such that $(H,c_H)$ is strongly $\alpha$-good. Therefore, in order to derive an $\alpha$-approximation for [<span style="font-variant:small-caps;">Cluster-VD</span>]{}, one may assume without loss of generality that the input graph $G$ is twin-free and has no $k$-clique. Let $v_0$ be a maximum degree vertex in $G$, and let $H$ denote the subgraph of $G$ induced by $N_{\leqslant 2}[v_0]$. In [@FJS20], it is shown that one can define an ad-hoc cost function $c_H$ such that $(H,c_H)$ is $2$-good in $G$, using the fact that $G$ has no $k$-clique. The simplest case occurs when $k = 3$. Then $N(v_0)$ is a stable set. Letting $c_H(v_0) := d(v_0)-1$, $c_H(v) := 1$ for $v \in N(v_0)$ and $c_H(v) := 0$ for the other vertices of $H$, one easily sees that $(H,c_H)$ is $2$-good in $G$. For higher values of $k$, one has to work harder.
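For completeness, the $k=3$ claim can be checked as follows (a short verification, under the harmless assumption that $G$ contains an induced $P_3$, so that $d(v_0) \geqslant 2$). Any hitting set of $H$ either contains $v_0$, at cost $c_H(v_0) = d(v_0)-1$, or avoids $v_0$ and must then delete all but at most one vertex of the stable set $N(v_0)$, again at cost at least $d(v_0)-1$; hence $\operatorname{\mathrm{OPT}}(H,c_H) \geqslant d(v_0)-1$, while $c_H(H) = 2d(v_0)-1$. Moreover, no minimal hitting set $X$ of $G$ contains all of $N[v_0]$, since removing $v_0$ from such an $X$ would leave a hitting set; so $X$ misses a vertex of $N[v_0]$ of cost at least $1$, and $$\sum_{v \in X \cap V(H)} c_H(v) \leqslant c_H(H) - 1 = 2\big(d(v_0)-1\big) \leqslant 2 \operatorname{\mathrm{OPT}}(H,c_H)\,.$$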
In this paper, we show that one can *always*, and in polynomial time, construct a cost function $c_H$ on the vertices at distance at most $2$ from $v_0$ that makes $(H,c_H)$ $2$-good in $G$, provided that $G$ is twin-free, see Theorem \[thm:localratio\]. This result was the main missing ingredient in previous approaches, and single-handedly settles the approximability status of [<span style="font-variant:small-caps;">Cluster-VD</span>]{}. Other related works. -------------------- [<span style="font-variant:small-caps;">Cluster-VD</span>]{} has also been widely studied from the perspective of *fixed parameter tractability*. Given a graph $G$ and parameter $k$ as input, the task is to decide if $G$ has a hitting set $X$ of size at most $k$. A $2^k n^{\mathcal{O}(1)}$-time algorithm for this problem was given by Hüffner, Komusiewicz, Moser, and Niedermeier [@HKMN10]. This was subsequently improved to a $1.911^k n^{\mathcal{O}(1)}$-time algorithm by Boral, Cygan, Kociumaka, and Pilipczuk [@BCKP16], and a $1.811^k n^{\mathcal{O}(1)}$-time algorithm by Tsur [@Tsur19]. By the general framework of Fomin, Gaspers, Lokshtanov, and Saurabh [@FGLS19], these parametrized algorithms can be transformed into exponential algorithms which compute the size of a minimum hitting set for $G$ exactly, the fastest of which runs in time $O(1.488^n)$. For polyhedral results, [@hosseinian2019polyhedral] gives some facet-defining inequalities of the [<span style="font-variant:small-caps;">Cluster-VD</span>]{} polytope, as well as complete linear descriptions for special classes of graphs. Another related problem is the *feedback vertex set* problem in tournaments ([<span style="font-variant:small-caps;">FVST</span>]{}). Given a tournament $T$ with costs on the vertices, the task is to find a minimum cost set of vertices $X$ such that $T-X$ does not contain a directed cycle. For unit costs, note that [<span style="font-variant:small-caps;">Cluster-VD</span>]{} is equivalent to the problem of deleting as few elements as possible from a *symmetric* relation to obtain a transitive relation, while [<span style="font-variant:small-caps;">FVST</span>]{} is equivalent to the problem of deleting as few elements as possible from an *antisymmetric* and complete relation to obtain a transitive relation. In a tournament, hitting all directed cycles is equivalent to hitting all directed triangles, so [<span style="font-variant:small-caps;">FVST</span>]{} is also a hitting set problem in a $3$-uniform hypergraph. Moreover, [<span style="font-variant:small-caps;">FVST</span>]{} is also UGC-hard to approximate within a constant factor smaller than $2$. Cai, Deng, and Zang [@CDZ01] gave a $5/2$-approximation algorithm for [<span style="font-variant:small-caps;">FVST</span>]{}, which was later improved to a $7/3$-approximation algorithm by Mnich, Williams, and Végh [@MWV16]. Lokshtanov, Misra, Mukherjee, Panolan, Philip, and Saurabh [@LMMPPS20] recently gave a *randomized* $2$-approximation algorithm, but no deterministic (polynomial-time) $2$-approximation algorithm is known. Among other related covering and packing problems, Fomin, Le, Lokshtanov, Saurabh, Thomassé, and Zehavi [@FominLLSTZ19] studied both [<span style="font-variant:small-caps;">Cluster-VD</span>]{} and [<span style="font-variant:small-caps;">FVST</span>]{} from the *kernelization* perspective.
They proved that the unweighted versions of both problems admit subquadratic kernels: $\mathcal{O}(k^{\frac{5}{3}})$ for [<span style="font-variant:small-caps;">Cluster-VD</span>]{} and $\mathcal{O}(k^{\frac{3}{2}})$ for ${\textsc{FVST}}$. Proof and paper outline ----------------------- First, we give a sketch of the proof of Theorem \[thm:localratio\]. An outline of the paper is given right after this, see below. If the subgraph induced by $N(v_0)$ contains a hole[^4], then $H$ contains a wheel, which makes $H$ strongly $2$-good, see Lemma \[lem:wheel\]. This allows us to reduce to the case where $H[N(v_0)]$ is chordal. We show by induction that for such an $H$, there exists an integer cost function $c_H$ whose support contains $N[v_0]$ and such that the total cost $c_H(H) := \sum_{v \in V(H)} c_H(v)$ is at most $2 \operatorname{\mathrm{OPT}}(H,c_H) + 1$, see Lemma \[lem:technical\]. The induction we set up considers $H$ independently of the host graph $G$. We define three rules that reduce any given $H$ (satisfying certain conditions which include that $H[N(v_0)]$ is chordal) to $P_3$, in several rounds. In each round, one or more vertices get deleted. Let $H'$ denote the graph obtained from $H$ after applying one reduction. We show how to obtain a suitable cost function $c_H$ for $H$, given a suitable cost function $c_{H'}$ for $H'$. In a first phase, we get rid of all the vertices in $N_2(v_0)$ and of the true twins that might appear in the process. This allows us to reduce to the case where $H$ is a twin-free, apex-chordal[^5] graph. In the second phase, we start each round by taking a simplicial[^6] vertex $v$ and deleting it from $H$, together with the true twins that might appear. The hard case turns out to be when $H' := H-v$ is twin-free, and the neighborhood of $v$ is a maximum weight clique with respect to $c_{H'}$. There we find a special stable set $S$ containing $v$ (see Lemma \[lem:stableset\]), increase the costs on the vertices of $S$ by $1$ and set the cost of $v_0$ according to the other costs (see Lemma \[lem:ctilde\]). We conclude the introduction with a brief description of the different sections of the paper. Section \[sec:finding\] is entirely devoted to the proof of Theorem \[thm:localratio\]. The proof of Theorem \[thm:2apx\] is given in Section \[sec:analysis\], together with a complexity analysis of Algorithm \[algo\]. Section \[sec:polyhedral\] presents our polyhedral results. A conclusion is given in Section \[sec:conclusion\]. There, we state a few open problems for future research. Finding $2$-good induced subgraphs {#sec:finding} ================================== The goal of this section is to prove Theorem \[thm:localratio\]. Restricting to chordal neighborhoods ------------------------------------ As pointed out earlier in the introduction, $4$-cycles are strongly $2$-good. This implies that wheels of order $5$ are strongly $2$-good (putting a zero cost on the apex). Recall that a *wheel* is a graph obtained from a cycle by adding an apex vertex (called the *center*). We now show that *all* wheels of order at least $5$ are also strongly $2$-good. This allows our algorithm to restrict to input graphs such that the subgraph induced on each neighborhood is chordal. \[lem:wheel\] Let $H := W_k$ be a wheel on $k \geqslant 5$ vertices and center $v_0$, let $c_H(v_0) := k-5$ and $c_H(v) := 1$ for $v \in V(H-v_0)$. Then $(H,c_H)$ is strongly $2$-good. 
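Before the proof, here is a quick brute-force sanity check of the lemma for small wheels (a throwaway script using only the Python standard library; it is not part of the argument).

```python
from itertools import combinations

def wheel(k):
    """Wheel W_k: center 0 joined to every vertex of the cycle 1, ..., k-1."""
    adj = {v: set() for v in range(k)}
    for v in range(1, k):
        u = v % (k - 1) + 1                      # successor of v on the rim cycle
        adj[v].add(u); adj[u].add(v)
        adj[0].add(v); adj[v].add(0)
    return adj

def is_cluster(adj, removed):
    for v in adj:
        if v in removed:
            continue
        nbrs = [u for u in adj[v] if u not in removed]
        for i, u in enumerate(nbrs):
            for w in nbrs[i + 1:]:
                if w not in adj[u]:
                    return False                 # u - v - w is an induced P3
    return True

def opt(adj, cost):
    verts = list(adj)
    best = sum(cost.values())
    for r in range(len(verts) + 1):
        for X in combinations(verts, r):
            if is_cluster(adj, set(X)):
                best = min(best, sum(cost[v] for v in X))
    return best

for k in range(5, 10):
    adj = wheel(k)
    cost = {0: k - 5, **{v: 1 for v in range(1, k)}}
    assert opt(adj, cost) == k - 3               # OPT(W_k, c_H) = k - 3
    assert sum(cost.values()) == 2 * (k - 3)     # total cost = 2 OPT: strongly 2-good
print("Lemma [lem:wheel] checked for k = 5, ..., 9")
```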
Notice that $\operatorname{\mathrm{OPT}}(H,c_H) \geqslant k-3$ since a hitting set either contains $v_0$ and at least $2$ more vertices, or does not contain $v_0$ but contains $k-3$ other vertices. Hence, $\sum_{v \in V(H)} c_H(v) = k-5 + k-1 = 2(k-3) \leqslant 2 \operatorname{\mathrm{OPT}}(H,c_H)$. Setting up the induction ------------------------ Below, we state a lemma which gives, in full detail, the actual statement we prove by induction in order to prove Theorem \[thm:localratio\]. Before stating the lemma, we need some extra terminology. Let $G$ be a twin-free graph, and $v_0 \in V(G)$. Suppose that $u, u'$ are true twins in $G[N[v_0]]$. Since $G$ is twin-free, there exists a vertex $v$ that is adjacent to exactly one of $u$, $u'$. We say that $v$ is a *distinguisher* for the edge $uu'$ (or for the pair $\{u,u'\}$). Notice that either $uu'v$ or $u'uv$ is an induced $P_3$. \[lem:technical\] Let $H$ be any graph with a special *root vertex* $v_0$ such that: 1. \[it:radius\_2\] every vertex is at distance at most $2$ from $v_0$, 2. \[it:chordal\] $H[N(v_0)]$ is chordal, 3. \[it:P\_3\] $H$ contains a $P_3$, and 4. \[it:twin-free\] every pair of vertices that are true twins in $H[N[v_0]]$ has a distinguisher. For every such graph $H$, there exists a cost function $c_H : V(H) \to {\mathbb{Z}}_{\geqslant 0}$ such that 1. \[it:support\] $c_H(v) \geqslant 1$ for all $v \in N[v_0]$ and 2. \[it:2OPT+1\] $c_H(H) \leqslant 2 \operatorname{\mathrm{OPT}}(H,c_H) + 1$. The base case of Lemma \[lem:technical\] occurs when $H$ is a $P_3$. In this case, we let $c_H(v) := 1$ for all $v \in V(H)$. Clearly, this cost function satisfies (C\[it:support\]) and (C\[it:2OPT+1\]). Notice that in (H\[it:twin-free\]), the distinguisher is necessarily in $N_2(v_0)$. Hence, if $N_2(v_0)$ is empty then the condition simply says that $H$ is twin-free. Before proceeding with the proof of Lemma \[lem:technical\], we now show that Theorem \[thm:localratio\] easily follows from Lemmas \[lem:wheel\] and \[lem:technical\]. Notice that $H$ satisfies (H\[it:radius\_2\]) and (H\[it:twin-free\]) by definition. Suppose $H=N_{\leqslant 2}[v_0]$ is a cluster graph. Then $H$ must be a clique disconnected from the rest of $G$. If $|V(H)|=1$, then we can take $c_H$ to be any non-zero cost function. If $|V(H)| \geqslant 2$, this contradicts that $G$ is twin-free. Thus, we may assume that $H$ also satisfies (H\[it:P\_3\]). We can decide in linear time (see for instance [@tarjan1984]) if $H[N(v_0)]$ is chordal, and if not, output a hole of $H[N(v_0)]$. If the latter holds, we are done by Lemma \[lem:wheel\]. If the former holds, we notice that properties (H\[it:radius\_2\])–(H\[it:twin-free\]) hold for $H$: in particular, for (H\[it:twin-free\]), since $G$ is twin-free, every pair of true twins in $H[N[v_0]]$ must have a distinguisher in $G$, which, being adjacent to one of the twins, must be in $H$ as well. Hence we can apply Lemma \[lem:technical\] to obtain a function $c_H$ such that $(H, c_H)$ is 2-good (see the discussion in Section \[sec:ourcontribution\]). This takes polynomial time, by Lemma \[lem:rulesalgoanalysis\]. The rest of the section will be spent proving Lemma \[lem:technical\]. Principles of the deletion process ---------------------------------- We define a process by which we delete the vertices of $H$ in rounds. In each round, the goal is to get rid of one vertex $v \in V(H)$, while preserving the induction hypotheses (H\[it:radius\_2\])–(H\[it:twin-free\]). 
Because of (H\[it:twin-free\]) we may be forced to delete more vertices besides $v$. We never delete $v_0$, which remains the root vertex throughout the process. Thanks to (H\[it:P\_3\]), we will delete vertices until we are left with a $P_3$, for which we set all costs equal to 1. We now describe the general deletion process. Further details and the specific rules according to which we choose the vertex $v$ and set the cost function $c_H$, given a cost function for a smaller subgraph are given in the next section. Let $H'$ denote the smaller graph obtained from $H$ after one round. The way $H'$ is obtained from $H$ is simple: we first delete $v$. If $H-v$ does not violate (H\[it:twin-free\]), then we let $H' := H-v$. Otherwise, we consider the relation $\equiv$ on $N[v_0] - v$ with $u \equiv u'$ whenever $u = u'$ or $u, u'$ are true twins in $H-v$. It is not hard to see that $\equiv$ is an equivalence relation. To get $H'$, we keep one vertex in each equivalence class and delete the vertices of $H-v$ that are redundant. Notice that the equivalence classes of $\equiv$ are of size at most $2$ since, if $u, u', u''$ are distinct vertices with $u \equiv u' \equiv u''$, then two of them are necessarily true twins in $H$, which contradicts (H\[it:twin-free\]). Hence, the edges contained in $N[v_0]-v$ that do not have a distinguisher in $H-v$ form a *matching* $M := \{u_1u'_1,\ldots,u_ku'_k\}$. For each edge of $M$, we delete exactly one endpoint from $H-v$. Notice that the resulting subgraph is the same, up to isomorphism, no matter which endpoint is chosen. Hence we may assume that $u'_1$, …, $u'_k$ get deleted, so we let $H' := H - v - u'_1 - \dots - u'_k$. The reduction rules ------------------- We now describe the rules under which we apply the deletion process, together with how we set the cost function $c_{H} : V(H) \to {\mathbb{Z}}_{\geqslant 0}$, given a cost function $c_{H'} : V(H') \to {\mathbb{Z}}_{\geqslant 0}$ that satisfies (by induction) conditions (C\[it:support\]) and (C\[it:2OPT+1\]). Rule 1 : irrelevant vertices : Suppose that $v$ is a vertex of $N_2(v_0)$ such that $H' := H-v$ satisfies (H\[it:twin-free\]). We let $c_{H}(u) := c_{H'}(u)$ for all $u \neq v$ and $c_{H}(v) := 0$. Rule 2 : distinguishers : Suppose that Rule 1 cannot be applied and that $v$ is a vertex distinct from $v_0$ such that $H-v$ violates (H\[it:twin-free\]). In case $v \in N(v_0)$, we require that $N_2(v_0)$ is empty. As above, let $M := \{u_1u'_1,\ldots,u_ku'_k\}$ be the matching formed by the edges in $N[v_0] - v$ whose unique distinguisher is $v$, where $u'_i \neq v_0$ for all $i$. We let $H' := H - u'_1 - \dots - u'_k - v$, $c_H(u'_i) := c_{H'}(u_i)$ for $i \in [k]$, $c_H(v) := \sum_{i = 1}^k c_{H'}(u_i) = \sum_{i=1}^k c_{H}(u'_i)$, and $c_{H}(u) := c_{H'}(u)$ otherwise. See Figure \[fig:rule2\] for an example. 
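In code, the classes of $\equiv$ (and hence the matching $M$) can be read off by grouping closed neighbourhoods; the sketch below only illustrates this bookkeeping (dict-of-sets graph, illustrative names, not taken from the paper's pseudocode).

```python
def twin_matching_after_deletion(adj, v0, v):
    """Classes of the deletion-process relation on N[v0] - v: u and u' are related iff
    they are true twins in H - v, i.e. their rows of B = A + I in H - v coincide.
    Assuming H satisfies (H4), every class has size at most 2, and the non-trivial
    classes are exactly the matching M = {u_1 u'_1, ..., u_k u'_k}."""
    H_minus_v = {x: adj[x] - {v} for x in adj if x != v}
    closed_nbhd_v0 = adj[v0] | {v0}
    groups = {}
    for x in H_minus_v:
        if x in closed_nbhd_v0:
            key = frozenset(H_minus_v[x] | {x})      # closed neighbourhood of x in H - v
            groups.setdefault(key, []).append(x)
    return [tuple(grp) for grp in groups.values() if len(grp) == 2]
```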
=\[circle,draw,thick\]; =\[circle,draw,thick,fill=black!50\]; =\[circle,draw,thick,fill=red!50\]; (v5) at (-3,1) ; at (-3,.5)[$v$]{}; (v3) at (1,0) ; (v1) at (-1,0) ; (v2) at (0,-1.2) ; (v4) at (3,1) ; (v0) at (0,-3) ; (v0) – (v2); (v0) – (v5); (v0) – (v1); (v0) – (v3); (v0) – (v4); (v2) – (v1); (v1) – (v3); (v2) – (v3); (v5) – (v1); (v4) – (v3); at (2,-3) [$H$]{}; (w3) at (7,0) ; (w1) at (5,0) ; (w2) at (6,-1.2) ; (w4) at (9,1) ; (w0) at (6,-3) ; (w0) – (w2); (w0) – (w1); (w0) – (w3); (w0) – (w4); (w2) – (w1); (w1) – (w3); (w2) – (w3); (w4) – (w3); at (8,-3) [$H-v$]{}; (z1) at (11,0) ; (z4) at (15,1) ; (z0) at (12,-3) ; (z0) – (z1); (z0) – (z4); at (14,-3) [$H'$]{}; Rule 3 : relevant non-distinguishers : Suppose that Rule 1 cannot be applied, that $N_2(v_0)$ is empty and that $v$ is a simplicial vertex of $H[N(v_0)] = H - v_0$ such that $H' := H - v$ satisfies (H\[it:twin-free\]). Let $K$ denote the neighborhood of $v$ in $H-v_0$. Since $v$ is simplicial, $K$ is a clique. *Case 1.* If $c_{H'}(K) < \omega(H'-v_0,c_{H'})$, then we let $c_{H}(v) := 1$, $c_{H}(u) := c_{H'}(u)$ for all $u$ distinct from $v_0$, and $c_{H}(v_0) := c_{H'}(v_0) + 1$. See Figure \[fig:rule31\] for an example. =\[circle,draw,thick\]; =\[circle,draw,thick,fill=black!50\]; =\[circle,draw,thick,fill=red!50\]; =\[circle,draw,thick,fill=blue!50\]; =\[circle,draw,thick,fill=orange!50\]; (v5) at (-3,1) ; at (-3,-1.5)[$v$]{}; (v6) at (-3,-1) ; (v3) at (1,0) ; (v1) at (-1,0) ; (v2) at (0,-1.2) ; (v4) at (3,1) ; (v0) at (0,-3) ; (v0) – (v2); (v0) – (v5); (v0) – (v1); (v0) – (v3); (v0) – (v4); (v2) – (v1); (v1) – (v3); (v2) – (v3); (v5) – (v1); (v4) – (v3); (v6) – (v0); (v2) – (v6); at (2,-3) [$H$]{}; (w5) at (5,1) ; (w3) at (9,0) ; (w1) at (7,0) ; (w2) at (8,-1.2) ; (w4) at (11,1) ; (w0) at (8,-3) ; (w0) – (w2); (w0) – (w1); (w0) – (w3); (w0) – (w4); (w2) – (w1); (w1) – (w3); (w2) – (w3); (w4) – (w3); (w5) – (w1); (w5) – (w0); at (10,-3) [$H':= H-v$]{}; *Case 2.* If $c_{H'}(K) = \omega(H'-v_0,c_{H'})$, then we find a stable set $S$ containing $v$ and satisfying the following extra property: for every $u \in K$, there is $w \in S$ such that $vuw$ is an induced $P_3$. The existence of $S$ is established in Lemma \[lem:stableset\]. We let $c_{H}(u) := c_{H'}(u) + 1$ if $u \in S$, $c_{H}(v) := 1$, $c_{H}(u) := c_{H'}(u)$ if $u\in H- S - v - v_0$, and $c_{H}(v_0) := c_{H}(H-v_0)-2\omega(H-v_0,c_H)+1$. See Figure \[fig:rule32\] for an example. 
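The stable set $S$ of Case 2 can be computed greedily; the sketch below follows the construction given in the proof of Lemma \[lem:stableset\] (pick a distinguisher covering the most edges $vv_i$, discard its closed neighbourhood, repeat), again with an illustrative dict-of-sets representation.

```python
def greedy_distinguisher_stable_set(adj, v0, v):
    """Greedy stable set S with v in S, following the proof of Lemma [lem:stableset].
    v is assumed simplicial in H - v0, so every distinguisher of an edge v v_i is
    adjacent to v_i and not to v."""
    K = adj[v] - {v0}                                        # clique around v in H - v0
    D = {vi: (adj[vi] - adj[v]) - {v, v0} for vi in K}       # D[vi] = distinguishers of v v_i
    S = {v}
    remaining = set().union(*D.values())
    while remaining:
        u = max(remaining, key=lambda x: sum(x in D[vi] for vi in K))
        S.add(u)
        remaining -= adj[u] | {u}                            # delete N[u] and repeat
    return S
```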
=\[circle,draw,thick\]; =\[circle,draw,thick,fill=black!50\]; =\[circle,draw,thick,fill=red!50\]; =\[circle,draw,thick,fill=pink!50\]; =\[circle,draw,thick,fill=blue!50\]; (v3) at (-4.5,-.5) ; (v1) at (-4,.5) ; (v2) at (-6,1) ; at (-3,1.5)[$v$]{}; (v4) at (-3,1.0) ; (v5) at (-3.5,2.0) ; (v6) at (-4.5,2.5) ; (v7) at (-5.5,-.2) ; (v0) at (-5.5,2) ; (v3) – (v1); (v1) – (v2); (v2) – (v0); (v0) – (v3); (v2) – (v4); (v1) – (v4); (v2) – (v3); (v2) – (v5); (v3) – (v4); (v0) – (v5); (v0) – (v4); (v0) – (v1); (v0) – (v6); (v1) – (v6); (v3) – (v7); (v0) – (v7); at (-3,0) [$H$]{}; (w3) at (1,-.5) ; (w1) at (1.5,.5) ; (w2) at (-0.5,1) ; (w5) at (2,2.0) ; (w0) at (0,2) ; (w6) at (1.0,2.5) ; (w7) at (0,-.2) ; (w3) – (w1); (w1) – (w2); (w2) – (w0); (w0) – (w3); (w2) – (w3); (w2) – (w5); (w0) – (w5); (w0) – (w1); (w0) – (w6); (w1) – (w6); (w3) – (w7); (w0) – (w7); at (2.5,0) [$H'$]{}; at (7.5,1.5)[$v$]{}; (z4) at (7.5,1.0) ; (z5) at (7.0,2.0) ; (z6) at (6.0,2.5) ; (z7) at (5.0,-.2) ; at (7.5,0) [$S$]{}; Correctness of the reductions rules ----------------------------------- In this section we show that the reduction rules described above preserve the induction hypotheses (H\[it:radius\_2\])–(H\[it:twin-free\]), and prove Lemma \[lem:technical\]. \[lem:satisfy\_H\_i\] Let $H$ be a graph with root vertex $v_0$, that is not a $P_3$ and that satisfies (H\[it:radius\_2\])–(H\[it:twin-free\]). Let $H'$ be the induced subgraph of $H$ resulting from the application of a single reduction rule to $H$. Then $v_0 \in V(H')$ and $H'$ satisfies (H\[it:radius\_2\])–(H\[it:twin-free\]). By definition of each of the reduction rules, $v_0 \in V(H')$. Since chordality is a hereditary property, $H'$ clearly satisfies (H\[it:chordal\]). To see that (H\[it:P\_3\]) holds for $H'$, recall that $H'$ is obtained from $H-v$ by shrinking each equivalence class of $\equiv$ to a single vertex. Towards a contradiction, suppose that $H'$ contains no $P_3$, that is, $H'$ is a cluster graph. Hence $H-v$ is also a cluster graph. By (H\[it:radius\_2\]) and our choice of $v$, $H-v$ is connected, and hence complete. Thus $H$ consists of a clique with an extra vertex that is adjacent to at least one vertex of the clique. Since $H$ satisfies (H\[it:twin-free\]), $v$ has at most one neighbor in the clique and at most one non-neighbor in the clique. Hence, by (H\[it:P\_3\]), $H$ is a $P_3$, which contradicts our assumption. Therefore, $H'$ satisfies (H\[it:P\_3\]). For the other two properties, we distinguish cases according to the rule used. *Rule 1*. $H'=H-v$ satisfies (H\[it:radius\_2\]), since we deleted a vertex $v\in N_2(v_0)$, and (H\[it:twin-free\]) is satisfied by assumption. *Rule 2*. $H-v$ satisfies (H\[it:radius\_2\]) since $v$ is chosen to be in $N_2(v_0)$ if the latter is non-empty, hence so does $H' = H - u'_1 - \dots - u'_k - v$. Moreover, $H'$ satisfies (H\[it:twin-free\]) since $\{u_1, u'_1\},\ldots,\{u_k, u'_k\}$ were all the non-trivial equivalence classes of $\equiv$. *Rule 3*. Properties (H\[it:radius\_2\]) and (H\[it:twin-free\]) are clearly satisfied by construction. Before proceeding to prove Lemma \[lem:technical\], we need two lemmas that will be crucial in dealing with Rule 3. The first lemma shows that we can assume that $c(v_0)$ has a precise form. \[lem:ctilde\] Let $H$ be any graph with a universal vertex $v_0$. Assume that we are given a function $c_H: V(H)\rightarrow {\mathbb{Z}}_{\geqslant0}$ satisfying (C\[it:support\]) and (C\[it:2OPT+1\]). 
If $\tilde{c}_H : V(H) \to {\mathbb{Z}}_{\geqslant0}$ denotes the cost function obtained from $c_H$ by changing the cost of $v_0$ to $\tilde{c}_H(v_0):=c_H(H-v_0)-2\omega(H-v_0,c_H)+1$, then $\tilde{c}_H$ also satisfies (C\[it:support\]) and (C\[it:2OPT+1\]). We will use the following inequalities, which can be easily checked: $$\begin{aligned} \label{ineq:1} \operatorname{\mathrm{OPT}}(H,c_H) &\leqslant c_H(H-v_0)-\omega(H-v_0,c_H)\,,\\ \label{ineq:2} \operatorname{\mathrm{OPT}}(H,c_H) &\leqslant c_H(v_0)+\operatorname{\mathrm{OPT}}(H-v_0,c_H)\,.\end{aligned}$$ To show that $\tilde{c}_H$ satisfies (C\[it:support\]), it suffices to prove that $\tilde{c}_H(v_0)\geqslant c_H(v_0)$. From (\[ineq:1\]) and the fact that $c_H$ satisfies (C\[it:2OPT+1\]), we get $$c_H(H) \leqslant 2 \operatorname{\mathrm{OPT}}(H,c_H) + 1 \leqslant 2\big[c_H(H-v_0)-\omega(H-v_0,c_H)\big]+1\,.$$ This implies that $c_H(v_0) \leqslant \tilde{c}_H(v_0)$, as required. Hence going from $c_H$ to $\tilde{c}_H$, we actually *raise* the cost of $v_0$ (or possibly leave it unchanged, in which case there is nothing to prove). Now, we show that $\tilde{c}_H$ satisfies (C\[it:2OPT+1\]). Since $c_H$ satisfies property (C\[it:2OPT+1\]), we have $$\begin{aligned} c_H(H) &\leqslant 2\operatorname{\mathrm{OPT}}(H,c_H) + 1\\ &\leqslant c_H(H-v_0)-\omega(H-v_0,c_H) + c_H(v_0)+\operatorname{\mathrm{OPT}}(H-v_0,c_H) + 1\end{aligned}$$ where we summed (\[ineq:1\]) and (\[ineq:2\]) to obtain the second inequality above. Hence, $$\label{ineq:3} 0 \leqslant - \omega(H-v_0,c_H) + \operatorname{\mathrm{OPT}}(H-v_0,c_H) + 1 .$$ Letting $c$ be any cost function on the vertices of $H$, we denote by $\operatorname{\mathrm{OPT}}_0(H,c)$ (resp. $\operatorname{\mathrm{OPT}}_1(H,c)$) the minimum cost of a hitting set avoiding (resp. containing) $v_0$. Then, using (\[ineq:3\]), we have $$\begin{aligned} \operatorname{\mathrm{OPT}}_1(H,\tilde{c}_H) &= \tilde{c}_H(v_0) + \operatorname{\mathrm{OPT}}(H-v_0,c_H)\\ &= c_H(H-v_0)-2\omega(H-v_0,c_H)+1+\operatorname{\mathrm{OPT}}(H-v_0,c_H)\\ &\geqslant c_H(H-v_0)-\omega(H-v_0,c_H)\\ &= \operatorname{\mathrm{OPT}}_0(H,\tilde{c}_H) \end{aligned}$$ which means that $\operatorname{\mathrm{OPT}}(H,\tilde{c}_H) = \operatorname{\mathrm{OPT}}_0(H,\tilde{c}_H)$. Intuitively, we raised the cost of $v_0$ so much that the optimum hitting set for $(H,\tilde{c}_H)$ can be assumed to avoid $v_0$. Now, $$\tilde{c}_H(H) = 2c_H(H-v_0)-2\omega(H-v_0,c_H)+1 =2 \operatorname{\mathrm{OPT}}(H,\tilde{c}_H) + 1$$ and, in particular, $\tilde{c}_H$ satisfies (C\[it:2OPT+1\]). The second lemma ensures that the stable set $S$ used in Case 2 of Rule 3 exists. \[lem:stableset\] Let $H$ be an apex-chordal graph with a universal vertex $v_0$ that satisfies (H\[it:radius\_2\])–(H\[it:twin-free\]). Let $v$ be a simplicial vertex of $H-v_0$. Then there exists a stable set $S$ in $H-v_0$ such that $v\in S$ and that satisfies the following property: for every $u \in N(v)$ ($u \neq v_0$), there is $w \in S$ such that $vuw$ is an induced $P_3$. Let $v_1,\dots, v_k$ be the neighbours of $v$ in $H-v_0$. For $i \in [k]$, denote by $D_i$ the set of distinguishers for the edge $vv_i$. Each $D_i$ is non-empty by (H\[it:twin-free\]). Notice that vertices of $D_i$ are adjacent to $v_i$, and not to $v$, since $v$ is simplicial. Now, we construct $S$ so that $S \cap D_i$ is non-empty for each $i\in [k]$, proving the claim. We start with $D := \bigcup_{i=1}^k D_i$ and $S=\{v\}$. At each step, we pick the vertex $u\in D$ which maximizes $\delta(u):= |\{i\in [k]: u\in D_i\}|$, and add $u$ to $S$.
Then we delete $N[u]$ from $D$, and repeat until $D$ is empty. First, it is clear that the set $S$ built this way is a stable set. Second, suppose by contradiction that there is some $i\in [k]$ such that $S\cap D_i=\emptyset$. This implies that all of $D_i$ was deleted during the procedure. Consider the first step in which a vertex $u'$ from $D_i$ was deleted, and let $u$ be the vertex added to $S$ during that step. Notice that $u$ and $u'$ are adjacent. Since $u$ was chosen and not $u'$, we have that $\delta(u) \geqslant \delta(u')$. But then, since $u\not\in D_i$, $u'\in D_i$, there must be $j\in [k]$ with $u\in D_j$, $u'\not\in D_j$. However this implies that $\{v_i, v_j, u, u'\}$ induces a 4-cycle in $H-v_0$, a contradiction to $H-v_0$ being chordal. We now proceed to complete the proof of Lemma \[lem:technical\]. We say that a reduction rule is *correct*, if the final cost function $c_{H}$ constructed from $c_{H'}$ satisfies (C\[it:support\]) and (C\[it:2OPT+1\]), provided that $c_{H'}$ does. By Lemma \[lem:satisfy\_H\_i\], it suffices to check that all three reduction rules are correct. Clearly, (C\[it:support\]) is preserved by all three rules, since for each vertex $u \in V(H) \cap N[v_0]$ we either set $c_H(u)=1$, or we have $c_H(u) \geqslant c_{H'}(u')$ for some $u' \in V(H') \cap N[v_0]$. Therefore, it suffices to prove that (C\[it:2OPT+1\]) is preserved. Rule 1 is correct. We get $c_{H}(H) = c_{H'}(H') \leqslant 2 \operatorname{\mathrm{OPT}}(H',c_{H'}) + 1 \leqslant 2\operatorname{\mathrm{OPT}}(H,c_{H}) + 1$. Rule 2 is correct. Let $Y$ be a hitting set of $H$. If $v \in Y$, then $Y' := Y - u'_1 - \ldots - u'_k - v$ is a hitting set of $H'$, and $$c_H(Y) \geqslant c_{H}(v) + \operatorname{\mathrm{OPT}}(H',c_{H'}) = \sum_{i=1}^k c_{H'}(u_i) + \operatorname{\mathrm{OPT}}(H',c_{H'})\,.$$ If $v \notin Y$, then $Y$ contains at least one of $u_i$ or $u'_i$ for each $i \in [k]$ because $\{u_i,u'_i,v\}$ induces a $P_3$ for each $i \in [k]$. By symmetry, we may assume that $Y$ contains $u'_i$ for all $i \in [k]$. Then $Y' := Y - u'_1 - \ldots - u'_k$ is a hitting set of $H'$, and $$c_H(Y) \geqslant \sum_{i=1}^k c_{H'}(u_i) + \operatorname{\mathrm{OPT}}(H',c_{H'})\,.$$ Hence we have $$\operatorname{\mathrm{OPT}}(H,c_{H})\geqslant \sum_{i=1}^k c_{H'}(u_i) + \operatorname{\mathrm{OPT}}(H',c_{H'})\,.$$ Therefore, $$\begin{aligned} c_{H}(H) &= c_{H}(v) + \sum_{i=1}^k c_{H}(u'_i) + c_{H'}(H')\\ &\leqslant 2 \sum_{i=1}^k c_{H'}(u_i) + 2 \operatorname{\mathrm{OPT}}(H',c_{H'}) + 1 \quad \text{\big[$c_{H'}$ satisfies (C\ref{it:2OPT+1})\big]}, \\ &\leqslant 2 \operatorname{\mathrm{OPT}}(H,c_{H}) + 1\,,\end{aligned}$$ as required. Rule 3 is correct. Recall that, since we are applying Rule 3 to $H$, we must have $N_2(v_0) = \varnothing$ and $H' := H - v$, where $v$ is a simplicial vertex of $H-v_0$. We start with Case 1, in which we have $c_{H'}(K) < \omega(H'-v_0,c_{H'})$, where $K:=N_{H-v_0}(v)$. Equivalently, $\omega(H-v_0,c_{H}) = \omega(H'-v_0,c_{H'})$. Notice that $c_{H}(H) = c_{H'}(H')+2$. We simply have to check that $\operatorname{\mathrm{OPT}}(H,c_{H}) \geqslant \operatorname{\mathrm{OPT}}(H',c_{H'}) + 1$. Let $Y$ be any hitting set of $H$. We want to show that $c_H(Y) \geqslant \operatorname{\mathrm{OPT}}(H',c_{H'}) + 1$. If $Y$ contains $v_0$, there is nothing to show. 
Otherwise, $Y$ is the complement of a clique in $H-v_0$ and we get $$\begin{aligned} c_{H}(Y) &\geqslant c_H(H-v_0)-\omega(H-v_0,c_{H})\\ &= c_{H'}(H'-v_0)-\omega(H'-v_0,c_{H'})+1 \quad \text{\big[$c_H(v) = c_{H'}(v)+1$\big]}\\ &\geqslant \operatorname{\mathrm{OPT}}(H',c_{H'}) + 1\,,\end{aligned}$$ where the last inequality follows because the complement of each clique in $H'-v_0$ is a hitting set for $H'$. Now, we switch to Case 2, where we have $c_{H'}(K) = \omega(H'-v_0, c_{H'})$, hence $\omega(H-v_0,c_H)= \omega(H'-v_0, c_{H'})+1$. By applying Lemma \[lem:ctilde\] to $(H', c_{H'})$, we may assume that $c_{H'}(v_0)=c_{H'}(H'-v_0)-2\omega(H'-v_0,c_{H'})+1$. Since $c_{H}(H-v_0) = c_{H'}(H-v_0)+|S|$, we have $c_{H}(v_0) = c_{H'}(v_0) + |S|-2$. Next, by Lemma \[lem:stableset\], we find the required stable set $S$. Notice that $c_{H}(H) = c_{H'}(H')+2(|S|-1)$. What we need to prove now is that $\operatorname{\mathrm{OPT}}(H,c_{H}) \geqslant \operatorname{\mathrm{OPT}}(H',c_{H'}) + |S|-1$. Let $Y$ be a hitting set of $H$. First, assume that $v_0\not\in Y$, hence $Y$ must be the complement of a clique in $H - v_0$. That means: $$\begin{aligned} c_{H}(Y) &\geqslant c_{H}(H-v_0) - \omega(H-v_0,c_H)\\ &= c_{H'}(H'-v_0) + |S| - \omega(H'-v_0,c_{H'}) - 1\\ &\geqslant \operatorname{\mathrm{OPT}}(H',c_{H'}) + |S| - 1\,.\end{aligned}$$ Now, assume that $Y$ contains $v_0$. We get: $$c_H(Y)=c_{H'}(Y - v) + |S|-2 + |Y\cap S|.$$ If $Y\cap S\neq \varnothing$, we are done since $Y - v$ is a hitting set for $H'$. If $Y\cap S = \varnothing$, then $K \subseteq Y$, since every $u\in K$ is in an induced $P_3$ whose other two vertices are in $S$, by Lemma \[lem:stableset\]. Hence we have: $$\begin{aligned} c_{H}(Y) &\geqslant c_H(v_0)+ \omega(H'-v_0,c_{H'})\\ &= |S|-2+c_{H'}(v_0)+ \omega(H'-v_0, c_{H'}) \\ &=|S|-2 + c_{H'}(H'-v_0)-\omega(H'-v_0, c_{H'}) +1\\ &\geqslant \operatorname{\mathrm{OPT}}(H',c_{H'}) + |S| - 1\,.\end{aligned}$$ In conclusion, $\operatorname{\mathrm{OPT}}(H,c_H) \geqslant \operatorname{\mathrm{OPT}}(H',c_{H'}) + |S| - 1$, and $c_H$ satisfies (C\[it:2OPT+1\]). This completes the entire proof. Running-time Analysis {#sec:analysis} ===================== We now analyse the running-time of Algorithm \[algo\]. We assume that input graphs are given by their adjacency matrix. The crux of the analysis is to determine the running-time of an algorithmic version of Lemma \[lem:technical\], which we call Algorithm \[rulesalgo\]. We also need the following easy lemma, whose proof we include for completeness. \[lem:equalrows\] Given a matrix $N \in \{0,1\}^{r \times c}$, the set of all equivalence classes of equal rows of $N$ can be found in time $O(r c)$. Let $R_0$ and $R_1$ be the set of rows of $N$ whose first entry is $0$ and $1$, respectively. We can determine $R_0$ and $R_1$ by reading the first column of $N$, which takes time $O (r)$. We then recurse on $N_0'$ and $N_1'$, where $N_i'$ is the submatrix of $N$ induced by $R_i$ and the last $c-1$ columns of $N$. Graph $H$ and $v_0 \in V(H)$ satisfying properties (H1)–(H4) of Lemma \[lem:technical\] $c_H$ such that $(H, c_H)$ satisfies (C1) and (C2) of Lemma \[lem:technical\] \[lem:rulesalgoanalysis\] Algorithm \[rulesalgo\] runs in $O (|V(H)|^3)$-time. Let $h:=|V(H)|$. On Step \[step:check(H4)\], we check if Rule 1 applies. We claim this can be done in time $O(h^2)$. To see this, let $A$ be the adjacency matrix of $H':=H-v$ and $B := A+I$. 
Note that $x, y \in V(H')$ are true twins in $H'$ if and only if the rows of $B$ corresponding to $x$ and $y$ are identical. By Lemma \[lem:equalrows\], we can compute all equivalence classes of equal rows of $B$ in time $O(h^2)$. Observe that (H4) is satisfied if and only if each equivalence class $X$ contains at most one vertex from $N[v_0]$. Checking if $N_2(v_0)$ is empty is $O(h^2)$. Computing the matching $M$ is also $O(h^2)$, by Lemma \[lem:equalrows\]. On Step \[step:simplicial\], a simplicial vertex of $H-v_0$ can be found by taking the last vertex of a perfect elimination ordering of $H-v_0$, which can be computed in time $O(h^2)$ by [@RTL76]. Finally, for Case 1 of Rule 3, we need to compute $\omega(H-v-v_0,c_{H'})$. Since $c_{H'} \geqslant \mathbf{0}$, the maximum weight clique will also be a maximal clique. Therefore, it can be found in $O(h)$ time by using the perfect elimination ordering of $H-v_0$. For Case 2 of Rule 3, we need to find the stable set $S$. This can be achieved by the greedy algorithm outlined in the proof of Lemma \[lem:stableset\], which can be easily seen to be $O(h^2)$. Therefore, letting $T(h)$ be the running-time of Algorithm \[rulesalgo\], we have $T(h) \leqslant T(h-1) + O(h^2)$, from which we see $T(h) = O(h^3)$. \[lem:costtime\] Let $G$ be an $n$-vertex, twin-free graph. Step \[step:find\_good\_H\] of Algorithm 1, that is, the construction of the $2$-good weighted induced subgraph $(H,c_H)$, can be performed in time $O(n^3)$. We fix any vertex $v_0\in V(G)$, and let $H=G[N_{\leqslant 2}[v_0]]$. We can check in linear time whether $H[N(v_0)]$ is chordal by using the algorithm from [@tarjan1984]. If $H[N(v_0)]$ is not chordal this algorithm returns, as a certificate, an induced hole $C$. By Lemma \[lem:wheel\], $H[V(C)+v_0]$ is strongly 2-good and the corresponding function $c_H$ can be computed straightforwardly, hence we are done. Otherwise $H$ fulfills the conditions of Lemma \[lem:technical\], and thus Algorithm \[rulesalgo\] on input $(v_0, H)$ computes $c_H$ in time $O(n^3)$, by Lemma \[lem:rulesalgoanalysis\]. \[lem:analysis\] Algorithm 1 runs in $O(n^4)$-time. By Lemma \[lem:equalrows\], finding all true twins in $G$ can be done in time $O(n^2)$. Therefore, the most expensive step in each recursive call of the algorithm is the construction of the $2$-good weighted induced subgraph $(H,c_H)$ from Lemma \[lem:costtime\], which can be done in time $O(n^3)$. Therefore, the running-time $T(n)$ of the algorithm satisfies $T(n) \leqslant T(n-1) + O(n^3)$, which gives $T(n) = O(n^4)$. We conclude the section with a proof of our main result. The proof is identical to [@FJS20 Proof of Theorem 1, pages 365–366], except that factor $9/4$ needs to be replaced everywhere by $2$. One easily proves by induction that the vertex set $X$ output by the algorithm on input $(G,c)$ is a minimal hitting set with $c(X) \leqslant 2 \operatorname{\mathrm{OPT}}(G,c)$. We do not include more details here, and instead refer the reader to [@FJS20]. Lemma \[lem:analysis\] guarantees that Algorithm \[algo\] runs in polynomial time. We remark that it may be possible to obtain a speedup with the use of dynamic data structures. We leave this as an open question. Polyhedral results {#sec:polyhedral} ================== We begin with a brief description of the Sherali-Adams hierarchy [@SA1990], which is a standard procedure to obtain strengthened LP relaxations for binary linear programs. For a more thorough introduction, we refer the reader to [@laurent03].
Let $P = \{ x \in {\mathbb{R}}^n \mid A x \geqslant b \}$ be a polytope contained in $[0,1]^n$ and $P_I := \mathrm{conv}(P \cap {\mathbb{Z}}^n)$. For each $r \in \mathbb N$, we define a polytope $P \supseteq \operatorname{\mathsf{SA}}_1(P) \supseteq \dots \supseteq \operatorname{\mathsf{SA}}_r(P) \supseteq P_I$ as follows. Let $N_r$ be the *nonlinear* system obtained from $P$ by multiplying each constraint by $\prod_{i \in I} x_i \prod_{j \in J}(1-x_j)$ for all disjoint subsets $I, J$ of $[n]$ such that $1 \leqslant |I|+|J| \leqslant r$. Note that if $x_i \in \{0,1\}$, then $x_i^2=x_i$. Therefore, we can obtain a *linear* system $L_r$ from $N_r$ by setting $x_i^2:=x_i$ for all $i \in [n]$ and then $x_I:=\prod_{i \in I} x_i$ for all $I \subseteq [n]$ with $|I| \geqslant 2$. We then let $\operatorname{\mathsf{SA}}_r(P)$ be the projection of $L_r$ onto the variables $x_i$, $i \in [n]$. Let $\mathcal{P}_3(G)$ denote the collection of all vertex sets $\{u,v,w\}$ that induce a $P_3$ in $G$ and let $\operatorname{\mathsf{SA}}_r(G):=\operatorname{\mathsf{SA}}_r(P(G))$, where $$P(G) := \{ x \in [0,1]^{V(G)} \mid \forall \{u,v,w\} \in \mathcal{P}_3(G) : x_u + x_v + x_w \geqslant 1 \}\,.$$ If a cost function $c : V(G) \to {\mathbb{R}}_{\geqslant 0}$ is provided, we let $$\operatorname{\mathsf{SA}}_r(G,c) := \min \left\{\sum_{v \in V(G)} c(v) x_v \mid x \in \operatorname{\mathsf{SA}}_r(G)\right\}$$ denote the optimum value of the corresponding linear programming relaxation. For the sake of simplicity, we sometimes denote by $\operatorname{\mathsf{SA}}_r(G,c)$ the above linear program itself. We say vertices $a$ and $b$ form a *diagonal* if there are vertices $u,v$ such that $\{u,v,a\} \in \mathcal{P}_3(G)$ and $\{u,v,b\} \in \mathcal{P}_3(G)$. We say that a path *contains a diagonal* if any of its pairs of vertices are diagonals. Note that a diagonal pair in a path need not be an edge in the path. Our first results concern $\operatorname{\mathsf{SA}}_1(G)$. For later use, we list here the inequalities defining $\operatorname{\mathsf{SA}}_1(G)$. For all $\{u,v,w\} \in \mathcal{P}_3(G)$ and $z \in V(G-u-v-w)$, we have the inequalities $$\begin{aligned} \label{ineq:SAtype1} x_u + x_v + x_w &\geqslant 1 + x_{uv} + x_{vw}\,,\\ \label{ineq:SAtype2} x_{uz} + x_{vz} + x_{wz} &\geqslant x_z \quad \text{and}\\ \label{ineq:SAtype3} x_u + x_v + x_w + x_{z} &\geqslant 1 + x_{uz} + x_{vz} + x_{wz}\,.\end{aligned}$$ In addition, there are the inequalities $$\label{ineq:SAbounds} 1 \geqslant x_{v} \geqslant x_{vu} \geqslant 0$$ for all distinct $u, v \in V(G)$. The polytope $\operatorname{\mathsf{SA}}_1(G)$ is the set of all $(x_v)$ such that there exists $(x_{uv})$ such that inequalities (\[ineq:SAtype1\])–(\[ineq:SAbounds\]) are satisfied. Note that by definition, $x_{uv}$ and $x_{vu}$ are the same variable. \[lem:diagonal\] Let $x \in \operatorname{\mathsf{SA}}_1(G)$. If $G$ contains a $P_3$ which has a diagonal, then $x_v \geqslant 2/5$ for some vertex of the $P_3$. Assume that $ab$ is a diagonal and all components of $x$ are less than 2/5. Since $ab$ is a diagonal, there exist $u,v \in V(G)$ with $\{u,v,a\}, \{u,v,b\} \in \mathcal{P}_3(G)$. In particular, from (\[ineq:SAtype1\]) we have $x_a + x_u + x_v \geqslant 1 + x_{au} + x_{av}$ and from (\[ineq:SAtype2\]) we have $x_{ab} + x_{au} + x_{av} \geqslant x_a$. Adding these two inequalities, we obtain $x_u + x_v + x_{ab} \geqslant 1$. We must have $x_{ab} \geqslant 1/5$ since otherwise $\max(x_u,x_v) \geqslant 2/5$. Now let $c$ denote the third vertex of the $P_3$ (possibly $c$ is the middle vertex).
By \[ineq:SAtype1\] and \[ineq:SAbounds\], we have $x_a + x_b + x_c \geqslant 1 + x_{ab} + x_{ac} \geqslant 6/5$ which means that $\max(x_a,x_b,x_c) \geqslant 2/5$. This contradiction concludes the proof. In the next result, the *order* of a weighted graph $(G,c)$ is simply defined as the order $|G|$ of $G$. \[lem:itrounding\] Fix $\alpha \geqslant 1$ and $r \in {\mathbb{Z}}_{\geqslant 0}$. Let $(G,c)$ be a minimum order weighted graph such that $\operatorname{\mathrm{OPT}}(G,c) > \alpha \cdot \operatorname{\mathsf{SA}}_r(G,c)$. The following two assertions hold: (i) if $x$ is an optimal solution to $\operatorname{\mathsf{SA}}_r(G,c)$, then $x_v < 1/\alpha$ for all $v \in V(G)$; (ii) $G$ is connected and twin-free. \(i) Suppose for contradiction that some component satisfies $x_v \geqslant 1/\alpha$. Note that $x$ restricted to components other than $v$ is a feasible solution to $\operatorname{\mathsf{SA}}_r(G-v,c)$. Thus $\operatorname{\mathsf{SA}}_r(G-v,c) \leqslant \operatorname{\mathsf{SA}}_r(G,c) - c(v)x_v$. By minimality of $G$, there is a hitting set $X'$ of $G-v$ such that $c(X') \leqslant \alpha \cdot \operatorname{\mathsf{SA}}_r(G-v,c)$. Therefore $X:= X' + v$ is a hitting set of $G$ with $c(X) = c(v) + c(X') \leqslant c(v) + \alpha \cdot \operatorname{\mathsf{SA}}_r(G-v,c) \leqslant \alpha \cdot c(v) x_v + \alpha \cdot \operatorname{\mathsf{SA}}_r(G-v,c) \leqslant \alpha \cdot \operatorname{\mathsf{SA}}_r(G,c)$, a contradiction. \(ii) First, note that $G$ is connected, otherwise there exists a connected component $H$ of $G$ such that $\operatorname{\mathrm{OPT}}(H,c_H) > \alpha \cdot \operatorname{\mathsf{SA}}_r(H,c_H)$, where $c_H$ is the restriction of $c$ to $V(H)$, contradicting the minimality of $G$. Second, we show that $G$ is twin-free. Note that if $u$, $v$ are true twins we can delete either of them, say $v$, and set $c'(u) := c(u)+c(v)$, $c'(w) := c(w)$ for $w$ distinct from $u$ and obtain a smaller weighted graph $(G',c')$. We claim that $\operatorname{\mathsf{SA}}_r(G',c') \leqslant \operatorname{\mathsf{SA}}_r(G,c)$. To see this, let $x$ be an optimal solution to $\operatorname{\mathsf{SA}}_r(G,c)$, let $x'_u := \min (x_u,x_v)$ and $x'_w := x_w$ for $w \neq u,v$. By symmetry, this defines a feasible solution $x'$ to $\operatorname{\mathsf{SA}}_r(G',c')$ of cost $$\sum_{w \neq v} c'(w) x'_w = (c(u) + c(v)) \min(x_u,x_v) + \sum_{w \neq u,v} c(w) x_w \leqslant \sum_w c(w) x_w\,.$$ This proves the claim. Since $|G'| < |G|$ there is some hitting set $X'$ of $G'$ such that $c'(X') \leqslant \alpha \cdot \operatorname{\mathsf{SA}}_r(G',c')$. Let $X := X'$ if $u \notin X'$ and $X := X' + v$ otherwise. In either case, $c(X) = c'(X') \leqslant \alpha \cdot \operatorname{\mathsf{SA}}_r(G',c') \leqslant \alpha \cdot \operatorname{\mathsf{SA}}_r(G,c)$, a contradiction. Hence, $G$ is twin-free. \[lem:triangle\_claw\] If $(G,c)$ is a minimum order weighted graph such that $\operatorname{\mathrm{OPT}}(G,c) > 5/2 \cdot \operatorname{\mathsf{SA}}_1(G,c)$, then $G$ is triangle-free and claw-free. First, we show that $G$ is triangle-free. Suppose $G$ contains a triangle with vertices $u$, $v$ and $w$. Since $G$ is twin-free, every edge of the triangle has a distinguisher. Without loss of generality, $\mathcal{P}_3(G)$ contains $\{u,v,y\}$, $\{u,w,y\}$, $\{v,w,z\}$ and $\{u,v,z\}$ where $y, z$ are distinct vertices outside the triangle. It is easy to see that, for instance, edge $uw$ is a diagonal contained in a $P_3$.
By Lemmas \[lem:diagonal\] and \[lem:itrounding\].(i), we obtain a contradiction, and conclude that $G$ is triangle-free. A similar argument shows that $G$ cannot contain a claw because again there will be at least one induced $P_3$ containing a diagonal, which yields a contradiction by Lemmas \[lem:diagonal\] and \[lem:itrounding\].(i). We leave the details to the reader. \[thm:SA1CVD\_UB\] For every graph $G$, the integrality gap of $\operatorname{\mathsf{SA}}_1(G)$ is at most $5/2$. We show that for every cost function $c$ on the vertices of $G$, there exists some hitting set $X$ such that $c(X) \leqslant 5/2 \cdot \operatorname{\mathsf{SA}}_1(G,c)$. Suppose not, and let $(G,c)$ be a minimum order counter-example. By Lemma \[lem:triangle\_claw\], $G$ is triangle-free and claw-free. Hence the maximum degree of $G$ is at most $2$. Since $G$ is connected, by Lemma \[lem:itrounding\].(ii), $G$ is either a path or a cycle. We claim that in fact $\operatorname{\mathsf{SA}}_0(G,c)$ (the basic LP) has integrality gap at most $2$ in this case. Paths are solved exactly since the coefficient matrix of the LP is totally unimodular in this case, by the consecutive ones property [@schrijver98]. Now suppose that $G$ is a cycle, and let $x$ be an extreme optimal solution of $\operatorname{\mathsf{SA}}_0(G,c)$. First, assume that there is some $v \in V(G)$ such that $x_v \geqslant 1/2$. Since $G - v$ is a path, there exists a hitting set $X'$ in $G-v$ of cost $c(X') \leqslant \sum_{u \neq v} c(u)x_u$, by the previous paragraph. Hence, we see that $X:= X' + v$ is a hitting set of $G$ with $c(X) = c(v) + c(X') \leqslant c(v) + \sum_{u \neq v}c(u) x_u \leqslant 2\sum_{u} c(u) x_u = 2\operatorname{\mathsf{SA}}_0(G,c)$. On the other hand if $x_v < 1/2$ for all vertices $v$, then there can be no vertex $v$ with $x_v = 0$ since then $x_u + x_v + x_w \geqslant 1$ implies $\max(x_u,x_w) \geqslant 1/2$, where $u, w$ are the neighbors of $v$ in $G$. So $0 < x_v < 1/2$ for all $v \in V(G)$. Therefore, extreme point $x$ is the unique solution of $|V(G)|$ equations of the form $x_u + x_v + x_w = 1$ for $\{u,v,w\} \in \mathcal{P}_3(G)$. Hence $x_v = 1/3$ for all vertices. Thus $\operatorname{\mathsf{SA}}_0(G,c) = 1/3 \cdot c(G)$. Now notice that since $G$ is a cycle we can partition the vertices of $G$ into two disjoint hitting sets $X$ and $Y$. Without loss of generality assume that $c(X) \leqslant 1/2 \cdot c(G)$. Then $c(X) \leqslant 3/2 \cdot \operatorname{\mathsf{SA}}_0(G,c)$. This concludes the proof that the integrality gap is at most $5/2$. \[thm:SA1CVD\_LB\] For every ${\varepsilon}> 0$ there is some instance $(G,c)$ of [<span style="font-variant:small-caps;">Cluster-VD</span>]{} such that $\operatorname{\mathrm{OPT}}(G,c) \geqslant (5/2 - {\varepsilon}) \operatorname{\mathsf{SA}}_1(G,c)$. We show there is a graph $G$ for which every hitting set $X$ has $c(X) \geqslant (5/2 - {\varepsilon}) \operatorname{\mathsf{SA}}_1(G,c)$ for $c := \mathbf{1}_G$. Let $G$ be a graph whose girth is at least $k$ for some constant $k \geqslant 5$ and with $\alpha(G) \leqslant n/k$ where $n := |G|$. It can be shown via the probabilistic method that such a $G$ exists, see [@alonspencer2016]. Set $c(v) := 1$ for all $v \in V(G)$. We have $c(X) \geqslant n(1-2/k)$ for every hitting set $X$. To see this observe that since $G$ is triangle-free and $\alpha(G) \leqslant n/k$, when we remove $X$ we will get at most $n/k$ components each of size at most 2. Thus there are at most $2n/k$ vertices in $G-X$, so $|X| \geqslant n-2n/k$. 
Therefore, $\operatorname{\mathrm{OPT}}(G,c) \geqslant (1-2/k) n$. In order to show $\operatorname{\mathsf{SA}}_1(G,c) \leqslant 2n/5$, we construct the following feasible solution $x$ to $\operatorname{\mathsf{SA}}_1(G,c)$. Set $x_{v} := 2/5$ for all $v \in V(G)$ and $x_{vw} := 0$ if $vw \in E(G)$ and $x_{vw} := 1/5$ if $vw \notin E(G)$. The inequalities defining $\operatorname{\mathsf{SA}}_1(G)$ are all satisfied by $x$. This is obvious for inequalities \[ineq:SAtype1\], \[ineq:SAtype3\] and \[ineq:SAbounds\]. For inequality \[ineq:SAtype2\], notice that at most one of $uz$, $vz$, $wz$ can be an edge of $G$, since otherwise $G$ would have a cycle of length at most $4$. Thus \[ineq:SAtype2\] is satisfied too, $x \in \operatorname{\mathsf{SA}}_1(G)$ and $\operatorname{\mathsf{SA}}_1(G,c) \leqslant 2n/5$. This completes the proof since, by taking $k \geqslant 5/{\varepsilon}$, we have $\operatorname{\mathrm{OPT}}(G,c) \geqslant n(1-2/k) \geqslant (5/2-{\varepsilon}) 2n/5 \geqslant (5/2-{\varepsilon}) \operatorname{\mathsf{SA}}_1(G,c)$. \[thm:SA\_2+eps\] For every fixed ${\varepsilon}> 0$, performing $r = \mathrm{poly}(1/{\varepsilon})$ rounds of the Sherali-Adams hierarchy produces an LP relaxation of [<span style="font-variant:small-caps;">Cluster-VD</span>]{} whose integrality gap is at most $2+{\varepsilon}$. That is, $\operatorname{\mathrm{OPT}}(G,c) \leqslant (2+{\varepsilon}) \operatorname{\mathsf{SA}}_r(G,c)$ for all weighted graphs $(G,c)$. In order to simplify the notations below, let us assume that $2/{\varepsilon}$ is an integer. For instance, we could restrict to ${\varepsilon}= 2^{-k}$ for some $k \in {\mathbb{Z}}_{\geqslant 1}$. This does not hurt the generality of the argument. We take $r := 1 + (2/{\varepsilon})^4$. We may assume that ${\varepsilon}< 1/2$ since otherwise we invoke Theorem \[thm:SA1CVD\_UB\] (taking $r=1$ suffices in this case). Let $(G,c)$ be a counter-example to the theorem, with $|G|$ minimum. By Lemma \[lem:itrounding\].(i), for every optimal solution $x$ to $\operatorname{\mathsf{SA}}_r(G,c)$, every vertex $v \in V(G)$ has $x_v < 1/(2+{\varepsilon})$. By Lemma \[lem:itrounding\].(ii), $G$ is twin-free (and connected). We will use the following fact several times in the proof: for all $R \subseteq V(G)$ with $|R| \leqslant r$ and every $x \in \operatorname{\mathsf{SA}}_r(G)$, the restriction of $x$ to the variables in $R$ is a convex combination of hitting sets of $G[R]$. This is easy to see since, denoting by $x_R$ the restriction of $x$, we get $x_R \in \operatorname{\mathsf{SA}}_r(G[R])$ and the Sherali-Adams hierarchy is known to converge in at most “dimension-many” rounds, see for instance [@CCZ14]. First, we claim that $G$ has no clique of size at least $2/{\varepsilon}$. Suppose otherwise, let $C$ be a clique of size $k := 2/{\varepsilon}$ and let $D$ be a minimal set such that each edge of $C$ has a distinguisher in $D$. Let $H := G[C \cup D]$ and consider the cost function $c_H$ constructed in Lemma \[lem:technical\] (see also [@FJS20 Lemma 3]). Since $r \geqslant 2k-1 \geqslant |H|$ and since every valid inequality supported on at most $r$ vertices is valid for $\operatorname{\mathsf{SA}}_r(G)$, the inequality $\sum_v c_H(v) x_v \geqslant k-1$ is valid for $\operatorname{\mathsf{SA}}_r(G)$. Since $c_H(H) = 2k-1$, this implies that for all $x \in \operatorname{\mathsf{SA}}_r(G)$, there is some vertex $a \in V(H)$ with $x_a \geqslant (k-1)/(2k-1)$. Since $(k-1)/(2k-1) \geqslant 1/(2+{\varepsilon})$ (indeed, this is equivalent to $k \geqslant 1 + 1/{\varepsilon}$, which holds because $k = 2/{\varepsilon}$ and ${\varepsilon}< 1$), we get a contradiction. This proves our first claim.
Second, we claim that for every $v_0 \in V(G)$, the subgraph of $G$ induced by the neighborhood $N(v_0)$ has no stable set of size at least $2/{\varepsilon}$. The proof is similar to that for cliques given above, except that this time we let $H$ be the induced star ($K_{1,k}$) on $v_0$ and a stable set $S$ of size $k = 2/{\varepsilon}$. The cost function $c_H$ given by Lemma \[lem:technical\] has $c_H(v_0) = k-1$ and $c_H(v) = 1$ for all $v \in S$. Notice that once again $c_H(H) = 2k-1$. The *star inequality* $\sum_v c_H(v) x_v \geqslant k-1$ is valid for $\operatorname{\mathsf{SA}}_r(G)$, which guarantees that for every $x \in \operatorname{\mathsf{SA}}_r(G)$ there is some $a \in V(H)$ which has $x_a \geqslant (k-1)/(2k-1) \geqslant 1/(2+{\varepsilon})$. This establishes our second claim. Third, we claim that the neighborhood of every vertex $v_0$ induces a chordal subgraph of $G$. Suppose that $C$ is a hole in $G[N(v_0)]$. We first deal with the case $|C| \leqslant r-1 = (2/{\varepsilon})^4$. We can repeat the same proof as above, letting $H$ be the induced wheel on $V(C) + v_0$ and using the cost function $c_H$ defined in the proof of Lemma \[lem:wheel\]. Consider the *wheel inequality* $\sum_{v} c_H(v) x_v \geqslant k-3$, where $k := |H| = |C|+1$. Since the wheel has at most $r$ vertices, the wheel inequality is valid for $\operatorname{\mathsf{SA}}_r(G)$. Since $c_H(H) = 2k-6 = 2(k-3)$, for every $x \in \operatorname{\mathsf{SA}}_r(G)$, there is some $a \in V(H)$ which has $x_a \geqslant 1/2 \geqslant 1/(2+{\varepsilon})$. This concludes the case where $|C|$ is “small”. Now, assume that $|C| \geqslant r$, and consider the wheel inequality with right-hand side scaled by $2/(2+{\varepsilon})$. Suppose this inequality is valid for $\operatorname{\mathsf{SA}}_r(G)$. This still implies that some vertex $a$ of $H$ has $x_a \geqslant 1/(2+{\varepsilon})$, for all $x \in \operatorname{\mathsf{SA}}_r(G)$, which produces the desired contradiction. It remains to prove that the scaled wheel inequality is valid for $\operatorname{\mathsf{SA}}_r(G)$. Let $F$ denote any $r$-vertex induced subgraph of $H$ that is a fan. Hence, $F$ contains $v_0$ as a universal vertex, plus a path on $r-1$ vertices. Letting $c_F(v_0) := r-3 - \lfloor (r-1) / 3 \rfloor$ and $c_F(v) := 1$ for $v \in V(F-v_0)$, we get the inequality $\sum_{v} c_F(v) x_v \geqslant r-3$, which is valid for $\operatorname{\mathsf{SA}}_r(G)$. By taking all possible choices for $F$, and averaging the corresponding inequalities, we see that the inequality $$\begin{aligned} &\left( r-3 - \left\lfloor \frac{r-1}{3} \right\rfloor \right) x_{v_0} + \frac{r-1}{k-1} \sum_{v \in V(H-v_0)} x_v \geqslant r-3\\ \iff & \frac{k-1}{r-1} \left( r-3 - \left\lfloor \frac{r-1}{3} \right\rfloor \right) x_{v_0} + \sum_{v \in V(H-v_0)} x_v \geqslant \frac{k-1}{r-1}(r-3)\end{aligned}$$ is valid for $\operatorname{\mathsf{SA}}_r(G)$. It can be seen that this inequality dominates the scaled wheel inequality, in the sense that each left-hand side coefficient is not larger than the corresponding coefficient in the scaled wheel inequality, while the right-hand side is not smaller than the right-hand side of the scaled wheel inequality. Therefore, the scaled wheel inequality is valid for $\operatorname{\mathsf{SA}}_r(G)$. This concludes the proof of our third claim. By the first, second and third claim, $|N(v_0)| \leqslant \omega(G[N(v_0)]) \cdot \alpha(G[N(v_0)]) \leqslant 4/{\varepsilon}^2$ for all choices of $v_0$.
This implies in particular that $|N_{\leqslant 2}[v_0]| \leqslant 1+16/{\varepsilon}^4 = r$. Now let $H := G[N_{\leqslant 2}[v_0]]$. Lemma \[lem:technical\] applies since $G$ is twin-free, by our second claim. Let $c_H$ be the cost function constructed in Lemma \[lem:technical\]. The inequality $\sum_{v} c_H(v) x_v \geqslant \operatorname{\mathrm{OPT}}(H,c_H)$ is valid for $\operatorname{\mathsf{SA}}_r(G)$. Let $\lambda^*$ be defined as in Step \[step:lambda\_star\] of Algorithm \[algo\], and let $a \in V(G)$ denote any vertex such that $(c - \lambda^* c_H)(a) = 0$. By minimality of $G$, there exists in $(G',c') := (G-a,c - \lambda^* c_H)$ a hitting set $X'$ of cost $c'(X') \leqslant (2+{\varepsilon})\operatorname{\mathsf{SA}}_r(G',c')$. We let $X := X'$ in case $X'$ is a hitting set of $G$, and $X := X' + a$ otherwise. Assume that $X = X' + a$, the other case is easier. We have $$\begin{aligned} c(X) &= c'(X') + \lambda^* c_H(X)\\ &\leqslant (2+{\varepsilon}) \operatorname{\mathsf{SA}}_r(G',c') + \lambda^* (c_H(H)-1)\\ &\leqslant (2+{\varepsilon}) \operatorname{\mathsf{SA}}_r(G',c') + 2 \lambda^* \operatorname{\mathrm{OPT}}(H,c_H)\\ &\leqslant (2+{\varepsilon}) \left(\operatorname{\mathsf{SA}}_r(G',c') + \lambda^* \operatorname{\mathrm{OPT}}(H,c_H)\right)\,.\end{aligned}$$ By LP duality, we have $\operatorname{\mathsf{SA}}_r(G,c) \geqslant \operatorname{\mathsf{SA}}_r(G',c') + \lambda^* \operatorname{\mathrm{OPT}}(H,c_H)$. This implies that $c(X) \leqslant (2+{\varepsilon}) \operatorname{\mathsf{SA}}_r(G,c)$, contradicting the fact that $(G,c)$ is a counter-example. This concludes the proof. We now complement the result above by showing that every LP relaxation of [<span style="font-variant:small-caps;">Cluster-VD</span>]{} with (worst case) integrality gap at most $2-{\varepsilon}$ must have super-polynomial size. The result is a simple consequence of an analogous result of [@bazzi2019no] on the integrality gap of [<span style="font-variant:small-caps;">Vertex Cover</span>]{}, and of the straightforward reduction from [<span style="font-variant:small-caps;">Vertex Cover</span>]{} to [<span style="font-variant:small-caps;">Cluster-VD</span>]{}. \[prop:EFlowerbound\] For infinitely many values of $n$, there is a graph $G$ on $n$ vertices such that every size-$n^{o(\log n/ \log \log n)}$ LP relaxation of [<span style="font-variant:small-caps;">Cluster-VD</span>]{} on $G$ has integrality gap $2 - o(1)$. In [@bazzi2019no] a similar result is proved for LP-relaxations of [<span style="font-variant:small-caps;">Vertex Cover</span>]{}: for infinitely many values of $n$, there is a graph $G$ on $n$ vertices such that every size-$n^{o(\log n/ \log \log n)}$ LP relaxation of [<span style="font-variant:small-caps;">Vertex Cover</span>]{} on $G$ has integrality gap at least $2 - {\varepsilon}$, where ${\varepsilon}={\varepsilon}(n)=o(1)$ is a non-negative function. Let $G$ be such a graph, and let $G^+$ be the graph obtained from $G$ by attaching a pendant edge to every vertex. It is easy to see that $U \subseteq V(G)$ is a hitting set for $G^+$ if and only if $U$ is a vertex cover of $G$. Toward a contradiction, suppose that $Ax \geqslant b$ is a size-$n^{o(\log n / \log \log n)}$ LP relaxation of [<span style="font-variant:small-caps;">Cluster-VD</span>]{} on $G^+$ with integrality gap at most $2-\delta$, for a fixed $\delta > {\varepsilon}$ (where $x \in {\mathbb{R}}^d$ for some dimension $d$ depending on $G$). 
For every $c^+ \in{\mathbb{Q}}^{V(G^+)}_{\geqslant 0}$ there exists a hitting set $X$ of $G^+$ such that $c^+(X) \leqslant (2-\delta) \operatorname{\mathrm{LP}}(G^+,c^+)$. We can easily turn $Ax \geqslant b$ into an LP relaxation for [<span style="font-variant:small-caps;">Vertex Cover</span>]{}. For every vertex cover $U$ of $G$, we let the corresponding point be the point $\pi^U \in {\mathbb{R}}^d$ for $U$ seen as a hitting set in $G^+$. For every $c \in {\mathbb{Q}}_{\geqslant 0}^{V(G)}$, we define $c^+ \in {\mathbb{Q}}^{V(G^+)}_{\geqslant 0}$ via $c^+(v) := c(v)$ for $v \in V(G)$, and $c^+(v) := \sum_{u \in V(G)} c(u)$ for $v \in V(G^+) \setminus V(G)$. Then, we let the affine function $f_c$ for $c$ be the affine function $f_{c^+}$ for $c^+$. Since the integrality gap of $Ax \geqslant b$, seen as an LP relaxation of [<span style="font-variant:small-caps;">Cluster-VD</span>]{}, is at most $2 - \delta$, for every $c \in {\mathbb{Q}}_{\geqslant 0}^{V(G)}$ there exists a hitting set $X$ of $G^+$ whose cost is at most $(2-\delta) \operatorname{\mathrm{LP}}(G^+,c^+)$, where $c^+$ is the cost function corresponding to $c$. If $X$ contains any vertex of $V(G^+) \setminus V(G)$, we can replace this vertex by its unique neighbor in $V(G)$, without any increase in cost. In this way, we can find a vertex cover $U$ of $G$ whose cost satisfies $c(U) \leqslant c^+(X) \leqslant (2-\delta) \operatorname{\mathrm{LP}}(G^+,c^+) = (2-\delta)\operatorname{\mathrm{LP}}(G,c)$. Hence, the integrality gap of $Ax \geqslant b$ as an LP relaxation of [<span style="font-variant:small-caps;">Vertex Cover</span>]{} is also at most $2-\delta < 2 - {\varepsilon}$. As the size of $Ax \geqslant b$ is $n^{o(\log n / \log \log n)}$, this provides the desired contradiction. We point out that the size bound in the previous result can be improved. Kothari, Meka and Raghavendra [@KMR17] have shown that for every ${\varepsilon}> 0$ there is a constant $\delta = \delta({\varepsilon}) > 0$ such that no LP relaxation of size less than $2^{n^{\delta}}$ has integrality gap less than $2 - {\varepsilon}$ for Max-CUT. Since Max-CUT acts as the source problem in [@bazzi2019no], one gets a $2^{n^{\delta}}$ size lower bound for [<span style="font-variant:small-caps;">Vertex Cover</span>]{} in order to achieve integrality gap $2 - {\varepsilon}$. This also follows in a black-box manner from [@KMR17] and [@BPR18]. The proof of Proposition \[prop:EFlowerbound\] shows that the same bound applies to [<span style="font-variant:small-caps;">Cluster-VD</span>]{}. Conclusion {#sec:conclusion} ========== In this paper we provide a tight approximation algorithm for the cluster vertex deletion problem ([<span style="font-variant:small-caps;">Cluster-VD</span>]{}). Our main contribution is the efficient construction of a local cost function on the vertices at distance at most $2$ from any vertex $v_0$ such that every minimal hitting set of the input graph has local cost at most *twice* the local optimum. If the subgraph induced by $N(v_0)$ (the first neighborhood of $v_0$) contains a hole, the input graph contains a wheel, and this turns out to be straightforward. The most interesting case arises when the subgraph induced by $N(v_0)$ is chordal. In this case, we crucially exploit properties implied by chordality. One such property is the existence of simplicial vertices, and its consequence on the structure of maximal cliques in chordal graphs. 
Cliques play a special role for [<span style="font-variant:small-caps;">Cluster-VD</span>]{} since if a hitting set avoids $v_0$, then its intersection with $N(v_0)$ is the complement of a clique. Actually, we can interpret our local cost function as a hyperplane “almost” separating the [<span style="font-variant:small-caps;">Cluster-VD</span>]{} polytope and the clique polytope of the same chordal graph. This was a key intuition which led us to the proof of Theorem \[thm:localratio\]. While our algorithm does not need to solve [<span style="font-variant:small-caps;">Cluster-VD</span>]{} on chordal graphs, one natural remaining question is the following: is [<span style="font-variant:small-caps;">Cluster-VD</span>]{} polynomial-time solvable on chordal graphs? This seems to be a non-trivial open question, also mentioned in [@cao2018vertex], where similar vertex deletion problems are studied for chordal graphs. We propose this as our first open question. Our second contribution is to study the [<span style="font-variant:small-caps;">Cluster-VD</span>]{} problem from the polyhedral point of view, in particular with respect to the tightness of the Sherali-Adams hierarchy. Our results on Sherali-Adams fail to match the 2-approximation factor of our algorithm (by an ${\varepsilon}$), and we suspect this is not by chance. We believe that, already for certain classes of triangle-free graphs, the LP relaxation given by a bounded number of rounds of the Sherali-Adams hierarchy has an integrality gap strictly larger than $2$. This is our second open question. As mentioned already in the introduction, we do not know any polynomial-size LP or SDP relaxation with integrality gap at most $2$ for [<span style="font-variant:small-caps;">Cluster-VD</span>]{}. In order to obtain such a relaxation, it suffices to derive each valid inequality implied by Lemmas \[lem:wheel\] and \[lem:technical\]. A partial result in this direction is that the star inequality $(k-1) x_{v_0} + \sum_{i=1}^k x_{v_i} \geqslant k-1$, valid when $N(v_0)=\{v_1,\dots, v_k\}$ is a stable set, has a bounded-degree sum-of-squares proof. Using earlier results [@FJS20 Algorithm 1], this implies that a bounded number of rounds of the Lasserre hierarchy provides an SDP relaxation for [<span style="font-variant:small-caps;">Cluster-VD</span>]{} with integrality gap at most $2$, whenever the input graph is triangle-free. This should readily generalize to the wheel inequalities of Lemma \[lem:wheel\]. However, we do not know if this generalizes to all valid inequalities from Lemma \[lem:technical\]. We leave this for future work as our third open question. Our fourth open question was already stated in Section \[sec:analysis\]: what is the best running time for Algorithm \[algo\]? We think that it is possible to improve on our $O(n^4)$ upper bound. Another intriguing problem is to what extent our methods can be adapted to hitting set problems in other $3$-uniform hypergraphs. We mention an open question due to L. Végh [@VeghPC]: for which classes of $3$-uniform hypergraphs and which ${\varepsilon}> 0$ does the hitting set problem admit a $(3-{\varepsilon})$-approximation algorithm? As mentioned in the introduction, [<span style="font-variant:small-caps;">FVST</span>]{} (feedback vertex set in tournaments) is another hitting set problem in a $3$-uniform hypergraph, which is also UGC-hard to approximate to a factor smaller than $2$.
There is a recent *randomized* $2$-approximation algorithm [@LMMPPS20], but no deterministic (polynomial-time) $2$-approximation algorithm is known. Let us repeat here the relevant open question from [@LMMPPS20]: does [<span style="font-variant:small-caps;">FVST</span>]{} admit a deterministic $2$-approximation algorithm? [^1]: This project was supported by ERC Consolidator Grant 615640-ForEFront. Samuel Fiorini and Manuel Aprile are also supported by FNRS grant T008720F-35293308-BD-OCP. Tony Huynh is also supported by the Australian Research Council. [^2]: An $\alpha$-approximation algorithm for [<span style="font-variant:small-caps;">Cluster-VD</span>]{} is a polynomial-time algorithm computing a hitting set $X$ with $c(X) \leqslant \alpha \cdot \operatorname{\mathrm{OPT}}(G,c)$. Here, $\alpha \geqslant 1$ is known as the *approximation factor* of the algorithm. [^3]: In Algorithm \[algo\], and throughout the paper, we use the simplified notation $A+a := A\cup\{a\}$, $A-a := A\setminus\{a\}$ for a set $A$ and an element $a$. [^4]: A *hole* is a cycle of length at least $4$. [^5]: We call a graph $H$ *apex-chordal* if $H$ has a universal vertex $v_0$ (that is, $v_0$ is adjacent to all other vertices of $H$) such that $H-v_0$ is chordal. [^6]: A vertex is *simplicial* if its neighborhood is a clique.
--- author: - | [Sergey Yuzvinsky ]{}\ [*University of Oregon, Eugene, OR 97403 USA*]{}\ [*yuz@math.uoregon.edu*]{} date: 'May 14, 1999' title: Taylor and minimal resolutions of homogeneous polynomial ideals --- Introduction ============ In the theory of monomial ideals of a polynomial ring $S$ over a field $k$, it is convenient that for each such ideal $I$ there is a standard free resolution, the so-called Taylor resolution, that can be canonically constructed from the minimal system of monomial generators of $I$ (see [@Es], p.439 and section 2). On the other hand, no construction of a minimal resolution for an arbitrary monomial ideal is known. Recently a minimal resolution was constructed in [@BPS] for a class of so-called generic monomial ideals. Also in [@BC; @GPW1; @GPW2] various invariants of monomial ideals were related to the combinatorics of the lattice $D$ of the least common multiples (lcm) of the generating monomials. In particular, in [@GPW2] the Betti numbers of the $S$-module $S/I$ were expressed through the homology of $D$, and it was proved that even the algebra structure of ${\rm Tor}_*^S(S/I,k)$ is defined by that lattice, although an explicit formula was not given in that paper. Given a system of generators of an arbitrary ideal $I$ of $S$, one can factor the generators into irreducibles and construct the Taylor complex similarly to the Taylor resolution of a monomial ideal. In general this complex is not acyclic. One non-monomial case where it is acyclic was used in [@Yu1]. In the present paper, a necessary and sufficient condition is given for the Taylor complex of a system ${{\cal A}}$ of homogeneous polynomials to be acyclic (Theorem \[taylor\]). This condition involves the local homology of the lattice $D$ of lcm's of elements from ${{\cal A}}$ and the depth of ideals generated by their irreducible factors. If this condition holds then the Betti numbers of $S/I$ are defined by the local homology of $D$ similarly to the case of monomial ideals (Theorem \[betti\]). Moreover, in section 3 we exhibit a DGA defined by combinatorics whose cohomology algebra is isomorphic to the algebra ${\rm Tor}_*^S(S/I,k)$. This construction makes sense for an arbitrary graded lattice (see the definition in section 3) and generalizes the DGA constructed in [@Yu2; @Yu3] for the cohomology algebra of a complex subspace complement. Section 4 contains what can be considered the main result of the paper (Theorem \[minimal\]). There, for any ideal $I$ having the Taylor resolution, we give a combinatorial construction of a subcomplex of it that is a minimal resolution of $S/I$. This construction is not canonical and involves computations of homology of posets, which can hardly be avoided in general. We also describe completely the class of ideals for which our minimal resolution reduces to the minimal resolution from [@BPS]. Finally, in section 5 we give examples of classes of ${{\cal A}}$ satisfying the condition of Theorem \[taylor\]. For instance, we consider ideals whose generators are products of linear polynomials, which is important for the theory of hyperplane and subspace arrangements. The author is grateful to D.Eisenbud for discussions of the results of the paper. In particular the proof of Theorem \[minimal\] would have been longer without his advice.
Taylor complex ============== Let $S=k[x_1,\ldots,x_{n}]$ be the polynomial ring over a field $k$, ${{\cal A}}=\{Q_1,\ldots,Q_m\}$ a set of homogeneous polynomials from $S$, and $I=I({{\cal A}})$ the ideal of $S$ generated by ${{\cal A}}$. In this section we give a necessary and sufficient condition for the Taylor complex of ${{\cal A}}$ to be a resolution of the $S$-module $S/I$. We can assume that ${{\cal A}}$ generates $I$ minimally; in particular none of the polynomials from ${{\cal A}}$ divides another one. Also we will always regard all the subsets of ${{\cal A}}$ as provided with the ordering induced by a fixed linear ordering on ${{\cal A}}$. Let $K: 0\to K_{m}\to \cdots \to K_1\to K_{0}=k\to 0$ be the augmented chain complex over $k$, shifted by $-1$, of the simplex whose set of vertices is ${{\cal A}}$. Explicitly, the linear space $K_p$ has a basis consisting of all the subsets of ${{\cal A}}$ with $p$ elements. The differential $d_p:K_p\to K_{p-1}$ is given with respect to these bases by the matrix with entries $d_{\sigma,\tau}$ ($\sigma,\tau\subset {{\cal A}}$, $|\sigma|=p-1$, $|\tau|=p$) equal to $(-1)^{\epsilon(\tau,i)}$ if $\sigma=\tau\setminus\{Q_i\}$ where $\epsilon(\tau,i)=|\{Q_j\in\tau|j<i\}|$ and 0 if $\sigma\not\subset\tau$. For each $\sigma\subset{{\cal A}}$ denote by $Q_{\sigma}$ the lcm of all $Q_i\in\sigma$. The Taylor complex of ${{\cal A}}$ is the complex $\tilde K$ of the free $S$-modules $\tilde K_p=K_p\otimes S$ with the differentials $\tilde d_p:\tilde K_p\to\tilde K_{p-1}$ given by the matrix $\tilde d_{\sigma,\tau}=(Q_{\tau}/Q_{\sigma})d_{\sigma,\tau}$. Notice that $H_0(\tilde K)=S/I$. Our goal is to give equivalent conditions for the acyclicity of $\tilde K$. We will be dealing with finite posets and need to introduce some notation. To every poset one can assign its complex of flags (i.e., increasing sequences of elements) and attribute the topological invariants of this complex to the poset itself. In this sense we will speak of the homology of a poset and of homotopy equivalences between posets. If $L$ is a lattice with the minimal element $\hat 0$ and the maximal one $\hat 1$ then the flag complex of $L\setminus\{\hat 0,\hat 1\}$ is homotopy equivalent to its atomic and coatomic complexes (e.g., see [@Bj]). For instance, recall that the atomic complex is the abstract complex whose vertices are all the atoms of $L$ and a set of atoms is a simplex if it is bounded above in $L\setminus\{\hat 1\}$. If $\sigma\subset L$ then $\bigvee(\sigma)$ denotes the least upper bound (join) of $\sigma$. For every $X\in L$ we put $L_{\leq X}=(\hat 0,X]=\{Y\in L|\hat 0<Y\leq X\}$ and $L_{<X}=L_{\leq X}\setminus\{X\}$. Now denote by $D=D({{\cal A}})$ the lattice of the lcm's of all subsets of ${{\cal A}}$ ordered by divisibility. This lattice is provided with the monotone map $\phi: {{\cal B}}\to D$ from the Boolean lattice ${{\cal B}}$ of all the subsets of ${{\cal A}}$ given by $\phi(\sigma)= Q_{\sigma}$. Denote by $D_0$ the poset $D$ with the smallest element (the constant polynomial 1) deleted and call a subset of a poset [*decreasing*]{} if, together with each element, it contains all smaller elements. \[homotopic\] For every decreasing subset $F$ of $D_0$ the restriction of $\phi$ to $\phi^{-1}(F)$ is a homotopy equivalence. [[**Proof. **]{}]{}For $Q\in F$ put ${{\cal B}}(\leq Q)=\phi^{-1}(\{P\in F|P\leq Q\})$ and notice that the poset ${{\cal B}}(\leq Q)$ has a unique maximal element, equal to $\{Q_i\in{{\cal A}}|Q_i {\rm \ divides}\ Q\}$.
Thus ${{\cal B}}(\leq Q)$ is contractible and the restriction of $\phi$ is a homotopy equivalence (cf. [@Qu]). [  ]{} The lattice $D$ is naturally embedded as a join-sublattice in a larger polynomial lattice $W$. Let $E$ be the set of all irreducible factors of elements from ${{\cal A}}$ (one chosen from each equivalence class) and $W$ the lattice of all the products of elements from $E$ ordered by divisibility (repetitions are allowed). Clearly $W$ is isomorphic as a partially ordered semigroup to ${\mathbb{N}}^k$ ($k=|E|$). To study the complex $\tilde K$ we use the evaluation at vectors $v\in\bar k^n$. This means that for every $v\in\bar k^n$ we consider the complex $K(v)$ with $K(v)_p=K_p\otimes \bar k$ and $d(v)_p:K(v)_{p}\to K(v)_{p-1}$ given by the matrix $d(v)_{\sigma,\tau}=\tilde d_{\sigma,\tau}(v)$. On the other hand, every $v$ defines a subset $E_v$ of $E$ of all the elements of $E$ that vanish at $v$. Denote by $W(v)$ the sublattice of $W$ of all the elements of $W$ all of whose irreducible factors are from $E_v$ and by $\psi_v$ the monotone map (“projection”) $D\to W(v)$ defined by $$\psi_v(Q)=\max\{P\in W(v)|P\leq Q\}.$$ For each $P\in W(v)$ we put $D(v,<P)=\{Q\in D|\psi_v(Q)<P\}$. Besides, every $P\in W(v)$ defines the subcomplex $K_{v,P}$ of $K\otimes \bar k$ spanned by $\sigma\subset{{\cal A}}$ such that $\psi_v(Q_{\sigma})=P$. Here and in the rest of the paper we consider only $P\in W(v)$ such that $P\in \psi_v(D)$. Substituting in the last definition “=” by “$\leq$” or “$<$” we obtain complexes $K_{v,\leq P}$ and $K_{v,<P}$ respectively. Clearly $K_{v,P}=K_{v,\leq P}/K_{v,<P}$. Notice that if $E_v=\emptyset$ then $W(v)=\{1\}$ and $K(v)\simeq K$. We need the following lemma. \[eval\] (i) $K(v)\simeq\oplus_{P\in \psi_v(D)}K_{v,P}$ for every $v\in\bar k^n$. \(ii) The complex $K_{v,\leq P}$ is acyclic for every $v$ and $P\in \psi_v(D)$. \(iii) The complex $K_{v,<P}$ is homotopy equivalent to the chain complex of $D(v,<P)$ shifted by -1. [[**Proof. **]{}]{}(i) The decomposition of $K(v)$ into the direct sum according to $\psi_v(Q_{\sigma})$ follows from the definitions. The required isomorphism can be given by $\sigma\mapsto (Q_{\sigma}/P)(v)\sigma$ where $P=\psi_v(Q_{\sigma})$. \(ii) The complex $K_{v,\leq P}$ is the chain complex of the simplex on the vertices $Q_i\in{{\cal A}}$ with $\psi_v(Q_i)\leq P$. The result follows. \(iii) The complex $K_{v,<P}$ is the shifted by -1 chain complex of an abstract complex, whence it is homotopy equivalent to the poset of its non-empty simplexes ordered by inclusion. More explicitly, $K_{v,<P}$ is homotopy equivalent (after the shift) to the subposet of ${{\cal B}}$ equal to $\phi^{-1}(D(v,<P))$. Since $D(v,<P)$ is decreasing, we can apply Lemma \[homotopic\], which completes the proof. [  ]{} It will be convenient for us to express properties of $v$ in terms of the set $E_v$ of irreducible polynomials. Notice that these sets can be characterized intrinsically as subsets $G\subset E$ such that the ideal $J(G)$ generated by $G$ does not contain a power of an element from $E\setminus G$. We will call those sets [*saturated*]{}. For any two $v,v'$ such that $E_v=E_{v'}$ we have $\psi_v=\psi_{v'}$. Thus we will often use $\psi_G$, $W(G)$, and $D(G,<P)$ for $P\in W(G)$ instead of $\psi_v$, $W(v)$, and $D(v,<P)$ respectively for $v$ such that $E_v=G$.
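Before turning to the acyclicity criterion, the following Python sketch may help make the Taylor differential from the beginning of this section concrete in the monomial case (the three generators are a made-up illustration, not an example used in the paper): monomials are encoded as exponent vectors, $Q_{\sigma}$ is a componentwise maximum, and the script checks that consecutive differentials compose to zero.

```python
from itertools import combinations

# Monomials are encoded as exponent vectors; their lcm is the componentwise maximum.
def lcm(monomials, nvars):
    exps = [(0,) * nvars] + list(monomials)
    return tuple(max(col) for col in zip(*exps))

# Hypothetical generators in k[x,y,z]: x^2*y, x*z, y*z (not an example from the paper).
gens = [(2, 1, 0), (1, 0, 1), (0, 1, 1)]
NVARS = 3

def Q(sigma):
    """Q_sigma = lcm of the generators indexed by the sorted tuple sigma."""
    return lcm([gens[i] for i in sigma], NVARS)

def taylor_boundary(sigma):
    """Taylor differential of the basis element sigma: a dictionary
    {tau: (sign, exponent vector of Q_sigma / Q_tau)}."""
    image = {}
    for pos, i in enumerate(sigma):
        tau = tuple(j for j in sigma if j != i)
        ratio = tuple(a - b for a, b in zip(Q(sigma), Q(tau)))
        image[tau] = ((-1) ** pos, ratio)
    return image

def check_dd_zero(p):
    """Verify that the composition of two consecutive Taylor differentials vanishes."""
    for sigma in combinations(range(len(gens)), p):
        acc = {}
        for tau, (s1, m1) in taylor_boundary(sigma).items():
            for rho, (s2, m2) in taylor_boundary(tau).items():
                mono = tuple(a + b for a, b in zip(m1, m2))
                acc[(rho, mono)] = acc.get((rho, mono), 0) + s1 * s2
        assert all(coeff == 0 for coeff in acc.values())

for p in range(2, len(gens) + 1):
    check_dd_zero(p)
print("d o d = 0 for the Taylor complex of the generators", gens)
```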
\[taylor\] The Taylor complex of ${{\cal A}}$ is a resolution of $S/I({{\cal A}})$ if and only if for every saturated set $G\subset E$ and $P\in \psi_G(D)$ we have $\tilde H_p(D(G,<P))=0$ for every $p$ such that $p\geq{\mbox{\rm depth}}(J(G))-1$. [[**Proof. **]{}]{}According to the celebrated Buchsbaum-Eisenbud criterion (see for example [@No], Sect. 6.4, Theorem 15) the complex $\tilde K$ is a resolution of $S/I$ if and only if the following three conditions hold: (a) ${\mbox{\rm depth}}F(\tilde d_p)\geq p$ for every $p=1,2,\ldots$ where $F(\tilde d_p)$ (or shortly $F_p$) is the Fitting ideal of $\tilde d_p$; (b) ${\mbox{\rm rk}}\tilde d_m={\mbox{\rm rk}}\tilde K_m$; (c) ${\mbox{\rm rk}}\tilde d_{p+1}+{\mbox{\rm rk}}\tilde d_p={\mbox{\rm rk}}\tilde K_p$ for $1\leq p<m$. Consider $K(v)$ for a $v\in\bar k^n$. Clearly we have ${\mbox{\rm rk}}\tilde K_p=\dim K_p$ and ${\mbox{\rm rk}}d_p(v)\leq{\mbox{\rm rk}}_S\tilde d_p$ for all $p$. Using these and choosing $v$ so that $E_v=\emptyset$ we can obtain by induction on $p$ that ${\mbox{\rm rk}}_S\tilde d_p={\mbox{\rm rk}}d_p$ and the conditions (b) and (c) (cf. [@Yu1], Theorem 1.3). Thus we will focus on the condition (a). For an arbitrary $v$, Lemma \[eval\] (i) and (ii) and the exact sequence of the pair $(K_{v,\leq P},K_{v,<P})$ ($P\in \psi_v(D)$) give for all $p$ $$\label{2.2} H_p(K(v))=\oplus_{P\in \psi_v(D)}H_{p-1}(K_{v,<P}).$$ Then Lemma \[eval\] (iii) gives the following equivalent form of \[2.2\]: $$\label{2.3} H_p(K(v))=\oplus_{P\in \psi_v(D)}\tilde H_{p-2}(D(v,<P)).$$ Now we include the Fitting ideals into consideration. Clearly the statement $H_p(K(v))\not=0$ is equivalent to ${\mbox{\rm rk}}d_i(v)<{\mbox{\rm rk}}d_i$ for $i=p$ or $i=p+1$ which is equivalent to the inclusion $v\in{{\cal V}}(F_{p+1})\cup{{\cal V}}(F_p)$. Here and in the rest of the paper for any ideal $J$ of $S$ we denote by ${{\cal V}}(J)$ its variety in $\bar k^n$. In particular the left hand side of \[2.3\] is nonzero for $v$ from a closed algebraic subset of $\bar k^n$. The right hand side of \[2.3\] is nonzero for $v$ if there exists $P\in \psi_v(D)$ (whence $v\in {{\cal V}}(J(P))$) such that $$\label{2.3'} \tilde H_{p-2}(D(v,<P))\not=0.$$ If the condition \[2.3'\] holds for one $v_0\in {{\cal V}}(J(P))$ it may happen that it holds only for $v$ from a closed algebraic subset of smaller dimension. On the other hand if we put $G=E_{v_0}$ then \[2.3'\] holds for the same $P$ and $v$ from the open dense subset of ${{\cal V}}(J(G))$ (of all $v$ such that $E_{v}=G$). Thus \[2.3\] implies $$\label{2.4} {{\cal V}}(F_{p+1})\cup{{\cal V}}(F_p)=\bigcup_{G\in{{\cal S}}(p)}{{\cal V}}(J(G))$$ where ${{\cal S}}(p)$ consists of all saturated subsets $G$ of $E$ such that for each of them there exists $P\in \psi_G(D)$ satisfying \[2.3'\] for some (whence every) $v$ with $E_v=G$. Applying the Nullstellensatz we can rewrite \[2.4\] in the form $$\label{2.5} \bar F_{p+1}\cap\bar F_p=\bigcap_{G\in{{\cal S}}(p)}\bar J(G)$$ where the bar means the radical of an ideal. Now we are ready to prove that the condition (a) above is equivalent to the condition of the theorem. We are going to use the fact that for any proper ideal there exists a prime ideal containing it of the same depth (see for example [@No], 5.5, Theorem 16). Suppose that ${\mbox{\rm depth}}F_p\geq p$ for every $p$. Then due to \[2.5\] the same is true for any prime ideal containing $\bar J(G)$ for $G\in{{\cal S}}(p)$ whence for any ideal $J(G)$.
In other words, if $G\subset E$ is as in the condition of the theorem and $r\geq{\mbox{\rm depth}}(J(G))-1$, then $G\not\in {{\cal S}}(r+2)$, i.e., for every $v$ with $E_v=G$ and $P\in\psi_G(D)$ we have $\tilde H_r(D(v,<P))=0$. This is precisely the condition of the theorem. Conversely suppose that the condition of the theorem holds and consider $G\in{{\cal S}}(p)$ for some $p$. Then the assumption implies that ${\mbox{\rm depth}}J(G)\geq p$ whence the same is true for any prime ideal containing either side of \[2.5\]. This immediately implies condition (a) for $p$. [  ]{} If a set ${{\cal A}}$ of homogeneous polynomials satisfies the condition of Theorem \[taylor\] then it is easy to give a combinatorial interpretation of the Betti numbers of $S/I({{\cal A}})$, i.e., $b_p=\dim {\rm Tor}_p^S(S/I({{\cal A}}),k)$. \[betti\] Suppose a set ${{\cal A}}$ of homogeneous polynomials from $S$ satisfies the condition of Theorem \[taylor\]. Then $$b_p=\sum_{Q\in D}\dim \tilde H_{p-2}(D_{<Q}).$$ [[**Proof. **]{}]{}The complex whose homology is ${\rm Tor}_*^S(S/I,k)$ is $\tilde K\otimes_S k$, which coincides with $K(0)$. Clearly $W(0)=W$ whence $\psi_0$ is the embedding $D\subset W$. Thus $D(0,<P)=D_{<P}$ for any $P\in D=\psi_0(D)$. Applying Lemma \[eval\] for $v=0$ we obtain the result. [  ]{} Using Lemma \[homotopic\] and the exact sequence of a pair we can rewrite the formula for the Betti numbers as $$b_p=\sum_{Q\in D}\dim H_{p-1}(K(Q))$$ where the complex $K(Q)$ is generated by ${{\cal B}}(Q)=\{\sigma\in{{\cal B}}|Q_{\sigma}=Q\}$ and its differential is the restriction of $d$. This should be compared with Theorem \[minimal\]. Multiplication ============== First, in this section we consider a purely combinatorial setup of a graded lattice (see the definition below) and define a DGA on the relative atomic complex of this lattice. A special case of this definition was used in [@Yu2] in order to describe the rational cohomology ring of a subspace complement. In certain special cases it is known to give even the integer cohomology [@Fe; @ML]. Let $L$ be a lattice with the minimal element $\hat 0$. The atoms of $L$ are provided with an arbitrary but fixed linear ordering. A [*grading*]{} of $L$ is a strictly monotone map ${\mbox{\rm rk}}:L\to {\mathbb{N}}^s$ (for some positive integer $s$) with ${\mbox{\rm rk}}(\hat 0)=0$ and $${\mbox{\rm rk}}(X\vee Y)+{\mbox{\rm rk}}(X\wedge Y)\leq {\mbox{\rm rk}}(X)+{\mbox{\rm rk}}(Y)$$ for $X,Y\in L$. We call ${\mbox{\rm rk}}X$ the [*rank of $X$*]{} for $X\in L$. Let us recall the relative atomic complex $\Delta=\Delta(L)$ of $L$ (cf. [@Yu3]). It is a chain complex (over a field or $\mathbb{Z}$) of linear spaces (free modules resp.) whose $p$-th term $\Delta_p$ has a basis consisting of subsets $\sigma$ of atoms of $L$ with $|\sigma|=p$ (notice the unusual grading) and with differential defined by $$d\sigma=\sum_{Z_i\in \sigma,\bigvee(\sigma_i)=\bigvee(\sigma)} (-1)^{\epsilon(\sigma,i)}\sigma_i.$$ (Here, as in section 2, $\epsilon(\omega,i)=|\{Z_j\in\omega|j<i\}|$ and if $Z_i\in\omega$ then $\omega_i=\omega\setminus\{Z_i\}$). As usual $\Delta_0$ is spanned by the empty set of atoms of $L$. It is easy to see that $\Delta=\oplus_{X\in L}\Delta_X$ where $\Delta_X$ is spanned by $\sigma$ with $\bigvee(\sigma)=X$ and $\tilde H_p(\Delta_X)= \tilde H_{p-2}(L_{<X})$ where the latter natural isomorphism is given by the boundary map in the exact sequence of the pair $(L_{\leq X},L_{<X})$ (with a shift of dimension).
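As an illustration of this differential, the following Python sketch treats the case where $L$ is the lcm lattice $D$ of a (hypothetical) monomial ideal, so that joins are componentwise maxima of exponent vectors; note how the facets $\sigma_i$ whose join drops below $\bigvee(\sigma)$ are discarded, in contrast with the full simplicial boundary of section 2.

```python
# The atoms of the lcm lattice D are the generators themselves; joins are lcm's,
# i.e. componentwise maxima of exponent vectors.
gens = [(2, 1, 0), (1, 0, 1), (0, 1, 1)]   # hypothetical: x^2*y, x*z, y*z in k[x,y,z]
NVARS = 3

def join(sigma):
    exps = [(0,) * NVARS] + [gens[i] for i in sigma]
    return tuple(max(col) for col in zip(*exps))

def atomic_boundary(sigma):
    """d(sigma) in the relative atomic complex: keep only the facets sigma_i
    whose join is still the join of sigma, with the sign (-1)^epsilon(sigma,i)."""
    image = {}
    for pos, i in enumerate(sigma):
        facet = tuple(j for j in sigma if j != i)
        if join(facet) == join(sigma):
            image[facet] = (-1) ** pos
    return image

# For the full set of atoms, the facet {x*z, y*z} is dropped because its join
# x*y*z is strictly smaller than the join x^2*y*z of all three generators.
print(atomic_boundary((0, 1, 2)))   # {(0, 2): -1, (0, 1): 1}
```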
\[dga\] The complex $\Delta$ gets the structure of a DGA via the bilinear multiplication defined by $$\label{4.1} \sigma\cdot\tau=\begin{cases} 0& \text{if ${\mbox{\rm rk}}(\bigvee(\sigma\cup\tau))\not={\mbox{\rm rk}}(\bigvee(\sigma)) +{\mbox{\rm rk}}(\bigvee(\tau))$},\\ (-1)^{\epsilon(\sigma,\tau)}\sigma\cup\tau & \text{otherwise}, \end{cases}$$ where $\epsilon(\sigma,\tau)$ is the parity of the permutation of $\sigma\cup\tau$ (shuffle) putting all elements of $\tau$ after elements of $\sigma$ and preserving the fixed orders inside these sets. [[**Proof. **]{}]{}It suffices to check the Leibniz rule. We consider two cases. \(i) ${\mbox{\rm rk}}(\bigvee(\sigma\cup\tau))\not={\mbox{\rm rk}}(\bigvee(\sigma)) +{\mbox{\rm rk}}(\bigvee(\tau)).$ Then $\sigma\cdot\tau=0$. On the other hand, for every $\sigma_i$ such that $\bigvee(\sigma_i)=\bigvee(\sigma)$ $$\begin{aligned} {\mbox{\rm rk}}(\bigvee(\sigma_i\cup\tau))\leq{\mbox{\rm rk}}(\bigvee(\sigma\cup\tau))<{\mbox{\rm rk}}(\bigvee (\sigma))+{\mbox{\rm rk}}(\bigvee(\tau))\notag\\ ={\mbox{\rm rk}}(\bigvee(\sigma_i))+{\mbox{\rm rk}}(\bigvee(\tau))\end{aligned}$$ whence $d(\sigma)\cdot\tau=0$. Similarly $\sigma\cdot d(\tau)=0$, which gives the Leibniz rule in this case. \(ii) ${\mbox{\rm rk}}(\bigvee(\sigma\cup\tau))={\mbox{\rm rk}}(\bigvee(\sigma)) +{\mbox{\rm rk}}(\bigvee(\tau)).$ First notice that in this case $\sigma\cap\tau=\emptyset$. Indeed $Z\in\sigma\cap\tau$ would imply $${\mbox{\rm rk}}(\bigvee(\sigma\cup\tau))\leq{\mbox{\rm rk}}(\bigvee (\sigma))+{\mbox{\rm rk}}(\bigvee(\tau))-{\mbox{\rm rk}}Z<{\mbox{\rm rk}}(\bigvee(\sigma))+{\mbox{\rm rk}}(\bigvee(\tau)).$$ Further both sides of the Leibniz equality are combinations of basic elements $\omega_i=(\sigma\cup\tau)_i$. Suppose $Z_i\in\sigma$. Then the coefficient of $\omega_i$ in the left hand side is $$(-1)^{\epsilon(\omega,i)+\epsilon(\sigma,\tau)}$$ and that in the right hand side is $$(-1)^{\epsilon(\sigma,i)+\epsilon(\sigma_i,\tau)}.$$ We have $$\begin{aligned} \epsilon(\sigma,i)+\epsilon(\sigma_i,\tau)\equiv \epsilon(\sigma,i)+\epsilon(\tau,i)+\epsilon(\sigma,\tau)\notag\\ \equiv \epsilon(\omega,i) +\epsilon(\sigma,\tau) ({\rm mod}\ 2).\end{aligned}$$ This implies the Leibniz equality in this case. The case where $Z_i\in\tau$ can be completed similarly. [  ]{} Let us consider several important particular cases of this construction. [**Examples.**]{} 1. If $L$ is a geometric lattice graded by its standard rank then the complex $\Delta$ over $\mathbb{Z}$ is homotopy equivalent to the shifted by -1 Whitney complex (e.g., see [@OT], p.142). The homology ring of the DGA in this case is the Orlik-Solomon algebra of $L$ after the shift. 2. If $L$ is the intersection lattice of a complex subspace arrangement graded by the codimensions of its elements then the DGA $\Delta$ is the DGA from [@Yu2] whose homology ring is isomorphic to the properly regraded cohomology ring of the subspace complement. 3. Suppose $L=D$, the least common multiple lattice from above corresponding to a set ${{\cal A}}$ of homogeneous polynomials. As in the previous sections denote the set of atoms of $D$ by $E$ and put $r=|E|$. Fix a linear order on $E$. Then one defines the grading $ {\mbox{\rm rk}}:D\to {\mathbb{N}}^r$ assigning to each $Q\in D$ the vector of multiplicities of the irreducible factors of $Q$. The complex $\Delta(D)$ is homotopy equivalent to the complex $\oplus_{Q\in D}D_{<Q}$ (cf. [@Yu3]).
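Continuing Example 3 in the same illustrative spirit, for the lcm lattice of a monomial ideal the rank condition in \[4.1\] simply says that $Q_{\sigma}$ and $Q_{\tau}$ are coprime; the following Python sketch (again with made-up generators) implements the product \[4.1\] together with the shuffle sign.

```python
# Monomials as exponent vectors again; for the grading of Example 3 the rank of
# Q in D is simply its exponent vector, so the rank condition in (4.1) says that
# the exponent vector of the join equals the sum of the exponent vectors,
# i.e. that Q_sigma and Q_tau are coprime.
gens = [(2, 1, 0, 0), (0, 0, 1, 1), (1, 0, 1, 0)]   # hypothetical: x^2*y, z*w, x*z
NVARS = 4

def Q(sigma):
    exps = [(0,) * NVARS] + [gens[i] for i in sigma]
    return tuple(max(col) for col in zip(*exps))

def shuffle_sign(sigma, tau):
    """Parity of the shuffle putting all elements of tau after those of sigma."""
    inversions = sum(1 for s in sigma for t in tau if t < s)
    return -1 if inversions % 2 else 1

def product(sigma, tau):
    """The multiplication (4.1) on basis elements of the relative atomic complex of D."""
    union = tuple(sorted(set(sigma) | set(tau)))
    if Q(union) != tuple(a + b for a, b in zip(Q(sigma), Q(tau))):
        return 0, union                      # rank condition fails: the product is zero
    return shuffle_sign(sigma, tau), union

print(product((0,), (1,)))   # (1, (0, 1)):  x^2*y and z*w are coprime
print(product((1,), (0,)))   # (-1, (0, 1)): the shuffle sign changes
print(product((0,), (2,)))   # (0, (0, 2)):  x^2*y and x*z share x, so the product vanishes
```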
One can generalize this definition using more general ordered semigroups instead of ${\mathbb{N}}^s$ but we will not use this in the paper. To show that the example 3 above can be used for computation of the algebra structure on ${\rm Tor}_*^S(S/I,k)$ one can use the well-known graded algebra structure on the Taylor complex (see for example [@Av], p. 6). To define a (bilinear over $S$) product on $\tilde K$ it is enough to define it on the standard generators. For $\sigma,\tau\subset{{\cal A}}$ we put $\sigma\cdot\tau=0$ if $\sigma\cap\tau\not=\emptyset$ and otherwise $$\label{2.6} \sigma\cdot\tau=(-1)^{\epsilon(\sigma,\tau)}[Q_{\sigma}, Q_{\tau}]\sigma\cup\tau$$ where $[\ ,\ ]$ is the greatest common divisor. \[pairing/complex\] There is a canonical $k$-algebra isomorphism ${\rm Tor}^S_*(S/I,k)\to H_*(\Delta(D))$ where the algebra structure in the right hand side is induced by the DGA structure on $\Delta(D)$ defined in Example 3 above. [[**Proof. **]{}]{} It is obvious from definitions that $K(0)$ can be identified with the relative atomic complex $\Delta(D)$. Since any DGA structure on a free resolution can be used for defining the multiplicative structure on Tor, we can use the DGA defined above on $\tilde K$. It is straightforward to check that it induces on $\Delta(D)$ the DGA structure defined in Example 3 above. [  ]{} Minimal resolution ================== In this section we fix a set ${{\cal A}}$ of homogeneous polynomials generating ideal $I$ satisfying the condition of Theorem \[taylor\] and starting from the Taylor resolution $\tilde K$ of $S/I$ construct its minimal resolution $\tilde M$. The complex $\tilde M$ will be realized as a subcomplex of $\tilde K$. Moreover we first find a needed subcomplex $M$ of $K$ and then pass to $\tilde M$ in the same way as we obtained $\tilde K$ from $K$. Recall that the acyclic complex $K$ has the standard basis consisting of the elements $\sigma$ of the Boolean lattice ${{\cal B}}$ (i.e., subsets of ${{\cal A}}$) graded by $|\sigma|$. The monotone map $\phi:{{\cal B}}\to D$ defines the partition of ${{\cal B}}$ into sets ${{\cal B}}(Q)=\phi^{-1}(Q)$ ($Q\in D$). Denote by $K(Q)$ the graded subspace of $K$ generated by ${{\cal B}}(Q)$ and notice that $K=\oplus_{Q\in D}K(Q)$ as a graded linear space. If we provide $K(Q)$ with the restriction $d(Q)$ of $d$ it becomes a chain complex. More precisely $d(Q)(a)=\pi_Qd(a)$ for every $a\in K(Q)$ where $\pi_Q:K\to K(Q)$ is the canonical projection. Now we need to make a noncanonical choice. Let $Z(Q)_p$ and $B(Q)_p$ be the spaces of cycles and boundaries respectively of degree $p$ in $K(Q)$. For each $p$ there are two exact sequences $$0\to Z(Q)_p\to K(Q)_p\to B(Q)_{p-1}\to 0$$ and $$0\to B(Q)_p\to Z(Q)_p\to H(Q)_p\to 0.$$ We fix a splitting of each of the sequences. In other words we represent $$K(Q)_p=B(Q)_p\oplus H'(Q)_p\oplus B'(Q)_{p-1}$$ where $B(Q)_p\oplus H'(Q)_p=Z(Q)_p$ with the restriction of $d(Q)$ giving an isomorphism $B'(Q)_{p-1}\to B(Q)_{p-1}$ and the restriction of the projection $Z(Q)_p\to H(Q)_p$ giving an isomorphism $H'(Q)_p\to H(Q)_p$. In the case where the graded subspace $H'=\oplus_QH'(Q)$ of $K$ is invariant under $d$ we can take this subspace for $M$. However it is easy to find examples where this is false (cf. Example 4.2). Our goal is to find a subcomplex $M$ of $K$ that is the graph of a degree -1 linear map $f:H'\to B'=\oplus_QB'(Q)$. The map $f$ is defined by the following lemma. 
\[map\] For each $a\in H'_p$ there exists a unique element $f(a)\in\oplus_{P} B'(P)_{p-1}$ such that $d(a+f(a))\in H'\oplus B'$. [[**Proof. **]{}]{}Let us prove the uniqueness first. By subtraction we reduce the problem to proving that there is no nonzero $b\in B'_{p-1}$ with $d(b)\in H'\oplus B'$. Suppose such an element $b$ exists and let $P$ be a maximal element in $D$ with the property $b_P=\pi_P(b)\not=0$. By the maximality of $P$ we have $d(P)(b_P)=\pi_Pd(b)\in H'(P)\oplus B'(P)$ whence $d(P)(b_P)=0$. Since $b_P\in B'(P)$ we conclude that $b_P=0$, which is a contradiction. Now we prove the existence. Using downward induction on $D$, it suffices to prove the following: let $c\in K_p$ be such that $\pi_Pd(c)\in H'(P)\oplus B'(P)$ for all $P$ greater than a given $R\in D$. Then there exists $b\in B'(R)_{p-1}$ such that $\pi_Pd(c+b)\in H'(P)\oplus B'(P)$ for $P\geq R$. This claim is immediate. Indeed one can take $b\in B'(R)_{p-1}$ with the condition that $d(R)(b)=-[\pi_Rd(c)]_{B(R)}$ where $[\ \ ]_{B(R)}$ means the projection of $K(R)$ to $B(R)$. [  ]{} Lemma \[map\] defines (by uniqueness) a degree $-1$ linear map $f:H'\to B'$. Notice that by construction if $a\in H'(Q)$ then $f(a)\in\oplus_{P<Q}B'(P)$. We put $M=\{a+f(a)|a\in H'\}$. Clearly $M$ is a graded linear subspace of $K$. \[subcomplex\] The subspace $M$ is a subcomplex of $K$. [[**Proof. **]{}]{}We need to prove that $d(M)\subset M$. Suppose $a\in H'$ and consider $c=d(a+f(a))$. By construction $c\in H'\oplus B'$, i.e., $c=e+b$ where $e\in H'$ and $b\in B'$. Since $d(c)=0\in H'\oplus B'$ we have by the uniqueness part of Lemma \[map\] that $b=f(e)$, i.e., $c\in M$. [  ]{} Now we define the graded free $S$-submodule $\tilde M$ of $\tilde K$ as the one generated by $M$. Lemma \[subcomplex\] implies that $\tilde M$ is a subcomplex of $\tilde K$. The following result is the main one of this section. \[minimal\] The complex $\tilde M$ is a minimal resolution of $S/I$. [[**Proof. **]{}]{}To prove this theorem it suffices to prove that the complex $\tilde K/\tilde M$ is exact. For that, in turn, it suffices to prove that $K'= (\tilde K/\tilde M)\otimes k=(\tilde K\otimes k)/(\tilde M\otimes k)$ is exact. To analyze the complex $K'$ notice first that as graded linear spaces $\tilde K \otimes k=K$ and $\tilde M\otimes k=M$. Moreover in the decomposition $K=B \oplus H'\oplus B'$ we have $M\subset H'\oplus B'$ where $M$ is the graph of $f:H'\to B'$. Thus up to natural isomorphism $K'=B\oplus B'$ as a graded linear space. In particular $$\label{factor} K'=\oplus_Q(B(Q)\oplus B'(Q))$$ (again as graded linear spaces). Moreover, unlike for $K$, the complexes $K(Q)$ are subcomplexes of $\tilde K\otimes k$ whence \[factor\] holds in the category of chain complexes, i.e., the differential in $K'$ coincides with $\oplus_Qd(Q)$. Now the statement follows immediately from the isomorphisms $d(Q):B'(Q)_{p-1}\to B(Q)_{p-1}$. [  ]{} Since in general the minimal resolution $\tilde M$ of $S/I$ is not constructed canonically, it is interesting to consider a case when it is canonical. Such a resolution was discovered in [@BPS] for so-called generic monomial ideals (see below). We want to show how $\tilde M$ reduces to this resolution for a significantly wider class of ${{\cal A}}$ (even among monomial ideals). In [@BPS], the Scarf complex is the subcomplex of $\tilde K$ generated by $\sigma\subset{{\cal A}}$ such that $|{{\cal B}}(Q)|=1$ for $Q=Q_{\sigma}$.
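For a monomial ideal the Scarf complex can be computed by grouping the subsets of ${{\cal A}}$ by their lcm's and keeping those whose lcm occurs only once; the following Python sketch does exactly this for a made-up generic monomial ideal (not one of the examples below).

```python
from itertools import combinations
from collections import defaultdict

def lcm(monomials, nvars):
    exps = [(0,) * nvars] + list(monomials)
    return tuple(max(col) for col in zip(*exps))

def scarf_complex(gens, nvars):
    """Subsets sigma of the generators with |B(Q_sigma)| = 1, i.e. whose lcm is
    not the lcm of any other subset."""
    by_lcm = defaultdict(list)
    for p in range(len(gens) + 1):
        for sigma in combinations(range(len(gens)), p):
            by_lcm[lcm([gens[i] for i in sigma], nvars)].append(sigma)
    faces = [sigs[0] for sigs in by_lcm.values() if len(sigs) == 1]
    return sorted(faces, key=lambda s: (len(s), s))

# Hypothetical generic monomial ideal (x^2, x*y, y^3) in k[x,y].
gens = [(2, 0), (1, 1), (0, 3)]
print(scarf_complex(gens, 2))
# [(), (0,), (1,), (2,), (0, 1), (1, 2)] -- face counts 1, 3, 2 match the Betti numbers of S/I
```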
\[BPS\] The complex $\tilde M$ coincides with the Scarf complex (in particular the latter is acyclic) if and only if for every $Q\in D$ either $|{{\cal B}}(Q)|=1$ or the complex $K(Q)$ is exact. [[**Proof. **]{}]{}The condition is obviously necessary. Let us prove that it is sufficient. Put $\bar D=\{Q\in D| |{{\cal B}}(Q)|=1\}$. We have $K(Q)=K_1(Q)=H'_1(Q)$ for every $Q\in\bar D$ and $H'(Q)=0$ for other $Q$. Thus the following claim suffices for the proposition. [**Claim**]{}. Let $\sigma\subset {{\cal A}}$ be such that $Q_{\sigma}\in \bar D$. Then $Q_{\sigma_i}\in\bar D$ for every $Q_i\in\sigma$. [**Proof of Claim.**]{} Suppose $Q_{\sigma_i}\not\in \bar D$ for some $i$. Then, since $|{{\cal B}}(Q_{\sigma_i})|>1$, there exists $Q_j\in{{\cal A}}$ such that $Q_j$ divides $Q_{\sigma_i\setminus\{Q_j\}}$ (here $\sigma_i\setminus\{Q_j\}=\sigma_i$ when $Q_j\not\in\sigma_i$). There are two possibilities. One possibility is that $Q_j\in\sigma_i$. This implies $Q_{\sigma_j}=Q_{\sigma}$ which is a contradiction. The other possibility is that $Q_j\not\in\sigma_i$. If $Q_j\in\sigma$ then $j=i$ and $Q_{\sigma}=Q_{\sigma_i}$ which is a contradiction. If $Q_j\not\in\sigma$ then $Q_{\sigma}=Q_{\sigma\cup\{Q_j\}}$ which is again a contradiction. This proves the claim and the proposition. [  ]{} In [@BPS], a monomial ideal $I$ is called generic if no variable appears with the same nonzero exponent in two distinct minimal generators of $I$. This (though ambiguous) term can be used for an arbitrary set ${{\cal A}}$ of homogeneous polynomials (with respect to their factorizations into irreducible factors). \[boolean\] Let ${{\cal A}}$ be generic in the above sense. Then for every $Q\in D$ the poset ${{\cal B}}(Q)$ is Boolean. [[**Proof. **]{}]{}The key observation is that ${{\cal B}}(Q)$ has a unique minimal element. Indeed write $Q=\prod_{i=1}^rZ_i^{m_i}$ where $Z_i$ are irreducibles and $m_i>0$. Then for each $i$ there exists a unique $Q_{j_i}\in{{\cal A}}$ such that $Z_i$ has exponent $m_i$ in the decomposition of $Q_{j_i}$. Put $\sigma=\{Q_{j_i}|i=1,\ldots,r\}$. It is easy to see that $Q_{\sigma}=Q$ and $\sigma$ is the unique minimal element of ${{\cal B}}(Q)$. Now the result follows since ${{\cal B}}(Q)$ always has a unique maximal element and, together with any two of its elements, contains every element of ${{\cal B}}$ between them. [  ]{} \[scarf\] If ${{\cal A}}$ is generic then $\tilde M$ coincides with the Scarf complex (cf. [@BPS]). It is easy to find examples with nongeneric ${{\cal A}}$ and all ${{\cal B}}(Q)$ Boolean (e.g., ${{\cal A}}=\{xy,xz\}$). Moreover Propositions \[BPS\] and \[boolean\] give a class of ${{\cal A}}$, defined by an easily checkable condition, with the Scarf resolution. The condition is that $Q_{\sigma}=Q_{\tau}$ implies $Q_{\sigma\cap\tau}=Q_{\sigma}$ for all subsets $\sigma$ and $\tau$ of ${{\cal A}}$. The following example shows that the ${{\cal B}}(Q)$ do not have to be Boolean to satisfy the condition of Proposition \[BPS\]. [**Example 4.1.**]{} Let ${{\cal A}}=\{x^2yz,xy^2w,x^2zw,xy^2z\}$. One can easily check that the condition of Proposition \[BPS\] holds, whence the minimal resolution coincides with the Scarf complex. On the other hand ${{\cal B}}(Q)$ for $Q=x^2y^2zw$ has three minimal elements. Now we give a couple of examples with $H'$ not being invariant with respect to the differential (in particular nonvanishing). [**Example 4.2.**]{} Let ${{\cal A}}=\{xy,xz,yu,uv\}$ enumerated in the order they are written.
There are three ${{\cal B}}(Q)$ that have more than one element: ${{\cal B}}(xyzu)=\{\{2,3\},\{1,2,3\}\}, {{\cal B}}(xyuv)=\{\{3,4\},\{1,3,4\}\}$, and ${{\cal B}}(xyzuv)=\{\{1,2,4\},\{2,3,4\},\{1,2,3,4\}\}$. Only the last $K(Q)$ has nontrivial homology, namely $\dim H_3=1$, and as a cycle representative of a nonzero class one can take $\sigma=\{1,2,4\}$. Then one has $d(\sigma)=\{1,2\}-\{1,4\}+\{2,4\}$ where $\{1,4\}$ is the trivial cycle in $K(xyuv)$, namely the boundary of $\tau=-\{1,3,4\}$. Thus using our construction we have $M_3=ka$ where $a=\sigma+\tau$ and then $d(a)=\{1,2\}+\{2,4\}-\{1,3\}-\{3,4\}\in M_2$ where $M_2$ is generated by $\{\{1,2\},\{1,3\},\{2,4\},\{3,4\}\}$. In particular the Betti numbers of $S/I$ are 1,4,4,1. The next example is Avramov’s example from [@Av]. Let $${{\cal A}}=\{x^2,xy,yz,zw,w^2\},$$ again with the natural linear order. There are six ${{\cal B}}(Q)$ with more than one element: ${{\cal B}}(x^2yz)=\{\{1,3\},\{1,2,3\}\}, {{\cal B}}(xyzw)= \{\{2,4\},\{2,3,4\}\}, {{\cal B}}(yzw^2)=\{\{3,5\},\{3,4,5\}\}, {{\cal B}}(x^2yzw)=\{\{1,2,4\}, \{1,3,4\},\{1,2,3,4\}\}, {{\cal B}}(xyzw^2)=\{\{2,3,5\},\{2,4,5\},\{2,3,4,5\}\}$ and ${{\cal B}}(x^2yzw^2)=\{\{1,3,5\},\{1,2,3,5\},\{1,2,4,5\},\{1,3,4,5\},\{1,2,3,4,5\}\}$. Only the last three give $K(Q)$ with nonvanishing homology. As cycle representatives one can take $\{1,2,4\},\{2,4,5\}, \{1,2,4,5\}$. Then one gets the generators of $M_4$ as $\{1,2,4,5\}$, of $M_3$ as $\{1,2,4\}+\{2,3,4\}, \{2,4,5\}+\{2,3,4\}, \{1,2,5\},\{1,4,5\}$ and of $M_2$ as all the pairs of generators except $\{1,3\},\{2,4\}$, and $\{3,5\}$ ($M_1=K_1$ and $M_0=K_0$ as always). It is easy to see that $M$ is invariant under $d$ and the dimensions of $M_p$ coincide with the respective Betti numbers of $S/I$, which are 1,5,7,4,1. Examples of ideals with Taylor resolution ========================================= In this section we show examples of classes of sets ${{\cal A}}$ of homogeneous polynomials such that the conditions of Theorem \[taylor\] hold. The following result is useful for that purpose. For every $P\in W$ denote by $E_P$ the subset of $E$ of the irreducible polynomials taking part in the factorization of $P$. Also put $J(P)=J(E_P)$. \[coatomic\] For every saturated $G\subset E$ and every $P\in W(G)$ we have $\tilde H_p(D(G,<P))=0$ for $p\geq |E_P|-1$. [[**Proof. **]{}]{}Suppose $G$ and $P$ are as in the statement and $P=\prod_{i=1}^rZ_i^{m_i}$ is the factorization of $P$ into irreducible factors. Notice that $r=|E_P|$. Now let $X$ and $Y$ be distinct maximal elements of $D(G,<P)$. Then $\psi_G(X\vee Y)\geq P$. This implies that for every $i$, $1\leq i \leq r$, all but at most one of the maximal elements of $D(G,<P)$ have the factor $Z_i^{\ell_i}$ with $\ell_i<m_i$ in their factorizations. Since on the other hand each element of $D(G,<P)$ has the multiplicity of at least one $Z_i$ smaller than $m_i$, we have $$\label{3.1} |\max D(G,<P)|\leq r.$$ Now the poset $D(G,<P)$ can be viewed as a lattice with the maximal and minimal elements deleted. Thus by [@Fo] its homology can be computed as the homology of the coatomic complex of that lattice. The vertices of this complex are the maximal elements of $D(G,<P)$, whence \[3.1\] implies that either this complex is a simplex or its dimension is less than $r-1$. The result follows. [  ]{} \[regular\] If the set $E_P$ forms a regular sequence (for the module $S$) then the condition of Theorem \[taylor\] holds for $P$ and any $G$ such that $P\in \psi_G(D)$. [[**Proof. 
**]{}]{}We have in this case ${\mbox{\rm depth}}J(G)\geq {\mbox{\rm depth}}J(P)=|E_P|.$ Thus the result follows from the previous proposition. [  ]{} In particular Corollary \[regular\] recovers the well-known result that for any monomial ideal the Taylor complex is acyclic. Another class of ideals where the condition of Theorem \[taylor\] simplifies (although stays nontrivial) consists of ideals generated by products of linear polynomials. For this class the condition of Theorem \[taylor\] specializes to $$\label{3.2} \tilde H_p(D(G,P))=0\ {\rm for}\ p\geq \dim E_P-1$$ where $\dim E_p$ is the usual dimension of the linear space generated by $E_P$. We can find a sufficient condition for using a result similar to (although more subtle than) Proposition \[coatomic\] but for atomic complexes. First we need a lemma. \[homotopic2\] Let $G$ be a saturated subset of $E$ and $U$ a decreasing subset of the subposet $\psi_G(D)$ of $W(G)$. Then $\psi_G$ is a homotopy equivalence of $\psi^{-1}_G(U)$ and $U$. [[**Proof. **]{}]{}As in the proof of Lemma \[homotopic\] it is easy to see that for every $P\in U$ the poset $\psi_G^{-1}(U_{\leq P})$ has the unique maximal element lcm$\{Q_i\in{{\cal A}}|\psi_G(Q_i)\leq P\}$. Thus this poset is contractable and the result follows. [  ]{} For each $i=1,2,\dots,m$ put $E_i=E_{Q_i}$. \[atomic\] Suppose all $Q_i$ are square free products of linear polynomials. If for every $i=1,2,\ldots, m$ the set $G_i=E\setminus E_i$ is saturated (that in this case means no element of $E_i$ is a linear combination of elements from $G_i$) then holds. [[**Proof. **]{}]{}Fix a saturated set $G\subset E$ and $P\in W(G)$. Lemma \[homotopic2\] implies that $\tilde H_p(D(G,P))\approx \tilde H_p(\psi_G(D)_{<P})$ for every $p$. Let $s$ be the number of the minimal elements $Z_{i_1},\ldots,Z_{i_s}$ of $\psi_G(D)_{<P}$, i.e., $Z_{i_j}=\psi_G(Q_{i_j})$ and $Z_{i_j}<P$. If $s\leq\dim E_P$ then the dimension of the atomic complex of $\psi_G(D)_{<P}$ is less than $\dim E_P-1$ unless this complex is a simplex (cf. the proof of Proposition \[coatomic\]). Then follows. Thus we have to prove only the impossibility of $s>\dim E_P$. Suppose it is the case. Let $r$ be the largest dimension of $G_i$. Then the conditions on $G_i$ and $Q_i$ imply that $\dim\bigcap_{j=1}^sG_{i_j}\leq r-s+1<r-\dim E_P+1$. The condition $Z_{i_j}<P$ is equivalent to $G_{i_j}\supset E\setminus E_P$ whence $\bigcap_{j=1}^s G_{i_j}\supset E\setminus E_P$. We obtain $$\dim (E\setminus E_P)< r-\dim E_P+1$$ whence $$\dim E\leq \dim E_P+\dim(E\setminus E_P)<r+1$$ which is a contradiction. [  ]{} The condition simplifies significantly if we assume that the set $E$ of linear polynomials is generic, i.e., any subset of $E$ with at most $n$ elements is linearly independent. \[generic\] Suppose that all $Q_i$ are products of linear polynomials and $E$ is generic. Then is equivalent to $$\label{3.2'} \oplus_{P\in D}\tilde H_p(D_{<P})=0\ {\rm for}\ p\geq n-1.$$ Equality for $P\in D$ with ${\mbox{\rm rk}}E_P=n$ suffices for . [[**Proof. **]{}]{}Suppose first that $G$ is a proper saturated subset of $E$ and $P\in W(G)$. Since $E$ is generic, $G$ is linearly independent and so is $E_P\subset G$, i.e., ${\mbox{\rm rk}}E_P=|E_P|$. Proposition \[coatomic\] implies for this $G$ and $P$. Now suppose that $G=E$ whence $\psi_G$ is the canonical embedding $D\subset W$. Then the only nontrivial case is where $P\in D$ whence $D(G,<P)=D_{<P}$. 
If ${\mbox{\rm rk}}E_P<n$ then $E_P$ is again linearly independent and (\[3.2\]) holds for the same reason as in the previous paragraph. If ${\mbox{\rm rk}}E_P=n$ then (\[3.2\]) is obviously equivalent to (\[3.2'\]). [  ]{} Notice that the equality (\[3.2'\]) involves only the combinatorics of the lattice $D$ and $n$. This combinatorics can be expressed in terms of other polynomial ideals, the simplest among them being monomial ones. More precisely, assign to each linear polynomial $Z_i\in E$ ($i=1,2,\ldots,r$) an indeterminate $y_i$ and consider the polynomial ring $\tilde S=k[y_1,\ldots,y_r]$. The natural algebra map $\tilde S\to S$ via $y_i\mapsto Z_i$ assigns to each $Q_j\in {{\cal A}}$ a monomial $\tilde Q_j\in\tilde S$; these monomials generate the monomial ideal $\tilde I$ of $\tilde S$. Then the following result follows straightforwardly from the results of the previous sections. \[monoms\] Under the conditions of Proposition \[generic\] the Taylor complex of $S/I$ is a resolution if and only if $b_p(\tilde S/\tilde I)=0$ for $p\geq n+1$. If this condition holds then the $k$-algebras ${\rm Tor}_*^S(S/I,k)$ and ${\rm Tor}_*^{\tilde S}(\tilde S/\tilde I,k)$ are naturally isomorphic. Using Proposition \[generic\], we can easily give a series of examples of non-monomial ideals whose Taylor complexes are exact. [**Example 5.1.**]{} Consider a generic arrangement of more than $n$ hyperplanes in a space of dimension $n$. Let ${{\cal A}}$ consist of no more than $n$ arbitrary products of functionals of hyperplanes. Then the Taylor complex of ${{\cal A}}$ is its resolution. Indeed, for every $P\in D_0$ the poset $D_{<P}$ has at most $n$ atoms, whence its atomic complex is either a simplex or has dimension less than $n-1$. The condition (\[3.2'\]) follows. In the case where $Q_i$ are square free products of linear polynomials the atomic complex of $D_{<P}$ can be interpreted as the nerve of the collection $\{G_i=E_P\setminus E_i|E_i\subset E_P\}$ of subsets of $E_P$. This is the reason for the relations between the Betti numbers of $S/I$ and the complement of a coordinate subspace arrangement studied for monomial ideals in [@GPW1]. In particular, the duality between $E_i$ and $G_i$ leads to the appearance of the Alexander dual complexes. [99]{} L.Avramov, Obstructions to the existence of multiplicative structures on minimal free resolutions, American J. Math. [**103**]{} (1981), 1-32. E.Babson and C.Chan, personal communication. K.Baclawski, Whitney numbers of geometric lattices, Advances in Math. [**16**]{} (1975), 125-138. D.Bayer, I.Peeva, and B.Sturmfels, Monomial resolutions, Math. Research Letters [**5**]{} (1998), 31-46. A.Björner, Topological methods, in Handbook of combinatorics (R. Graham et al., eds.), p. 1819-1872, North-Holland, 1994. M.De Longueville, The ring structure on the cohomology of coordinate subspace arrangements, preprint, 1998. D.Eisenbud, Commutative algebra with a view toward algebraic geometry, Springer Verlag, 1995. E.M.Feichtner, Cohomology algebras of subspace arrangements and of classical configuration spaces, Cuvillier Verlag Göttingen, 1997 (Doctoral Dissertation at TU, Berlin). J.Folkman, The homology groups of a lattice, J. Math. and Mech. [**15**]{} (1966), 631-636. V.Gasharov, I.Peeva, V.Welker, Coordinate subspace arrangements and monomial ideals, preprint, 1998. V.Gasharov, I.Peeva, V.Welker, The lcm-lattice in monomial resolutions, preprint, 1998. D.G.Northcott, Finite free resolutions, Cambridge, 1976. P.Orlik and H.Terao, Arrangements of hyperplanes, Springer Verlag, 1992.
D.Quillen, Homotopy properties of the poset of nontrivial $p$-subgroups of a group, Advances in Math. [**28**]{} (1978), 101-128. S.Yuzvinsky, On generators of the module of logarithmic 1-forms with poles along an arrangement, J. Algebraic Comb. [**4**]{} (1995), 253-269. S.Yuzvinsky, Small rational model of subspace complement, preprint math.CO/9806143. S.Yuzvinsky, Rational model of subspace complement on atomic complex, preprint 1998.
--- abstract: 'A summary of the [SAURON]{} project and its current status is presented. [SAURON]{} is a panoramic integral-field spectrograph designed to study the stellar kinematics, gaseous kinematics, and stellar populations of spheroids. Here, the sample of galaxies and its properties are described. The instrument is detailed and its capabilities illustrated through observational examples. These includes results on the structure of central stellar disks, the kinematics and ionization state of gaseous disks, and the stellar populations of galaxies with decoupled cores.' author: - 'M. Bureau, M. Cappellari, Y. Copin, E.K. Verolme, P.T. de Zeeuw' - 'R. Bacon, E. Emsellem' - 'R.L. Davies, H. Kuntschner, R. McDermid' - 'B.W. Miller' - 'R.F. Peletier' title: 'SAURON: An Innovative Look at Early-Type Galaxies' --- \#1 1.25in .125in .25in Introduction ============ The physical properties of early-type galaxies correlate with luminosity and environment. The morphology-density relation shows that ellipticals and lenticular galaxies are much more common in clusters than in regions of lower local density (Dressler 1980). Giant ellipticals ($M_B\lesssim-20.5$) are red, have a high metal content, often have boxy isophotes and shallow cusps, and are supported by anisotropic velocity distributions, associated with triaxial shapes (e.g. de Zeeuw & Franx 1991; Faber et al. 1997). Lower-luminosity systems ($M_B\gtrsim-20.5$) are bluer, less metal-rich, have disky isophotes and steep cusps, and are flattened by rotation, suggesting nearly oblate shapes (Davies et al. 1983; Bender & Nieto 1990). The mass of the central black hole in spheroids also correlates with the central velocity dispersion (e.g. Gebhardt et al. 2000; Ferrarese & Merritt 2000). It is unclear to what extent these properties and the correlations between them were acquired at the epoch of galaxy formation or result from subsequent dynamical evolution. Key questions to which [SAURON]{} hopes to provide answers include: What is the distribution of intrinsic shapes, tumbling speeds, and internal orbital structure among early-type galaxies? How do these depend on total luminosity and environment? What is the shape and extent of dark halos? What is the dynamical importance of central black holes? What is the distribution of metals, and what is the relation between the kinematics of stars (and gas), the local metal enrichment, and the star formation history? Progress towards answering these questions requires a systematic investigation of the kinematics and line-strengths of a representative sample of early-type galaxies. The intrinsic shape, internal orbital structure, and radial dependence of the mass-to-light ratio are constrained by the stellar and gas kinematics (e.g. van der Marel & Franx 1993; Cretton, Rix, & de Zeeuw 2000); the age and metallicity of the stellar populations by the absorption line-strengths (Gonzalez 1993; Davies, Sadler, & Peletier 1993). The [ SAURON]{} project will provide all of these data, and more, for a large and well-defined sample of objects.=-2 The Instrument ============== Long-slit spectroscopy along a few position angles is insufficient to map the rich internal kinematics of early-type galaxies (e.g. Statler 1991, 1994). We thus built [SAURON]{} ([S]{}pectral [A]{}real [U]{}nit for [ R]{}esearch on [O]{}ptical [N]{}ebulae), a panoramic integral-field spectrograph optimized for studies of the large-scale kinematics and stellar populations of spheroids (Bacon et al. 2001, hereafter Paper I). 
[SAURON]{} uses a lenslet array and is based on the [TIGER]{} concept (Bacon et al.1995). In its low-resolution (LR) mode, it has a $41\arcsec\times33\arcsec$ field-of-view sampled with $0\farcs94\times0\farcs94$ lenslets, 100% coverage, and high throughput. In high-resolution (HR) mode, the field-of-view is $11\arcsec\times9\arcsec$ sampled at $0\farcs27\times0\farcs27$. [ SAURON]{} simultaneously provides 1577 spectra over the wavelength range 4810–5350 Å, 146 of which are used for sky subtraction. Stellar kinematic information is derived from the Mg[*b*]{} triplet and the Fe lines; the \[OIII\], H$\beta$, and \[NI\] emission lines provide the morphology, kinematics, and ionization state of the ionized gas. The Mg[*b*]{}, H$\beta$, and Fe5270 absorption lines are sensitive to the age and metallicity of the stellar populations. The main characteristics of [SAURON]{} are listed in Table 1. Paper I provides a full description of its design, construction, and of the extensive data reduction software we developed. A pipeline called [ PALANTIR]{} is also described. The Sample ========== Observing any complete sample which spans a wide range of properties is costly in telescope time, even with [SAURON]{}. We therefore constructed a [ *representative*]{} sample of nearby ellipticals, lenticulars, and early-type bulges, as free of biases as possible, but ensuring the existence of complementary data. We also target some objects with known decoupled kinematics (e.g. Davies et al. 2001). We will combine the [SAURON]{} observations with high-spatial resolution spectroscopy of the nuclei, mainly from CFHT/[OASIS]{} and HST/[STIS]{}, and interpret them through dynamical and stellar population modeling. [lcc]{} Property &\ & LR & HR\ Projected size of lenslet & 094 & 027\ Field-of-view & $41\arcsec\times33\arcsec$ & $11\arcsec\times9\arcsec$\ Spectral resolution (FWHM) & 3.6 Å& 2.8 Å\ Wavelength coverage &\ Number of object lenslets &\ Number of sky lenslets &\ Grism &\ Spectral sampling & 1.1 Å pix$^{-1}$ & 0.9 Å pix$^{-1}$\ Instrumental dispersion ($\sigma$) & 90 km s$^{-1}$ & 70 km s$^{-1}$\ Spectra separation/PSF ratio & 1.4 & 2.3\ Important spectral features &\ Calibration lamps &\ Telescope &\ Detector &\ Pixel size &\ Efficiency (optics/total) &\ To construct the sample, we first compiled a complete list of ellipticals, lenticulars, and spiral bulges for which [SAURON]{} can measure the stellar kinematics. Given the specifications of the instrument, this leads to the following constraints: $-6^\circ \leq \delta \leq 64^\circ$ (zenith distance), $cz \leq 3000$ km s$^{-1}$ (spectral range), $M_B \leq -18$ and $\sigma_c \geq 75$ km s$^{-1}$ (spectral resolution). We further restricted the objects to $|b| \geq 15^\circ$ to avoid crowded fields and large Galactic extinctions. All distances are based on a Virgocentric flow model. For galaxies in the Virgo cluster, Coma I cloud, and Leo I group, which we refer to as ‘cluster’ galaxies, we adopted common mean distances based on the mean heliocentric velocity of each group (Mould et al. 1993). For galaxies outside these three associations, which we refer to as ‘field’ galaxies, we used individual distances. The complete list of galaxies contains 327 objects which we divided into six categories, first separating ‘cluster’ and ‘field’ galaxies, and then splitting each of these into E, S0, and Sa bulges. We then selected the [ *representative*]{} sample of objects by populating the six resulting ellipticity versus absolute magnitude planes nearly uniformly. 
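In code, the initial cuts and the six-way classification described above amount to something like the following sketch. It is illustrative only: the column names of the hypothetical catalog table are ours, not those of the actual compilation.

```python
def passes_sauron_cuts(row):
    """Selection constraints quoted in the text, applied to one catalog row."""
    return (-6.0 <= row["dec_deg"] <= 64.0        # zenith-distance limit
            and row["cz_kms"] <= 3000.0           # spectral-range limit
            and row["M_B"] <= -18.0               # luminosity limit
            and row["sigma_c_kms"] >= 75.0        # spectral-resolution limit
            and abs(row["gal_lat_deg"]) >= 15.0)  # avoid crowded fields / extinction

def classify(rows):
    """Group surviving galaxies into the six environment x type categories."""
    bins = {}
    for row in rows:
        if passes_sauron_cuts(row):
            env = "cluster" if row["in_cluster"] else "field"    # Virgo, Coma I, Leo I
            bins.setdefault((env, row["type"]), []).append(row)  # type in {"E", "S0", "Sa"}
    return bins
```

The uniform sampling of each ellipticity versus absolute magnitude plane is then performed on the six groups returned by `classify`.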
The result is 36 cluster galaxies (12 E, 12 S0, and 12 Sa) and 36 field galaxies (12 E, 12 S0, and 12 Sa), as illustrated in Figure 1. By construction, our sample covers the full range of environment, flattening, rotational support, nuclear cusp slope, isophotal shape, etc. It is also large enough to be sub-divided by any of these criteria, and allow a useful comparison of the sub-samples, yet small enough that full mapping with [SAURON]{} is possible over a few observing seasons. The 72 galaxies correspond to 22% of the complete sample and, as can be seen from Figure 1, remain representative of it. A more complete description of the sample as well as a listing are available in de Zeeuw et al. (2001, hereafter Paper II). Over two-thirds of the sample have been observed as of September 2001. Completion is expected in April 2002. Stellar Kinematics ================== Our strategy is to map galaxies to one effective radius $R_e$, which for nearly half the sample requires only one pointing. For the largest galaxies, mosaics of two or three pointings reach 0.5 $R_e$. Each pointing is split into four 1800 s exposures dithered by one lenslet. We reduce the raw [ SAURON]{} data as described in Paper I and derive maps of the stellar kinematics using the FCQ method (Bender 1990). This provides the mean stellar velocity $V$, the velocity dispersion $\sigma$, and the Gauss-Hermite moments $h_3$ and $h_4$ (e.g. van der Marel & Franx 1993).=-2 In this section, we present [SAURON]{} stellar kinematics for three objects observed in the LR mode. All show the presence of a central stellar disk, with varying strengths. Other morphologies are illustrated in Papers I and II. NGC 3384 -------- NGC 3384 is a large SB0$^-$(s) galaxy in the Leo I group ($M_B$= –19.6). It forms a triple on the sky with NGC 3379 and NGC 3389 but there is only marginal evidence for interactions. The light distribution in the central $\approx20\arcsec$ is complex. The inner isophotes are elongated along the major axis, suggesting an embedded disk, but beyond $10\arcsec$ the elongation is along the minor-axis (e.g. Busarello et al. 1996). The isophotes are off-centered at much larger radii. NGC 3384 shows no emission lines, remains undetected in HI, CO, radio continuum, and X-ray, but has IRAS detections at 12 and 100 $\mu$m (e.g. Roberts et al. 1991). Figure 2 displays the stellar kinematics of NGC 3384 and illustrates a key advantage of [SAURON]{}. Integrating the flux in wavelength, the surface brightness distribution of the galaxy is recovered and there is no doubt about the relative location of the measurements. Figure 2 shows that the bulge of NGC 3384 is rotating regularly. The mean velocities increase steeply along the major axis up to $r\approx4\arcsec$, then decrease slightly, and rise again. No velocity gradient is observed along the minor axis. The velocity dispersion map shows a symmetric dumb-bell structure and the $h_3$ map is anti-correlated with $V$ in the inner parts, revealing an abrupt change in the gradient at $r\approx4\arcsec$ (see also Fisher 1997). All these facts point to the presence of central (and cold) stellar disk in NGC 3384. NGC 4526 and NGC 4459 --------------------- Many other galaxies in our sample show evidence of a central stellar disk. NGC 3623 was discussed in Paper II. Figure 3 shows two other cases where the stellar disk appears to corotate with a central gaseous disk limited by the dust lane (Rubin et al. 1997). 
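For readers unfamiliar with the $h_3$, $h_4$ parametrization used in these maps, the truncated Gauss-Hermite series of van der Marel & Franx (1993) is easy to write down explicitly. The sketch below is purely illustrative: it is not the FCQ code, and the velocity grid and parameter values are arbitrary.

```python
import numpy as np

def gauss_hermite_losvd(v, V, sigma, h3=0.0, h4=0.0):
    """Line-of-sight velocity distribution, Gauss-Hermite series truncated at h4."""
    w = (v - V) / sigma
    gauss = np.exp(-0.5 * w**2) / (sigma * np.sqrt(2.0 * np.pi))
    H3 = (2.0 * np.sqrt(2.0) * w**3 - 3.0 * np.sqrt(2.0) * w) / np.sqrt(6.0)
    H4 = (4.0 * w**4 - 12.0 * w**2 + 3.0) / np.sqrt(24.0)
    return gauss * (1.0 + h3 * H3 + h4 * H4)

# Arbitrary example: a slightly asymmetric profile with h3 of opposite sign to V,
# qualitatively like the major-axis LOSVDs of a bulge with an embedded cold disk.
v = np.linspace(-600.0, 600.0, 241)   # km/s
losvd = gauss_hermite_losvd(v, V=80.0, sigma=150.0, h3=-0.08, h4=0.03)
```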
NGC 4526 is a highly inclined SAB0$^0$(s) galaxy in the Virgo cluster ($M_B$= –20.7). The stellar disk is not readily visible in the reconstructed image, but it is evident in the velocity and velocity dispersion fields. As in NGC 3384, the rotation along the major axis first increases, then decreases, and increases again in the outer parts. However, the extent of the disk is much larger than in NGC 3384. The disk appears almost edge-on, giving rise to an elongated depression across the (hot) spheroid in the velocity dispersion map, and completely overwhelming the central velocity dispersion peak. NGC 4459 is an S0$^+$(r) galaxy ($M_B$= –20.0) also located in Virgo and harbouring a $7.3\times10^7$ M$_\odot$ black hole (Sarzi et al. 2001). The same velocity behavior as in NGC 3384 and NGC 4526 is observed along the major axis (see also Peterson 1978), although the minimum is shallower. The isovelocity contours are also less skewed, either indicating that the disk is seen more face-on or that it is intrinsically thicker (or both). This is supported by the absence of a clear disk signature in the velocity dispersion map. Gaseous Kinematics and Ionization Mechanisms ============================================ We now illustrate the scientific potential of the [SAURON]{} gaseous data. Paper II describes how the H$\beta$, \[NI\], and \[OIII\] emission lines are disentangled from the absorption lines by means of a spectral library, and it summarizes the procedures for deriving fluxes and kinematics. Results on the non-axisymmetric gaseous disks in NGC 3377 and NGC 5813 are presented in Papers I and II, respectively. NGC 7742 -------- NGC 7742 is a face-on Sb(r) spiral ($M_B$= –19.8) in a binary system. It is among the latest spirals included in our sample. De Vaucouleurs & Buta (1980) identified the inner stellar ring; Pogge & Eskridge (1993) later detected a corresponding small, bright ring of HII regions with faint floculent spiral arms. NGC 7742 possesses a large amount of HI, molecular gas, and dust (e.g.Roberts et al. 1991) and is classified as a LINER/HII object (Ho, Filippenko, & Sargent 1997). Figure 4 shows the \[OIII\] and H$\beta$ intensity maps, together with the derived velocity and velocity dispersion fields. Most of the emission is confined to a ring coinciding with the spiral arms. H$\beta$ dominates in the ring (H$\beta$/\[OIII\]$\approx7-16$) but it is much weaker in the center (H$\beta$/\[OIII\]$\approx1$). Also shown in Figure 4 is a reconstructed image composed of \[OIII\] and stellar continuum, and a similar image composed of HST/WFPC2 exposures. The [SAURON]{} data does not have HST’s spatial resolution, but it does show that our algorithms yield accurate emission-line maps. The main surprise comes from the stellar and gas kinematics: the gas and stars within the ring are counter-rotating. NGC 4278 -------- NGC 4278 is an E1-2 galaxy ($M_B$= –19.9) located in the Virgo cluster. It contains large-scale dust, as well as a blue central point source (Carollo et al. 1997). Long-slit spectroscopy reveals a peculiar stellar rotation curve, rising rapidly at small distances from the nucleus and dropping to nearly zero beyond $\approx30\arcsec$ (Davies & Birkinshaw 1988; van der Marel & Franx 1993). NGC 4278 also contains a massive HI disk extending beyond 10 $R_e$. 
The HI velocity field is regular but has non-perpendicular kinematic axes, indicating non-circular motions (Raimond et al 1981; Lees 1992).=-2 Figure 5 displays the reconstructed stellar intensity as well as the \[OIII\] map derived from two [SAURON]{} pointings. Despite the regular and well-aligned stellar isophotes, the distribution of ionized gas is very extended and strongly non-axisymmetric. Its shape is reminiscent of a bar terminated by ansae, as observed in spiral galaxies. It will be interesting to construct a comprehensive dynamical model for NGC 4278, and explore its orbital structure and dark matter content. [=0.9]{} Stellar Populations =================== [SAURON]{}’s wavelength range allows two-dimensional mapping of the line-strength indices H$\beta$, Mg[*b*]{}, and Fe5270. To convert measured equivalent widths to indices on the Lick/IDS system (Worthey et al. 1994), corrections for the difference in spectral resolution and velocity broadening in the galaxies must be applied. Both are described in Paper II. With stellar population models, these indices can be used to estimate luminosity-weighted ages and metallicities and study their contours. We discuss below the stellar populations of galaxies with kinematically decoupled cores. NGC 3384 shows a similar behaviour (Paper II). Davies et al. (2001) discussed the [SAURON]{} kinematics and line-strengths of the large E3 galaxy NGC 4365 ($M_B$= –20.9) in the Virgo cluster. While the center and main body of the galaxy are decoupled kinematically, both components show the same luminosity-weighted age ($\approx14$ Gyr) and a smooth metallicity gradient is observed, suggesting formation through gas-rich mergers at high redshift.=-2 NGC 5813 -------- NGC 5813 is an E1–2 galaxy in the Virgo-Libra Cloud ($M_B= -21.0$). It is undetected in HI or CO but has an unresolved, weak central radio continuum source and emission-line ratios typical of LINERS (Birkinshaw & Davies 1985; Ho, Filippenko, & Sargent 1997). The ionized gas exhibits a complex filamentary structure most likely not (yet) in equilibrium (Caon et al.2000; Paper II). As illustrated in Figure 6, NGC 5813 harbours a kinematically decoupled core (see also Efstathiou, Ellis, & Carter 1982). While the body of the galaxy appears purely pressure supported, the center rotates rapidly around an axis tilted by $\approx13^\circ$ from the photometric minor axis. As in NGC 4365, the H$\beta$ map is featureless, indicating a roughly constant luminosity-weighted age, but the Mg[*b*]{} and Fe5270 maps show a steep central gradient, indicating a strong metallicity gradient (see also Gorgas, Efstathiou, & Aragon-Salamanca 1990). Concluding remarks ================== The [SAURON]{} survey will be completed in 2002. The first results show that early-type galaxies display line-strength distributions and kinematic structures which are more varied than often assumed. The sample contains specific examples of minor axis rotation, decoupled cores, central stellar disks, and non-axisymmetric and counter-rotating gaseous disks. The provisional indication is that only a small fraction of these galaxies can have axisymmetric intrinsic shapes.=-2 We are complementing the [SAURON]{} maps with high-spatial-resolution spectroscopy of the nuclear regions using [OASIS]{} on the CFHT. [STIS]{} spectroscopy for many of the galaxies is in the HST archive. 
Radial velocities of planetary nebulae and/or globular clusters in the outer regions have been obtained for some of the galaxies, and many more will become available to $\approx$5$R_e$ with a special-purpose instrument now under construction (Freeman et al. 2001, in prep).=-2 Understanding the formation, structure, and evolution of galaxies is one of the central drivers in Ken Freeman’s research. His enthusiasm and guidance at all levels is an inspiration to this entire field, and in particular to our team. We wish him and Margaret all the best for a happy and exciting future. It is a pleasure to thank the ING staff for enthusiastic and competent support on La Palma. The [SAURON]{} project is made possible through grants 614.13.003 and 781.74.203 from ASTRON/NWO and financial contributions from the Institut National des Sciences de l’Univers, the Université Claude Bernard Lyon I, the universities of Durham and Leiden, the British Council, PPARC grant ‘Extragalactic Astronomy & Cosmology at Durham 1998–2002’, and the Netherlands Research School for Astronomy NOVA. Bacon R., et al. 1995, , 113, 347 Bacon R., et al. 2001, , 326, 23 (Paper I) Bender R. 1990, , 229, 441 Bender R., & Nieto J.-L. 1990, , 239, 97 Birkinshaw M., & Davies R.L. 1985, , 291, 32 Busarello G., Capaccioli M., D’Onofrio M., Longo G., Richter G., & Zaggia S. 1996, , 314, 32 Caon N., Macchetto F.D., & Pastoriza M. 2000, , 127, 39 Carollo C.M., Franx M., Illingworth G.D., & Forbes D. 1997, ApJ, 710 Cretton N., Rix H.-W., & de Zeeuw P.T. 2000, , 536, 319 Davies R.L., & Birkinshaw M. 1988, , 68, 409 Davies R.L., Efstathiou G.P., Fall S.M., Illingworth G.D., & Schechter P.L. 1983, , 266, 41 Davies R.L., et al. 2001, , 548, L33 Davies R.L., Sadler E.M., & Peletier R.F. 1993, , 262, 650 Dressler A. 1980, , 236, 351 Efstathiou G., Ellis R.S., & Carter D. 1982, , 201, 975 Faber S.M., et al. 1997, , 114, 1771 Ferrarese L., & Merritt D.R. 2000, , 539, L9 Fisher D. 1997, , 113, 950 Gebhardt K., et al. 2000, ApJ, 539, L13 Gorgas J., Efstathiou G., & Aragon-Salamanca A. 1990, , 245, 217 Gonzalez J.J. 1993, PhD Thesis, Univ. of California at Santa Cruz Ho L.C., Filippenko A.V., & Sargent W.L.W. 1997, , 112, 315 Lees J.F. 1992, PhD Thesis, Princeton University van der Marel R.P., & Franx M. 1993, , 407, 525 Mould J.R., Akeson R.L., Bothun G.D., Han M., Huchra J.P., Roth J., & Schommer R.A. 1993, , 409, 14 Peterson C.J. 1978, , 222, 84 Pogge R.W., & Eskridge P.B. 1993, , 106, 1405 Raimond E., Faber S.M., Gallagher J.S., & Knapp G.R. 1981, , 246, 708 Roberts M., Hogg D., Bregman J., Forman W., & Jones C. 1991, , 75, 751 Rubin V.C., Kenney J.D.P., & Young J.S. 1997, , 113, 1250 Sarzi M., et al. 2001, , 550, 65 Statler T.S. 1991, , 382, L11 Statler T.S. 1994, , 108, 111 de Vaucouleurs G., & Buta R. 1980, , 85, 637 Worthey G., Faber S.M., Gonzalez J.J., & Burstein D. 1994, , 94, 687 de Zeeuw P.T., et al. 2001, , submitted (Paper II) de Zeeuw P.T., & Franx M. 1991, , 29, 239
--- abstract: 'A majority of a complete sample of 3CR FR I radio galaxies show unresolved optical nuclear sources on the scales of 0.$\arcsec$1. About half of the 3CR FR II radio galaxies observed with the HST also show Compact Central Cores (CCC). These CCCs have been interpreted as the optical counterparts of the non-thermal radio cores in these radio galaxies (Chiaberge, Capetti, & Celotti 1999). We show that the optical flux density of the CCCs in FR Is is correlated with the radio core prominence. This correlation supports the argument of Chiaberge et al. that the CCC radiation is of a non-thermal synchrotron origin, which is relativistically beamed along with the radio emission.' author: - 'Preeti Kharb & Prajval Shastri' title: 'The parsec-scale central components of FR I radio galaxies' --- Introduction ============ [*Radio Galaxies*]{} are radio-loud $(\rm S(\nu)_{5 GHz}/S(\nu)_{B Band}>10)$ Active Galactic Nuclei (AGN) found in hosts that are elliptical galaxies. Their radio structure is made up of two lobes of radio-emitting plasma situated on either side of an unresolved radio core and connected to the core by plasma jets. The radio morphologies fall in two distinct sub-classes: the Fanaroff-Riley type I (FR I) with extended plumes and tails having $L_{178}< 2\times 10^{26}$ W/Hz and the Fanaroff-Riley type II (FR II) with narrow jets and hotspots having $L_{178} > 2\times 10^{26}$ W/Hz at 178 MHz. Within the [*unification scheme*]{} for radio-loud AGN, FR I and FR II radio galaxies are thought to represent the parent populations of BL Lac objects and radio-loud quasars, respectively. BL Lac objects show clear evidence for relativistic beaming resulting from the bulk relativistic motion of the plasma moving close to the line of sight. If FR I radio galaxies are the plane-of-sky counterparts of BL Lacs, FR I jets should also have bulk relativistic motion. The [*core prominence parameter*]{} $R_{c}$, which is the ratio between core and extended radio flux density, is a beaming indicator. This is because if the core is the unresolved relativistically beamed jet and the lobes are unbeamed then $R_{c}$ becomes a statistical measure of orientation. $R_{c}$ has indeed been shown to correlate with other orientation-dependent properties in the case of FR IIs (eg. Kapahi & Saikia 1982) and in FR Is (eg. Laing et al. 1999). The jet-to-counterjet surface brightness ratio $R_{j}$, is one such parameter, where differences in the surface brightness between the jet and counterjet at a given distance from the nucleus are interpreted as effects of Doppler beaming and dimming respectively, on intrinsically symmetrical flows. The [*primary motivation*]{} for our work comes from the discovery of unresolved optical nuclear components at the centres of radio galaxies by the [*Hubble Space Telescope*]{}. The optical flux of these Central Compact Cores (CCC), (Chiaberge et al. 1999), show a striking linear correlation with the radio core emission. Chiaberge et al. suggest that the CCCs are the optical counterparts of the relativistic radio jet. We study the relationship between the CCC flux densities of FR Is and FR IIs and the core prominence parameter, $R_{c}$, the jet-to-counterjet surface brightness ratios, $R_{j}$; and the X-ray core flux densities. The sample and the data ======================== We used the sample of 27 3CR FR Is in Chiaberge et al. (1999), with their data for the CCC flux densities. 
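The correlation analysis presented below rests on the core prominence $R_c$ and Spearman rank tests applied to such tabulated flux densities. As a minimal illustration of the procedure only, the sketch assumes $S(extended)=S(total)-S(core)$ and uses placeholder arrays rather than values from the actual samples.

```python
import numpy as np
from scipy.stats import spearmanr

def core_prominence(s_core, s_total):
    """R_c = S(core) / S(extended), assuming S(extended) = S(total) - S(core)."""
    s_core, s_total = np.asarray(s_core, float), np.asarray(s_total, float)
    return s_core / (s_total - s_core)

# Placeholder 5 GHz radio flux densities and optical CCC flux densities.
s_core = np.array([0.05, 0.20, 0.01, 0.30, 0.08])
s_total = np.array([1.00, 1.50, 0.80, 2.00, 1.10])
f_optical = np.array([0.020, 0.150, 0.004, 0.250, 0.050])

rc = core_prominence(s_core, s_total)
rho, p_value = spearmanr(np.log10(f_optical), np.log10(rc))
print(rho, p_value)   # rank-correlation coefficient and its significance level
```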
In addition, the CCC flux densities for the FR Is NGC 7052 and NGC 6251 are from Capetti & Celotti (1999) and Hardcastle & Worrall (2000) respectively. The FR II radio galaxies and their CCC optical flux densities are taken from the sample of 26 3CR FR IIs of Chiaberge et al.(2000). The set of BL Lacs and their optical core flux densities are from Capetti & Celotti (1999). The VLBI jet-to-counterjet surface brightness ratios come from the VLBI observations of the sample of Giovannini et al. (1990). (Giovannini et al. 1999 and references therein). The jet-to-counterjet surface brightness ratio, $R_j={B(jet)}/{B(cjet)}$, are for distances of a few pc from the core. The X-ray core flux densities are from the ROSAT data in Hardcastle & Worrall (2000). The core prominence, $R_c ={S(core)}/{S(extended)}$, was calculated using the total and core radio flux densities at 5 GHz. Results ======= Optical CCCs in FR I radio galaxies ----------------------------------- The CCC optical flux density $F_o$, is well correlated with radio core prominence $R_{c}$, the significance of the correlation being at the 0.001 level (Spearman rank test). See Figure 1a. Given that $R_{c}$ is a statistical measure of angle to the line of sight, the correlation suggests that the CCC flux density is orientation dependent in the same way as the core radio emission and it may thus constitute the optical counterpart of the radio synchrotron jet. In Figure 1a we have included a set of BL Lacs, whose extended radio luminosities span the same range as the FR Is, for comparison. The BL Lacs extend the correlation to higher $R_{c}$, as would be expected if FR Is are the parent population of BL Lacs.\ The optical flux density of CCCs $F_o$, and the jet-to-counterjet surface brightness ratios $R_j$, at 5 GHz for FR I radio galaxies show a correlation significant at the 0.01 level (Spearman rank correlation test). See Figure 1b. This further substantiates the claim that the optical CCC is a part of the jet. Optical CCCs in FR II radio galaxies ------------------------------------ The optical core flux density for the sample of FR II radio galaxies shows a weaker correlation with radio core prominence than the FR Is (significance level 0.1; Spearman rank correlation test). See Figure 2. X-ray core components in radio galaxies --------------------------------------- We examine the correlation between the X-ray core flux density and the radio core prominence. See Figure 3. This seems weaker than the previous correlations. The correlation is significant at the 0.2 level for FR Is taken alone while it is significant at the 0.1 level for FR I and FR IIs taken together. Capetti, A., & Celotti, A. 1999, ’Testing the FR I/BL Lac unifying model with HST observations’, , 304, 434-442 Chiaberge, M., Capetti, A., & Celotti, A. 1999, ’The HST view of FR I radio galaxies: evidence for non-thermal nuclear sources’, , 349, 77-87 Chiaberge, M., Capetti, A., & Celotti, A. 2000, ’The HST view of the FR I / FR II dichotomy’, , 355, 873-879 Giovannini, G., Feretti, L., & Comoretto, G. 1990, ’VLBI observations of a complete sample of radio galaxies. I - Snapshot data’, , 358, 159-163 Giovannini, G., Taylor, G. B., Arbizzani, E., Bondi, M., Cotton, W. D., Feretti, L., Lara, L., & Venturi, T. 1999, ’B2 1144+35: A Giant Low-Power Radio Galaxy with Superluminal Motion’, , 522, 101-112 Hardcastle, M. J., & Worrall, D. M. 2000. ’Radio, optical and X-ray nuclei in nearby 3CRR radio galaxies’, , 314, 359-363 Kapahi, V. K., & Saikia, D. J. 
1982, ’Relativistic beaming in the central components of double radio quasars’, JApA, 3, 465-483 Laing, R. A., Parma, P., de Ruiter, H. R., & Fanti, R. 1999, ’Asymmetries in the jets of weak radio galaxies’, , 306, 513-530
--- abstract: 'The sheet current model underlying the software 3D-MLSI package for calculation of inductances of multilayer superconducting circuits, has been further elaborated. The developed approach permits to overcome serious limitations on the shape of the circuits layout and opens the way for simulation of internal contacts or vias between layers. Two models for internal contacts have been considered. They are a hole as a current terminal and distributed current source. Advantages of the developed approach are illustrated by calculating the spatial distribution of the superconducting current in several typical layouts of superconducting circuits. New meshing procedure permits now to implement triangulation for joint projection of all nets thus improving discrete physical model for inductance calculations of circuits made both in planarized and non-planarized fabrication processes. To speed-up triangulation and build mesh of better quality, we adopt known program “Triangle”.' author: - 'M. M. Khapaev' - 'M. Yu. Kupriyanov' bibliography: - 'ht.bib' title: Inductance extraction of superconductor structures with internal current sources --- Introduction ============ The challenges [@Anders; @IARPA] facing the development of the modern digital superconducting electronics urgently require not only the development of new technological solutions [@Tolpygo2014; @Tolpygo22014; @Nagasawa; @Fujima; @Nagasawa2] but also new tools needed to calculate inductances, resulting in topological configurations of designed digital cells. Inductances are the important component of all superconductor digital circuits. Calculation of inductances, currents and fields for layouts in superconductor electronics is important and challenging problem [@GajFeldman; @Fourie2013Status]. Currently several programs are used for inductances calculations [@Lmeter; @CoenradFourier2011; @khapaev20013d]. These programs are intended for different areas [@Fourie2013Status] and utilize different superconducting current models. Recently it was demonstrated [@2014arXiv1408.5828T] that 3-D inductance extractors based on FastHenry [@InducEx; @FASHENRY0; @FastHenry3.0wr] and 3D-MLSI [@khapaev2001inductance; @kupriyanov2010] software can be successfully used for calculations of inductance of various superconducting microstrip-line and stripline inductors having linewidth down to 250 nm in 8-metal layer process developed for fabricating VLSI superconductor circuits. Unfortunately, the existing inductances extraction tools have some limitations. Lmeter [@Lmeter] do not apply, if parts of a film or a wire in a multilayer structure don’t have strong magnetic coupling with other layers in the structure. For example, Lmeter can’t be used for single layer structures and structures without groundplanes. FastHenry tool [@InductExCalibration] needs accuracy calibration and meets problems for holes and groundplanes. It is difficult to use 3D-MLSI [@khapaev2001inductance] for quantitative and qualitative description of the effects caused by current injection through the internal terminals located inside multilayer structures. These terminals are staggered or stacked vias between layers [@2014arXiv1408.5828T] or connections between the films contained Josephson junction. In this paper we attack these problems by improvement of our 3D-MLSI software aimed on removing limitations on using the internal terminals. 
To do that we introduce two new models for current sources and improve the accuracy of our numerical algorithm and program by using the new scheme of FEM triangular meshing aligned to all film boundaries. The scheme allows to do more accurate calculations for non-planarized circuits and has as an option allowing us to use an external program Triangle [@shewchuk96b] for FEM mesh construction. In the first model of internal terminal we declare a hole or any part of hole in a multilayer film as current terminal and define inlet or outlet current on its perimeter. This new current terminal is almost identical to a similar terminal located at the external borders of the film. However, there is the difference. It consists in the fact that the new mutual inductance between current around the hole and current from hole appears. The first model doesn’t allow a current flows under the contact. It isn’t applicable if there are two contacts on same place from top and from bottom of the film. In these cases it is convenient to use the second internal terminal model. We call it “hole as a current source”. In the second model, the area of the film, which is located under the contact (via) is not cut out. It remains an integral part of the film, in which we solve the equations that determine the spatial distribution of the current. These equations are properly modified to include current sources located in the area of inner terminals. The total current provided by the current source is the given value. Advantages of the developed approach are illustrated in the last sections of the paper by calculation of the spatial distribution of the superconducting current in several typical layouts of superconducting circuits. Basic Assumptions ================= We consider multilayer, planar, multi-connected structures, which consist of superconducting (S) films separated by dielectric interlayers. The design can have or have not ground plane that reside under all wires. There are no restrictions on floor plane shapes of the S films. They can contain holes, current terminals on external boundary and current terminals (contacts) in inner areas of S layers. Current distribution in the film can be induced by different sources. These sources can be given full currents circulating around holes or fluxoids trapped in the holes, given full currents between external or internal contacts, and external magnetic field. A single S film with one hole and three current terminals (contacts) is shown in Fig. \[schem\]. It will be used to illustrate new features of the presented version of our 3D-MLSI package. The hole on Fig. \[schem\] traps zero or non-zero flux. Internal contacts can model Josephson junctions, as well as staggered or stacked vias between S layers. Terminal on external boundary (dashed segment) models external wire. For further convenience, let $P$, $P'$ stands for points in 3D space, $r$, $r'$ - for points on 2D plane. Also, consider differential operators $\partial_x={\partial}/{\partial x}$, $\partial_y={\partial}/{\partial y}$, $\nabla=(\partial_x,\partial_y,\partial_z)$, $\nabla_{xy}=(\partial_x,\partial_y)$. $\Delta$ is Laplace operator in 3D and $\Delta_{xy}$ is Laplace operators in 2D space. ![Schematic view of a single layer circuit design having two internal contacts (dashed area), one contact on external boundary and hole. $C_1$ and $C_2$ are boundaries of internal contacts. $C_3$ is contact on external boundary. $C_{ext}$ is external boundary without contacts. 
Currents can flow around holes and from contact to contact.[]{data-label="schem"}](schem.eps) Mathematical Model ================== Rigorous electromagnetic analysis should be started from stationary Maxwell and London equations [@Orlando:book; @van1998principles]: $$\begin{aligned} \lambda^2\,\nabla\times\vec{j}(P)+ \vec{H}(P)+\vec{H}_{ext}(P) = 0,\label{ell} \\ \nabla\times\vec{H}(P)=\vec{j}(P),\label{emx}\end{aligned}$$ where $\lambda$ is London penetration depth, $\vec{H}(P)$ is magnetic field of current $\vec{j}(P),$ $\vec{H}_{ext}(P)$ is external magnetic field. Equations (\[ell\]), (\[emx\]) can be rewritten in the form of volume current integral equations using vector potential, $\vec{A}_{ext}(P)$ for magnetic field $$\begin{aligned} \lambda^2\vec{j}(P)+\frac{1}{4\pi}\int\!\!\!\!\!\int\!\!\!\!\!\int\limits_{V}\frac{\vec{j}(P')}{|P-P'|}dv'+\vec{A}_{ext}(P)= \nabla\chi(P), \quad r\in V \label{ev1} \\ \quad \nabla\cdot\vec{j}(P)=0,\quad \Delta\chi(P)=0, \quad \vec{H}_{ext}(P) = \nabla\times\vec{A}_{ext}(P).\label{ev2}\end{aligned}$$ Here integration is performed over the volume $V$ of all conductors, $\chi(P)$ is a scalar function, which is proportional to phase of superconductor condensate function. Equations (\[ev1\]), (\[ev2\]) together with appropriate boundary conditions can be solved numerically. Typically, to do that the PEEC (Partial Element Equivalent Circuit) method is used. This method was evaluated for normal conductors [@peecRuehli], enhanced for large problems [@FASHENRY0] and recently adopted for superconductors [@FastHenry3.0wr; @CoenradFourier2011]. Boundary conditions for Eqs. (\[ev1\]), (\[ev2\]) are easy formulated for wire-like conductors with external or internal current terminals. Description of holes with trapped fluxoids and large flat structures like ground planes meets some difficulties in using these PEEC-like methods. For superconductors it can cause accuracy, memory and performance problems. Our approach is based on some assumptions concerning dimensions of the circuits. We assume that floor plan dimensions are much larger than the film thicknesses that in turn are less or of the order of London penetration depth. In this case, the volume current density in superconductor can be accurately approximated by a sheet current density. If the assumptions are violated then the accuracy of our approach is reduced. Nevertheless, the method provides sufficient accuracy of calculation in the case where the film thickness is about 2 - 3 penetration depths [@khapaev2001inductance]. Planarity assumptions allow us to introduce sheet current $J(r)$. Let $t=h_2-h_1$ is the thickness of the layer (see Fig. \[schem\]) and $\lambda_S=\lambda^2/t$ is London penetration depth for films. We assume that $j(P)=(j_x(P),j_y(P),0)$ and take average volume current density over the film thickness (Fig. \[schem\]): $$\begin{aligned} \vec{J}(r)=\frac{1}{t}\int_{h_1}^{h_2} \vec{j}(P)dz.\label{avgj}\end{aligned}$$ Then from (\[ev1\]) it follows that $\vec{J}(r)$ satisfies the integral equation: $$\begin{aligned} \lambda_S\vec{J}(r)+\frac{1}{4\pi}\int\!\!\!\!\int\limits_{S}G(r,r')\vec{J}(r')ds'= \nabla_{xy}\chi(r), \label{esh1} \\ \quad \nabla_{xy}\cdot\vec{J}(r)=0,\quad \Delta_{xy}\chi(r)=0. \quad r\in S.\end{aligned}$$ Kernel $G(r,r')$ is result of averaging procedure for (\[ev1\]). For single layer problems it can be taken simply as $$\begin{aligned} G(r,r')=\frac{1}{|r-r'|}. 
\label{G_simple}\end{aligned}$$ For multilayer structures with layers $m$ and $n$ and heights $h_{m,k},h_{n,l}$ [@khapaev2001inductance] $$\begin{aligned} G_{mn}(r,r')=\frac{1}{4}\sum_{k=1}^2\sum_{l=1}^2\left(|r-r'|^2+(h_{m,k}-h_{n,l})^2\right)^{-1/2}. \label{Gmn}\end{aligned}$$ On the next step it is convenient to introduce the [*stream function*]{} $\psi(r)$ $$\begin{aligned} %J_{x}(r) = \frac{\partial \psi(r)}{\partial y} J_{x}(r) = \partial_y \psi(r) , \quad %J_{y}(r) =-\frac{\partial \psi(r)}{\partial x} J_{y}(r) =-\partial_x \psi(r) \label{pssi}\end{aligned}$$ and rewrite (\[esh1\]) in the form [@khapaev2001inductance] $$\begin{aligned} -\lambda_S\Delta_{xy}\psi(r)+\frac{1}{4\pi}\int\!\!\!\!\int\limits_{S}\!\! \left( \nabla_{xy}\psi(r'),\nabla_{xy}'G(r,r')\right)ds_r'+H_{z,ext}(r)=0. \label{mp}\end{aligned}$$ Here $H_{z,ext}(r)$ is component of external magnetic field oriented in $z$ direction. For very thin conductors $G(r,r')$ can be taken in the form (\[G\_simple\]). Equation (\[mp\]) should be supplemented by boundary conditions. These boundary conditions are simple first kind boundary conditions since values of stream function on the boundary are known [@khapaev2001inductance]: $$\begin{aligned} \psi(r)=I_{h,k},\; r\in C_{h,k},\; \label{holes} \\ \psi(r)=F(r),\; r\in C_{ext}. \label{terms}\end{aligned}$$ Here $I_{h,k}$ is the full current circulating around hole $k$ with boundary $C_{h,k}$. On the external boundary $C_{ext}$ function $F(r)$ can be easily evaluated using well-known properties of stream function. Mathematically problem (\[mp\]), (\[holes\]), (\[terms\]) is very similar to boundary problem for Poisson equation. We prefer to solve equation (\[mp\]) instead of (\[esh1\]) since (\[mp\]) easily accounts currents circulating around holes and for reasons of efficiency of numerical computations. After calculation of the $\psi(r)$ function, we can calculate the energy functional, as well as the inductance matrix [@khapaev2001inductance]. Unfortunately $\psi$-function approach meets problems for structures with internal contacts as contacts 1 and 2 in Fig. \[schem\]. This problem is a purely mathematical [@ZRen2003]. It isn’t possible to define stream function for internal source. There are artificial approaches to resolve this problem, which are based on the introduction of the cuts between contours of internal terminals and external boundary. But it is just workaround and not a practical solution. To overcome these difficulties we decompose current density into the sum of excitation current for terminals and screening current: $$\begin{aligned} \vec{J}(r)=\vec{J}_{ex}(r)+\vec{J}_{scr}(r). \label{deco}\end{aligned}$$ For evaluating excitation current $\vec{J}_{ex}(r)$, some techniques are known [@ZRen2003; @Tamburrino2010]. These techniques are based on topological considerations for finite element method meshes and as result produce non-physical currents for so called “thick cuts” for internal sources [@ZRen2003]. In our case it is still difficult to account all full current combinations for calculations of elements of inductance matrix. Fortunately one more physical decomposition (\[deco\]) exists. Physically, it is equivalent to the separation of the total current on the circulating and laminar components. 
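To make this splitting concrete, the toy sketch below decomposes a discrete sheet-current field on a doubly periodic grid into a curl-free (laminar) part derived from a scalar potential and a divergence-free (circulating) remainder. It only illustrates the idea behind the decomposition (\[deco\]); the actual formulation developed in the following paragraphs replaces the periodicity assumed here by the proper conditions on contacts, holes and the external boundary.

```python
import numpy as np

def decompose_periodic(Jx, Jy, dx=1.0):
    """FFT-based Helmholtz splitting of J into grad(phi) plus a divergence-free rest."""
    ny, nx = Jx.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx, d=dx)     # "ik" factors for d/dx
    ky = 2j * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    div_hat = KX * np.fft.fft2(Jx) + KY * np.fft.fft2(Jy)   # Fourier divergence of J
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                                 # avoid division by zero (mean mode)
    phi_hat = div_hat / k2                         # solves laplace(phi) = div J
    phi_hat[0, 0] = 0.0
    Jx_lam = np.real(np.fft.ifft2(KX * phi_hat))   # laminar part = grad(phi)
    Jy_lam = np.real(np.fft.ifft2(KY * phi_hat))
    return (Jx_lam, Jy_lam), (Jx - Jx_lam, Jy - Jy_lam)   # (laminar, circulating)
```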
To implement it, taking into account (\[esh1\]), we define excitation current as $$\begin{aligned} \vec{J}_{ex}(r)=\frac{1}{\lambda_S}\nabla_{xy}\varphi(r), \quad \Delta_{xy}\varphi(r)=0, \quad \frac{1}{\lambda_S}\frac{\partial\varphi}{\partial n}=0,\quad r\in C_{ext},C_{h,k}. \label{Jex}\end{aligned}$$ Equation (\[Jex\]) needs boundary conditions for internal sources and contacts on the external boundary. We consider two approaches for internal sources modeling. In the first model we consider internal contacts as holes. In this case we have two current components. One is flowing across hole boundary and the other is circulating around the hole. Current across boundary for internal and external sources should be presented by Neumann boundary conditions for function $\varphi(r)$: $$\begin{aligned} \frac{1}{\lambda_S}\frac{\partial\varphi}{\partial n}=\frac{I_m}{|C_m|},\quad r\in C_m. \label{bch}\end{aligned}$$ Here $C_m$ is the boundary of $m$-th contact, $I_m$ is full current across $C_m$ and $|C_m|$ is the length of contact. It is assumed that the injection current is distributed uniformly along the perimeter of any internal or external terminals. This assumption is physically justified since in real devices the characteristic dimensions of the terminal is much smaller than $\lambda_{S}$. The first model allows us to investigate new objects such as mutual inductance of hole and contact to this hole. The first model has two disadvantages. The current can’t flow under contact. Also contacts from top and bottom on same place are a problem. In this case first approach brings to two different intersecting holes. Both of these drawbacks are overcame in the second model of the internal terminal. In the second model, the terminal is considered as the locus of local current sources $\vec{J}_{ex}(r)$. $$\begin{aligned} \nabla_{xy} \cdot \vec{J}_{ex}(r)=\frac{I_m}{|S_m|},\quad r\in S_m, \label{div}\end{aligned}$$ where $I_m$ is the full current injected into the area, $|S_m|,$ of internal contact $m.$ For structure in Fig. \[schem\] it brings us to the following boundary problem for function $\varphi(r)$: $$\begin{aligned} \vec{J}_{ex}(r)=\frac{1}{\lambda_S}\nabla_{xy}\varphi(r), \label{Jdiv1} \\ \frac{1}{\lambda_S}\Delta_{xy}\varphi(r)=F(r), \quad F(r)=\frac{I_m}{|S_m|} \quad r\in S_m, \quad F(r)=0 \; r\notin S_m, \label{Jdiv2} \\ \frac{1}{\lambda_S}\frac{\partial\varphi}{\partial n}=0,\quad r\in C_{ext},C_{h,k},\quad \frac{1}{\lambda_S}\frac{\partial\varphi}{\partial n}=\frac{I_3}{|C_3|},\quad r\in C_3. \label{Jdiv3}\end{aligned}$$ From (\[Jex\]) it follows that circulation of $\vec{J}_{ex}(r)$ around any hole equals to zero. For $\vec{J}_{scr}(r)$ from (\[esh1\]) we have $$\begin{aligned} %\hspace*{-2.5cm} -\lambda_S \nabla_{xy} \times \vec{J}_{scr}(r)+ \nonumber \\ \frac{1}{4\pi}\int\!\!\!\!\int\limits_{S}\!\! \left(J_{scr,x}(r')\partial_y G(r,r')-J_{scr,y}(r')\partial_x G(r,r')\right)ds'+F(r)=0 \label{Jscr} \\ \hspace*{-0.5cm}F(r)=\frac{1}{4 \pi \lambda_S}\int\!\!\!\!\int\limits_{S}\!\! 
\left(\partial_x\varphi(r')\partial_y G(r,r')-\partial_y \varphi \partial_x G(r,r')\right)ds'+H_{ext,z}(r)=0 \label{Fr}\end{aligned}$$ After solution of the boundary problem (\[Jdiv1\],\[Jdiv2\],\[Jdiv3\]) for $\varphi(r)$ it is possible to calculate the $F(r)$ function from (\[Fr\]) and reduce the problem to computation of $\vec{J}_{scr}(r)$ making use of the well developed later [@khapaev2001inductance; @meSST; @kupriyanov2010; @TEEconf] stream function approach $$\begin{aligned} J_{scr,x}(r) = \partial_y \psi(r), \quad J_{scr,y}(r) =-\partial_x \psi(r)\end{aligned}$$ with the boundary conditions for holes $$\begin{aligned} \psi(r)=0 \quad r\in C_{ext}, \quad, \psi(r)=0 \quad r\in C_{3}, \quad \psi(r)=I_{h,k},\; r\in C_{h,k}. \label{Jscr_holes}\end{aligned}$$ In accordance with Eq. (14), the vectorial sum of $J_{ex}(r)$ and $J_{scr}(r)$ determines the spatial distribution of the total current $\vec{J}(r)$ in the structure. Knowledge of this distribution allows us to calculate the total energy $E$ $$\begin{aligned} E=\frac{1}{2}\left(\lambda_S \int\!\!\!\!\int\limits_{S}\!\!|\vec{J}(r)|^2+\frac{1}{4\pi}\int\!\!\!\!\int\limits_{S}ds'\int\!\!\!\!\int\limits_{S} (\vec{J}(r),\vec{J}(r'))G(r,r')ds\right), \label{E}\end{aligned}$$ which in turn makes it possible to find the inductance matrix [@khapaev2001inductance]. Numerical technique and program =============================== Finite Elements Method ---------------------- Our basic numerical technique is Finite Element Method (FEM) [@jin2002finite]. We use triangular meshes and linear finite elements. This approach was evaluated for stream function equations (\[Jscr\]) in [@khapaev2001inductance; @khapaev20013d; @meSST]. For Poisson equations (\[Jex\]) and (\[Jdiv1\],\[Jdiv2\],\[Jdiv3\]) FEM implementation is strait forward. There are several CPU time consuming procedures in the algorithm. The first is calculation of FEM approximation of equation (\[Jscr\]) and the right part $F(r)$ in Eq. (\[Fr\]). The next is calculation of full energy defined by the expression (\[E\]). To speed up all of them we introduce matrix of interactions between triangles in FEM mesh. Let $\Delta_i$ and $\Delta_j$ be two cells - triangles in FEM mesh, then elements of interaction matrix are quadruple integrals $$\begin{aligned} a_{ij}=\frac{1}{4\pi}\int\!\!\!\!\int\limits_{\Delta_i}ds'\int\!\!\!\!\int\limits_{\Delta_j}G(r,r')ds. \label{aij}\end{aligned}$$ Half-analytic method for (\[aij\]) evaluation was developed in [@khapaev2001inductance; @TEEconf] and essentially used in the current version of software. Matrix with elements (\[aij\]) allows to perform quick and easy calculation of FEM matrixes and the full energy (\[E\]). The solution of FEM linear system equations for $J_{scr}$ (\[Jscr\]) is the third CPU time consuming procedure since it deals with inversion of large fully populated matrix. FEM solutions for (\[Jex\]) and (\[Jdiv1\],\[Jdiv2\],\[Jdiv3\]) are fast because it based on the use of sparse matrix technique. Nevertheless time of calculations stay acceptable even for rather large problems where FEM matrix dimension can reach $10000$ or more. To reduce the size of FEM matrices and improve accuracy, we use program Triangle [@shewchuk96b] as core engine for FEM mesh construction. New technique improves meshing for overlapped multilayer structures. For that we implement triangulation for joint projection of all nets. This approach improves discrete physical model for planarized and non-planarized fabrication processes. 
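Before turning to the examples, the sparse part of this machinery can be illustrated compactly. The fragment below assembles the standard linear-triangle stiffness matrix and solves a Poisson problem of the type (\[Jdiv2\]) for the excitation potential $\varphi$, with a balanced uniform source and sink over two "contact" triangles. It is a generic FEM sketch written for this text, not code taken from 3D-MLSI; the tiny two-triangle mesh merely stands in for a mesh produced by Triangle.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def assemble_poisson(nodes, triangles, source_per_tri, lambda_s):
    """P1 FEM for (1/lambda_s)*laplace(phi) = F with zero-Neumann boundaries."""
    n = len(nodes)
    K, rhs = sp.lil_matrix((n, n)), np.zeros(n)
    for t, tri in enumerate(triangles):
        (x1, y1), (x2, y2), (x3, y3) = nodes[tri]
        area = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
        b = np.array([y2 - y3, y3 - y1, y1 - y2]) / (2.0 * area)
        c = np.array([x3 - x2, x1 - x3, x2 - x1]) / (2.0 * area)
        Ke = area * (np.outer(b, b) + np.outer(c, c)) / lambda_s
        for i, gi in enumerate(tri):
            rhs[gi] -= source_per_tri[t] * area / 3.0   # weak form brings a minus sign
            for j, gj in enumerate(tri):
                K[gi, gj] += Ke[i, j]
    return K, rhs

# Toy mesh: unit square split into two triangles; current I enters uniformly over
# triangle 0 (the "inlet contact") and leaves over triangle 1 (the "outlet contact").
nodes = np.array([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)])
triangles = np.array([(0, 1, 2), (0, 2, 3)])
I_total, tri_area, lambda_s = 1.0, 0.5, 0.1
source = np.array([I_total / tri_area, -I_total / tri_area])  # balanced source/sink
K, rhs = assemble_poisson(nodes, triangles, source, lambda_s)
K[0, :] = 0.0; K[0, 0] = 1.0; rhs[0] = 0.0   # pin one node: pure-Neumann nullspace
phi = spsolve(K.tocsr(), rhs)
# The sheet excitation current on each triangle is J_ex = (1/lambda_s) * grad(phi).
```

In the full program this sparse solve is complemented by the dense system for $\vec{J}_{scr}$ built from the interaction matrix (\[aij\]).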
For non-planarized processes we can assign a different film height to every triangle in the mesh. Together with formula (\[Gmn\]) this gives a very accurate discrete model of a layout. All these enhancements allow the effective solution of larger problems with smaller FEM meshes and good accuracy. Spatial distribution of supercurrent in typical layouts ======================================================= Hole as terminal ---------------- Consider simple inductance calculations demonstrating the model of a hole as a terminal. The structure we consider is an $8\times 11\mu m$ plate of thickness $0.4\mu m$ with a $2\times 5\mu m$ hole and London penetration depth $\lambda = 0.4\mu m$. In Fig. \[ht1:htb\] the current density and the current direction (indicated by small vanes) are shown. When we calculate the inductances we set the full currents around holes and the full currents flowing across terminals to $I=1 mA$. For a hole this means that a fluxoid $\Phi=L\cdot I$ is trapped in the hole, where $L$ is the inductance of the hole. Fig. \[ht1:1ho\] shows the currents for the self-inductance of the hole. A simple estimate using per-unit-length inductances of coplanar lines of $3\mu m$ width and $2\mu m$ and $5\mu m$ spacing between lines gives $9.9 pH$. In this estimate we calculate the per-unit-length inductances using a 2D program [@KhapaevJr]. These inductances are $1.03 pH/\mu m$ for spacing $2\mu m$ and $1.24 pH/\mu m$ for spacing $3\mu m$. We take strip lengths of $6\mu m$ and $3\mu m$ to account for the corners of the hole. Our result with 3D-MLSI is $10.1 pH$, which matches the estimate well. Fig. \[ht1:1term\] demonstrates the results for a hole as a terminal. All four sides of the hole inject current with uniform density. The current leaves the plate across the bottom boundary. The inductance of this current path is $1.59pH$. Also, we can calculate the mutual inductance of the current circulating around the hole and the current flowing from the hole to the bottom side. This inductance is very small ($3\cdot 10^{-5}$ pH) because the problem is symmetric. The spatial distribution of the current is shown in Fig. \[ht2:1mut\]. We can also consider only one side of the hole as a terminal. The other terminal is part of the right border of the plate. The current distribution for this case is shown in Fig. \[ht2:side\_term\]. The inductance of this current path is $3.27pH$. The mutual inductance between the current around the hole and the terminal-to-terminal current path is $0.084 pH$. Hole as current source ---------------------- The next calculations are performed for the model of a hole as a current source. In this case the contact area is not cut out and there is no hole in the film, but a current source with homogeneous density is present. We consider the same $8\times 11\mu m$ plate with a $2\times 5\mu m$ current source area. The current directions are shown in Fig. \[hv:3v\]; the inductance of this current path is $1.61pH$. The current source can be more complicated. In Fig. \[hv:3cv\] we demonstrate a coil-like source simulating the London current crowding effect in a vertical contact. The inductance in this case is $1.58pH$. Multilayer interferometer ------------------------- The next example is a multilayer interferometer designed for the IPHT RSFQ process [@CF_IPHT]. The design contains three layers, $0.2\mu m$, $0.25\mu m$ and $0.35\mu m$ thick, with $\lambda = 0.9\mu m.$ The distances between the layers are $0.25\mu m$ and $0.37\mu m$. The shapes and dimensions of the nets are presented in Fig. \[intf:intfb\]. We consider a current circulating in all three layers. The first layer is the ground plane, see Fig. \[intf:gp\].
The second layer consists of two square parts, see Fig. \[intf:m2\]. The parts are symmetric, so only the left part is shown in Fig. \[intf:m2\]. The ground plane is connected to the second layer by two contacts, shown in Fig. \[intf:gp\] and Fig. \[intf:m2\] as dashed segments. The second layer, see Fig. \[intf:m2\], is connected to the third layer using the square current source terminals shown in Fig. \[intf:m2\] and Fig. \[intf:m3\] as dashed squares. It is assumed that a uniformly distributed supercurrent is injected into the second layer through the dashed segment located in the left part of the second layer. Then it flows across the left rectangle to the dashed square current source and jumps to the third layer. For the third layer the inlet current source is the left dashed square and the outlet source is the right dashed square. The current then returns symmetrically across the right part of the second layer. The first layer carries all the return current. We calculate the inductance of this closed current loop. It is $12.6 pH$. The inductance of a strip of length $110\mu m$ in the third layer over the ground plane is $11pH$. In Fig. \[intf:mesh\] the mesh of triangles is shown. This mesh is created once for all nets, taking into account all projections onto the bottom layer plane. In this case, the mesh accurately retraces all boundaries of the nets. This adaptive non-regular mesh improves the accuracy of the FEM. Conclusions =========== We developed a new version of the 3D-MLSI software for the calculation of inductances and currents in complex multilayer superconductor structures. It provides higher accuracy and computing performance. We now have a simulation tool that allows the calculation of inductances, currents and fields of practically all superconductor structures. We significantly improved the numerical algorithm of the 3D-MLSI software. To do this we further developed the meshing procedure by using the well-recognized triangulation engine “Triangle” as the core of the meshing. Moreover, we implemented triangulation of the joint projection of all nets, thus improving the discrete physical model for inductance extraction in layouts designed for planarized and non-planarized fabrication processes. We introduced two physical models for the description of internal terminals, i.e. staggered or stacked vias between layers or connections between films containing a Josephson junction. Using simple examples, we have demonstrated their ability to describe the spatial distribution of the currents and to calculate the inductances in structures and devices having contacts between the individual layers in multilayer designs. We would like to thank V.K. Semenov, C.J. Fourie, E.B. Goldobin and L.R. Tagirov for fruitful discussions and C.J. Fourie for a practical test case. M.K. acknowledges partial support by the Program of Competitive Growth of Kazan Federal University. References {#references .unnumbered} ==========
--- abstract: 'The in-medium behavior of the nucleon spectral density including self-energies is revisited within the framework of QCD sum rules. Special emphasis is given to the density dependence of four-quark condensates. A complete catalog of four-quark condensates is presented and relations among them are derived. Generic differences of such four-quark condensates occurring in QCD sum rules for light baryons and light vector mesons are discussed.' author: - | R. Thomas$^{a}$, T. Hilger$^{b}$, B. Kämpfer$^{a,b}$\ $^a$ Forschungszentrum Dresden-Rossendorf, PF 510119, 01314 Dresden, Germany\ $^b$ Institut für Theoretische Physik, TU Dresden, 01062 Dresden, Germany title: 'Four-Quark Condensates in Nucleon QCD Sum Rules' --- Introduction ============ A goal of contemporary hadron physics is to relate the confined quark and gluon degrees of freedom and parameters related to Quantum Chromodynamics (QCD) to the comprehensive hadronic spectrum. Lattice QCD and chiral effective field theory are suitable tools to accomplish this and other goals in exploring the structure of low-energy QCD and properties of hadrons. Another - though not so direct - but successful approach is given by QCD sum rules, originally formulated by Shifman, Vainshtein and Zakharov [@Shifman:1978bx] to describe masses of light vector mesons for example [@Shifman:1978by]. The method since then gained attention in numerous applications, e. g. to calculate masses and couplings of low-lying hadrons, magnetic moments, etc. (cf. e.g. [@Reinders:1984sr; @pk:Narison2004; @Colangelo:2000dp]). Its particular meaning is that numerous hadronic observables are directly linked to a set of fundamental QCD quantities, the condensates and moments of parton distributions. Hadrons are excitations from the ground state. Changes in this state are expected to reflect in a change of hadronic properties, especially in spectral functions and moments thereof related, e.g., to masses of hadrons. Measurements of ”mass modifications” of hadrons in a finite temperature, strongly interacting medium or when situated inside nuclear matter, that means embedded in a bulk of protons and neutrons and baryonic and mesonic resonances, then probe the QCD vacuum (cf. [@Krusche:2006ki] for an experimental overview). The properties of the QCD ground state are mapped to and quantified by a number of condensates, like intrinsic material constants, which partially carry information on symmetry features of the theory. Besides the chiral condensate ${\ensuremath{\left \langle \bar{q} q \right \rangle}}$, up to mass dimension 6 the gluon condensate ${\ensuremath{\left \langle \tfrac{\alpha_s}{\pi} G^2 \right \rangle}}$, the mixed quark-gluon condensate ${\ensuremath{\left \langle \bar{q} g_s \sigma G q \right \rangle}}$, the triple gluon condensate ${\ensuremath{\left \langle g_s^3 G^3 \right \rangle}}$ and structures of the form ${\ensuremath{\left \langle \bar{q} \Gamma q \bar{q} \Gamma q \right \rangle}}$ contribute in vacuum. ($\Gamma$ denotes all possible structures formed by Dirac and Gell-Mann matrices.) We emphasize here the specific role of the latter class of hitherto poorly known condensates, the four-quark condensates. Within the realm of hadron spectroscopy the explanation of the actual numerical value of the nucleon mass is crucial as ingredient for understanding macroscopic matter. The nucleon represents as carrier of mass the hard core of visible matter in the universe and thus is an important source for gravitation. 
Our investigations here are to be considered in line with previous investigations [@Furnstahl:1992pi; @Jin:1992id; @Jin:1993up; @Cohen:1994wm] for nucleons inside cold nuclear matter, which are also discussed in [@Henley:1993nr] and continuously explored in [@Drukarev:1988kd; @Drukarev:1994fw; @Drukarev:2001wd; @Drukarev:2003xa; @Drukarev:2004zg; @Drukarev:2004fn; @Sadovnikova:2005ye; @Sadovnikova:2006te]. The possible extension to further effects at finite temperatures lies beyond the present scope. An advantage of applying QCD sum rules is that effects of small finite baryon density $n$ are described systematically by the change of condensates and the advent of new structures which are absent at zero density. For the nucleon an important dependence of self-energies on four-quark condensates was found. Comparisons with results of chiral effective field theory [@Gross-Boelting:1998jg], where nucleon self-energies show strong cancellation effects (i. e. they change with the same magnitude but have opposite signs) suggest that the relevant four-quark condensates should be weakly density dependent [@Furnstahl:1992pi]. For the $\omega$ meson, however, we recently deduced in [@Thomas:2005dc] evidence for a significant density dependence of a particular combination of four-quark condensates appearing there. This was based on the experimentally found shift of the spectral strength of $\omega$ to lower invariant masses inside nuclear matter, as measured by the CB-TAPS collaboration [@Trnka:2005ey]. Therefore, we will spell out explicitly the four-quark condensates in nucleon sum rules in medium, which up to now are usually given in the factorization approximation or are determined by special models. So one can directly distinguish the four-quark condensate structures in sum rules for vector mesons and baryons. The work is organized as follows. In section 2 we review the operator product expansion for the nucleon and the hadronic model to write out the QCD sum rule equations. The four-quark condensates are discussed in section 3 where an exhaustive list is presented and relations among these condensates are given. Afterwards we present a numerical analysis of their influences in a nucleon sum rule evaluation and compare to other results (section 4). Conclusions can be found in section 5. In the appendices, an explanation of the calculation of an OPE and remarks on four-quark condensate relations are supplemented. QCD Sum Rules for the Nucleon {#qcdsumrulesforthenucleon} ============================= Current-Current Correlator -------------------------- QCD sum rules (QSR) link hadronic observables and expectation values of quark and gluon operators. This allows to determine properties of the low-lying hadronic excitations of the QCD ground state ${\ensuremath{\left | \Psi \right \rangle}}$. It relies on the concept of semi-local quark-hadron duality applied to the time-ordered correlation function $$\Pi (q) = i \int d^4x \, e^{iqx} {\ensuremath{\left \langle \Psi \left | {\ensuremath{T \left [ \eta (x) \bar{\eta} (0) \right ]}} \right | \Psi \right \rangle}} \equiv i \int d^4x \, e^{iqx} \Pi (x) , \label{eq:correlationFunction}$$ which describes the propagation of a hadron created from the vacuum. On one side it can be calculated in terms of quarks and gluons via an operator product expansion (OPE) for large space-like momenta $q^2$. This introduces Wilson coefficients multiplied by local normal ordered expectation values of quark and gluon fields – the QCD condensates. 
Thereby the hadron is assigned an interpolating field $\eta$ which resembles the right quantum numbers and is built from the fundamental degrees of freedom of QCD. On the other side, the interpolating field couples to the hadron excited from the ground state and the correlation function can be related solely to the hadronic properties for $q^2 >0$. By means of analyticity of the correlator $\Pi (q)$, dispersion relations equate both approaches. This leads to the celebrated QCD sum rules [@Shifman:1978bx]. Analysing transformed dispersion relations in a suitable range of momenta allows a determination of hadronic properties. The generalization to nuclear matter with non-vanishing temperature or chemical potentials relies on Gibbs averaged expectation values ${\ensuremath{\left \langle \Psi \left | \ldots \right | \Psi \right \rangle}}$ instead of vacuum expectation values ${\ensuremath{\left \langle 0 \left | \ldots \right | 0 \right \rangle}}$ [@Bochkarev:1985ex]. In what follows we focus on the nucleon, calculate the OPE and discuss the hadronic side; the sum rule finishes this section. Interpolating Fields -------------------- Following an argument of Ioffe [@Ioffe:1981kw] one can write down two interpolating fields representing the nucleon with the corresponding quantum numbers $I(J^{P})=\tfrac{1}{2} (\tfrac{1}{2}^{+} )$, $\epsilon^{abc}[u^T_a C \gamma_\mu u_b ] \gamma_5 \gamma^\mu d_c$ and $\epsilon^{abc}[u^T_a C \sigma_{\mu\nu} u_b ] \gamma_5 \sigma^{\mu\nu} d_c \,$, when restricting to fields that contain no derivatives and couple to spin $\tfrac{1}{2}$ only.[^1] Extended forms of the nucleon current may include derivatives [@Braun:1992jp; @Stein:1994zk], or make use of tensor interpolating fields [@Furnstahl:1995nd; @Leinweber:1995fn] (also used to extrapolate the vacuum nucleon mass via QCD sum rules [@pk:Langwallner2005] to (unphysical) larger values obtained on the lattice, comparable to similar efforts within chiral perturbation theory [@McGovern:2006fm]). The complications in nucleon sum rules can further be dealt with when taking into consideration the coupling of positive and negative parity states to the nucleon interpolating field [@Kondo:2005ur]. In this work, our structures are always written for the proton; by exchanging $u$ and $d$ the neutron is obtained (even the neutron-proton mass difference has been analyzed in this framework [@Yang:1993bp]). As interpolating fields a Fierz rearranged and thus simplified linear combination is widely used [@Furnstahl:1992pi] $$\eta (x) = 2 \epsilon^{abc} \left \{ t [u^T_a(x) C \gamma_5 d_b(x) ] u_c(x) + [u^T_a(x) C d_b(x) ] \gamma_5 u_c(x) \right \} \, , \label{eq:interpolatingFieldNucleonGeneral1}$$ which in the above basis reads $$\label{eq:interpolatingFieldNucleonGeneralInIoffeBasis} \tilde{\eta}(x) = \dfrac{1}{2} \epsilon^{abc} \left \{ (1-t) [u^T_a(x) C \gamma_\mu u_b(x) ] \gamma_5 \gamma^\mu d_c(x) + (1+t) [u^T_a(x) C \sigma_{\mu\nu} u_b(x) ] \gamma_5 \sigma^{\mu\nu} d_c(x) \right \} \, .$$ Both currents, $\eta$ and $\tilde{\eta}$, are related by Fierz transformations whereby in such a straightforward calculation the remaining difference vanishes for symmetry reasons (analog to the exclusion of Dirac structures in [@Ioffe:1981kw] when constructing all possible nucleon fields). The consequence of these two equivalent representations  and  is that two different forms of the OPE arise. On the level of four-quark condensates the identity is not obvious and leads to relations between different four-quark structures. 
There appear constraints on pure flavor four-quark condensates which can be understood also without connection to an OPE (algebraic relations on the operator level). For mixed structures such relations do not follow. This will be discussed in section 3. Our subsequent equations will be given for the ansatz (\[eq:interpolatingFieldNucleonGeneral1\]) with arbitrary mixing parameter $t$. In nucleon sum rule calculations the particular choice of the field with $t=-1$, the so-called Ioffe interpolating field, is preferred for reasons of applicability of the method and numerical stability of the evaluation procedure (cf. also [@Leinweber:1994nm] for a discussion of an optimal nucleon interpolating field; another choice of $t$ would emphasize the negative-parity state in the sum rule [@Jido:1996ia]). Operator Product Expansion -------------------------- Using Wilson’s OPE the correlation function (\[eq:correlationFunction\]) can be represented asymptotically as a series of Wilson coefficients multiplied by expectation values of quark and gluon operators, the condensates. As outlined in appendix \[ap:ope\], where also notations are summarized, these coefficients can be calculated considering quark propagation in a gluon background field which further simplifies in the Fock-Schwinger gauge. Application of Wick’s theorem to  introduces the normal ordered expectation values, which projected on color singlets, Dirac and Lorentz scalars and restricted by the demand for time and parity reversal invariance in cold nuclear matter leads to an expansion into local condensates. The OPE for $\Pi (x)$ and the Fourier transform in (\[eq:correlationFunction\]) are important steps towards the sum rule formulation. Still the correlator can be decomposed into invariant functions. Lorentz invariance and the requested symmetry with respect to time/parity reversal allow the decomposition $$\Pi (q) = \Pi _s (q^2,qv) + \Pi _q (q^2,qv) {\ensuremath{ q \mspace{-8mu} / }} + \Pi _v (q^2,qv) {\ensuremath{ v \mspace{-8mu} / }} \, , \label{eq:invariantDecomposition}$$ where $v$ is the four-velocity vector of the medium. The three invariant functions which accordingly yield three sum rule equations can be projected out by appropriate Dirac traces $$\begin{aligned} \label{eq:decomp-s} \Pi _s (q^2,qv) =& \dfrac{1}{4} {\ensuremath{\mathrm{Tr} \left ( \Pi (q) \right ) }} \, ,\\ \Pi _q (q^2,qv) =& \dfrac{1}{4[q^2v^2-(qv)^2]} \left \{ v^2 {\ensuremath{\mathrm{Tr} \left ( {\ensuremath{ q \mspace{-8mu} / }} \Pi (q) \right ) }} - (qv) {\ensuremath{\mathrm{Tr} \left ( {\ensuremath{ v \mspace{-8mu} / }} \Pi (q) \right ) }} \right \} \, ,\\ \label{eq:decomp-v} \Pi _v (q^2,qv) =& \dfrac{1}{4[q^2v^2-(qv)^2]} \left \{ q^2 {\ensuremath{\mathrm{Tr} \left ( {\ensuremath{ v \mspace{-8mu} / }} \Pi (q) \right ) }} - (qv) {\ensuremath{\mathrm{Tr} \left ( {\ensuremath{ q \mspace{-8mu} / }} \Pi (q) \right ) }} \right \} \,\end{aligned}$$ and are furthermore decomposed into even $(\mathrm{e})$ and odd $(\mathrm{o})$ parts w.r.t. 
$qv$ $$\label{eq:evenOddDecomposition} \Pi_i (q^2,qv) = \Pi_i^\mathrm{e} (q^2, (qv)^2) + (qv) \Pi_i^\mathrm{o} (q^2, (qv)^2) \, .$$ For the nucleon interpolating field , this leads to $$\begin{aligned} \label{eq:ope-se} \Pi_s^\mathrm{e}(q^2, (qv)^2) = & + \dfrac{ c_1 }{16 \pi^{2} } q^2 \ln (-q^2) {\ensuremath{\left \langle \bar{q} q \right \rangle}} + \dfrac{3c_2}{16 \pi^2} \ln (-q^2) {\ensuremath{\left \langle \bar{q} g_s (\sigma G) q \right \rangle}} \nonumber \\ & + \dfrac{2c_3}{3 \pi^2 v^2} \dfrac{(qv)^2}{q^2} \left ( {\ensuremath{\left \langle \bar{q} (viD)^2 q / v^2 \right \rangle}} + \dfrac{1}{8} {\ensuremath{\left \langle \bar{q} g_s (\sigma G) q \right \rangle}} \right ) \, , \\ \Pi_s^\mathrm{o}(q^2, (qv)^2) = & - \dfrac{1}{3 v^2} \dfrac{1}{q^2} \left \{ c_1 {\ensuremath{\left \langle \bar{q} q \right \rangle}} {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} q \right \rangle}} \right \}_{\rm eff}^{1} \, , \\ \Pi_q^\mathrm{e}(q^2, (qv)^2) = & - \dfrac{c_4}{512 \pi^4} q^4 \ln (-q^2) - \dfrac{c_4}{256 \pi^2} \ln (-q^2) {\ensuremath{\left \langle \dfrac{\alpha_s}{\pi} G^2 \right \rangle}} \nonumber \\ & + \dfrac{c_4}{72 \pi^2 v^2} \left ( 5 \ln (-q^2) - \dfrac{8 (qv)^2}{q^2 v^2} \right ) {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} (viD) q \right \rangle}} \nonumber \\ & - \dfrac{c_4}{1152 \pi^2 v^2} \left ( \ln (-q^2) - \dfrac{4(qv)^2}{q^2 v^2} \right ) {{\ensuremath{\left \langle \dfrac{\alpha_s}{\pi} [ (vG)^2 + (v\tilde{G})^2 ] \right \rangle}}}\nonumber \\ & - \dfrac{1}{6} \dfrac{1}{q^2} \left \{ c_1 {\ensuremath{\left \langle \bar{q} q \right \rangle}}^2 + \dfrac{c_4}{v^2} {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} q \right \rangle}}^2 \right \}_{\rm eff}^{q} \, , \\ \Pi_q^\mathrm{o}(q^2, (qv)^2) = & + \dfrac{c_4}{24 \pi^2 v^2} \ln (-q^2) {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} q \right \rangle}} + \dfrac{c_5}{72 \pi^2 v^2} \dfrac{1}{q^2} {\ensuremath{\left \langle \bar{q} g_s {\ensuremath{ v \mspace{-8mu} / }} (\sigma G) q \right \rangle}} \nonumber \\ & - \dfrac{c_4}{12 \pi^2 v^2} \dfrac{1}{q^2} \left ( 1 + \dfrac{2 (qv)^2}{q^2 v^2} \right ) \left ( {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} (viD)^2 q /v^2 \right \rangle}} + \dfrac{1}{12} {\ensuremath{\left \langle \bar{q} g_s {\ensuremath{ v \mspace{-8mu} / }} (\sigma G) q \right \rangle}} \right ) \, , \\ \Pi_v^\mathrm{e}(q^2, (qv)^2) = & + \dfrac{c_4}{12 \pi^2 v^2} q^2 \ln (-q^2) {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} q \right \rangle}} - \dfrac{c_5}{48 \pi^2 v^2} \ln (-q^2) {\ensuremath{\left \langle \bar{q} g_s {\ensuremath{ v \mspace{-8mu} / }} (\sigma G) q \right \rangle}} \nonumber \\ & + \dfrac{c_4 }{2 \pi^2 v^4} \dfrac{(qv)^2}{q^2} \left ( {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} (viD)^2 q /v^2 \right \rangle}} + \dfrac{1}{12} {\ensuremath{\left \langle \bar{q} g_s {\ensuremath{ v \mspace{-8mu} / }} (\sigma G) q \right \rangle}} \right ) \, , \\ \label{eq:ope-vo} \Pi_v^\mathrm{o}(q^2, (qv)^2) = & + \dfrac{c_4}{288 \pi^2 v^4} \ln (-q^2) {{\ensuremath{\left \langle \dfrac{\alpha_s}{\pi} [ (vG)^2 + (v\tilde{G})^2 ] \right \rangle}}}- \dfrac{5 c_4}{18 \pi^2 v^4} \ln (-q^2) {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} (viD) q \right \rangle}} \nonumber \\ & - \dfrac{1}{3 v^2} \dfrac{1}{q^2} \left \{ \dfrac{c_4}{v^2} {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} q \right \rangle}}^2 
\right \}_{\rm eff}^{v} \, ,\end{aligned}$$ where the $c_i (t)$’s, being polynomials of the mixing parameter $t$, are written out below in section \[sec:sumruleequations\] in the final sum rules. Numerical values for condensates are collected in section \[sec:numericalanalysis\] where sum rules are numerically analyzed. The contributions from four-quark condensates are written here as the usual factorized result denoted by $\{ \ldots \}_{\rm eff}^{1,q,v}$; full expressions which replace and overcome this simplification are the focus of section \[sec:fourquarkcondensates\] (see especially Eqs. - below). Note that, in contrast to the OPE for $\Pi$ for light vector mesons with conserved currents $j_\mu=\tfrac{1}{2} (\bar{u} \gamma_\mu u \pm \bar{d} \gamma_\mu d)$ (for $\omega, \rho$ mesons), four-quark condensates enter already without a factor $\alpha_s$ (the strong coupling) and the chiral condensate $\langle \bar{q} q \rangle$ does not appear in a renormalization invariant combination with the quark mass. Dispersion Relations -------------------- The representation of the correlation function through Wilson coefficients and condensates, valid for large Euclidean momenta $q^2 < 0$, is related by the analyticity of $\Pi (q)$ to the spectral density integrated over real values of the energy $q_0$. This is reflected in the fixed-$\vec{q}$ dispersion relation of the form (up to subtractions not displayed here, and written now in the nuclear matter rest frame only, with $q_0$ as argument) $$\label{eq:dispersionRelationAnsatz} \dfrac{1}{\pi} \int_{-\infty}^{+\infty} d\omega \dfrac{\Delta \Pi (\omega) }{\omega - q_0} = \Pi (q_0) \, ,$$ where $\vec{q}$ is held fixed and the spectral density $\rho (\omega) = \tfrac{1}{\pi} \Delta \Pi (\omega)$ enters as the discontinuity of the correlator $\Pi$ on the real axis $$\Delta \Pi (\omega) = \dfrac{1}{2 i} \lim_{\epsilon \rightarrow 0} \left [ \Pi (\omega +i\epsilon) - \Pi (\omega -i\epsilon) \right ] \, .$$ Although dispersion relations could require polynomial subtractions enforcing convergence, such finite polynomials vanish under the Borel transformation $\mathcal{B}: f(q_0^2) \rightarrow \tilde{f}({\ensuremath{ \mathcal{M} }}^2)$ and need not be considered here. The correlation function is decomposed into even and odd parts using  for $v =(1, 0,0,0)$ defining the rest frame of nuclear matter, with $$\begin{aligned} \Pi^{\mathrm e} (q_0^2) & \equiv \dfrac{1}{2} \left ( \Pi (q_0) + \Pi (-q_0) \right ) \mspace{20mu}= \dfrac{1}{\pi} \int_{-\infty}^{+\infty} d\omega \dfrac{{\omega \Delta} \Pi (\omega) }{\omega^2 - q_0^2} \, , \\ \Pi^{\mathrm o} (q_0^2) & \equiv \dfrac{1}{2q_0} \left ( \Pi (q_0) - \Pi (-q_0) \right ) \mspace{5mu}= \dfrac{1}{\pi} \int_{-\infty}^{+\infty} d\omega \dfrac{\Delta \Pi (\omega) }{\omega^2 - q_0^2} \, ,\end{aligned}$$ which, given the integral representation , become functions of $q_0^2$. The starting point of our sum rule analysis is the special combination $$\label{eq:sumruleansatz} \dfrac{1}{\pi} \int_{-\infty}^{+\infty} d\omega (\omega - \bar{E}) \dfrac{{ \Delta } \Pi (\omega) }{\omega^2 - q_0^2} = \Pi^{\mathrm e} (q_0^2) - \bar{E} \Pi^{\mathrm o} (q_0^2) \, ,$$ where the l.h.s. encodes hadronic properties, while the r.h.s. is subject to the OPE -. Such a combination has been proposed in [@Furnstahl:1992pi] with the motivation to suppress anti-nucleon contributions effectively by a suitable choice of the quantity $\bar{E}$. Having in mind the usual decomposition “resonance + continuum”, we split the l.h.s. 
of the integral into an anti-nucleon continuum, $-\infty < \omega < \omega_-$, anti-nucleon, $\omega_- < \omega < 0$, nucleon, $0 < \omega < \omega_+$, and nucleon continuum, $\omega_+ < \omega < \infty$, and choose $$\label{eq:ebardefinition} \bar{E} = \dfrac{\int_{\omega_-}^0 d\omega \Delta \Pi (\omega) \omega e^{-\omega^2 / {\ensuremath{ \mathcal{M} }}^2}}{\int_{\omega_-}^0 d\omega \Delta \Pi (\omega) e^{-\omega^2 / {\ensuremath{ \mathcal{M} }}^2}} \quad \text{and} \quad E = \dfrac{\int^{\omega_+}_0 d\omega \Delta \Pi (\omega) \omega e^{-\omega^2 / {\ensuremath{ \mathcal{M} }}^2}}{\int^{\omega_+}_0 d\omega \Delta \Pi (\omega) e^{-\omega^2 / {\ensuremath{ \mathcal{M} }}^2}} \, ,$$ where the latter similarly defines the first moment of the Borel weighted spectral density for the positive energy excitation.[^2] This delivers the Borel transformed sum rule $$\begin{split} \label{eq:exactFullSumRule} (E - \bar{E}) \dfrac{1}{\pi} \int_{0}^{\omega_+} d \omega \Delta \Pi (\omega) e^{-\omega^2 / {\ensuremath{ \mathcal{M} }}^2} = \tilde{\Pi}^\mathrm{e} ({\ensuremath{ \mathcal{M} }}^2) - \dfrac{1}{\pi} \int_{\omega_+}^{\infty} d \omega \omega \Pi^\mathrm{e}_\mathrm{per} (\omega) e^{-\omega^2 / {\ensuremath{ \mathcal{M} }}^2} \\ - \bar{E} \left \{ \tilde{\Pi}^\mathrm{o} ({\ensuremath{ \mathcal{M} }}^2) - \dfrac{1}{\pi} \int_{\omega_+}^{\infty} d \omega \Pi^\mathrm{o}_\mathrm{per} (\omega) e^{-\omega^2 / {\ensuremath{ \mathcal{M} }}^2} \right \} + \dfrac{1}{\pi} \int_{\omega_-}^{-\omega_+} d \omega \Delta \Pi (\omega) [\omega - \bar{E}] e^{-\omega^2 / {\ensuremath{ \mathcal{M} }}^2} \, . \end{split}$$ The continuum contributions $\Pi^\mathrm{e,o}_\mathrm{per} (\omega) \equiv \Delta \Pi (\omega) \mp \Delta \Pi (-\omega)$ are arranged on the r.h.s. with the reasoning of employing the semi-local duality hypothesis: For the integrated continuum the same asymptotic behavior is assumed for the correlation functions of hadronic and quark/gluon degrees of freedom in the limit of large energies. These integrals are then extended to the respective continuum thresholds $\omega_\pm$. Typically only the logarithmic terms in $\Pi$ provide discontinuities which enter the continuum integrals. To summarize, Eq.  exhibits the typical structure of QCD sum rules: the hadronic properties on the l.h.s., i.e. the low-lying nucleon spectral function, are thought to be given by the operator product representation of $\Pi$ including condensates on the r.h.s. The last term on the r.h.s. accounts for asymmetric continuum thresholds, i.e. $\omega_- \neq -\omega_+$, and can be estimated by semi-local quark-hadron duality. It should be emphasized that the given sum rule is for a certain, weighted moment of a part of the nucleon spectral function. Without further assumptions, local properties of $\Delta \Pi(\omega)$ cannot be deduced. Note also that in this form the anti-nucleon enters inevitably the sum rule. The reasoning behind the choice of  with  is that in mean field approximation, where self-energy contributions in the propagator are real and energy-momentum independent (cf. also [@Furnstahl:1992pi]), the pole contribution of the nucleon propagator $G(q) = ( {\ensuremath{ q \mspace{-8mu} / }} - M_N - \Sigma )^{-1}$ can be written as $$G(q) = \dfrac{1}{1-\Sigma_q} \dfrac{{\ensuremath{ q \mspace{-8mu} / }} + M_N^* - {\ensuremath{ v \mspace{-8mu} / }} \Sigma_{v}}{(q_0 - E_+)(q_0 - E_-)}\, . 
\label{eq:nucleonPropagator}$$ Pauli corrections to positive-energy baryons and propagation of holes in the Fermi sea give rise to an additional piece $G_\mathrm{D} (q) \sim \Theta (|\vec{q_F}| - |\vec{q}\,|)$ [@Serot:1984ey] vanishing for nucleon momenta $\vec{q}$ above the Fermi surface $|\vec{q_F}|$ considered here. The self-energy $\Sigma$ is decomposed into invariant structures $\Sigma = \tilde{\Sigma}_s + \Sigma_q {\ensuremath{ q \mspace{-8mu} / }} + \tilde{\Sigma}_v {\ensuremath{ v \mspace{-8mu} / }}$ [@Rusnak:1995ex] (for mean field $\Sigma_q = 0$), where one introduces the scalar self-energy $\Sigma_{s} = M_N^* - M_N$ and the vector self-energy $\Sigma_{v}$, which are related to the decomposition above via $M_N^* = \tfrac{M_N + \tilde{\Sigma}_s}{1-\Sigma_q}$ and $\Sigma_v = \tfrac{\tilde{\Sigma}_v}{1-\Sigma_q}$ [@Furnstahl:1992pi]. In the rest frame of nuclear matter the energy of the nucleon is $E_+$, and correspondingly $E_-$ that of the antinucleon excitation, where $E_\pm = \Sigma_v \pm \sqrt{\vec{q}\;^{2} + M_N^{*2}}$. Since the sum rule explicitly depends on the nucleon momentum, however, the self-energy $\Sigma$ as well as the invariant structures $\Sigma_i$ and derived quantities now acquire a momentum dependence and become functions of the Lorentz invariants $q^2$, $qv$ and $v^2$, extending mean field theory towards the relativistic Hartree-Fock approximation [@Serot:1984ey]. Eq.  gives rise to a discontinuity $\Delta G (q_0) = \tfrac{1}{2 i} \lim_{\epsilon \rightarrow 0} ( G(q_0+i\epsilon) - G(q_0-i\epsilon) )$ with a simple pole structure $$\Delta G (q_0) = \dfrac{\pi}{1-\Sigma_q} \dfrac{{\ensuremath{ q \mspace{-8mu} / }} + M_N^* - {\ensuremath{ v \mspace{-8mu} / }} \Sigma_v}{E_+ - E_-} \left ( \delta (q_0 - E_- ) - \delta (q_0 - E_+ ) \right ) \, ,$$ where the general expression, Eq. , identifies $\bar{E}$ with the anti-nucleon pole energy $E_-$ for all three Dirac structures (analogously, $E$ is identified with $E_+$). Then the l.h.s. of the sum rule  reads $$\label{eq:transitionProptoGeneral} ( E_+ - E_- ) \dfrac{1}{\pi} \int_{0}^{\omega_+} d \omega \Delta \Pi (\omega) e^{-\omega^2 / {\ensuremath{ \mathcal{M} }}^2} = - \dfrac{\lambda_N^2}{1-\Sigma_q} ( {\ensuremath{ q \mspace{-8mu} / }} + M_N^* - {\ensuremath{ v \mspace{-8mu} / }} \Sigma_v) e^{-E_+^2 / {\ensuremath{ \mathcal{M} }}^2} \, .$$ Here, $\lambda_N$ enters through the transition from Eq.  to the nucleon propagator via insertion of a complete basis, retaining only the nucleon state $|q,s \rangle$ with the relation to the nucleon bispinor $\lambda_N u(p,s) = \langle \Psi |\eta (0)| q,s \rangle$, and can be combined into an effective coupling $\lambda_N^{*2}=\tfrac{\lambda_N^2}{(1-\Sigma_q)}$. More generally, one can interpret Eq.  as a parametrization of the l.h.s. of , where integrated information on $\Delta \Pi$ is mapped onto the quantities $M_N^*$, $\Sigma_{v}$, $\lambda_N^{*2}$ and $E_\pm$, which are the subject of our further analysis, by virtue of Eqs. 
- $$\begin{aligned} \label{eq:decomposedSumRule1s} - \lambda_N^{*2} M_N^* e^{-E_+^2 / {\ensuremath{ \mathcal{M} }}^2} & = ( E_+ - E_- ) \dfrac{1}{\pi} \int_{0}^{\omega_+} d \omega \Delta \Pi_s (\omega) e^{-\omega^2 / {\ensuremath{ \mathcal{M} }}^2} \, , \\ - \lambda_N^{*2} e^{-E_+^2 / {\ensuremath{ \mathcal{M} }}^2} & = ( E_+ - E_- ) \dfrac{1}{\pi} \int_{0}^{\omega_+} d \omega \Delta \Pi_q (\omega) e^{-\omega^2 / {\ensuremath{ \mathcal{M} }}^2} \, , \\ \label{eq:decomposedSumRule1v} \lambda_N^{*2} \Sigma_v e^{-E_+^2 / {\ensuremath{ \mathcal{M} }}^2} & = ( E_+ - E_- ) \dfrac{1}{\pi} \int_{0}^{\omega_+} d \omega \Delta \Pi_v (\omega) e^{-\omega^2 / {\ensuremath{ \mathcal{M} }}^2} \, .\end{aligned}$$ Due to the supposed pole structure in  the self-energy components are related to $E_\pm$ (or more generally to $E$ and $\bar{E}$) and the relations from distinct Dirac structures are coupled equations. The general spectral integrals given here, however, do not yet relate the unknown quantities to each other, so that our numerical results presented here are not completely independent of the given nucleon propagator ansatz. These relations also highlight the dependence on the Borel mass ${\ensuremath{ \mathcal{M} }}$, which one gets rid of by averaging in an appropriate Borel window. In [@Griegel:1994xb] it has been pointed out that $\Pi$ also contains chiral logarithms, e.g. $ \overset{\mspace{12mu} \circ \mspace{3mu} 2}{m}_{\mspace{-5mu} \pi} \log \overset{\mspace{12mu} \circ \mspace{3mu} 2}{m}_{\mspace{-5mu} \pi}$, which, however, do not appear in the chiral perturbation theory expression for $M_N$. It was argued [@Lee:1994hs; @Birse:1996fw] that low-lying continuum-like $\pi N$ excitations around $M_N$ cancel such unwanted pieces. In this respect, the parameters $M_N^*$, $\Sigma_q$, $\Sigma_v$ in - can hardly be identified with pure nucleon pole characteristics, but should rather be considered as a measure of the integrated strength of nucleon-like excitations in a given interval. Moreover, many hadronic models point to a quite distributed strength or even multi-peak structures (e.g. [@Post:2003hu]). The importance of an explicit inclusion of scattering contributions in the interval $0 \ldots \omega_+$ has been demonstrated in [@Koike:1993sq; @Mallik:2001ft; @Adami:1992at] for finite temperature effects on the in-medium nucleon or in [@pk:Morath2001] for the in-medium $D^+$ when trying to isolate the pure pole contribution. In vacuum QCD sum rules for baryons, e.g. the nucleon, an improvement of the continuum treatment is achieved by the inclusion of negative-parity states, which are described by a given correlation function on the same footing as the corresponding positive-parity states [@Jido:1996ia; @Jido:1996zw; @Oka:1996zz; @Kondo:2005ur; @Kondo:2006xz]. Resorting to integrated strength distributions avoids these problems, but loses the tight relation to simple pole parameters. Sum Rule Equations {#sec:sumruleequations} ------------------ Eq. (\[eq:exactFullSumRule\]) is the sum rule we are going to evaluate with respect to the identifications motivated above. 
Inserting the decomposition (\[eq:invariantDecomposition\]) with - we arrive at the three coupled sum rule equations $$\begin{aligned} \label{eq:sumRuleEquation_s} \lambda_N^{*2} M_N^* e^{-(E_+^2 - \vec{q}\;^{2}) / {\ensuremath{ \mathcal{M} }}^2} &= \mspace{57mu}A_1 {\ensuremath{ \mathcal{M} }}^4 + A_2 {\ensuremath{ \mathcal{M} }}^2 + A_3 \, ,\\ \label{eq:sumRuleEquation_q} \lambda_N^{*2} e^{-(E_+^2 - \vec{q}\;^{2}) / {\ensuremath{ \mathcal{M} }}^2} &= B_0 {\ensuremath{ \mathcal{M} }}^6 \mspace{60mu }+ B_2 {\ensuremath{ \mathcal{M} }}^2 + B_3 + B_4 / {\ensuremath{ \mathcal{M} }}^2 \, ,\\ \label{eq:sumRuleEquation_v} \lambda_N^{*2} \Sigma_v e^{-(E_+^2 - \vec{q}\;^{2}) / {\ensuremath{ \mathcal{M} }}^2} &= \mspace{60mu }C_1 {\ensuremath{ \mathcal{M} }}^4 + C_2 {\ensuremath{ \mathcal{M} }}^2 + C_3 \, ,\end{aligned}$$ with coefficients $$\begin{aligned} \label{eq:sumRuleCoefficients} A_1 &= - \dfrac{c_1}{16\pi^2} E_1 {\ensuremath{\left \langle \bar{q} q \right \rangle}} \, , \nonumber\\ A_2 &= - \dfrac{3c_2}{16\pi^2} E_0 {\ensuremath{\left \langle g_s \bar{q} \sigma G q \right \rangle}} \, , \nonumber\\ A_3 &= -\dfrac{2c_3}{3\pi^2} \vec{q}\;^{2} \left ( {\ensuremath{\left \langle \bar{q} iD_0 iD_0 q \right \rangle}} + \dfrac{1}{8} {\ensuremath{\left \langle g_s \bar{q} \sigma G q \right \rangle}} \right ) -\dfrac{1}{3} E_- \left \{ c_1 {\ensuremath{\left \langle \bar{q} q \right \rangle}} {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} q \right \rangle}} \right \}_{\rm eff}^{1} \, , \nonumber\\ B_0 &= \dfrac{c_4}{256 \pi^4} E_2 \, , \nonumber\\ B_2 &= \dfrac{c_4 E_0}{24 \pi^2} E_- {\ensuremath{\left \langle q^{\dagger} q \right \rangle}} - \dfrac{5 c_4 E_0}{72\pi^2} {\ensuremath{\left \langle q^\dagger i D_0 q \right \rangle}} + \dfrac{c_4 E_0}{256 \pi^2} {\ensuremath{\left \langle \dfrac{\alpha_s}{\pi} G^2 \right \rangle}} \nonumber\\ & + \dfrac{c_4 E_0}{1152 \pi^2} {\ensuremath{\left \langle \dfrac{\alpha_s}{\pi} [(vG)^2 + (v\tilde{G})^2] \right \rangle}} \, , \nonumber\\ B_3 &= \dfrac{c_4 \vec{q}\;^{2}}{9\pi^2} {\ensuremath{\left \langle q^\dagger i D_0 q \right \rangle}} - \dfrac{c_4 \vec{q}\;^{2}}{288 \pi^2} {\ensuremath{\left \langle \dfrac{\alpha_s}{\pi} [(vG)^2 + (v\tilde{G})^2] \right \rangle}} + \dfrac{c_5 E_-}{72 \pi^2} {\ensuremath{\left \langle g_s q^\dagger \sigma G q \right \rangle}} \nonumber\\ & - \dfrac{c_4}{4} E_- \left ( {\ensuremath{\left \langle q^\dagger iD_0 iD_0 q \right \rangle}} + \dfrac{1}{12} {\ensuremath{\left \langle g_s q^\dagger \sigma G q \right \rangle}} \right ) + \dfrac{1}{6} \left \{ c_1 {\ensuremath{\left \langle \bar{q} q \right \rangle}}^2 + \dfrac{c_4}{v^2} {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} q \right \rangle}}^2 \right \}_{\rm eff}^{q} \, , \nonumber\\ B_4 &= \dfrac{c_4}{6\pi^2} \vec{q}\;^{2} \left ( {\ensuremath{\left \langle q^\dagger iD_0 iD_0 q \right \rangle}} + \dfrac{1}{12} {\ensuremath{\left \langle g_s q^\dagger \sigma G q \right \rangle}} \right ) \, , \nonumber\\ C_1 &= \dfrac{c_4}{12\pi^2} E_1 {\ensuremath{\left \langle q^{\dagger} q \right \rangle}} \, , \nonumber\\ C_2 &= \dfrac{5 c_4}{18\pi^2} E_0 E_- {\ensuremath{\left \langle q^{\dagger} iD_0 q \right \rangle}} - \dfrac{c_4 E_0}{288 \pi^2} E_- {\ensuremath{\left \langle \dfrac{\alpha_s}{\pi} [(vG)^2 + (v\tilde{G})^2] \right \rangle}} - \dfrac{c_5 E_0}{48 \pi^2} {\ensuremath{\left \langle g_s q^\dagger \sigma G q \right \rangle}} \, , \nonumber\\ C_3 &= \dfrac{c_4}{2\pi^2} \vec{q}\;^{2} \left ( {\ensuremath{\left \langle q^\dagger 
iD_0 iD_0 q \right \rangle}} + \dfrac{1}{12} {\ensuremath{\left \langle g_s q^\dagger \sigma G q \right \rangle}} \right ) + \dfrac{1}{3} E_- \left \{ \dfrac{c_4}{v^2} {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} q \right \rangle}}^2 \right \}_{\rm eff}^{v} \, ,\end{aligned}$$ and factors $E_j$ emerging from continuum contributions, with the definition $s_0 = \omega_+^2 - \vec{q}\,^{2}$, $$E_0 = \left [ 1 - e^{-s_0 / M^2} \right ], \; E_1 = \left [ 1 - \left ( 1 + \dfrac{s_0}{M^2} \right ) e^{-s_0 / M^2} \right ], \; E_2 = \left [ 1 - \left ( 1 + \dfrac{s_0}{M^2} + \dfrac{s_0^2}{2 M^4} \right ) e^{-s_0 / M^2} \right ] ,$$ and the asymmetric continuum threshold integral in Eq.  neglected. The list is exhaustive for all condensates up to mass dimension 5 in the limit of vanishing quark masses. The coefficients $c_i$ denote general structures due to the mixing of interpolating fields according to  obeying $$\begin{aligned} c_1 & = 7t^2 - 2t - 5, \\ c_2 & = - t^2 + 1, \\ c_3 & = 2t^2 - t - 1, \\ c_4 & = 5t^2 + 2t + 5, \\ c_5 & = 7t^2 + 10t + 7.\end{aligned}$$ Four-Quark Condensates {#sec:fourquarkcondensates} ====================== Formally, four-quark condensates are QCD ground state expectation values of hermitian products of four quark operators which are to be Dirac and Lorentz scalars, color singlets and are to be invariant under time and parity reversal. Thereby we restrict ourselves to equilibrated cold nuclear matter[^3] but do not impose isospin symmetry from the very beginning in view of further applications, such as the proton-neutron mass difference in asymmetric cold nuclear matter (e.g. [@Drukarev:2004fn]). With the following discussion of independent four-quark condensates for arbitrary numbers of flavors we allow for the inclusion of strange quark contributions as well. Physically, the four-quark condensates quantify the correlated production of two quark-antiquark pairs in the physical vacuum. In contrast to the square of the two-quark condensate, which accounts for uncorrelated production of two of these pairs, the four-quark condensates are a measure of the correlation and thus evidence the complexity of the QCD ground state. Especially, deviations from factorization, the approximation of unknown four-quark condensates in terms of the squared chiral condensate justified in the large $N_c$ limit (cf. also [@Leupold:2005eq]), represent effects of these more involved correlations. In this section the classification of four-quark condensates, in the light quark sector, is performed in some detail. Projection and Classification {#sec:fqcClassification} ----------------------------- The projections onto Dirac, Lorentz and color structures lead to all possible in-medium four-quark condensates just as for the example of the non-local two-quark expectation value in appendix \[ap:ope\]. However the situation is even simpler since we are only interested in the mass dimension 6 condensates, so derivatives are not required and all operators in four-quark expectation values are to be taken at $x=0$. 
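As a brief aside, referring back to the sum rule coefficients listed at the end of the previous section: for the Ioffe interpolating field, $t=-1$, the mixing polynomials $c_i(t)$ take particularly simple values, which can be checked in a few lines (our own cross-check, not part of the original derivation).

```python
# Evaluate the mixing polynomials c_1(t)...c_5(t) quoted above at the Ioffe
# choice t = -1 (our own cross-check).
def c_coeffs(t):
    return {"c1": 7*t**2 - 2*t - 5, "c2": -t**2 + 1, "c3": 2*t**2 - t - 1,
            "c4": 5*t**2 + 2*t + 5, "c5": 7*t**2 + 10*t + 7}

print(c_coeffs(-1))  # -> {'c1': 4, 'c2': 0, 'c3': 2, 'c4': 8, 'c5': 4}
```

In particular $c_2(-1)=0$, so the coefficient $A_2$ accompanying the mixed quark-gluon condensate drops out for this choice of interpolating field.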
Using the Clifford bases $O_k \in \{ \mathbbm{1}, \gamma_\mu , \sigma_{\mu < \nu}, i\gamma_5 \gamma_\mu , \gamma_5 \}$ and $O^m \in \{ \mathbbm{1}, \gamma^\mu , \sigma^{\mu < \nu}, i\gamma_5 \gamma^\mu , \gamma_5 \}$ which fulfill ${\ensuremath{\mathrm{Tr} \left ( O_k O^m \right ) }} = 4 \delta^m_k$ one can project out the Dirac indices of products of four arbitrary quark operators $$\left ( \underset{{\ensuremath{ \mathrm{e} }}}{\bar{q}_1}^{a'} \underset{{\ensuremath{ \mathrm{f} }}}{q_2}^{a} \underset{{\ensuremath{ \mathrm{g} }}}{\bar{q}_3}^{b'} \underset{{\ensuremath{ \mathrm{h} }}}{q_4}^{b} \right ) = \dfrac{1}{16} \sum_{k,l=1}^{16} \left ( \bar{q}_1^{a'} O_k q_2^{a} \bar{q}_3^{b'} O^l q_4^{b} \right ) \, \underset{{\ensuremath{ \mathrm{f} }},{\ensuremath{ \mathrm{e} }}}{O}^k \, \underset{{\ensuremath{ \mathrm{h} }},{\ensuremath{ \mathrm{g} }}}{O_l} \, . \label{eq:fqcDiracProjection}$$ Note, here and elsewhere, Dirac indices, if explicitly shown, are attached below the concerned objects. From (\[eq:fqcDiracProjection\]) there are 25 combinatorial Lorentz structures which have to be projected on condensates to obey Lorentz invariance (using the four-velocity $v_\mu$), time/parity reversal and hermiticity. For each of the remaining 5 (10) Lorentz scalars in vacuum (medium) two possible color singlet combinations can be formed using contractions with the unity element and the generators of $SU(N_c=3)$. Thus one obtains the projection formula $$\bar{q}^{a'}_1 q^a_2 \bar{q}^{b'}_3 q^b_4 = \dfrac{1}{9} \left ( \bar{q}_1 q_2 \bar{q}_3 q_4 \right ) \mathbbm{1}_{aa'} \mathbbm{1}_{bb'} + \dfrac{1}{12} \left ( \bar{q}_1 \lambda^A q_2 \bar{q}_3 \lambda^A q_4 \right ) \lambda^B_{aa'} \lambda^B_{bb'} \, .$$ Especially, in the calculation of an operator product expansion for baryons the color condensate structures naturally arise from the product $ \epsilon_{abc} \epsilon_{a'b'c'} \; \delta^{cc'} = \epsilon_{abc} \epsilon_{a'b'c} = \delta_{aa'} \delta_{bb'} - \delta_{ab'} \delta_{a'b} $, hence there the four-quark condensates generally appear in linear combinations of color structures in the form $$\begin{aligned} \epsilon_{abc} \epsilon_{a'b'c} \; \bar{q}^{a'}_1 q^a_2 \bar{q}^{b'}_3 q^b_4 & = \dfrac{2}{3} \left \{ \left ( \bar{q}_1 q_2 \bar{q}_3 q_4 \right ) - \dfrac{3}{4} \left ( \bar{q}_1 \lambda^A q_2 \bar{q}_3 \lambda^A q_4 \right ) \right \} \, . \label{eq:fqcColourDecomposition}\end{aligned}$$ This would imply two condensate structures for each Lorentz scalar term; however, for expectation values with just one flavor (pure flavor four-quark condensates) these structures are not independent. Combining Fierz rearrangement of the Dirac contractions of pure four-quark operators with the rearrangement of the color structures, one derives the transformation equation $$\left ( \bar{u} O_k \lambda^A u \bar{u} O^l \lambda^A u \right ) = - \dfrac{2}{3} \left ( \bar{u} O_k u \bar{u} O^l u \right ) - \dfrac{1}{8} {\ensuremath{\mathrm{Tr} \left ( O_k O_n O^l O^m \right ) }} \left ( \bar{u} O_m u \bar{u} O^n u \right ) \, ,$$ which relates the two different color combinations. 
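Before turning to the matrix form of this transformation, we note that the trace orthogonality ${\ensuremath{\mathrm{Tr} \left ( O_k O^m \right ) }} = 4 \delta^m_k$ of the Clifford bases employed in (\[eq:fqcDiracProjection\]) is easy to verify numerically; the following sketch (our own illustration, Dirac representation, metric $\mathrm{diag}(+,-,-,-)$) builds both bases explicitly and checks the relation.

```python
# Numerical check (our own illustration) of Tr(O_k O^m) = 4 delta_k^m for the
# Clifford basis and its dual used in the Dirac projection formula.
import numpy as np
from itertools import combinations

I2 = np.eye(2)
pauli = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
g = np.diag([1.0, -1.0, -1.0, -1.0])                       # metric tensor

gamma = [np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), -I2]])]            # gamma^0
gamma += [np.block([[np.zeros((2, 2)), s], [-s, np.zeros((2, 2))]]) for s in pauli]  # gamma^1..3
gamma5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]

def low(mu):                      # gamma_mu = g_{mu nu} gamma^nu (diagonal metric)
    return g[mu, mu] * gamma[mu]

def sigma(a, b):                  # sigma = i/2 [a, b]
    return 0.5j * (a @ b - b @ a)

pairs = list(combinations(range(4), 2))                    # mu < nu
O_low = [np.eye(4)] + [low(mu) for mu in range(4)] \
      + [sigma(low(m), low(n)) for m, n in pairs] \
      + [1j * gamma5 @ low(mu) for mu in range(4)] + [gamma5]
O_up  = [np.eye(4)] + [gamma[mu] for mu in range(4)] \
      + [sigma(gamma[m], gamma[n]) for m, n in pairs] \
      + [1j * gamma5 @ gamma[mu] for mu in range(4)] + [gamma5]

T = np.array([[np.trace(a @ b) for b in O_up] for a in O_low])
assert np.allclose(T, 4 * np.eye(16))                      # Tr(O_k O^m) = 4 delta_k^m
print("trace orthogonality verified for all 16 basis elements")
```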
This transformation can be brought in matrix form $\vec{y} = \hat{A} \vec{x}$ with $$\vec{y} = \begin{pmatrix} {\ensuremath{\left \langle \bar{q} \lambda^A q \bar{q} \lambda^A q \right \rangle}} \\ {\ensuremath{\left \langle \bar{q} \gamma_\alpha \lambda^A q \bar{q} \gamma^\alpha \lambda^A q \right \rangle}} \\ {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} \lambda^A q \bar{q} {\ensuremath{ v \mspace{-8mu} / }} \lambda^A q \right \rangle}} /v^2 \\ {\ensuremath{\left \langle \bar{q} \sigma_{\alpha \beta} \lambda^A q \bar{q} \sigma^{\alpha \beta} \lambda^A q \right \rangle}} \\ {\ensuremath{\left \langle \bar{q} \sigma_{\alpha \beta} \lambda^A q \bar{q} \sigma^{\gamma \delta} \lambda^A q \right \rangle}} g^{\alpha}_{\gamma} v^\beta v_\delta /v^2 \\ {\ensuremath{\left \langle \bar{q} \gamma_5 \gamma_\alpha \lambda^A q \bar{q} \gamma_5 \gamma^\alpha \lambda^A q \right \rangle}} \\ {\ensuremath{\left \langle \bar{q} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} \lambda^A q \bar{q} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} \lambda^A q \right \rangle}} /v^2 \\ {\ensuremath{\left \langle \bar{q} \gamma_5 \lambda^A q \bar{q} \gamma_5 \lambda^A q \right \rangle}} \\ {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} \lambda^A q \bar{q} \lambda^A q \right \rangle}} \\ {\ensuremath{\left \langle \bar{q} \gamma_5 \gamma^\alpha \lambda^A q \bar{q} \sigma^{\beta \gamma} \lambda^A q \right \rangle}} i \epsilon_{\alpha \beta \gamma \delta} v^\delta /2 \\ \end{pmatrix} \mspace{20mu} \text{,} \mspace{20mu} \vec{x} = \begin{pmatrix} {\ensuremath{\left \langle \bar{q} q \bar{q} q \right \rangle}} \\ {\ensuremath{\left \langle \bar{q} \gamma_\alpha q \bar{q} \gamma^\alpha q \right \rangle}} \\ {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} q \bar{q} {\ensuremath{ v \mspace{-8mu} / }} q \right \rangle}} /v^2 \\ {\ensuremath{\left \langle \bar{q} \sigma_{\alpha \beta} q \bar{q} \sigma^{\alpha \beta} q \right \rangle}} \\ {\ensuremath{\left \langle \bar{q} \sigma_{\alpha \beta} q \bar{q} \sigma^{\gamma \delta} q \right \rangle}}g^{\alpha}_{\gamma} v^\beta v_\delta /v^2 \\ {\ensuremath{\left \langle \bar{q} \gamma_5 \gamma_\alpha q \bar{q} \gamma_5 \gamma^\alpha q \right \rangle}} \\ {\ensuremath{\left \langle \bar{q} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} q \bar{q} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} q \right \rangle}} /v^2 \\ {\ensuremath{\left \langle \bar{q} \gamma_5 q \bar{q} \gamma_5 q \right \rangle}} \\ {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} q \bar{q} q \right \rangle}} \\ {\ensuremath{\left \langle \bar{q} \gamma_5 \gamma^\alpha q \bar{q} \sigma^{\beta \gamma} q \right \rangle}} i \epsilon_{\alpha \beta \gamma \delta} v^\delta /2 \\ \end{pmatrix} \, ,$$ $$\label{eq:transformationMatrix} \hat{A}= \begin{pmatrix} -7/6 & -1/2 & 0 & -1/4 & 0 & 1/2 & 0 & -1/2 & 0 & 0 \\ -2 & 1/3 & 0 & 0 & 0 & 1 & 0 & 2 & 0 & 0 \\ -1/2 & 1/2 & -5/3 & -1/4 & 1 & 1/2 & -1 & 1/2 & 0 & 0 \\ -6 & 0 & 0 & 1/3 & 0 & 0 & 0 & -6 & 0 & 0 \\ -3/2 & -1/2 & 2 & 1/4 & -2/3 & 1/2 & -2 & -3/2 & 0 & 0 \\ 2 & 1 & 0 & 0 & 0 & 1/3 & 0 & -2 & 0 & 0 \\ 1/2 & 1/2 & -1 & 1/4 & -1 & 1/2 & -5/3 & -1/2 & 0 & 0 \\ -1/2 & 1/2 & 0 & -1/4 & 0 & -1/2 & 0 & -7/6 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -5/3 & -i \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 3i & 1/3 \end{pmatrix} \, .$$ We emphasize that the inverse transformation $\hat{A}^{-1}$ exists. However, structures for baryon sum rules typically are combinations of two color contractions, dictated by Eq. 
, which form components of the vector $$\vec{z} = \dfrac{2}{3} \left ( \vec{x} - \dfrac{3}{4} \vec{y} \right ) = \hat{B} \vec{x} = \dfrac{2}{3} \left ( \hat{A}^{-1} - \dfrac{3}{4} \mathbbm{1} \right ) \vec{y} \, , \quad \hat{B} \equiv \tfrac{2}{3} ( \mathbbm{1} - \tfrac{3}{4} \hat{A}) \, . \label{eq:fqcMatrixEquation}$$ The matrix $\hat{B}$ has the fivefold eigenvalues 0 and 2, and the corresponding eigenspaces both have dimension 5, especially the kernel of $\hat{B}$ spanned by the eigenvectors to eigenvalue 0. The fact that the kernel contains more than the null vector implies that $\hat{B}$ has no inverse. The transformation of this equation into the basis of eigenvectors yields a new vector $\vec{z}\,'$ where 5 elements are to be zero. Written in components of $\vec{z}$ these relations are $$\begin{aligned} \label{eq:fqcConstraints1} z_2 + z_6 & = 0 \, , \\ 4 z_1 - 2 z_2 - z_4 & = 0 \, , \\ 2 z_1 - z_4 + 2 z_8 & = 0 \, , \\ z_1 - z_3 - z_5 + z_7 & = 0 \, , \\ \label{eq:fqcConstraints5} z_9 - i z_{10} & = 0 \, .\end{aligned}$$ The first three conditions occur already in the vacuum set, the latter two ones are additional in the medium case. Of course, the conditions can be written differently, e.g., the second and third line may be conveniently combined to $ z_1 - z_2 - z_8 = 0$ for applications. An alternative derivation of these relations is presented in appendix \[ap:constraints\]. The relations (\[eq:fqcConstraints1\])-(\[eq:fqcConstraints5\]) have two important consequences: firstly, they allow to simplify pure flavor four-quark condensates in baryon sum rules; secondly, since Eq. (\[eq:fqcMatrixEquation\]) can not be inverted, they forbid a direct translation from pure flavor four-quark condensates in baryon sum rules at the order $\alpha_s^0$ to those which occur e.g. in sum rules for light vector mesons in the order $\alpha_s^1$. Four-Quark Condensates in the Nucleon QCD Sum Rule -------------------------------------------------- We have now provided all prerequisites to specify the four-quark condensates which occur in the sum rule (\[eq:exactFullSumRule\]). The full expressions for the four-quark condensates in the order $\alpha_s^0$, abbreviated in  so far symbolically, are $$\begin{aligned} \label{eq:fqcList_s} \left \{ c_1 {\ensuremath{\left \langle \bar{q} q \right \rangle}} {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} q \right \rangle}} \right \}_{\rm eff}^{1} & = \dfrac{3}{2} \epsilon_{abc} \epsilon_{a'b'c} \left ( - 2 c_2 {\ensuremath{\left \langle \bar{u}^{a'} {\ensuremath{ v \mspace{-8mu} / }} u^{a} \bar{u}^{b'} u^{b} \right \rangle}} + c_6 {\ensuremath{\left \langle \bar{u}^{a'} {\ensuremath{ v \mspace{-8mu} / }} u^{a} \bar{d}^{b'} d^{b} \right \rangle}} - 3 c_2 {\ensuremath{\left \langle \bar{u}^{a'} u^{a} \bar{d}^{b'} {\ensuremath{ v \mspace{-8mu} / }} d^{b} \right \rangle}} \right . \nonumber \\ & \left . 
+ c_7 {\ensuremath{\left \langle \bar{u}^{a'} \gamma_5 \gamma_\kappa u^{a} \bar{d}^{b'} \sigma_{\lambda \pi} d^{b} \epsilon^{\kappa \lambda \pi \xi} v_\xi \right \rangle}} \right ) \, ,\end{aligned}$$ $$\begin{aligned} \label{eq:fqcList_q} \left \{ c_1 {\ensuremath{\left \langle \bar{q} q \right \rangle}}^2 + \dfrac{c_4}{v^2} {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} q \right \rangle}}^2 \right \}_{\rm eff}^{q} & = \epsilon_{abc} \epsilon_{a'b'c} \left ( 2 c_{9} {\ensuremath{\left \langle \bar{u}^{a'} \gamma_\tau u^{a} \bar{u}^{b'} \gamma^\tau u^{b} \right \rangle}} - 2 c_{9} {\ensuremath{\left \langle \bar{u}^{a'} {\ensuremath{ v \mspace{-8mu} / }} u^{a} \bar{u}^{b'} {\ensuremath{ v \mspace{-8mu} / }} u^{b} / v^2 \right \rangle}} \right . \nonumber \\ & + 4 t {\ensuremath{\left \langle \bar{u}^{a'} \gamma_5 \gamma_\tau u^{a} \bar{u}^{b'} \gamma_5 \gamma^\tau u^{b} \right \rangle}} - 4 t {\ensuremath{\left \langle \bar{u}^{a'} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} u^{a} \bar{u}^{b'} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} u^{b} / v^2 \right \rangle}} \nonumber \\ & - 9 c_2 {\ensuremath{\left \langle \bar{u}^{a'} u^{a} \bar{d}^{b'} d^{b} \right \rangle}} + \dfrac{9}{2} c_2 {\ensuremath{\left \langle \bar{u}^{a'} \sigma_{\kappa \lambda} u^{a} \bar{d}^{b'} \sigma^{\kappa \lambda} d^{b} \right \rangle}} - 9 c_2 {\ensuremath{\left \langle \bar{u}^{a'} \gamma_5 u^{a} \bar{d}^{b'} \gamma_5 d^{b} \right \rangle}} \nonumber \\ & + c_{10} {\ensuremath{\left \langle \bar{u}^{a'} \gamma_\tau u^{a} \bar{d}^{b'} \gamma^\tau d^{b} \right \rangle}} - 2 c_{9} {\ensuremath{\left \langle \bar{u}^{a'} {\ensuremath{ v \mspace{-8mu} / }} u^{a} \bar{d}^{b'} {\ensuremath{ v \mspace{-8mu} / }} d^{b} / v^2 \right \rangle}} \nonumber \\ & \left . + c_{8} {\ensuremath{\left \langle \bar{u}^{a'} \gamma_5 \gamma_\tau u^{a} \bar{d}^{b'} \gamma_5 \gamma^\tau d^{b} \right \rangle}} - 4 t {\ensuremath{\left \langle \bar{u}^{a'} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} u^{a} \bar{d}^{b'} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} d^{b} / v^2 \right \rangle}} \right ) \, ,\end{aligned}$$ $$\begin{aligned} \label{eq:fqcList_v} \left \{ \dfrac{c_4}{v^2} {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} q \right \rangle}}^2 \right \}_{\rm eff}^{v} & = \epsilon_{abc} \epsilon_{a'b'c} \left ( - c_{9} {\ensuremath{\left \langle \bar{u}^{a'} \gamma_\tau u^{a} \bar{u}^{b'} \gamma^\tau u^{b} \right \rangle}} + 4 c_{9} {\ensuremath{\left \langle \bar{u}^{a'} {\ensuremath{ v \mspace{-8mu} / }} u^{a} \bar{u}^{b'} {\ensuremath{ v \mspace{-8mu} / }} u^{b} / v^2 \right \rangle}} \right . \nonumber \\ & - 2 t {\ensuremath{\left \langle \bar{u}^{a'} \gamma_5 \gamma_\tau u^{a} \bar{u}^{b'} \gamma_5 \gamma^\tau u^{b} \right \rangle}} + 8 t {\ensuremath{\left \langle \bar{u}^{a'} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} u^{a} \bar{u}^{b'} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} u^{b} / v^2 \right \rangle}} \nonumber \\ & - c_{9} {\ensuremath{\left \langle \bar{u}^{a'} \gamma_\tau u^{a} \bar{d}^{b'} \gamma^\tau d^{b} \right \rangle}} + 4 c_{9} {\ensuremath{\left \langle \bar{u}^{a'} {\ensuremath{ v \mspace{-8mu} / }} u^{a} \bar{d}^{b'} {\ensuremath{ v \mspace{-8mu} / }} d^{b} / v^2 \right \rangle}} \nonumber \\ & \left . 
- 2 t {\ensuremath{\left \langle \bar{u}^{a'} \gamma_5 \gamma_\tau u^{a} \bar{d}^{b'} \gamma_5 \gamma^\tau d^{b} \right \rangle}} + 8 t {\ensuremath{\left \langle \bar{u}^{a'} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} u^{a} \bar{d}^{b'} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} d^{b} / v^2 \right \rangle}} \right ) \, .\end{aligned}$$ Here, additional polynomials which express the mixing of interpolating fields are $$\begin{aligned} c_6 &= t^2-2t+1,\\ c_7 &= t^2-t,\\ c_{8} &= 9t^2+10t+9,\\ c_{9} &= t^2+1,\\ c_{10} &= 11t^2+6t+11.\end{aligned}$$ These expressions extend the non-factorized four-quark condensates for the nucleon in vacuum listed in [@Koike:1993sq; @Thomas:2005wc]. Factorization and Parametrization of Four-Quark Condensates ----------------------------------------------------------- Up to now we have introduced all possible four-quark condensates in the light quark sector and written out explicitly the structures which appear in the nucleon sum rule. In such a way the sum rule equations of the type employed, e.g. in [@Cohen:1994wm], are equipped with complete four-quark condensates. We evaluate now the sum rule equations with the focus on these particular combinations of four-quark condensates. So we are faced with the common problem of the poor knowledge of four-quark condensates. Usually assuming the vacuum saturation hypothesis or resorting to the large $N_c$ limit the four-quark condensates are factorized into products of condensates with two quark operators. The factorization of four-quark condensates allows to set the proper units, however its reliability is a matter of debate. For instance, [@Birse:1996qp] state that the four-quark condensates in the nucleon sum rule are the expectation value of a chirally invariant operator, while $\langle \bar{q} q \rangle ^2$ is not invariant and thus a substitution by the factorized form would be inconsistent with the chiral perturbation theory expression for the nucleon self-energy. The four-quark condensates breaking chiral symmetry might have a meaningful connection to the chiral condensate but for the chirally invariant structures such a closer relation to ${\ensuremath{\left \langle \bar{q} q \right \rangle}}$ is not clear [@Leupold:2006ih]. Moreover, for nucleon sum rules at finite temperature $T$ (and vanishing chemical potential) it was argued in [@Koike:1993sq] that the four-quark condensates are $T$ independent quite different from the behavior of ${\ensuremath{\left \langle \bar{q} q \right \rangle}}^2$ which is why a naive factorization would lead to artificial temperature effects in the nucleon mass. For numerical purposes it is convenient to correct the values deduced from factorization by factors $\kappa$ and examine the effect of these correction factors on predictions from QCD sum rules. In this section the four-quark condensates classified so far in general are spelled out and the parametrization with a set of quantities $\kappa$ is defined. In doing so one includes a density dependent factor $\kappa (n)$ in the factorized result $${\ensuremath{\left \langle \bar{q}_{f1} \Gamma_1 \mathbbm{C}_1 q_{f1} \bar{q}_{f2} \Gamma_2 \mathbbm{C}_2 q_{f2} \right \rangle}} = \kappa (n) {\ensuremath{\left \langle \bar{q}_{f1} \Gamma_1 \mathbbm{C}_1 q_{f1} \bar{q}_{f2} \Gamma_2 \mathbbm{C}_2 q_{f2} \right \rangle}}_\mathrm{fac} \, ,$$ where $\kappa$ and the following parametrization depend on the specific condensate structure. 
In linear density approximation this product ansatz obtains contributions both from the expansion $\kappa (n) = \kappa^{(0)} + \kappa^{(1)} n$ with $\kappa^{(1)} = \tfrac{\partial \kappa (0) }{\partial n}$ and from the linearized, factorized four-quark condensate expression $ {\ensuremath{\left \langle \bar{q}_{f1} \Gamma_1 \mathbbm{C}_1 q_{f1} \bar{q}_{f2} \Gamma_2 \mathbbm{C}_2 q_{f2} \right \rangle}}_\mathrm{fac} = a + b n. $ If $\kappa^{(0)}=1$, then $\kappa^{(1)}=0$ recovers the usual factorization, which means the four-quark condensate behaves like the product of two two-quark condensates; $\kappa^{(1)}>0$ represents a stronger density dependence with respect to the factorization and vice versa. Inserting both expansions one can also describe the total density dependence of the condensates by the combination ${\ensuremath{\kappa^{\rm med}_{ \rm }}} = \kappa^{(0)} + \tfrac{a}{b} \kappa^{(1)}$, $${\ensuremath{\left \langle \bar{q}_{f1} \Gamma_1 \mathbbm{C}_1 q_{f1} \bar{q}_{f2} \Gamma_2 \mathbbm{C}_2 q_{f2} \right \rangle}} = a \kappa^{(0)} + b {\ensuremath{\kappa^{\rm med}_{ \rm }}} n$$ such that for ${\ensuremath{\kappa^{\rm med}_{ \rm }}}=0$ the condensate is (in first order) independent of density. For condensates with vanishing $a$ or $b$ in factorization we choose $a={\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2$ and $b={\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}} \sigma_{\rm N} / m_q$ as scale to study deviations from zero and denote these instances by ${\ensuremath{\tilde{\kappa}}}$. The classification of possible four-quark condensates is collected together with the specific $\kappa$ parametrization in Tabs. \[tab:listNonMixingFQC\] and \[tab:listMixingFQC\]. ### Non-Flavor Mixing Case {#non-flavor-mixing-case .unnumbered} The condensates which contain only one flavor are listed in Tab. \[tab:listNonMixingFQC\]. From the demand for parity and time reversal invariance only $5$ $(10)$ Dirac and Lorentz scalar four quark operators remained in vacuum (medium). Further, these structures carry color indices and must be projected on colorless objects for which there are two ways. However, since the same flavors occur, both color combinations can be alternatively rearranged via Fierz transformation. Hence, there are only $5$ $(10)$ independent $\kappa$ parameter sets in the Tab. \[tab:listNonMixingFQC\], although both color alternatives are listed. The parameter sets with indices $1,2$ are related by the transformation . ### Flavor Mixing Case {#flavor-mixing-case .unnumbered} Here the condensates containing two quark operator pairs are distinguished by flavor. The numbering is as for the pure flavor structures. However, the conversion of the two color contractions is not possible due to different flavors. Compared to the non-flavor mixing case the missing exchange symmetry of $\bar{q} q$ contractions due to different flavors allows additional placements of Dirac matrices and thus leads to 4 additional condensate structures in medium (see Tab. \[tab:listMixingFQC\]). Therefore, 10 (24) flavor-mixed four quark condensates and thus $\kappa$ parameter pairs appear in vacuum (medium). #### Hence, there exist in medium {vacuum} for $n_f$ flavors without flavor symmetry taken into account $2n_f (6n_f-1)$ {$5n_f^2$} independent four-quark condensates being Lorentz invariant expectation values of hermitian products of four quark operators constrained by time and parity reversal invariance. 
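As a quick numerical cross-check of this counting (our own arithmetic, not part of the original text):

```python
# Independent four-quark condensates for n_f flavors without flavor symmetry,
# as counted above: 2*n_f*(6*n_f - 1) in medium and 5*n_f**2 in vacuum.
def n_fourquark(nf):
    return 2 * nf * (6 * nf - 1), 5 * nf**2   # (medium, vacuum)

for nf in (2, 3):
    print(nf, n_fourquark(nf))  # 2 -> (44, 20), 3 -> (102, 45)
```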
Symmetry under flavor rotation reduces these numbers to $20$ {$10$}, respectively. Finally note that these are also the numbers of necessary ${\ensuremath{\kappa^{\rm med}_{ \rm }}}$ parameters. Since the four-quark condensates in operator product expansions obtained from the medium projections in the limit of vanishing baryon density $n$ should coincide with the vacuum result, this leads by contraction of vacuum and medium projections of four-quark condensates to the relations ${\ensuremath{\kappa^{\rm vac}_{ \rm v',t',a' }}} = \tfrac{1}{4} {\ensuremath{\kappa^{\rm vac}_{ \rm v,t,a }}}$, which have already been included in Tabs. \[tab:listNonMixingFQC\] and \[tab:listMixingFQC\]. Further, Lorentz projections which exist only in medium imply no new ${\ensuremath{\kappa^{\rm vac}_{ \rm }}}$ parameters and so the number of ${\ensuremath{\kappa^{\rm med}_{ \rm }}}$ in medium reduces consistently to the number of ${\ensuremath{\kappa^{\rm vac}_{ \rm }}}$ and four-quark condensates in vacuum. [clr]{}\ Indices & Full condensate & Parametrized Factorization\ & &in Linear Density Approximation\ \ $ {\rm 1s} $ & ${\ensuremath{\left \langle \bar{u} u \bar{u} u \right \rangle}}$ & $ \tfrac{11}{12} \left ( {\ensuremath{\kappa^{\rm vac}_{ \rm 1s }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\kappa^{\rm med}_{ \rm 1s }}} n \xi \right ) $\ $ {\rm 1v} $ & ${\ensuremath{\left \langle \bar{u} \gamma_\alpha u \bar{u} \gamma^\alpha u \right \rangle}}$ & $ - \tfrac{1}{3} \left ( {\ensuremath{\kappa^{\rm vac}_{ \rm 1v }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\kappa^{\rm med}_{ \rm 1v }}} n \xi \right ) $\ $ {\rm 1v'} $ & ${\ensuremath{\left \langle \bar{u} {\ensuremath{ v \mspace{-8mu} / }} u \bar{u} {\ensuremath{ v \mspace{-8mu} / }} u \right \rangle}} /v^2$ & $ - \tfrac{1}{12} \left ( \tfrac{1}{4} {\ensuremath{\kappa^{\rm vac}_{ \rm 1v }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\kappa^{\rm med}_{ \rm 1v' }}} n \xi \right ) $\ $ {\rm 1t} $ & ${\ensuremath{\left \langle \bar{u} \sigma_{\alpha \beta} u \bar{u} \sigma^{\alpha \beta} u \right \rangle}}$ & $ - \left ( {\ensuremath{\kappa^{\rm vac}_{ \rm 1t }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\kappa^{\rm med}_{ \rm 1t }}} n \xi \right ) $\ $ {\rm 1t'} $ & ${\ensuremath{\left \langle \bar{u} \sigma_{\alpha \beta} u \bar{u} \sigma^{\gamma \delta} u \right \rangle}}g^{\alpha}_{\gamma} v^\beta v_\delta /v^2$ & $ - \tfrac{1}{4} \left ( \tfrac{1}{4} {\ensuremath{\kappa^{\rm vac}_{ \rm 1t }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\kappa^{\rm med}_{ \rm 1t' }}} n \xi \right ) $\ $ {\rm 1a} $ & ${\ensuremath{\left \langle \bar{u} \gamma_5 \gamma_\alpha u \bar{u} \gamma_5 \gamma^\alpha u \right \rangle}}$ & $ \tfrac{1}{3} \left ( {\ensuremath{\kappa^{\rm vac}_{ \rm 1a }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\kappa^{\rm med}_{ \rm 1a }}} n \xi \right ) $\ $ {\rm 1a'} $ & ${\ensuremath{\left \langle \bar{u} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} u \bar{u} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} u \right \rangle}} /v^2$ & $ \tfrac{1}{12} \left ( \tfrac{1}{4} {\ensuremath{\kappa^{\rm vac}_{ \rm 1a }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\kappa^{\rm med}_{ \rm 1a' }}} n \xi \right ) $\ $ {\rm 1p} $ & ${\ensuremath{\left \langle \bar{u} \gamma_5 u \bar{u} \gamma_5 u 
\right \rangle}}$ & $ - \tfrac{1}{12} \left ( {\ensuremath{\kappa^{\rm vac}_{ \rm 1p }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\kappa^{\rm med}_{ \rm 1p }}} n \xi \right ) $\ $ {\rm 1vs} $ & ${\ensuremath{\left \langle \bar{u} {\ensuremath{ v \mspace{-8mu} / }} u \bar{u} u \right \rangle}}$ & $ {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm 1vs }}} n \xi $\ $ {\rm 1at} $ & ${\ensuremath{\left \langle \bar{u} \gamma_5 \gamma_\kappa u \bar{u} \sigma_{\lambda \pi} u \right \rangle}} \epsilon^{\kappa \lambda \pi \xi} v_\xi $ & $ {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm 1at }}} n \xi $\ \ $ {\rm 2s} $ & ${\ensuremath{\left \langle \bar{u} \lambda^A u \bar{u} \lambda^A u \right \rangle}}$ & $ - \tfrac{4}{9} \left ( {\ensuremath{\kappa^{\rm vac}_{ \rm 2s }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\kappa^{\rm med}_{ \rm 2s }}} n \xi \right ) $\ $ {\rm 2v} $ & ${\ensuremath{\left \langle \bar{u} \gamma_\alpha \lambda^A u \bar{u} \gamma^\alpha \lambda^A u \right \rangle}}$ & $ - \tfrac{16}{9} \left ( {\ensuremath{\kappa^{\rm vac}_{ \rm 2v }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\kappa^{\rm med}_{ \rm 2v }}} n \xi \right ) $\ $ {\rm 2v'} $ & ${\ensuremath{\left \langle \bar{u} {\ensuremath{ v \mspace{-8mu} / }} \lambda^A u \bar{u} {\ensuremath{ v \mspace{-8mu} / }} \lambda^A u \right \rangle}} /v^2$ & $ - \tfrac{4}{9} \left ( \tfrac{1}{4} {\ensuremath{\kappa^{\rm vac}_{ \rm 2v }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\kappa^{\rm med}_{ \rm 2v' }}} n \xi \right ) $\ $ {\rm 2t} $ & ${\ensuremath{\left \langle \bar{u} \sigma_{\alpha \beta} \lambda^A u \bar{u} \sigma^{\alpha \beta} \lambda^A u \right \rangle}}$ & $ - \tfrac{16}{3} \left ( {\ensuremath{\kappa^{\rm vac}_{ \rm 2t }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\kappa^{\rm med}_{ \rm 2t }}} n \xi \right ) $\ $ {\rm 2t'} $ & ${\ensuremath{\left \langle \bar{u} \sigma_{\alpha \beta} \lambda^A u \bar{u} \sigma^{\gamma \delta} \lambda^A u \right \rangle}}g^{\alpha}_{\gamma} v^\beta v_\delta /v^2$ & $ - \tfrac{4}{3} \left ( \tfrac{1}{4} {\ensuremath{\kappa^{\rm vac}_{ \rm 2t }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\kappa^{\rm med}_{ \rm 2t' }}} n \xi \right ) $\ $ {\rm 2a} $ & ${\ensuremath{\left \langle \bar{u} \gamma_5 \gamma_\alpha \lambda^A u \bar{u} \gamma_5 \gamma^\alpha \lambda^A u \right \rangle}}$ & $ \tfrac{16}{9} \left ( {\ensuremath{\kappa^{\rm vac}_{ \rm 2a }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\kappa^{\rm med}_{ \rm 2a }}} n \xi \right ) $\ $ {\rm 2a'} $ & ${\ensuremath{\left \langle \bar{u} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} \lambda^A u \bar{u} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} \lambda^A u \right \rangle}} /v^2$ & $ \tfrac{4}{9} \left ( \tfrac{1}{4} {\ensuremath{\kappa^{\rm vac}_{ \rm 2a }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\kappa^{\rm med}_{ \rm 2a' }}} n \xi \right ) $\ $ {\rm 2p} $ & ${\ensuremath{\left \langle \bar{u} \gamma_5 \lambda^A u \bar{u} \gamma_5 \lambda^A u \right \rangle}}$ & $ - \tfrac{4}{9} \left ( {\ensuremath{\kappa^{\rm vac}_{ \rm 2p }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\kappa^{\rm med}_{ \rm 2p }}} n \xi \right ) $\ $ {\rm 2vs} $ & ${\ensuremath{\left \langle \bar{u} {\ensuremath{ v \mspace{-8mu} / }} 
\lambda^A u \bar{u} \lambda^A u \right \rangle}}$ & $ {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm 2vs }}} n \xi $\ $ {\rm 2at} $ & ${\ensuremath{\left \langle \bar{u} \gamma_5 \gamma_\kappa \lambda^A u \bar{u} \sigma_{\lambda \pi} \lambda^A u \right \rangle}} \epsilon^{\kappa \lambda \pi \xi} v_\xi $ & $ {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm 2at }}} n \xi $\ \ [clr]{}\ Indices & Full condensate & Parametrized Factorization\ & &in Linear Density Approximation\ \ $ {\rm 3s} $ & ${\ensuremath{\left \langle \bar{u} u \bar{d} d \right \rangle}}$ & $ {\ensuremath{\kappa^{\rm vac}_{ \rm 3s }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\kappa^{\rm med}_{ \rm 3s }}} n \xi $\ $ {\rm 3v} $ & ${\ensuremath{\left \langle \bar{u} \gamma_\alpha u \bar{d} \gamma^\alpha d \right \rangle}}$ & $ {\ensuremath{\tilde{\kappa}^{\rm vac}_{ \rm 3v }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm 3v }}} n \xi $\ $ {\rm 3v'} $ & ${\ensuremath{\left \langle \bar{u} {\ensuremath{ v \mspace{-8mu} / }} u \bar{d} {\ensuremath{ v \mspace{-8mu} / }} d \right \rangle}} /v^2$ & $ \tfrac{1}{4} {\ensuremath{\tilde{\kappa}^{\rm vac}_{ \rm 3v }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm 3v' }}} n \xi $\ $ {\rm 3t} $ & ${\ensuremath{\left \langle \bar{u} \sigma_{\alpha \beta} u \bar{d} \sigma^{\alpha \beta} d \right \rangle}}$ & $ {\ensuremath{\tilde{\kappa}^{\rm vac}_{ \rm 3t }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm 3t }}} n \xi $\ $ {\rm 3t'} $ & ${\ensuremath{\left \langle \bar{u} \sigma_{\alpha \beta} u \bar{d} \sigma^{\gamma \delta} d \right \rangle}}g^{\alpha}_{\gamma} v^\beta v_\delta /v^2$ & $ \tfrac{1}{4} {\ensuremath{\tilde{\kappa}^{\rm vac}_{ \rm 3t }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm 3t' }}} n \xi $\ $ {\rm 3a} $ & ${\ensuremath{\left \langle \bar{u} \gamma_5 \gamma_\alpha u \bar{d} \gamma_5 \gamma^\alpha d \right \rangle}}$ & $ {\ensuremath{\tilde{\kappa}^{\rm vac}_{ \rm 3a }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm 3a }}} n \xi $\ $ {\rm 3a'} $ & ${\ensuremath{\left \langle \bar{u} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} u \bar{d} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} d \right \rangle}} /v^2$ & $ \tfrac{1}{4} {\ensuremath{\tilde{\kappa}^{\rm vac}_{ \rm 3a }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm 3a' }}} n \xi $\ $ {\rm 3p} $ & ${\ensuremath{\left \langle \bar{u} \gamma_5 u \bar{d} \gamma_5 d \right \rangle}}$ & $ {\ensuremath{\tilde{\kappa}^{\rm vac}_{ \rm 3p }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm 3p }}} n \xi $\ $ {\rm 3vs} $ & ${\ensuremath{\left \langle \bar{u} {\ensuremath{ v \mspace{-8mu} / }} u \bar{d} d \right \rangle}}$ & $ {\ensuremath{\kappa^{\rm med}_{ \rm 3vs }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}} 3 n/2 $\ $ {\rm 3at} $ & ${\ensuremath{\left \langle \bar{u} \gamma_5 \gamma_\kappa u \bar{d} \sigma_{\lambda \pi} d \right \rangle}} \epsilon^{\kappa \lambda \pi \xi} v_\xi $ & $ {\ensuremath{\kappa^{\rm med}_{ \rm 3at }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}} 3 n/2 $\ \ $ {\rm 4s} $ & 
${\ensuremath{\left \langle \bar{u} \lambda^A u \bar{d} \lambda^A d \right \rangle}}$ & $ {\ensuremath{\tilde{\kappa}^{\rm vac}_{ \rm 4s }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm 4s }}} n \xi $\ $ {\rm 4v} $ & ${\ensuremath{\left \langle \bar{u} \gamma_\alpha \lambda^A u \bar{d} \gamma^\alpha \lambda^A d \right \rangle}}$ & $ {\ensuremath{\tilde{\kappa}^{\rm vac}_{ \rm 4v }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm 4v }}} n \xi $\ $ {\rm 4v'} $ & ${\ensuremath{\left \langle \bar{u} {\ensuremath{ v \mspace{-8mu} / }} \lambda^A u \bar{d} {\ensuremath{ v \mspace{-8mu} / }} \lambda^A d \right \rangle}} /v^2$ & $ \tfrac{1}{4} {\ensuremath{\tilde{\kappa}^{\rm vac}_{ \rm 4v }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm 4v' }}} n \xi $\ $ {\rm 4t} $ & ${\ensuremath{\left \langle \bar{u} \sigma_{\alpha \beta} \lambda^A u \bar{d} \sigma^{\alpha \beta} \lambda^A d \right \rangle}}$ & $ {\ensuremath{\tilde{\kappa}^{\rm vac}_{ \rm 4t }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm 4t }}} n \xi $\ $ {\rm 4t'} $ & ${\ensuremath{\left \langle \bar{u} \sigma_{\alpha \beta} \lambda^A u \bar{d} \sigma^{\gamma \delta} \lambda^A d \right \rangle}}g^{\alpha}_{\gamma} v^\beta v_\delta /v^2$ & $ \tfrac{1}{4} {\ensuremath{\tilde{\kappa}^{\rm vac}_{ \rm 4t }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm 4t' }}} n \xi $\ $ {\rm 4a} $ & ${\ensuremath{\left \langle \bar{u} \gamma_5 \gamma_\alpha \lambda^A u \bar{d} \gamma_5 \gamma^\alpha \lambda^A d \right \rangle}}$ & $ {\ensuremath{\tilde{\kappa}^{\rm vac}_{ \rm 4a }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm 4a }}} n \xi $\ $ {\rm 4a'} $ & ${\ensuremath{\left \langle \bar{u} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} \lambda^A u \bar{d} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} \lambda^A d \right \rangle}} /v^2$ & $ \tfrac{1}{4} {\ensuremath{\tilde{\kappa}^{\rm vac}_{ \rm 4a }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm 4a' }}} n \xi $\ $ {\rm 4p} $ & ${\ensuremath{\left \langle \bar{u} \gamma_5 \lambda^A u \bar{d} \gamma_5 \lambda^A d \right \rangle}}$ & $ {\ensuremath{\tilde{\kappa}^{\rm vac}_{ \rm 4p }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm 4p }}} n \xi $\ $ {\rm 4vs} $ & ${\ensuremath{\left \langle \bar{u} {\ensuremath{ v \mspace{-8mu} / }} \lambda^A u \bar{d} \lambda^A d \right \rangle}}$ & $ {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm 4vs }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}} 3 n/2 $\ $ {\rm 4at} $ & ${\ensuremath{\left \langle \bar{u} \gamma_5 \gamma_\kappa \lambda^A u \bar{d} \sigma_{\lambda \pi} \lambda^A d \right \rangle}} \epsilon^{\kappa \lambda \pi \xi} v_\xi $ & $ {\ensuremath{\kappa^{\rm med}_{ \rm 4at }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}} 3 n/2 $\ \ $ {\rm 5vs} $ & ${\ensuremath{\left \langle \bar{d} {\ensuremath{ v \mspace{-8mu} / }} d \bar{u} u \right \rangle}}$ & $ {\ensuremath{\kappa^{\rm med}_{ \rm 5vs }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}} 3 n/2 $\ $ {\rm 5at} $ & ${\ensuremath{\left \langle 
\bar{d} \gamma_5 \gamma_\kappa d \bar{u} \sigma_{\lambda \pi} u \right \rangle}} \epsilon^{\kappa \lambda \pi \xi} v_\xi $ & $ {\ensuremath{\kappa^{\rm med}_{ \rm 5at }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}} 3 n/2 $\ \ $ {\rm 6vs} $ & ${\ensuremath{\left \langle \bar{d} {\ensuremath{ v \mspace{-8mu} / }} \lambda^A d \bar{u} \lambda^A u \right \rangle}}$ & $ {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm 6vs }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}} 3 n/2 $\ $ {\rm 6at} $ & ${\ensuremath{\left \langle \bar{d} \gamma_5 \gamma_\kappa \lambda^A d \bar{u} \sigma_{\lambda \pi} \lambda^A u \right \rangle}} \epsilon^{\kappa \lambda \pi \xi} v_\xi $ & $ {\ensuremath{\kappa^{\rm med}_{ \rm 6at }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}} 3 n/2 $\ \ Insertion of these parametrization into the relevant sums of four-quark condensates - yields effective $\kappa$ parameters as linear combinations of the previously defined condensate-specific parameters. The sum rule is only sensitive to these effective combinations and can thus only reveal information on the behavior of specific linear combinations of four-quark condensates. Therefore in the sum rule analysis the three parameters ${\ensuremath{\kappa^{\rm med}_{ \rm s }}}$, ${\ensuremath{\kappa^{\rm med}_{ \rm q }}}$, ${\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm v }}}$ describing the density dependence enter as $$\begin{aligned} \label{eq:fqcparametrization_combination_s} \left \{ c_1 {\ensuremath{\left \langle \bar{q} q \right \rangle}} {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} q \right \rangle}} \right \}_{\rm eff}^{1} & = c_1 \left ( {\ensuremath{\kappa^{\rm med}_{ \rm s }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}} \dfrac{3}{2} n \right ) \, ,\\ \label{eq:fqcparametrization_combination_q} \left \{ c_1 {\ensuremath{\left \langle \bar{q} q \right \rangle}}^2 + \dfrac{c_4}{v^2} {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} q \right \rangle}}^2 \right \}_{\rm eff}^{q} & = c_1 \left ( {\ensuremath{\kappa^{\rm vac}_{ \rm q }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}^2 + {\ensuremath{\kappa^{\rm med}_{ \rm q }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}} \dfrac{\sigma_{\rm N}}{ m_{\rm q}} n \right ) \, , \\ \label{eq:fqcparametrization_combination_v} \left \{ \dfrac{c_4}{v^2} {\ensuremath{\left \langle \bar{q} {\ensuremath{ v \mspace{-8mu} / }} q \right \rangle}}^2 \right \}_{\rm eff}^{v} & = c_4 \left ( {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm v }}} {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}} \dfrac{\sigma_{\rm N}}{ m_{\rm q}} n \right ) \,\end{aligned}$$ and are functions of the mixing angle $t$ as well. However, we restrict this discussion to the limit of the Ioffe interpolating field $t=-1$. Note again, the ${\ensuremath{\kappa^{\rm med}_{ \rm }}}$ values are effective combinations representing the density dependence of the respective condensate lists - and thus negative ${\ensuremath{\kappa^{\rm med}_{ \rm }}}$, a four-quark condensate behavior contrary to the factorization assumption, comprise cancellation effects within these condensate combinations. ### Density dependence of four-quark condensates from models {#density-dependence-of-four-quark-condensates-from-models .unnumbered} It is instructive to derive values for the effective density dependence parameters ${\ensuremath{\kappa^{\rm med}_{ \rm }}}$. 
Expectation values of four-quark operators in the nucleon were previously calculated in a perturbative chiral quark model [@Drukarev:2003xd] and taken into account in sum rule evaluations for the in-medium nucleon [@Drukarev:2004zg]. (Corrections to the factorization of four-quark condensates in nucleon sum rules have also been considered in the framework of the Nambu-Jona-Lasinio model in [@Celenza:1994ri]. Lattice evaluations of four-quark operators in the nucleon are so far restricted to combinations which avoid the mixing with lower dimensional operators on the lattice [@Gockeler:2001xw], and do not yet provide enough information to constrain Eqs. -.) The results in [@Drukarev:2003xd] can be translated to our $\kappa$ parameters. However, only those color combinations which are significant in baryon sum rules are considered; see the left column in Tab. \[tab:condensatePartsFromDrukarev\]. We note that the values given in [@Drukarev:2003xd] have to be corrected slightly in order to reach full consistency with the Fierz relations -, which are operator identities and thus must also be fulfilled for expectation values in the nucleon. An optimized, minimally corrected set is found by the following procedure: minimize the relative deviation of all separate values compared to the values delivered in the parametrization of [@Drukarev:2003xd] (this deviation is of the order of 10 %, though different adjustments are possible); from these configurations choose the set with the smallest sum of separate deviations (this deviation sum amounts to about 40 %, and different configurations are close to this value). The results from which the relevant density dependence for our condensate classification is obtained are collected in Tab. \[tab:condensatePartsFromDrukarev\]; our slight modifications of values in the original parametrization [@Drukarev:2003xd] are documented in Tabs. \[tab:parametersDrukarevPureFlavor\] and \[tab:parametersDrukarevMixedFlavor\] in appendix \[ap:expectationValues\]. 
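Stated schematically (this is our own rendering of the two-step criterion just described, not a prescription taken from [@Drukarev:2003xd]): denoting the original nucleon expectation values by $z_i^{(0)}$ and a corrected set obeying the Fierz relations by $z_i$, one first determines $$\min_{\{z_i : \, \text{Fierz relations fulfilled}\}} \; \max_i \left | \dfrac{z_i - z_i^{(0)}}{z_i^{(0)}} \right | \, ,$$ and then selects, among the configurations realizing (or lying close to) this minimum, the one with the smallest total deviation $\sum_i \left | z_i - z_i^{(0)} \right | / \left | z_i^{(0)} \right |$.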
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------- Mean Nucleon Matrix Element PCQM model $[{\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}]$ (to be color contracted with $\epsilon_{abc} \epsilon_{a'b'c'}$) $ {\ensuremath{\left \langle \bar{u}^{a'} u^a \bar{u}^{b'} u^b \right \rangle}}_N$ $ 3.993 $ $ {\ensuremath{\left \langle \bar{u}^{a'} \gamma_\alpha u^a \bar{u}^{b'} \gamma^\alpha u^b \right \rangle}}_N$ $ 1.977 $ $ {\ensuremath{\left \langle \bar{u}^{a'} {\ensuremath{ v \mspace{-8mu} / }} u^a \bar{u}^{b'} {\ensuremath{ v \mspace{-8mu} / }} u^b \right \rangle}}_N /v^2$ $ 0.432 $ $ {\ensuremath{\left \langle \bar{u}^{a'} \sigma_{\alpha \beta} u^a \bar{u}^{b'} \sigma^{\alpha \beta} u^b \right \rangle}}_N$ $ 12.024 $ $ {\ensuremath{\left \langle \bar{u}^{a'} \sigma_{\alpha \beta} u^a \bar{u}^{b'} \sigma^{\alpha \delta} u^b \right \rangle}}_N v^\beta v_\delta /v^2$ $ 3.045 $ $ {\ensuremath{\left \langle \bar{u}^{a'} \gamma_5 \gamma_\alpha u^a \bar{u}^{b'} \gamma_5 \gamma^\alpha u^b \right \rangle}}_N$ $ -1.980 $ $ {\ensuremath{\left \langle \bar{u}^{a'} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} u^a \bar{u}^{b'} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} u^b \right \rangle}}_N /v^2$ $ -0.519 $ $ {\ensuremath{\left \langle \bar{u}^{a'} \gamma_5 u^a \bar{u}^{b'} \gamma_5 u^b \right \rangle}}_N$ $ 2.016 $ $ {\ensuremath{\left \langle \bar{u}^{a'} {\ensuremath{ v \mspace{-8mu} / }} u^a \bar{u}^{b'} u^b \right \rangle}}_N$ $ - $ $ {\ensuremath{\left \langle \bar{u}^{a'} \gamma_5 \gamma_\kappa u^a \bar{u}^{a'} \sigma_{\lambda \pi} u^b \right \rangle}}_N \epsilon^{\kappa \lambda \pi \xi} v_\xi$ $ - $ $ {\ensuremath{\left \langle \bar{u}^{a'} u^a \bar{d}^{b'} d^b \right \rangle}}_N$ $ 3.19 $ $ {\ensuremath{\left \langle \bar{u}^{a'} \gamma_\alpha u^a \bar{d}^{b'} \gamma^\alpha d^b \right \rangle}}_N$ $ -2.05 $ $ {\ensuremath{\left \langle \bar{u}^{a'} {\ensuremath{ v \mspace{-8mu} / }} u^a \bar{d}^{b'} {\ensuremath{ v \mspace{-8mu} / }} d^b \right \rangle}}_N /v^2$ $ -0.73 $ $ {\ensuremath{\left \langle \bar{u}^{a'} \sigma_{\alpha \beta} u^a \bar{d}^{b'} \sigma^{\alpha \beta} d^b \right \rangle}}_N$ $ 3.36 $ $ {\ensuremath{\left \langle \bar{u}^{a'} \sigma_{\alpha \beta} u^a \bar{d}^{b'} \sigma^{\alpha \delta} d^b \right \rangle}}_N v^\beta v_\delta /v^2$ $ 1.11 $ $ {\ensuremath{\left \langle \bar{u}^{a'} \gamma_5 \gamma_\alpha u^a \bar{d}^{b'} \gamma_5 \gamma^\alpha d^b \right \rangle}}_N$ $ 1.66 $ $ {\ensuremath{\left \langle \bar{u}^{a'} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} u^a \bar{d}^{b'} \gamma_5 {\ensuremath{ v \mspace{-8mu} / }} d^b \right \rangle}}_N /v^2$ $ 0.37 $ $ {\ensuremath{\left \langle \bar{u}^{a'} \gamma_5 u^a \bar{d}^{b'} \gamma_5 d^b \right \rangle}}_N$ $ -0.185 $ $ {\ensuremath{\left \langle \bar{u}^{a'} {\ensuremath{ v \mspace{-8mu} / }} u^a \bar{d}^{b'} d^b \right \rangle}}_N$ $ -0.245 $ $ {\ensuremath{\left \langle \bar{u}^{a'} \gamma_5 \gamma_\kappa u^a \bar{d}^{b'} \sigma_{\lambda \pi} d^b \right \rangle}}_N \epsilon^{\kappa \lambda \pi \xi} v_\xi$ $ - $ ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------- : The combinations arranged as 
in the vector $\vec{z}$ of four-quark expectation values obtained from the (partially modified) set taken from a perturbative chiral quark model calculation (PCQM) in [@Drukarev:2003xd] from which the characteristic density dependence of four-quark condensates, the value of ${\ensuremath{\kappa^{\rm med}_{ \rm }}}$, is derived. Isospin symmetry $N=\tfrac{1}{2}(p+n)$ of the nuclear matter ground state is assumed. The values in the pure flavor sector (upper part) are tuned to obey Fierz relations - on the accuracy level $ < 0.01 {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}$. For three combinations no results are provided in [@Drukarev:2003xd], as indicated by “$-$”.[]{data-label="tab:condensatePartsFromDrukarev"} The connection to our $\kappa$ parameters is derived as follows: generally, in linear density approximation, condensates behave like $ {\ensuremath{\left \langle \Psi_0 \left | A \right | \Psi_0 \right \rangle}} = {\ensuremath{\left \langle 0 \left | A \right | 0 \right \rangle}} + n {\ensuremath{\left \langle N \left | A \right | N \right \rangle}} $. If one compares our parametrized density dependent part of each four-quark condensate with the evaluation of nucleon matrix elements of four-quark operators in the combinations in Tab. \[tab:condensatePartsFromDrukarev\], one obtains values for linear combinations of $\kappa$ parameters. The linear combinations refer to the two distinct color alternatives representing, as mentioned above, the typical color combination in baryon sum rules. These values can thus also be applied to give the required effective parameters[^4] (apart from the term $\sim {\ensuremath{\left \langle \bar{u}^{a'} \gamma_5 \gamma_\kappa u^a \bar{d}^{b'} \sigma_{\lambda \pi} d^b \right \rangle}}_N \epsilon^{\kappa \lambda \pi \xi} v_\xi$, which is not considered in [@Drukarev:2003xd] and which we had to neglect in the determination of ${\ensuremath{\kappa^{\rm med}_{ \rm s }}}$) $$\label{eq:pcqmKappaSet} {\ensuremath{\kappa^{\rm med}_{ \rm s }}} = -0.25 \, , \quad {\ensuremath{\kappa^{\rm med}_{ \rm q }}} = -0.10 \, , \quad {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm v }}} = -0.03 \, .$$ Note that individual ${\ensuremath{\kappa^{\rm med}_{ \rm }}}$ parameters are not small compared to these effective numbers, indicating significant cancellation effects in the density dependent parts of combined four-quark condensates. Moreover, for pure flavor four-quark condensates the ambiguity due to Fierz relations between operators does not allow one to prefer a specific condensate type as dominating the four-quark condensates in the sum rule. For further attempts to gain estimates of four-quark condensates we refer the interested reader to [@Zschocke:2005gr]. ### Remark on Chiral Symmetry {#remark-on-chiral-symmetry .unnumbered} The chiral condensate is often considered an order parameter for the $SU(n_f)_A$ chiral symmetry of QCD. Its change, however, might partially originate from virtual low-momentum pions and thus need not clearly signal partial restoration of chiral symmetry in matter [@Birse:1996fw]. The interpretation of four-quark condensates as order parameters for the spontaneous breakdown of chiral symmetry is an open issue. In [@Leupold:2006ih] a specific combination of four-quark condensates arising from the difference between vector and axial-vector correlators is proposed as such an alternative parameter. 
This combination (shown to agree with vacuum factorization in the analysis of tau-lepton decay data [@Bordes:2005wv]) is distinct from the above four-quark condensate lists in the nucleon channel as well as from the combination in the $\omega$ sum rule [@Thomas:2005dc]. For instance, in vacuum nucleon QCD sum rules the four-quark condensate combination (the vacuum limit of Eq.  with isospin symmetry being applied; $\psi$ is the flavor vector) enters as the sum of a chirally invariant part $$\left [ 2(2t^2+t+2) {\ensuremath{\left \langle \bar{\psi} \gamma_\mu \psi \bar{\psi} \gamma^\mu \psi \right \rangle}} + (3t^2+4t+3) {\ensuremath{\left \langle \bar{\psi} \gamma_5 \gamma_\mu \psi \bar{\psi} \gamma_5 \gamma^\mu \psi \right \rangle}} \right ] - \dfrac{3}{4} \left [\text{color structures with} \, \lambda^A \right ] \nonumber \, ,$$ and a part which breaks this symmetry (pointed out in factorized form already in [@Jido:1996ia]) $$\left [ 3(t^2-1) \left ( {\ensuremath{\left \langle \bar{\psi} \psi \bar{\psi} \psi \right \rangle}} + {\ensuremath{\left \langle \bar{\psi} \gamma_5 \psi \bar{\psi} \gamma_5 \psi \right \rangle}} - \dfrac{1}{2} {\ensuremath{\left \langle \bar{\psi} \sigma_{\mu \nu} \psi \bar{\psi} \sigma^{\mu \nu} \psi \right \rangle}} \right ) \right ] - \dfrac{3}{4} \left [\text{color structures with} \, \lambda^A \right ] \nonumber \, .$$ In the preferred case $t=-1$ only the chirally invariant part survives, and thus this remainder cannot be used as an order parameter. Additional insight into the change of four-quark condensates and their role as order parameters of spontaneous chiral symmetry breaking could be acquired from other hadronic channels, such as the generalization of further baryon sum rules in vacuum [@Leinweber:1989hh; @Lee:2002jb; @Lee:2006bu] to the medium case, e.g. for the $\Delta$ [@Jin:1994vw; @Johnson:1995sk]. Analysis ======== Approximations -------------- In textbook examples of QCD sum rules for light vector mesons (e.g. [@Reinders:1984sr; @pk:Narison2004]) one usually considers mass equations and optimizes them for maximum flatness w.r.t. the Borel mass. This, however, often includes derivative sum rules and seems not to be appropriate in the case of fermions, where the condensates are distributed over coupled sum rule equations for several invariant functions. Despite this, equations for the self-energies can be formed by dividing Eqs.  and  by , thus arriving at a generalization of Ioffe’s formula [@Ioffe:1981kw] for the nucleon vacuum mass. Approximate forms incorporating only the lowest dimension condensates are sometimes used as estimates for in-medium nucleon self-energies [@Finelli:2002na; @Finelli:2003fk]: $\Sigma_v = 64 \pi^2 {\ensuremath{\left \langle q^{\dagger} q \right \rangle}} / (3 {\ensuremath{ \mathcal{M} }}^2) = 0.36 {\ensuremath{\; {\rm GeV}}} n/n_0$ and $\Sigma_s = -M_N -8\pi^2 {\ensuremath{\left \langle \bar{q} q \right \rangle}} / {\ensuremath{ \mathcal{M} }}^2 = -0.37 {\ensuremath{\; {\rm GeV}}} n/n_0$ at ${\ensuremath{ \mathcal{M} }}^2 = 1 {\ensuremath{\; {\rm GeV^2}}}$. Although this remains to be confirmed by a dedicated sum rule analysis, it is instructive to understand the impact of four-quark condensates at finite density from naive decoupled self-energy equations linearized in density. 
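For orientation, these two numbers can be reproduced from the condensate inputs quoted in the numerical analysis below, ${\ensuremath{\left \langle q^{\dagger} q \right \rangle}} = \tfrac{3}{2} n$ and ${\ensuremath{\left \langle \bar{q} q \right \rangle}} = {\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}} + \sigma_N n /(2 m_q)$ with $\sigma_N = 45 {\ensuremath{\; {\rm MeV}}}$ and $m_q = 5.5 {\ensuremath{\; {\rm MeV}}}$, if one assumes a saturation density $n_0 \approx 0.15 {\ensuremath{\; {\rm fm^{-3}}}} \approx 1.2 \times 10^{-3} {\ensuremath{\; {\rm GeV}}}^3$ (an assumption on our part for this estimate): $$\Sigma_v = \dfrac{64 \pi^2}{3 {\ensuremath{ \mathcal{M} }}^2} \, \dfrac{3 n}{2} = \dfrac{32 \pi^2 n}{{\ensuremath{ \mathcal{M} }}^2} \approx 0.36 {\ensuremath{\; {\rm GeV}}} \, \dfrac{n}{n_0} \, , \qquad - \dfrac{8 \pi^2}{{\ensuremath{ \mathcal{M} }}^2} \, \dfrac{\sigma_N}{2 m_q} \, n \approx -0.37 {\ensuremath{\; {\rm GeV}}} \, \dfrac{n}{n_0} \, ,$$ where the second expression is the piece of $\Sigma_s$ linear in the density.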
For fixed Borel mass ${\ensuremath{ \mathcal{M} }}^2 = 1 {\ensuremath{\; {\rm GeV^2}}}$, threshold $s_0 = 2.5 {\ensuremath{\; {\rm GeV^2}}}$ and condensates listed below, the self-energies become independent of each other when a constant $E_- = -M_N$ is assumed; with ${\ensuremath{\kappa^{\rm vac}_{ \rm q }}}$ adjusted to yield the vacuum nucleon mass, the self-energies are estimated as $$\label{eq:estimate_sigmav} \Sigma_v = (0.16 + 1.22 {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm v }}} ) {\ensuremath{\; {\rm GeV}}} \dfrac{n}{n_0} \, ,$$ $$\label{eq:estimate_sigmas} \Sigma_s = - (0.32 + 0.11 {\ensuremath{\kappa^{\rm med}_{ \rm s }}} - 0.31 {\ensuremath{\kappa^{\rm med}_{ \rm q }}} ){\ensuremath{\; {\rm GeV}}} \dfrac{n}{n_0} \, .$$ Indeed, at small values of $k_F$ the impact of ${\ensuremath{\kappa^{\rm med}_{ \rm s }}}$, ${\ensuremath{\kappa^{\rm med}_{ \rm q }}}$ and ${\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm v }}}$ is as follows: the vector self-energy $\Sigma_v$ only depends on ${\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm v }}}$, while the scalar self-energy $\Sigma_s$ is affected by ${\ensuremath{\kappa^{\rm med}_{ \rm s }}}$ and ${\ensuremath{\kappa^{\rm med}_{ \rm q }}}$, whereby a negative ${\ensuremath{\kappa^{\rm med}_{ \rm s }}}$ acts equivalently to a positive value of ${\ensuremath{\kappa^{\rm med}_{ \rm q }}}$ and vice versa. Comparable effects in $\Sigma_s$ indicate that a characteristic value of ${\ensuremath{\kappa^{\rm med}_{ \rm s }}}$ is three times the corresponding absolute value of ${\ensuremath{\kappa^{\rm med}_{ \rm q }}}$. Whereas this qualitative estimate from Eqs.  and  is in line with the numerical analysis below for small densities $n<0.7n_0$, corresponding to Fermi momenta $k_F = (3 \pi^2 n / 2)^{1/3}<1.2{\ensuremath{\; {\rm fm^{-1}}}}$, the limit of constant four-quark condensates deviates from the widely accepted picture of cancelling vector and scalar self-energies, which can be traced back to competing effects of higher order condensates. Since even in the small density limit for constant four-quark condensates the estimated ratio $\Sigma_v / \Sigma_s \sim \tfrac{1}{2}$ cannot be confirmed numerically, these estimates cannot substitute for a numerical sum rule evaluation. Numerical Analysis {#sec:numericalanalysis} ------------------ In order to investigate numerically the importance of the three combinations of four-quark condensates entering the sum rule equations - at finite baryon density, we perform an evaluation for fixed continuum threshold parameter $s_0 = 2.5 {\ensuremath{\; {\rm GeV^2}}}$ in a fixed Borel window ${\ensuremath{ \mathcal{M} }}^2 = 0.8 \ldots 1.4 {\ensuremath{\; {\rm GeV^2}}}$. Since we are especially interested in medium modifications we use all sum rule equations, although chiral-odd sum rule equations have been identified as more reliable in the vacuum case [@Jin:1997pb] (note, however, that instanton contributions might change the relevance of particular sum rule equations [@Dorokhov:1989zw; @Forkel:1993hj]). From Eqs. -, after transformation, one unique left-hand side and the corresponding three right-hand sides are compared and their differences are minimized by a search for the optimum parameters $\Sigma_v$, $M_N^*$ and $\lambda_N^{*2}$ with a logarithmic deviation measure [@Leinweber:1989hh; @Jin:1993up]. 
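The logarithmic deviation measure referred to here can be thought of schematically as follows (a sketch on our part of the type of measure used in [@Leinweber:1989hh; @Jin:1993up]; the precise weighting may differ): with $\text{l.h.s.}$ denoting the unique left-hand side and $\text{r.h.s.}_i$ ($i=1,2,3$) the three right-hand sides, one minimizes $$\delta (\Sigma_v, M_N^*, \lambda_N^{*2}) = \sum_{i=1}^{3} \sum_{j} \left [ \ln \dfrac{\text{l.h.s.}({\ensuremath{ \mathcal{M} }}^2_j)}{\text{r.h.s.}_i({\ensuremath{ \mathcal{M} }}^2_j)} \right ]^2$$ over a grid of Borel masses ${\ensuremath{ \mathcal{M} }}^2_j$ covering the window $0.8 \ldots 1.4 {\ensuremath{\; {\rm GeV^2}}}$.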
The condensates are estimated from various relations, e.g., the chiral condensate $\left \langle \bar{q} q \right \rangle$ depends via partial conservation of the axial current (PCAC) on the pion decay constant and pion mass; the gluon condensate ${\ensuremath{\left \langle (\alpha_s/\pi) G^2 \right \rangle}}$ is determined from the QCD trace anomaly; and further condensates can be expressed through moments of parton distribution functions. We use the values ${\ensuremath{\left \langle \bar{q} q \right \rangle}} = - (0.245 {\ensuremath{\; {\rm GeV}}})^3 + n \sigma_N/(2 m_q)$ with $\sigma_N = 45 {\ensuremath{\; {\rm MeV}}}$ and $m_q = 5.5 {\ensuremath{\; {\rm MeV}}}$, ${\ensuremath{\left \langle g_s \bar{q} \sigma G q \right \rangle}} = x^2 {\ensuremath{\left \langle \bar{q} q \right \rangle}} + 3.0 {\ensuremath{\; {\rm GeV}}}^2 n$ with $x^2=0.8 {\ensuremath{\; {\rm GeV}}}^2$, ${\ensuremath{\left \langle \bar{q} iD_0 iD_0 q \right \rangle}} + {\ensuremath{\left \langle g_s \bar{q} \sigma G q \right \rangle}}/8 = 0.3 {\ensuremath{\; {\rm GeV}}}^2 n$, ${\ensuremath{\left \langle (\alpha_s/\pi) G^2 \right \rangle}} = -2 \langle (\alpha_s/\pi) (\vec{E}^2-\vec{B}^2)\rangle = -2 [-0.5 (0.33 {\ensuremath{\; {\rm GeV}}})^4 + 0.325 {\ensuremath{\; {\rm GeV}}} n]$, $\langle (\alpha_s/\pi) [(vG)^2 + (v\tilde{G})^2] \rangle = - \langle (\alpha_s/\pi) (\vec{E}^2+\vec{B}^2)\rangle = 0.1 {\ensuremath{\; {\rm GeV}}} n$, ${\ensuremath{\left \langle q^\dagger iD_0 iD_0 q \right \rangle}} + {\ensuremath{\left \langle g_s q^\dagger \sigma G q \right \rangle}}/12 = (0.176 {\ensuremath{\; {\rm GeV}}})^2 n$, ${\ensuremath{\left \langle q^{\dagger} q \right \rangle}} = 1.5 n$, ${\ensuremath{\left \langle q^{\dagger} iD_0 q \right \rangle}} = 0.18 {\ensuremath{\; {\rm GeV}}} n$ and ${\ensuremath{\left \langle g_s q^\dagger \sigma G q \right \rangle}} = -0.33 {\ensuremath{\; {\rm GeV}}}^2 n$ as employed and discussed in [@Jin:1992id]. The values of possible $\kappa^{\rm med}$ parameters are given in Eqs. . Values for ${\ensuremath{\kappa^{\rm vac}_{ \rm q }}}$ are adjusted to reproduce the vacuum nucleon pole mass. The results of these numerical evaluations for a nucleon on the Fermi surface $|\vec{q}_F|=k_F$ are summarized in Figs. \[fig:basis\]-\[fig:optimizedkappa\]. Fig. \[fig:basis\] shows the scalar and vector self-energies of the nucleon as a function of the Fermi momentum. The situation with four-quark condensate combinations - kept constant at their vacuum value (i.e. ${\ensuremath{\kappa^{\rm med}_{ \rm s }}}={\ensuremath{\kappa^{\rm med}_{ \rm q }}}={\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm v }}}=0$) is compared to the QCD sum rule evaluation with $\kappa$ parameters from Eqs. . The results have the same qualitative behavior as self-energies determined from chiral effective field theory with realistic NN potentials [@Gross-Boelting:1998jg; @Plohl:2006hy]. Figs. \[fig:kappa\_s\], \[fig:kappa\_q\] and \[fig:kappa\_v\] exhibit the impact of the three different four-quark condensate combinations: the vector self-energy is, in agreement with Eq. , mainly determined by ${\ensuremath{\kappa^{\rm med}_{ \rm v }}}$, especially for small densities (for positive values of ${\ensuremath{\kappa^{\rm med}_{ \rm v }}}$ even the qualitative form of the vector self-energy changes), ${\ensuremath{\kappa^{\rm med}_{ \rm q }}}$ has only small impact, and ${\ensuremath{\kappa^{\rm med}_{ \rm s }}}$ does not affect $\Sigma_{\rm v}$ at all. 
The scalar self-energy, in contrast, is influenced by all three combinations, whereby the change of ${\ensuremath{\kappa^{\rm med}_{ \rm v }}}$ is only visible for Fermi momenta $k_F>0.8 {\ensuremath{\; {\rm fm^{-1}}}}$, as also suggested by Eq. . Figs. \[fig:kappa\_s\] and \[fig:kappa\_q\] also reveal the opposing impact of ${\ensuremath{\kappa^{\rm med}_{ \rm s }}}$ versus ${\ensuremath{\kappa^{\rm med}_{ \rm q }}}$. A variation of $s_0$ is not crucial (see Fig. \[fig:threshold\]). The inclusion of anomalous dimension factors in the sum rule equations as in [@Jin:1993up; @Cohen:1994wm] leads to a reduction of $\Sigma_v$ of the order of $20 \%$ but causes only minor changes in $\Sigma_s$. Here the naive choice of the anomalous dimension from the factorized form of the four-quark condensates leaves room for improvement, since it is known that four-quark condensates mix under renormalization [@Jamin:1985su]. Our analysis concentrates on the impact of four-quark condensates, but variations of the density dependence of further condensates can also change the result. For example, a large change of the density behavior of the genuine chiral condensate, as determined by the $\sigma_N$ term, by a factor of 2 {0.5} leads to an 8 % decrease {4 % increase} in the effective mass parameter $M_N^*$ at $k_F \sim 0.8 {\ensuremath{\; {\rm fm^{-1}}}}$, while $\Sigma_v$ is less sensitive. Correspondingly, the effective coupling $\lambda_N^{*2}$ is reduced by 10 % {enhanced by 5 %}. An improved weakly attractive cancellation pattern between $\Sigma_s$ (attraction) and $\Sigma_v$ (repulsion), and thus agreement with chiral effective field theory [@Plohl:2006hy], can be achieved for a parameter set ${\ensuremath{\kappa^{\rm med}_{ \rm s }}}=1.2$, ${\ensuremath{\kappa^{\rm med}_{ \rm q }}}=-0.4$, ${\ensuremath{\kappa^{\rm med}_{ \rm v }}}=0.1$ (see Fig. \[fig:optimizedkappa\]). However, such a fit would also allow larger values of ${\ensuremath{\kappa^{\rm med}_{ \rm s }}}$ compensated by a larger magnitude of the negative value of ${\ensuremath{\kappa^{\rm med}_{ \rm q }}}$, and vice versa. Note that in both cases the factorization limit ${\ensuremath{\kappa^{\rm med}_{ \rm s,q }}}=1$ is violated by one or the other four-quark condensate combination. Such optimized $\kappa$ parameters, adjusted to the results of [@Plohl:2006hy], deviate noticeably from those in Eqs.  deduced from [@Drukarev:2003xd]. Quantities characterizing the energy of an excitation with nucleon quantum numbers are $M_N^*$ and $E_+$, introduced in section 2.4. Since $\Sigma_s$ is negative, $M_N^*$ drops continuously with increasing density, reaching a value of about $540 {\ensuremath{\; {\rm MeV}}}$ at nuclear saturation density (corresponding to $k_F \sim 1.35 {\ensuremath{\; {\rm fm^{-1}}}}$) if extrapolated from the optimized fit in Fig. \[fig:optimizedkappa\]. The energy $E_+$ barely changes as a function of $k_F$. Considering the behavior of the effective coupling parameter in the cases of Figs. \[fig:kappa\_s\]-\[fig:kappa\_v\], the maximum impact of ${\ensuremath{\kappa^{\rm med}_{ \rm s }}}$ {${\ensuremath{\kappa^{\rm med}_{ \rm q }}}$} on $\lambda_N^{*2}$ is 6 % {3 %} at $k_F \sim 0.8 {\ensuremath{\; {\rm fm^{-1}}}}$. In the extreme case, ${\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm v }}}=1$ leads to a 40 % increase of $\lambda_N^{*2}$. The variation of this coupling as a function of $k_F$ is of the order of 10 % in the optimized scenario. Generally, specific assumptions on the four-quark condensates can cause a decrease or an increase as well. 
This alternation of $\lambda_N^{*2}$ has already been pointed out in [@Jin:1993up], whereby their assumptions yield even a $\pm 20~\%$ change at nuclear density compared to the vacuum limit (cf. also [@Furnstahl:1995nd]). The vacuum limit of the calculated $\lambda_N^{*2}$ agrees with the existing range of values (see [@Leinweber:1994nm] for a compilation of results for the coupling strength of the nucleon excitation to the interpolating field in vacuum). ![Nucleon vector and scalar self-energies as functions of the nucleon Fermi momentum $k_F=(3\pi^2n/2)^{1/3}$. The sum rule result for constant four-quark condensates (QSR with constant fqc: ${\ensuremath{\kappa^{\rm med}_{ \rm s }}} = {\ensuremath{\kappa^{\rm med}_{ \rm q }}} = {\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm v }}} =0$, solid curve) is compared to an evaluation with density dependent four-quark condensates as given in Eqs.  (QSR with fqc from PCQM, dotted curves). The latter choice causes only minor differences in $\Sigma_v$ and $\Sigma_s$, for the scalar self-energy also because of the competing impact of ${\ensuremath{\kappa^{\rm med}_{ \rm s }}}$ and ${\ensuremath{\kappa^{\rm med}_{ \rm q }}}$. The self-energies from chiral effective field theory [@Plohl:2006hy] (ChEFT, dashed curves) are shown as well but should be used for comparison only at small densities.[]{data-label="fig:basis"}](./out170407basis.ps){width="10.5cm"} ![The variation of nucleon self-energies for different assumptions on the density dependence of the four-quark condensates in Eq.  parametrized by ${\ensuremath{\kappa^{\rm med}_{ \rm s }}}$; other four-quark condensate combinations are held constant.[]{data-label="fig:kappa_s"}](./out230107kappameds.ps){width="10.5cm"} ![The same as Fig. \[fig:kappa\_s\] but for a variation of ${\ensuremath{\kappa^{\rm med}_{ \rm q }}}$ (${\ensuremath{\kappa^{\rm med}_{ \rm s }}}={\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm v }}}=0$).[]{data-label="fig:kappa_q"}](./out230107kappamedq.ps){width="10.5cm"} ![The same as Fig. \[fig:kappa\_s\] but for a variation of ${\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm v }}}$ (${\ensuremath{\kappa^{\rm med}_{ \rm s }}}={\ensuremath{\kappa^{\rm med}_{ \rm q }}}=0$).[]{data-label="fig:kappa_v"}](./out170407kappamedv.ps){width="10.5cm"} ![The impact of different threshold parameters $s_0$ on the nucleon self-energies for the case of constant four-quark condensates, i.e. for ${\ensuremath{\kappa^{\rm med}_{ \rm s }}}={\ensuremath{\kappa^{\rm med}_{ \rm q }}}={\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm v }}}=0$.[]{data-label="fig:threshold"}](./out230107variations0.ps){width="10.5cm"} ![QCD sum rule evaluations of nucleon self-energies with the parameter set ${\ensuremath{\kappa^{\rm med}_{ \rm s }}}=1.2$, ${\ensuremath{\kappa^{\rm med}_{ \rm q }}}=-0.4$, ${\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm v }}}=0.1$ (dash-dotted curves) compared to chiral effective field theory [@Plohl:2006hy] with realistic NN forces as input.[]{data-label="fig:optimizedkappa"}](./out170407optimizedKappaForPlohl.ps){width="10.5cm"} Conclusions =========== Four-quark condensates have a surprisingly strong impact on conventional spectral QCD sum rules of light vector mesons. Unfortunately, four-quark condensates and their density dependencies are poorly known. One possibility is to consider a large set of hadronic observables and to try to constrain these parameters characterizing the QCD vacuum. Steps along this line of reasoning have been taken, e.g., in [@Johnson:1995sk]. 
In order to accomplish a systematic approach, we present here a complete catalog of independent four-quark condensates for equilibrated symmetric or asymmetric nuclear matter. While the number of such condensates is fairly large already in the light quark sector, we point out that only special combinations enter the QCD sum rules. For the conventional nucleon QCD sum rule, three different combinations of four-quark condensates are identified. We note that the knowledge of these combinations (even of the individual condensates entering) is not sufficient to convert them into the combination specific to the spectral QCD sum rule for light vector mesons. In analyzing the set of independent four-quark condensates we also find identities which must be fulfilled in a consistent treatment. Model calculations of four-quark condensates seem not to fulfill these constraints automatically. On the level of an exploratory study we show the impact of the three combinations of four-quark condensates on the vector and scalar self-energies of the nucleon. In cold nuclear matter at sufficiently low densities the density dependence of only one effective four-quark condensate combination is found to be important for the vector self-energy, while the other two combinations dominate the scalar self-energy. Keeping in mind that the nucleon self-energy pieces per se are not proved to represent observables, one is tempted to try an adjustment of these parameters to advanced nuclear matter calculations. While the overall pattern agrees fairly well (i.e. large and opposite scalar and vector self-energies), we can also reproduce the fine details on a quantitative level at low densities. Keeping the four-quark condensates frozen at their vacuum values or giving them a density dependence as suggested by a perturbative chiral quark model induces some quantitative modifications which may be considered as an estimator of the systematic uncertainties related to the four-quark sector. Furthermore, the specific use of sum rules and interpolating currents as well as details of the numerical evaluation procedure may prevent QCD sum rules for the nucleon from being a precision tool. The knowledge of this situation may be of relevance for approaches to the nuclear many-body problem which utilize chiral dynamics and condensate-related features of the mean field. Finally, we recall that our study is restricted to cold nuclear matter. The extension towards finite temperature deserves separate investigations. Acknowledgement {#acknowledgement .unnumbered} =============== Discussions with M. Birse, W. Weise and S. Zschocke are gratefully acknowledged. We thank Ch. Fuchs and O. Plohl for providing extended calculations of nucleon self-energies and S. Leupold for clarifying discussions about four-quark condensate classifications. The work is supported by 06DR136, GSI-FE, and EU-I3HP. Appendix ======== Operator Product Expansion {#ap:ope} -------------------------- For completeness and to clarify some technical details we recollect important steps of an OPE calculation. A convenient way to obtain this series is to calculate the Wilson coefficients in an external weak gluon field [@Novikov:1983gd]. 
In the background field formalism the correlation function (\[eq:correlationFunction\]) is expanded according to Wick’s theorem $\Pi (x) = \Pi_{\ensuremath{ \mathrm{per} }} (x) + \Pi_{\ensuremath{ \mathrm{2q} }} (x) + \Pi_{\ensuremath{ \mathrm{4q} }} (x) + \ldots$ , where the full contractions are collected in the perturbative term $\Pi_{\rm per}$ and further terms $\Pi_{\ensuremath{ \mathrm{2q,4q,\ldots} }}$ denote the number of non-contracted quark operators. The latter terms give rise to non-local condensates containing the indicated number of quark operators. The use of Wick’s theorem naturally introduces the normal ordering of operators ${\ensuremath{\left \langle \Psi \left | :\hat{A}_1 \cdots \hat{A}_n: \right | \Psi \right \rangle}} \equiv {\ensuremath{\left \langle \hat{A}_1 \cdots \hat{A}_n \right \rangle}}$, which will be assumed in all expectation values formed out of products of field operators. Under the presence of the gluon background field the quark propagator $S^q$ which appears in the terms in $\Pi (x)$ is modified, following from the solution of the Dirac equation in an external field in the Fock-Schwinger gauge for the gluon field. The corrections to the free quark operator appear in an expansion in the coupling $g_s = \sqrt{4 \pi \alpha_s}$ $$S^q_{ab} (x) = {\ensuremath{\left \langle \Psi \left | {\ensuremath{T \left [ q_a (x) \bar{q}_b (0) \right ]}} \right | \Psi \right \rangle}} = \dfrac{i}{2\pi^2} \dfrac{{\ensuremath{ x \mspace{-8mu} / }}}{x^4} \delta_{ab} + \dfrac{ig_s}{8 \pi^2} \tilde{G}_{\mu \nu}^A (0) T^A_{ab} \dfrac{x^\mu}{x^2} \gamma^\nu \gamma_5 + \ldots \, , \label{eq:quarkPropagatorInGluonBackground}$$ with the dual gluon field strength tensor $\tilde{G}_{\mu \nu}^A= \tfrac{1}{2} \epsilon_{\mu \nu \kappa \lambda} G^{\kappa \lambda A}$ and color matrices $T^A_{ab}$, valid for massless quarks and inclusion of pure gluon condensates up to mass dimension 4. The Fock-Schwinger gauge is determined by $(x-x_0)_\mu A^\mu (x) = 0$, and usually one chooses $x_0 = 0$. It allows to express partial derivatives of fields easily by covariant derivatives which matters when expanding non-local products of such operators. In general, results are gauge invariant, however technically fixing this gauge has enormous advantages in calculations of Wilson coefficients. Let us remark, that although the term $\Pi_{\ensuremath{ \mathrm{2q} }}$ initially contains two uncontracted quark field operators, the expansion of the non-local expectation value into local condensates together with weak gluon fields resulting from modified quark propagators and the use of the equations of motion would induce further four-quark condensates at the order $\alpha_s$. The use of the quark propagator (\[eq:quarkPropagatorInGluonBackground\]) leads to gluon insertions in the expectation values in $\Pi$ and thus to condensates of higher mass dimension. To obtain the condensates the expectation values are projected onto all possible Dirac, Lorentz and color scalars obeying symmetry w.r.t. time and parity reversal. This introduces all possible condensates up to the considered dimension, and having inserted the projections for the specific correlation function offers also the corresponding Wilson coefficients and therefore the OPE [@Jin:1992id]. 
For example, the non-local diquark expectation value can be projected on color and Dirac structures $${\ensuremath{\left \langle q_{a\alpha} (x) \bar{q}_{b \beta} (0) \right \rangle}} = - \dfrac{\delta_{ab}}{12} \sum_{\Gamma} \epsilon_\Gamma {\ensuremath{\left \langle \bar{q} (0) \Gamma q(x) \right \rangle}} \Gamma_{\alpha \beta} \, , \label{eq:nonlocalDiquarkExpansion}$$ where elements of the Clifford algebra $\Gamma \in \{ \mathbbm{1}, \gamma_\mu , \sigma_{\mu \nu}, i\gamma_5 \gamma_\mu , \gamma_5 \}$, are contracted over Lorentz indices, $\epsilon_\Gamma = \tfrac{1}{2}$ for $\Gamma = \sigma_{\mu \nu}$ and $\epsilon_\Gamma = 1$ otherwise. A Taylor expansion of the quark operator at $x=0$ in the Fock-Schwinger gauge $$q(x) = q(0) + x^\mu D_\mu q(0) + \dfrac{1}{2} x^\mu x^\nu D_\mu D_\nu q(0) + \ldots$$ leads to additional Lorentz structures, such that the local expansion of the non-local diquark term (\[eq:nonlocalDiquarkExpansion\]) up to mass dimension 5 in the expectation values taken at $x=0$ yields $${\ensuremath{\left \langle q_{a\alpha} (x) \bar{q}_{b \beta} (0) \right \rangle}} = - \dfrac{\delta_{ab}}{12} \sum_{\Gamma} \epsilon_\Gamma \Gamma_{\alpha \beta} \left ( {\ensuremath{\left \langle \bar{q} \Gamma q \right \rangle}} + x^\mu {\ensuremath{\left \langle \bar{q} \Gamma D_\mu q \right \rangle}} + \dfrac{1}{2} x^\mu x^\nu {\ensuremath{\left \langle \bar{q} \Gamma D_\mu D_\nu q \right \rangle}} \right ).$$ However, matrix elements ${\ensuremath{\left \langle \bar{q} (0) \Gamma q(x) \right \rangle}}$ with $\Gamma \in \{ \sigma_{\mu \nu}, i \gamma_5 \gamma_\mu, \gamma_5 \}$ do not contribute due to the demand of time and parity reversal invariance and the multiplication with the symmetric Taylor expansion in $x$. Condensates with field derivatives can be transformed whereby a couple of manipulations using the equations of motion $$( i {\ensuremath{ D \mspace{-10mu} / }} - m ) q = 0 \, , \mspace{50mu} \bar{q} ( i \overleftarrow{{\ensuremath{ D \mspace{-10mu} / }}} + m ) = 0 \, , \mspace{50mu} D^{AB}_\mu G_B^{\mu \nu} = g_s \sum_f \bar{q} \gamma^\nu T^A q \, ,$$ and the representation of the gluon tensor $G_{\mu \nu} = T_A G^A_{\mu \nu}$ $$G_{\mu \nu} = \dfrac{i}{g_s} [D_\mu , D_\nu], \mspace{20mu} \text{and thus} \mspace{20mu} \dfrac{1}{2} g_s \sigma G + {\ensuremath{ D \mspace{-10mu} / }} {\ensuremath{ D \mspace{-10mu} / }} = D^2, \mspace{50mu} D_{\mu} = \dfrac{1}{2} \left ( \gamma_\mu {\ensuremath{ D \mspace{-10mu} / }} + {\ensuremath{ D \mspace{-10mu} / }} \gamma_\mu \right ),$$ are exploited to yield simplifications in condensate projections. Terms which contain factors of the small quark mass are neglected in these considerations. Similar projections can be performed for structures which include gluonic parts from the propagator (\[eq:quarkPropagatorInGluonBackground\]) and lead to gluon condensates in $\Pi_{\rm per}(x)$ and are also carried out to find the linear combinations of four-quark condensates in $\Pi_{4q}$. Following this sketched line of manipulations, one arrives at Eqs. -. Alternative Derivation of Pure-Flavor Four-Quark Condensate Interrelations {#ap:constraints} -------------------------------------------------------------------------- The constraints between two different color structures of pure-flavor four-quark condensates have been presented in section \[sec:fqcClassification\] by analyzing the specific color structure transformation. 
For the typical baryon color combination of four-quark condensates the conversion matrix $\hat{B}$  was derived with the decisive property that it cannot be inverted. In algebraic terms, the underlying system of linear equations is linearly dependent. This gave rise to the Fierz relations -. If one is only interested in these relations, another direct way of derivation exists. Thereby one considers the ”zero identity” $$\epsilon^{abc} \epsilon^{a'b'c'} \; \underset{{\ensuremath{ \mathrm{e} }}}{\bar{q}}^{a'} \underset{{\ensuremath{ \mathrm{f} }}}{q}^{a} \underset{{\ensuremath{ \mathrm{g} }}}{\bar{q}}^{b'} \underset{{\ensuremath{ \mathrm{h} }}}{q}^{b} \; \underset{{\ensuremath{ \mathrm{e,g} }}}{(\Gamma C)} \; \underset{{\ensuremath{ \mathrm{f,h} }}}{(C\tilde{\Gamma})} = 0 \mspace{20mu} \text{if} \mspace{20mu} (\Gamma C)^T = -(\Gamma C) \mspace{20mu} \text{or} \mspace{20mu} (C\tilde{\Gamma})^T = -(C\tilde{\Gamma}),$$ which can be seen by a rearrangement of the product and renaming of indices (this is the analog discussion as for the choice of possible interpolating fields for the nucleon). Fierz transformation of this relations yields the basic formula $$\epsilon^{abc} \epsilon^{a'b'c'} \; \bar{q}^{a'} O_m q^a \bar{q}^{b'} O^n q^b \; {\ensuremath{\mathrm{Tr} \left ( \tilde{\Gamma} O_n \Gamma C O^{mT} C \right ) }} = 0 \, ,$$ which gives, with insertion of allowed $\Gamma$ and $\tilde{\Gamma}$, all possible constraints on the color combinations in the sense of the vector $\vec{z}$ in . From the non-vanishing possibilities we list only combinations relevant for four-quark condensates and contract them to achieve relations between components of $\vec{z}$: $$\begin{aligned} \Gamma = \mathbbm{1}, \tilde{\Gamma} = \mathbbm{1} & \quad \Longrightarrow \mspace{37mu} 0 = -2z_1 + 2z_2 + z_4 + 2z_6 - 2z_8 \, , \\ \Gamma = \gamma_5, \tilde{\Gamma} = \gamma_5 & \quad \Longrightarrow \mspace{37mu} 0 = -2z_1 - 2z_2 + z_4 - 2z_6 - 2z_8 \, , \\ \Gamma = i \gamma_5 \gamma^\alpha, \tilde{\Gamma} = i \gamma_5 \gamma_\beta & \quad \Longrightarrow \quad \left \{ \begin{aligned} 0 &= - 2 z_1 + z_2 - z_6 + 2 z_8 \, , \\ 0 &= - 2z_1 + 2z_2 - 4 z_3 + z_4 - 4 z_5 - 2z_6 + 4 z_7 + 2z_8 \, , \end{aligned} \right . \\ \Gamma = i \gamma_5 \gamma^\alpha, \tilde{\Gamma} = \gamma_5 & \quad \Longrightarrow \mspace{37mu} 0 = iz_9 +z_{10} \, . \end{aligned}$$ This set of constraints is equivalent to - in section \[sec:fqcClassification\]. Four-Quark Expectation Values in the Nucleon -------------------------------------------- Supplementary to Tab. \[tab:condensatePartsFromDrukarev\] we collect the underlying coefficients to be understood in connection with the work of Drukarev et al. [@Drukarev:2003xd]. 
\[ap:expectationValues\] -------------- ----------- ------------ ---------------- ------------ -------------------- Expectation [Mean Value]{} value $N=p$ $N=n$ $N=p$ $N=n$ $N=\tfrac{p+n}{2}$ $U_N^{S,uu}$ $ 3.94 $ $ 4.05$ $ 3.939 $ $ 4.047 $ $ 3.993 $ $a_N^{V,uu}$ $ 0.52 $ $ 0.51 $ $ 0.520 $ $ 0.510 $ $ 0.515 $ $b_N^{V,uu}$ $ -0.13 $ $ -0.02 $ $ -0.143 $ $ -0.023 $ $ -0.083 $ $a_N^{T,uu}$ $ 0.98 $ $ 1.02 $ $ 0.968 $ $ 1.009 $ $ 0.989 $ $b_N^{T,uu}$ $ 0.05 $ $ < 0.01 $ $ 0.045 $ $ 0.007 $ $ 0.026 $ $a_N^{A,uu}$ $ -0.45 $ $ -0.50 $ $ -0.471 $ $ -0.502 $ $ -0.487 $ $b_N^{A,uu}$ $ -0.06 $ $ -0.01 $ $ -0.054 $ $ -0.009 $ $ -0.032 $ $U_N^{P,uu}$ $ 1.91 $ $ 1.96 $ $ 2.002 $ $ 2.030 $ $ 2.016 $ -------------- ----------- ------------ ---------------- ------------ -------------------- : Coefficients of pure flavor nucleon four-quark expectation values (in units of ${\ensuremath{\left \langle \bar{q} q \right \rangle_{\rm vac}}}=(-0.245 {\ensuremath{\; {\rm GeV}}})^3$) as determined in [@Drukarev:2003xd] in the terminology introduced there and modified values from a fine-tuned parameter set which fulfill the constraints -. The parameters ${\ensuremath{\kappa^{\rm med}_{ \rm s,q }}}$ and ${\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm v }}}$ are finally derived from the right column which shows the result for isospin symmetric baryonic matter.[]{data-label="tab:parametersDrukarevPureFlavor"} --------------- ----------- ---------------- -------------------- Expectation [Mean Value]{} value $N=p$ $N=n$ $N=\tfrac{p+n}{2}$ $U_N^{S,ud}$ $ 3.19 $ $ 3.19$ $ 3.19 $ $a_N^{V,ud}$ $ -0.44$ $ -0.44 $ $ -0.44 $ $b_N^{V,ud}$ $ -0.29 $ $ -0.29 $ $ -0.29 $ $a_N^{T,ud}$ $ 0.19 $ $ 0.19 $ $ 0.19 $ $b_N^{T,ud}$ $ 0.18 $ $ 0.18 $ $ 0.18 $ $a_N^{A,ud}$ $ 0.43 $ $ 0.43 $ $ 0.43 $ $b_N^{A,ud}$ $ -0.06 $ $ -0.06 $ $ -0.06 $ $U_N^{P,ud}$ $ -0.20 $ $ -0.17 $ $ -0.185 $ $U_N^{VS,ud}$ $ -0.28 $ $ -0.21 $ $ -0.245 $ --------------- ----------- ---------------- -------------------- : As Tab. \[tab:parametersDrukarevPureFlavor\] but for coefficients of nucleon four-quark expectation values parametrizing mixed flavor structures as determined in [@Drukarev:2003xd] and the mean values used to calculate medium strength parameters ${\ensuremath{\kappa^{\rm med}_{ \rm }}}$ in isospin symmetric matter. The modifications referring to pure-flavor four-quark condensates are not needed here.[]{data-label="tab:parametersDrukarevMixedFlavor"} [70]{} natexlab\#1[\#1]{}bibnamefont \#1[\#1]{}bibfnamefont \#1[\#1]{}citenamefont \#1[\#1]{}url \#1[`#1`]{}urlprefix\[2\][\#2]{} \[2\]\[\][[\#2](#2)]{} , , , ****, (). , , , ****, (). , , , ****, (). , ** (, ). , in **, edited by (, ). , ****, (). , , , ****, (). , , , , ****, (). , , , , , ****, (). , , , , ****, (). , ****, (). , ****, (). , ****, (). , , , ****, (). , ****, (). , , , , , ****, (). , , , ****, (). , , , ****, (). , , (), . , , , ****, (). , , , ****, (). (), ****, (). , ****, (). , ****, (), . , ** (, ). , , , , ****, (). , , , , , ****, (). , , , ****, (). , ****, (). , , (). , ****, (). , , , ****, (). , , , , ****, (). , ****, (). , , , ****, (). , ****, (). , ****, (). , ****, (). , , , , ****, (). , ****, (). , , , ****, (). , ****, (). , ****, (). , ****, (). , , (). (), . , , , ****, (). , , , (), . , ****, (). , , , , ****, (). , ****, (). , ****, (). , , , , , , ****, (). , , , , ****, (). , , , , , , , , ****, (). , , , ****, (). , , , , ****, (). , ****, (). , ****, (). (), . , ****, (). , ****, (). , , , , ****, (). , , , , ****, (). , ****, (). , ****, (). , ****, (). 
[^1]: Concerning conventions on metric, Dirac and Gell-Mann matrices, the charge conjugation matrix $C$, etc., we follow [@pk:Itzykson1980].

[^2]: In general, the Dirac structure of $\Delta \Pi$ would require definitions $E_i$, $\bar{E}_i$ for the distinct invariant functions $(i=s,q,v)$ of the decomposition . In the case considered here we assume that these weighted moments coincide with $E_{s,q,v}=E$ (analogously $\bar{E}$). Also $\omega_{\pm}$ are simplified to be common for the $s,q,v$ parts. In the shown Borel transformed equations, decomposed terms are symbolically rearranged to full Dirac structures.

[^3]: The catalog can be extended to non-equilibrated systems, lifting the demand for time reversal symmetry, or to systems at finite temperature and vanishing chemical potential, where charge conjugation provides a good symmetry.

[^4]: Note some differences from the OPE part stated in equations (87)–(89) of [@Drukarev:2004zg] for the whole combination of the density dependent four-quark condensate contribution. Our equivalent OPE calculation utilizing the same nucleon four-quark expectation values (encoded in ${\ensuremath{\kappa^{\rm med}_{ \rm s,q }}}$, ${\ensuremath{\tilde{\kappa}^{\rm med}_{ \rm v }}}$ as above) yields $\Pi_{\mathrm 4q} = ( 0.49 \tfrac{(qp)}{M_N} \mathbbm{1} + 0.52 {\ensuremath{ q \mspace{-8mu} / }} + 0.57 \tfrac{(qp)}{M_N^2} {\ensuremath{ v \mspace{-8mu} / }} ) \tfrac{{\ensuremath{\left \langle \bar{q}q \right \rangle}}}{q^2} n $, with $p=M_N v$.
---
abstract: 'The laser cooling and trapping of ultracold neutral dysprosium has been recently demonstrated using the broad, open 421-nm cycling transition. Narrow-line magneto-optical trapping of Dy on longer wavelength transitions would enable the preparation of ultracold Dy samples suitable for loading optical dipole traps and subsequent evaporative cooling. We have identified the closed 741-nm cycling transition as a candidate for the narrow-line cooling of Dy. We present experimental data on the isotope shifts, the hyperfine constants $A$ and $B$, and the decay rate of the 741-nm transition. In addition, we report a measurement of the 421-nm transition’s linewidth, which agrees with previous measurements. We summarize the laser cooling characteristics of these transitions as well as other narrow cycling transitions that may prove useful for cooling Dy.'
author:
- Mingwu Lu
- Seo Ho Youn
- 'Benjamin L. Lev'
title: 'Spectroscopy of a narrow-line laser cooling transition in atomic dysprosium'
---

Introduction
============

Due to its extraordinarily large ground state magnetic dipole moment ($10$ Bohr magnetons), dysprosium is a prime candidate for the study of ultracold dipolar physics [@PfauReview09; @*Fregoso:2009; @*Fregoso:2009b; @*Machida10]. The Dy atom belongs to the lanthanide (rare-earth) group and has ten $4f$ electrons in an open shell submerged under closed $s$ shells. Numerous combinations of valence electron couplings lead to a multitude of electronic energy levels. Laser cooling Dy in a traditional manner would require an impracticable number of repumper lasers due to the large number of metastable states below 421 nm (see Fig. \[fig:dy\_levels\]). Consequently, preparation of cold Dy samples had been limited to the method of buffer gas cooling [@Hancox:2004; @*Newman2010]. Recent progress in the laser cooling and trapping of Dy atoms in a repumper-less magneto-optical trap (MOT) [@Lu2010] now allows the creation of large population samples at $>$100$\times$ colder temperatures. Laser cooling and trapping Dy constitutes a new route toward achieving Bose and Fermi degeneracy in this most magnetic of atoms [^1]. However, further progress necessitates the optical trapping of Dy [@MetcalfBook99] so that evaporative cooling may proceed without suffering trap losses arising from dipole-relaxation-induced inelastic collisions [@Hensler:2003]. A possible solution lies in the narrow-line cooling of Dy in a manner similar to that demonstrated in the highly magnetic erbium system [@Berglund:2008]. A narrow-line Dy MOT could produce Dy temperatures in the $\mu$K range. Such ultracold samples would be readily confined in standard optical dipole traps (ODTs).

![(Color online) (a) Dy energy level diagram with high $J$ values [@Martin:1978]. The relevant laser cooling transitions between the even parity (red) ground state and the odd (black) excited states are marked with wavelengths and spectroscopic terms. Dy’s five high-abundance isotopes have nuclear spin $I=0$ for the bosons $^{164}$Dy, $^{162}$Dy, and $^{160}$Dy and $I=5/2$ for the fermions $^{163}$Dy and $^{161}$Dy. (b) Fermion hyperfine structure in the 741-nm state (not to scale) determined from measurements in Sec. \[hyperfine\]. (c) Fermion ground state hyperfine structure (not to scale) [@Childs70]. $F=J+I$, where $J$ is the total electronic angular momentum.[]{data-label="fig:dy_levels"}](Figure1.pdf){width="49.00000%"}

We focus here on the characteristics of the cycling transition at wavelength 741 nm.
We believe this transition to be a prime candidate for creating a narrow-line MOT in a manner similar to that demonstrated in Ref. [@Berglund:2008] for Er. Existing spectroscopic data [@Martin:1978] are insufficient for implementing the 741-nm narrow-line MOT, and we present here the first measurements of this transition’s hyperfine structure, isotope shifts, and lifetime. Together these measurements provide a sufficient spectroscopic guide for attempting the 741-nm narrow-line cooling of Dy’s bosonic and fermionic isotopes. Standard spectroscopic records [@Martin:1978] had misrecorded the linewidth of the 401-nm state used for Er laser cooling by 21% [@Mcclelland:2006b]. Linewidth verification of the analogous transition in Dy (at 421 nm) is therefore justified, and we present a linewidth measurement that is consistent with previous measurements. Finally, we discuss the properties and relative merits of other optical transitions at 598 nm, 626 nm, and 1001 nm that could also be used for laser cooling.

741-nm transition
=================

We discuss in this section an optical transition at 741 nm that could prove important for creating narrow-line Dy MOTs. Although the broad, blue 421-nm transition is highly effective for cooling Dy atoms from an atomic beam and in a MOT [@Youn2010a], the transition’s $\sim$1 mK Doppler limit is too high for directly loading an ODT. Due to its narrow linewidth (unmeasured until now), the red 741-nm transition provides a means to Doppler cool with a very low temperature limit. Our measurements indicate that this transition’s recoil temperature would be higher than its Doppler-limited temperature: observing novel laser cooling phenomena [@Sr04] might be possible using this transition.

Experimental apparatus for measuring isotope shifts and hyperfine structure
---------------------------------------------------------------------------

The spectroscopic measurement for determining the isotope shifts and hyperfine structure employs a crossed excitation laser and atomic beam method [@Demtroder]. In a UHV chamber [@Youn2010a], thermal Dy atoms effuse from a high-temperature oven working at 1275 $^\circ$C. The atoms are collimated by a long differential pumping tube which forms an atomic beam with a diameter of 5 mm and a diverging (half) angle of $\sim$0.02 rad. The beam enters a chamber with two pairs of optical viewports oriented orthogonally to the atomic beam.

![(Color online) 741-nm line spectrum for the five most abundant Dy isotopes. Bosonic isotope peaks are marked with mass numbers in red. Hyperfine peaks of fermionic isotopes (blue) are identified by the markers defined in the inset table. The VIth peak of $^{161}$Dy and the 4th peak of $^{163}$Dy overlap.[]{data-label="fig:741_spec"}](Figure2.pdf){width="48.00000%"}

\[ABtable\]

  Coeff.   $^{163}$Dy$_\text{th}$[^2]   $^{163}$Dy$_\text{expt}$   $^{161}$Dy$_\text{expt}$   $163_{e}/161_\text{e}$   $163_{g}/161_\text{g}$[^3]
  -------- ---------------------------- -------------------------- -------------------------- ------------------------ ----------------------------
  $A$      142                          142.91(3)                  $-102.09(2)$               $-1.3999(4)$             $-1.40026(4)$
  $B$      4000                         4105(2)                    3883(1)                    1.0570(9)                1.05615(90)

  : Values of hyperfine coefficients $A$ and $B$ (MHz) for the excited state ($e$) of the 741-nm transition in Dy including comparison with those of the ground state ($g$).

A 5 mW 741-nm laser beam \[1/e$^{2}$ waist (radius) $\approx$ 2 mm\] from an external cavity diode laser (ECDL) is directed through the UHV chamber via one pair of viewports.
This laser has a mode-hop-free region of 20 GHz. Atomic fluorescence on the 741-nm line is collected through an orthogonal viewport by a $2''$ AR-coated achromatic lens pair with a magnification of 0.4$\times$. This forms an image on the detection area ($1$-mm diameter) of a femtowatt photodetector (DC gain: $1\times 10^{10}$ V/W, bandwidth $\geq$750 Hz). The whole system is carefully shielded from stray light. The direct output from the detector suffers from a low signal-to-noise ratio due to the oven’s thermal radiation and the multiple scattering of 741-nm light from the windows and the chamber’s inner walls. An electronic bandpass filter (0.3 Hz to 3 kHz) with a DC amplification of 10 improves the signal-to-noise to a sufficient level without artificially broadening the Doppler-limited resonances. With a laser scanning rate of $7$ GHz/s and a 15 MHz Doppler broadening (measured in Sec. \[421\]), the $\sim$2 ms rise and fall times of the spectral peaks are slower than the detector’s response time.

To measure the full spectrum of isotope shifts and hyperfine states, we scan the ECDL using the piezoelectric transducer (PZT) that modulates its grating position. However, the free scan of the ECDL suffers from the slight nonlinear scanning of the PZT versus drive voltage. We reduce the nonlinearity by limiting the scan range and by scanning the PZT slowly to prevent inertial effects. To calibrate the frequency scan, we couple the 741-nm light into a temperature-stabilized 750 MHz free-spectral range (FSR) confocal cavity. Simultaneously recording the transmission of the confocal cavity with the fluorescence signal provides a frequency calibration as the ECDL is scanned. The FSR of the cavity itself is measured via rf sidebands imprinted onto the cavity-coupled 741-nm light with a stable and calibrated rf frequency source. To correct for the nonlinearity of the scan, a calibration is performed by fitting a polynomial to the cavity spectrum. The maximum deviation throughout the scan due to a quadratic term is 3% of the linear term (the cubic term is negligible); we corrected the nonlinear effect up to quadratic order. The calibrated spectrum after 512 averages is shown in Fig. \[fig:741\_spec\]. These data are sufficient to resolve and identify all $J\rightarrow J+1$ ($F\rightarrow F+1$) 741-nm transitions for the bosonic (fermionic) isotopes. Optical pumping is a negligible effect in this spectrum since the $\sim$10 $\mu$s transit time is much shorter than the transition lifetime (see Sec. \[741gamma\]).

Hyperfine structure {#hyperfine}
-------------------

The position and ordering of $^{163}$Dy and $^{161}$Dy’s hyperfine peaks in the spectrum are given by the ground and excited state’s $A$ and $B$ coefficients [^4]. The isotope and hyperfine transition peaks are identified with guidance from the calculations in Ref. [@Flambaum2010], and a least squares fitting routine extracts the experimental values of $A$ and $B$ (see Table \[ABtable\]). Since this is a cycling $J\rightarrow J+1$ transition, the strongest observed lines for the fermions are of the $F\rightarrow F+1$ type; the much weaker [@Sobelman] $F \rightarrow F$ transitions are visible as the small, unlabeled peaks in Fig. \[fig:741\_spec\]. We note that for the excited states, the ratios $A^{163}_{e}/A^{161}_{e} = -1.3999(4)$ and $B^{163}_{e}/B^{161}_{e} = 1.0570(9)$ are consistent with the analogous ratios for the ground state; there is no noticeable hyperfine anomaly for the 741-nm transition [@Clark77].
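To make the role of the $A$ and $B$ coefficients concrete, the short Python sketch below evaluates the standard magnetic-dipole plus electric-quadrupole hyperfine shift for the excited state of the 741-nm transition in $^{163}$Dy, using the measured coefficients from Table \[ABtable\]. The nuclear spin $I=5/2$ is taken from Fig. \[fig:dy\_levels\]; the excited-state angular momentum $J=9$ is inferred here from the $J\rightarrow J+1$ character of the cycling transition and should be read as an assumption of this illustration rather than a value quoted above.

```python
import numpy as np

def hfs_shift(F, I, J, A, B):
    """Hyperfine shift of level F (same units as A and B), using the standard
    magnetic-dipole (A) plus electric-quadrupole (B) interval formula."""
    K = F * (F + 1) - I * (I + 1) - J * (J + 1)
    dipole = 0.5 * A * K
    quad = B * (0.75 * K * (K + 1) - I * (I + 1) * J * (J + 1)) / (
        2.0 * I * (2 * I - 1) * J * (2 * J - 1))
    return dipole + quad

I_nuc = 2.5                 # nuclear spin of the fermionic isotopes (Fig. 1)
J_exc = 9.0                 # excited-state J, assumed from the J -> J+1 cycling character
A_e, B_e = 142.91, 4105.0   # measured 163Dy excited-state coefficients in MHz (Table ABtable)

for F in np.arange(abs(J_exc - I_nuc), J_exc + I_nuc + 1):
    print(f"F' = {F:4.1f}: shift = {hfs_shift(F, I_nuc, J_exc, A_e, B_e):9.1f} MHz")
```

The $F\rightarrow F+1$ line positions in Fig. \[fig:741\_spec\] then follow from the differences between such excited-state shifts and the corresponding ground-state shifts computed with the coefficients of Ref. [@Childs70].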
Isotope shifts
--------------

\[isotopeshifts\]

  Isotope shifts   741 nm           457 nm[^5]
  ---------------- ---------------- ---------------
  $^{164}$Dy       0                0
  $^{163}$Dy       $-915(2)$ MHz    $660(3)$ MHz
  $^{162}$Dy       $-1214(3)$ MHz   $971(2)$ MHz
  $^{161}$Dy       $-2320(5)$ MHz   $1744(3)$ MHz
  $^{160}$Dy       $-2552(5)$ MHz   $2020(3)$ MHz

  : Isotope shifts in the 741-nm and 457-nm transitions in Dy.

The measured isotope shifts (from $^{164}$Dy to $^{160}$Dy) for the 741-nm transition are listed in Table \[isotopeshifts\], together with the isotope shifts for the 457-nm line. The 457-nm line has a pure electronic configuration which makes it useful as a reference transition for creating the King plot [@Budker] described below. The fermionic isotope shifts are derived from the center-of-gravity of the hyperfine peaks, the positions of which are extracted from hyperfine structure fits. We could not obtain the isotope shift of $^{158}$Dy and $^{156}$Dy due to their low natural abundances and the weak strength of the 741-nm transition.

We draw the King plot using the documented $4f^{10}6s^2({^5}I_8)\rightarrow 4f^{10}6s6p({^7}I_8)$ transition at 457 nm as the reference transition [@Zaal80; @Leefer:2009b]. The isotope shift includes the contribution from the mass term, which is related to the change of isotope mass, and the field shift, which is due to the finite probability of electrons being inside the nucleus. Different electron configurations lead to different field shifts: From the fit of the isotope shifts of the 741-nm transition plotted against those values for 457 nm, the ratio of electronic field-shift parameters $E_{741}/E_{457}=-1.746(9)$ is determined based on the slope of the linear fit. The minus sign indicates the very different nature of the two transitions, i.e., one is of $4f\rightarrow5d$ type while the other is $6s\rightarrow6p$. The mass term includes the normal mass shift (NMS) and specific mass shift (SMS). The NMS for the 457-nm and 741-nm transitions are $\delta\nu_{\text{nms}164-162}^{457}= 27$ MHz and $\delta\nu_{\text{nms}164-162}^{741}= 17$ MHz, respectively [@Budker]. The SMS of the 457-nm transition is 7(8) MHz [@Zaal80], which allows us to calculate the SMS for the 741-nm line: $\delta\nu_{\text{sms}164-162}^{741}= 563(17)$ MHz, based on the intercept of the King plot. Such a large SMS is known to arise from the following effects: $4f$ electrons deeply buried inside the electron core, strong electron correlations, and $4f$ electrons coupling to each other before coupling to the outer $6s$ electrons [@Cowan73]. A transition of type $6s\rightarrow 6p$ leaves the inner electrons little changed, while $4f\rightarrow 5d$ transitions lead to large changes in the inner electron correlations. The large experimental value of the 741-nm transition SMS is consistent with the typical values for $4f\rightarrow 5d$ transitions [@Jin01].

![(Color online) King plot of the isotope shifts in the 741-nm line versus isotope shifts in the 457-nm line. $\Delta N$ is the mass number difference between isotope pairs. Inset is the fit residual.[]{data-label="fig:King plot"}](Figure3.pdf){width="46.00000%"}

Lifetime measurement {#741gamma}
--------------------

A direct lifetime measurement based on the fluorescence decay observed with the crossed-beam method is not possible due to large transit time broadening relative to the natural decay time. Therefore, we resort to measuring the fluorescence scattered from relatively static atoms, i.e., from the $\sim$1 mK atoms in the MOT and in the magnetostatic trap (MT).
We measure a consistent lifetime with both methods. In the “MOT” method, we shine a retroreflected 5 mW 741-nm excitation beam of waist 3 mm onto a MOT generated on the 421-nm transition and record the decay of 741-nm scattered light after the 741-nm beam is extinguished. While the 741-nm excitation beam is on, the system establishes a steady-state population distribution among the 421-nm, 741-nm, and ground states. By switching on and off the 741-nm laser beam with a period of $250$ $\mu$s, the atoms initially shelved in the 741-nm state will decay back to the ground state via the spontaneous emission of 741-nm photons. The small $1:10^5$ branching ratio of the 421-nm transition, measured in previous work [@Lu2010; @Youn2010a], means that the Dy atom is effectively a three-level system during the 250 $\mu$s decay measurement: decay out of the three-level system from the 421-nm state to the metastable states occurs on a much longer time scale, $>$2 ms. Solutions to the optical Bloch equations [@CohenT] for such a “V” system (two excited states coupled to a ground state via resonant 421-nm and 741-nm light) verify that the decay rate observed is equal to the decay rate of the bare 741-nm state. An avalanche photodetector (APD) and collection lenses with a 741-nm narrow bandpass filter are used to detect the weak signal, which we average 10752 times to obtain the data in Fig. \[fig:lifetime\_741\](a). Note that for neither this method nor the following one does the presence of 421-nm light or magnetic field gradients affect the natural decay rate measurement of the closed 741-nm transition.

![(Color online) Measured decay of 741-nm fluorescence. (a) MOT method: Recorded 741-nm fluorescence signal from a Dy MOT by an APD averaged 10752 times. The blue line is an exponential fit to the fluorescent decay of the 741-nm level. The 741-nm excitation laser is turned off (on) at $t=0$ ($t=250$) $\mu$s. (b) MT method: Photon counting record of scattered 741-nm light from a magnetic trap, averaged 27 times, after the 741-nm excitation beam is extinguished at $t=0$. The red line is an exponential fit to the decay.[]{data-label="fig:lifetime_741"}](Figure4.pdf){width="49.00000%"}

In the “MT” method, we extinguish the 421-nm MOT light and capture the atoms in the magnetic quadrupole field of the now-extinguished MOT. We wait 5 s to allow the atoms to equilibrate in the MT [@Lu2010] before shining onto the trap a resonant retroreflected 741-nm beam of 5 mW power and waist 3 mm. A single photon counter with collection lenses and a 741-nm bandpass filter records the very weak flux of 741-nm photons from the MT after the 741-nm excitation beam is extinguished \[see Fig. \[fig:lifetime\_741\](b)\]. The long experimental run time necessary to measure a single decay limits the obtainable statistics. Single exponential fits to the data in both methods yield lifetimes that are consistent with each other (see Table \[linewidth\]). We note that the values are 4$\times$ longer than the theoretical value reported in Ref. [@Flambaum2010], and the measurement reported here may be used to refine Dy structure calculations [@DzubaPC2010]. With such a narrow linewidth (the weighted mean of the two lifetimes is 89.3(8) $\mu$s, corresponding to a linewidth of 1.78(2) kHz), narrow-line cooling on the 741-nm transition is technically challenging for red-detuned narrow-line MOTs, since the laser linewidth should be comparably narrow [@Sr04].
However, a blue-detuned narrow-line MOT, which relies on the atoms’ large magnetic dipole and has been demonstrated with Er [@Berglund:2008], does not require a laser linewidth as narrow as the addressed atomic line; a narrow-line blue-detuned MOT on the 741-nm line seems feasible.

\[linewidth\]

  MOT method               MT method              Theory[^6]
  ------------------------ ---------------------- ------------------
  $89.6(8)\,\rm{\mu s}$    $84(14)\,\rm{\mu s}$   $21\,\rm{\mu s}$

  : Lifetime of the 741-nm excited state.

421-nm transition {#421}
=================

Quantitative understanding of the population, dynamics, and cooling mechanisms [@Youn2010b] of the Dy MOT requires the accurate knowledge of the 421-nm transition’s linewidth. To ensure the use of the correct value of the 421-nm transition linewidth in laser cooling calculations, we remeasure this linewidth using the crossed-beam method described earlier, though with a 421-nm beam derived from a frequency-doubled Ti:Sapphire laser. To uniformly and stably scan the laser frequency, we employ the transfer cavity technique to lock the laser to a spectroscopic reference [@Youn2010a]. The optical transfer cavity is doubly resonant at 780 nm and 842 nm. The cavity itself is stabilized by locking a 780-nm ECDL to a hyperfine transition of the 780-nm D2 line in Rb before locking a resonance of the cavity to the stabilized ECDL. The Ti:Sapphire laser which generates the 842-nm beam is then locked to this cavity. In order to scan the Ti:Sapphire laser’s frequency while the cavity remains locked to Rb, an electro-optical modulator driven by a microwave source generates tunable GHz-frequency sidebands on the 842-nm laser beam. By locking the sideband to the cavity, the carrier frequency can be stably scanned via tuning the microwave source. The 421-nm laser beam is obtained from a resonant LBO doubler.

In the experiment, the laser frequency is scanned 400 MHz with a period of 1 s to ensure that the bandwidth of the PIN photodetector does not artificially broaden the transition. The fluorescence is collected via a pair of $2''$ achromatic doublets mounted outside an AR-coated UHV viewport. The electronic detector output is recorded on a fast digital oscilloscope and averaged 64 times.

![(Color online) The photodetector signal as a function of 421-nm laser frequency, referenced to the line peak. The legend lists the laser intensities used in each measurement. Each curve is averaged 64 times.[]{data-label="fig:scan_421"}](Figure5.png){width="45.00000%"}

The fluorescence versus frequency is shown in Fig. \[fig:scan\_421\]. The profile has the typical Voigt form due to the residual Doppler broadening of the atomic beam. Curves corresponding to different laser beam intensities possess differing linewidths due to power broadening. A global Voigt fit allows a deconvolution of the Doppler width from the transition linewidth by assuming a single Gaussian Doppler width and by accounting for the power broadening from the laser. The fitted value for the Doppler broadening is 14.8(6) MHz, which is consistent with the estimation of the residual Doppler broadening based on the geometry of the collimation tube and oven orifice. At low intensities, the power broadening is linear as a function of laser intensity. A linear fit to the extracted linewidths provides the natural linewidth at the zero-intensity limit [@Mcclelland:2006b]. The extrapolated value for the natural linewidth of the 421-nm transition is 31.9(8) MHz.
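As a rough illustration of this zero-intensity extrapolation (not of the global Voigt fit actually used above), the following Python sketch assumes that the power-broadened Lorentzian width scales as $\Gamma\sqrt{1+I/I_\text{sat}}$, generates synthetic widths for a few low saturation parameters, and recovers the natural linewidth from the intercept of a linear fit; the input value of 32 MHz is only a stand-in close to the measured result, not data from the experiment.

```python
import numpy as np

gamma_true = 32.0                          # MHz, stand-in natural linewidth (assumption)
s = np.array([0.05, 0.1, 0.2, 0.3, 0.5])   # illustrative saturation parameters I / I_sat

# Power-broadened Lorentzian widths for the assumed scaling law.
widths = gamma_true * np.sqrt(1.0 + s)

# At low intensity the broadening is approximately linear in I, so the
# intercept of a linear fit estimates the natural linewidth.
slope, intercept = np.polyfit(s, widths, 1)
print(f"extrapolated natural linewidth ~ {intercept:.1f} MHz")
```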
The uncertainty in the linewidth measurement arises from the error sources listed in Table \[errors\]. Among the errors, the largest source is the laser frequency drift from the imperfect transfer cavity lock. Unlike for Er [@Mcclelland:2006b], this measurement result is consistent with that listed in the standard tables, 33.1(17) MHz [@Martin:1978; @Lawler97].

Alternative laser cooling transitions
=====================================

Unlike the 421-nm line, the 598-nm, 626-nm, 741-nm and 1001-nm lines are closed cycling transitions; laser cooling on them would obviate the need for repumping lasers or magnetic confinement in metastable states. While the 421-nm transition has been used to form the first Dy MOT [@Lu2010] and the 741-nm transition, easily generated by a stabilized ECDL, is a good candidate for narrow-line cooling, the other three transitions might also be useful for laser cooling and trapping. Table \[Lines\_sum\] summarizes these five laser cooling transitions: $g$ is the Landé factor of the excited state; $\Gamma$ is the transition decay rate; the linewidth is $\Delta \nu = \Gamma/2\pi$; and the excited state lifetime is $\tau = 1/\Gamma$. From these values, we can calculate some quantities of importance to laser cooling and trapping [@MetcalfBook99]. The saturation intensity $I_\text{sat} \equiv \pi h c\Gamma/3\lambda^3$ is, e.g., an estimate of the required MOT laser power; the capture velocity $v_\text{cap} \equiv \Gamma\lambda/2\pi$ provides a measure of the velocity range within which atoms can be collected in a MOT; $T_\text{Doppler} = \hbar\Gamma/2k_{B}$ is the Doppler cooling temperature limit; and $T_\text{recoil} = \hbar^2k^2/mk_B$ is the temperature limit due to photon recoils.

The 1001-nm transition was considered as a candidate for narrow-line cooling because this is an intercombination transition, which typically possesses a narrow linewidth. We used the same experimental apparatus as in the 741-nm measurement, though a different ECDL, to find and measure the linewidth of the 1001-nm transition. However, we did not detect the line. Concurrently, theoretical calculations in Ref. [@Flambaum2010] predicted the exceptionally small linewidth of $53$ Hz, which explains our inability to detect the line with our current apparatus. This ultranarrow linewidth limits the transition’s utility for a MOT, but along with the 741-nm line, the 1001-nm transition may be useful for resolved sideband cooling in an optical lattice [@Katori03; @Sterr07; @Lev10]. This cooling technique may provide an alternative method [@Weiss02] to evaporative cooling for the production of degenerate Dy gases.

\[errors\]

  Source                            Uncertainty (MHz)
  --------------------------------- -------------------
  Extrapolation to zero intensity   0.13
  Drift of laser during scans[^7]   0.7
  Laser linewidth                   0.1
  Rise time of detector             0.4

  : 421-nm transition linewidth error budget.

The 626-nm transition has an intermediate linewidth of $135$ kHz and could be used as the main laser cooling and trapping transition in a MOT while the atomic beam is Zeeman-slowed via the broad 421-nm transition. The benefit of such a combination [@Yabuzaki99] lies in the lower MOT temperature, since the 626-nm transition’s Doppler limit is only 3.2 $\mu$K. A colder MOT facilitates subsequent ODT loading. However, this combination requires the use of a narrow-linewidth dye-based laser to obtain 626-nm light.
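For reference, the quantities collected in Table \[Lines\_sum\] below follow directly from the expressions above. A minimal Python sketch for the 741-nm line is given here; the atomic mass of $^{164}$Dy used for the recoil limit is an input of this illustration rather than a value taken from the table.

```python
import numpy as np

h = 6.62607015e-34     # Planck constant [J s]
hbar = h / (2 * np.pi)
c = 299792458.0        # speed of light [m/s]
kB = 1.380649e-23      # Boltzmann constant [J/K]
amu = 1.66053907e-27   # atomic mass unit [kg]

def cooling_parameters(lam, gamma, mass):
    """Return I_sat [W/m^2], v_cap [m/s], T_Doppler [K], and T_recoil [K]."""
    k = 2 * np.pi / lam
    i_sat = np.pi * h * c * gamma / (3 * lam**3)
    v_cap = gamma * lam / (2 * np.pi)
    t_doppler = hbar * gamma / (2 * kB)
    t_recoil = hbar**2 * k**2 / (mass * kB)
    return i_sat, v_cap, t_doppler, t_recoil

# 741-nm line: Gamma = 1.12e4 1/s (this work); mass of 164Dy assumed.
i_sat, v_cap, t_dop, t_rec = cooling_parameters(741e-9, 1.12e4, 164 * amu)
print(f"I_sat     = {i_sat * 100:.2f} uW/cm^2")   # W/m^2 -> uW/cm^2; ~0.57
print(f"v_cap     = {v_cap * 1e3:.2f} mm/s")      # ~1.3
print(f"T_Doppler = {t_dop * 1e9:.1f} nK")        # ~43
print(f"T_recoil  = {t_rec * 1e9:.0f} nK")        # ~213
```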
We note that the large Landé $g$ factor difference between the excited state (1.29) and ground state (1.24) suggests that intra-MOT sub-Doppler cooling [@Berglund:2007; @Youn2010b] will not be as effective on this transition.

\[Lines\_sum\]

  Line              $g$      $\Gamma$                          $\Delta\nu$        $\tau$               $I_\text{sat}$              $v_\text{cap}$      $T_\text{Doppler}$   $T_\text{recoil}$
  ----------------- -------- --------------------------------- ------------------ -------------------- --------------------------- ------------------- -------------------- -------------------
  $421\ \rm{nm}$    $1.22$   $2.00\times 10^8\ s^{-1}$ [^8]    $31.9\ \rm{MHz}$   $4.99\ \rm{ns}$      $55.8\ \rm{mW/cm^{2}}$      $13\ \rm{m/s}$      $765\,\mu\rm{K}$     $660\ \rm{nK}$
  $598\ \rm{nm}$    $1.24$   $7.7\times 10^4\ s^{-1}$ [^9]     $12\ \rm{kHz}$     $13\ \rm{\mu s}$     $7.5\ \rm{\mu W/cm^{2}}$    $7.3\ \rm{mm/s}$    $294\,\rm{nK}$       $327\ \rm{nK}$
  $626\ \rm{nm}$    $1.29$   $8.5\times 10^5\ s^{-1}$ [^10]    $135\ \rm{kHz}$    $1.2\ \rm{\mu s}$    $72\ \rm{\mu W/cm^{2}}$     $8.5\ \rm{cm/s}$    $3.2\,\rm{\mu K}$    $298\ \rm{nK}$
  $741\ \rm{nm}$    $1.23$   $1.12\times 10^4\ s^{-1}$ [^11]   $1.78\ \rm{kHz}$   $89.3\ \rm{\mu s}$   $0.57\ \rm{\mu W/cm^{2}}$   $1.3\ \rm{mm/s}$    $42.7\,\rm{nK}$      $213\ \rm{nK}$
  $1001\ \rm{nm}$   $1.32$   $3.3\times 10^2\ s^{-1}$ [^12]    $53\ \rm{Hz}$      $3\ \rm{ms}$         $6.9\ \rm{nW/cm^{2}}$       $0.05\ \rm{mm/s}$   $1.3\,\rm{nK}$       $116\ \rm{nK}$

The linewidth of the 598-nm transition has yet to be measured, but the calculated value [@Flambaum2010] indicates a linewidth of $12$ kHz. This narrow linewidth would be optimal for conventional narrow-line cooling as performed in, e.g., ultracold Sr experiments [@Sr04], but again, the line must be generated with a dye-based laser. Its excited state Landé $g$ factor (1.24) is almost the same as its ground state’s, which bodes well for effective intra-MOT sub-Doppler cooling.

Summary
=======

We measured the natural lifetime of the 741-nm line of Dy using a Dy MOT and magnetic trap; the weighted average is 89.3(8) $\mu$s. We predict that this closed cycling transition will be useful for the formation of a narrow-line MOT, which could cool Dy to the ultracold temperatures necessary for loading an optical dipole trap. The isotope shifts and hyperfine structure ($A$ and $B$ coefficients) were measured for all five high-abundance Dy isotopes, providing a spectral roadmap for the future narrow-line cooling of bosonic and fermionic Dy. In addition, we verified the linewidth of the 421-nm transition in Dy to be 31.9(8) MHz, a precautionary measure taken since the standard tables had listed the analogous transition in Er with a 21% error. Finally, we tabulated, based on up-to-date linewidth information, the laser cooling properties of other attractive transitions in Dy.

We acknowledge support from the NSF (PHY08-47469), AFOSR (FA9550-09-1-0079), and the Army Research Office MURI award W911NF0910406.
[^1]: Fermionic Dy’s magnetic moment is the largest of all elements, and bosonic Dy’s is equal only to terbium’s [@Martin:1978].

[^2]: Ref. [@Flambaum2010]

[^3]: Ref. [@Childs70]

[^4]: The $A$ and $B$ coefficients are defined in, e.g., Ref. [@MetcalfBook99].

[^5]: Ref. [@Zaal80]

[^6]: Ref. [@Flambaum2010]

[^7]: Scan nonlinearity is negligible in transfer cavity technique.

[^8]: Present work, 2.5% uncertainty

[^9]: Ref. [@Flambaum2010], theory

[^10]: Ref. [@Martin:1978], experiment 5% uncertainty

[^11]: Present work, 1% uncertainty

[^12]: Ref. [@Flambaum2010], theory
--- author: - 'J. Ďurech' - 'J. Hanuš' - 'V. Alí-Lagoa' date: 'Received ?; accepted ?' title: Asteroid models reconstructed from the Lowell Photometric Database and WISE data --- [Information about the spin state of asteroids is important for our understanding of the dynamical processes affecting them. However, spin properties of asteroids are known for only a small fraction of the whole population.]{} [To enlarge the sample of asteroids with a known rotation state and basic shape properties, we combined sparse-in-time photometry from the Lowell Observatory Database with flux measurements from NASA’s WISE satellite.]{} [We applied the light curve inversion method to the combined data. The thermal infrared data from WISE were treated as reflected light because the shapes of thermal and visual light curves are similar enough for our purposes. While sparse data cover a wide range of geometries over many years, WISE data typically cover an interval of tens of hours, which is comparable to the typical rotation period of asteroids. The search for best-fitting models was done in the framework of the Asteroids@home distributed computing project.]{} [By processing the data for almost 75,000 asteroids, we derived unique shape models for about 900 of them. Some of them were already available in the DAMIT database and served us as a consistency check of our approach. In total, we derived new models for 662 asteroids, which significantly increased the total number of asteroids for which their rotation state and shape are known. For another 789 asteroids, we were able to determine their sidereal rotation period and estimate the ecliptic latitude of the spin axis direction. We studied the distribution of spins in the asteroid population. Apart from updating the statistics for the dependence of the distribution on asteroid size, we revealed a significant discrepancy between the number of prograde and retrograde rotators for asteroids smaller than about 10km.]{} [Combining optical photometry with thermal infrared light curves is an efficient approach to obtaining new physical models of asteroids. The amount of asteroid photometry is continuously growing and joint inversion of data from different surveys could lead to thousands of new models in the near future.]{} Introduction ============ The spin state and shape are among the basic physical characteristics of asteroids. The knowledge of these characteristics can help us to understand dynamical processes, such as collisions [@Bot.ea:15b], thermal effects [@Vokrouhlicky2015], and rotational disruption [@Wal.Jac:15], for example, that have been affecting the distribution of spins and shapes in the main asteroid belt. The spin and shape properties can be reconstructed from photometric disk-integrated measurements if the target is observed at a sufficiently wide range of geometries [@Kaa.ea:02c]. The number of asteroid models reconstructed from photometry has been rapidly increasing due to the availability of a robust and fast inversion technique [@Kaasalainen2001a] and a growing archive of photometric data [@Warner2009; @Oszkiewicz2011]. The reliability of models derived from photometry was confirmed by independent methods [@Mar.ea:06; @Kel.ea:10; @Durech2011]. 
The main motivation for reconstructing more asteroid models is (apart from detailed studies of individual targets of particular interest) the possibility to reveal how the spin states and shapes are distributed in the asteroid population and which physical processes affect them [see @Sli.ea:09; @Hanus2011; @Hanus2013c; @Hanus2018a; @Kim.ea:14 for example]. We aim to improve the statistics of the distribution of spins and shapes in the asteroid population. New models can be derived not only by collecting more new observations, but also just by processing archival photometric observation of large surveys. This data-mining approach was used by @Hanus2011 [@Hanus2013a; @Hanus2016a] and @Durech2016, for example. In terms of quantity, the largest sources of photometric data are sparse-in-time measurements obtained by large sky surveys. While the inversion of sparse data is essentially the same as the inversion of dense light curves, a unique solution of the inverse problem can be found only for a small fraction of asteroids due to the high noise in the data. In anticipation of the publication of more accurate data from Gaia or LSST, we have already processed the available data, namely photometry from astrometric surveys compiled in the Lowell Observatory photometric database [@Oszkiewicz2011; @Durech2016]. As the next step, in this paper we derive hundreds of new asteroid models using the Lowell photometric database in combination with thermal infrared data observed by the Wide-field Infrared Survey Explorer [WISE, @Wright2010] and retrieved, vetted and archived in the framework of the NEOWISE survey [@Mainzer2011a]. Method {#sec:method} ====== When processing the data, we proceeded the same way as @Hanus2011. Then we applied the light curve inversion method of @Kaasalainen2001a to the data sets described below (Sec. \[sec:convex\]). The crucial task was to select only reliable solutions of the inverse problem. Input data ---------- We combined two photometric data sources: (i) sparse-in-time brightness measurements in V filter from the Lowell Observatory photometric database and (ii) thermal infrared data from the NEOWISE survey. The Lowell Observatory photometric database consists of sparse-in-time photometry from 11 large sky surveys that was re-calibrated to remove the most prominent systematic trends [@Oszkiewicz2011; @Bowell2014a]. The data are available for more than 300,000 asteroids, with the number of points per object ranging from tens to hundreds. The accuracy of photometry is around 0.15–0.2mag. Most of the measurements are from the years 2000–2012. The second source of data was the WISE catalog [@Wright2010; @Mainzer2011a]. The observations were made in four bands at 3.4, 4.6, 12, and 22${\mu\mathrm{m}}$, usually referred to as W1, W2, W3, and W4 data. We retrieved the Level 1b data from the WISE All-sky database by querying the IRSA/IPAC service for each NEOWISE detection reported to and vetted by the Minor Planet Center. We rejected all measurements potentially affected by artefact contamination as flagged by the WISE Moving Object Pipeline Subsystem [WMOPS, @Cutri2012]. Only measurements with quality flags A, B, or C, and artefact flags 0, p, or P were accepted. More details about these criteria can be found in [@AliLagoa2017] and references therein. Thermal infrared data of asteroids such as W3 and W4 are typically used to derive their thermophysical properties by means of a thermophysical model [see the review by @Delbo2015 for example]. 
Although it would be, in principle, possible to search for a unique model using the photometry and thermal data in a fully thermophysical approach – with the method of @Durech2017c, for example – this would be, in practice, extremely time consuming when dealing with a large number of objects. Instead, we used another approach that we tested in @Durech2015c, where we treated the WISE thermal fluxes as reflected light. More specifically, we took the data as relative light curves assuming that the shape of a visual light curve is not very different from a light curve at thermal wavelengths under the same observing geometry. This is true for main-belt asteroids with typical values of thermal inertia (tens to hundreds of SI units) and rotation period (several hours or longer). To further support the validity of the assumption of the similarity between the optical and thermal light curves, we generated thermal light curves for several configurations; these are compared in Fig. \[fig:reflected\_vs\_thermal\] to the optical light curve generated by a standard ray-tracing algorithm. Our observing configuration and thermal properties correspond to typical values expected for a main-belt asteroid. Without loss of generality, we selected a shape model of asteroid (15) Eunomia derived by @Nathues2005 as our reference shape model. The observing geometry was the following: the asteroid was located at a heliocentric distance of 2.5 AU at a phase angle of 20$^{\circ}$ (this corresponds to a typical WISE observation of a main-belt asteroid), the sidereal rotation period was set to seven hours and we observed the asteroid equator-on. To generate the thermal light curve, we used the implementation of [@Delbo2004] and @Delbo2007a of the thermophysical model (TPM) developed by [@Spencer1989], [@Spencer1990], @Lagerros1996 [@Lagerros1997; @Lagerros1998], and [@Emery1998]. A detailed description of the model can be found in @Hanus2015a [@Hanus2018b]. We used two values of thermal inertia as input for the TPM: 50 and 200 Jm$^{-2}$s$^{-1/2}$K$^{-1}$. Such values are typical for main-belt asteroids [@Hanus2018b]. Moreover, for each thermal inertia value, we ran the TPM with three different degrees of the macroscopic roughness model $\overline\theta$. We parametrize $\overline\theta$ by hemispherical craters with an opening angle $\gamma_\mathrm{c}$ and an areal coverage $\rho_\mathrm{c}$. Our model includes no roughness ($\gamma_\mathrm{c} = 0$, $\rho_\mathrm{c} = 0$), medium roughness (50, 0.5), and high roughness (90, 0.9). The TPM includes additional parameters that we fixed to realistic values (absolute magnitude, slope parameter, geometric visible albedo, Bond albedo). As we study only the normalized thermal light curve, the absolute size of the shape model is irrelevant. For generating the optical light curve, we used a combination of the single-scattering Lommel-Seeliger and multiple-scattering Lambert laws in the ray-tracing algorithm.

The majority of asteroid thermal infrared data from WISE was obtained in the W3 and W4 channels, where asteroid thermal emission dominates over most inertial sources.
We note that there are differences between the optical and thermal light curves; mostly the amplitudes of the thermal light curves are slightly larger than the amplitude of the optical light curve. On the other hand, the positions of minima and maxima are consistent. As a result, the shape modeling with the thermal data treated as reflected light should provide reliable rotation states, whereas the elongation of the shape models could be slightly overestimated. However, this effect seems to be negligible because when we compared shape elongations of our new models with those of models derived from only visual photometry, the difference was small and in the opposite direction (see Sect. \[sec:models\_comparison\]).

The thermal light curves of main belt asteroids in filters W1 and W2 differ more from the optical light curve than those in filters W3 and W4: the relative amplitudes are often significantly larger and the minima and maxima are shifted with respect to those of the optical light curve (see Fig. \[fig:reflected\_vs\_thermal\]). Some of the thermal light curves are not smooth; this is due to the internal numerical limitations in the surface roughness implementation in the TPM code. Fortunately, real data in filter W1 are almost always dominated by the reflected component (see Fig. \[fig:contamination\]), meaning that the small thermal contribution is not important for the overall flux. The only exceptions are dark objects in the inner main belt, but higher-albedo igneous asteroids are more numerous in this region. The situation in filter W2 is more complicated, as is illustrated in Fig. \[fig:contamination\]. The relative contributions of the thermal and reflected components to the total observed flux depend on the surface temperature distribution, which is a complicated function of the heliocentric distance, geometric visible albedo, shape, rotation state, and so on. Depending on these parameters, the thermal component in W2 can range from a few to almost one hundred percent of the total flux. For the most common cases this fraction is between 30 and 70%. Still, the thermal light curves are not too different from the optical one, so the total light curve composed from reflected and emitted parts should not differ substantially from the optical light curve. We note that most of the asteroids for which observations are available in the W2 filter also have data in W3 and W4 filters, which diminishes the role of the W2 data in the shape modeling. Moreover, the amount of data pertaining to observations in W1 and/or W2 filters represents only a few percent of the whole WISE All-sky catalog.

We also tested other values of thermal inertia, different input shape models, and different observing geometries. In all cases, we obtained a qualitative agreement with the conclusions above based on the shape model of Eunomia.

![image](rlc_W1_BP070_RP140.eps){width="32.00000%"} ![image](rlc_W1_BP100_RP140.eps){width="32.00000%"} ![image](rlc_W1_BP130_RP140.eps){width="32.00000%"}\
![image](rlc_W2_BP070_RP140.eps){width="32.00000%"} ![image](rlc_W2_BP100_RP140.eps){width="32.00000%"} ![image](rlc_W2_BP130_RP140.eps){width="32.00000%"}

Convex models {#sec:convex}
-------------

To find a physical model that fits the photometric data, we represented the shape by a convex polyhedron and used the light curve inversion method of @Kaasalainen2001a.
We assumed that there was no albedo variation over the surface – this assumption is necessary for the mathematical uniqueness of the shape solution and is generally accepted because asteroids visited by spacecraft show only small surface albedo variations. The rotation state was described by the sidereal rotation period $P$ and the ecliptic coordinates $(\lambda, \beta)$ of the spin axis direction (i.e., the pole). The search in the $P, \lambda, \beta$ parameters space was done the same way as in [@Durech2016]: we scanned the 2–100h interval of periods. For each trial period, we ran the shape optimization with ten initial pole directions. These time-consuming computations were performed using the distributed computing project Asteroids@home [@Durech2015b]. The whole interval of periods was divided into smaller intervals with roughly similar computing requirements and these tasks were distributed among volunteers participating in the project. Once they returned all the results, we combined them into the final periodogram. Subsequently, we identified the globally best-fitting solution and verified its reliability. Ellipsoids ---------- Similarly to @Durech2016, we also used an additional shape parametrization to find the correct rotation period, namely a model of a triaxial geometrically scattering ellipsoid. Given the poor photometric accuracy of the data, this simple model fits the data well enough to be efficiently used for the period search. The shape was described by only two parameters – semiaxes ratios $a/c$ and $b/c$. Because the brightness can be computed analytically in this case [@Kaasalainen2007; @Ostro1984], this approach is approximately one hundred times faster than modeling the shape as a convex polyhedron. Moreover, by setting $a > b > c$, where $c$ is the axis of rotation, the model automatically fulfills the condition that the rotation is around the shortest axis with the maximum moment of inertia. This condition cannot be easily fulfilled during the period search with convex shapes, only checked ex post, so in many cases convex models that formally fit the data should be rejected because they are unrealistically elongated along the rotation axis and such rotation would not be stable. However, the three-dimensional (3D) shape reconstruction and inertia check is done only for the best-fitting model. Therefore, in practice, ellipsoidal models are more efficient in finding the correct rotation period because convex models can formally fit the data with an incorrect period with a nonphysical shape due to their flexibility. After finding the best rotation period with the ellipsoidal model, we switched back to convex shape representation for the subsequent pole search. Tests {#sec:tests} ----- Having the periodograms for each asteroid, the critical task was to decide if the formally best solution with the lowest $\chi^2$ is indeed the correct solution, that is, whether the minimum in the $\chi^2$ is significant, or just a random fluctuation. To decide this, we performed a number of tests in almost exactly the same way as in @Durech2016 when processing only Lowell data. The only difference was that instead of having a fixed threshold of 5% for the $\chi^2$ increase $\chi^2_\mathrm{tr} = 1.05\,\chi^2_\mathrm{min}$, we computed the acceptance level for each asteroid individually according to the formula $\chi^2_\mathrm{tr} = (1 + 0.5 \sqrt{2/\nu})\,\chi^2_\mathrm{min}$. 
This is nothing more than an empirical prescription to take into account the number of measurements ($\nu$ is the number of degrees of freedom, i.e., the difference between the number of data points and the number of parameters). In our case, we have three parameters for the spin state, three parameters to fit the photometric phase function, and $(n+1)^2$ shape parameters with convex shapes or two parameters for ellipsoids. With convex shapes, $n$ is the degree and order of the spherical harmonics expansion [@Kaasalainen2001b]. The empirical formula for $\chi^2_\mathrm{tr}$ is related to the fact that the $\chi^2$ distribution with $\nu$ degrees of freedom has mean $\nu$ and variance $2\nu$, so the formal $1\sigma$ interval for a normalized $\chi^2/\nu$ is $1 \pm \sqrt{2 / \nu}$. The multiplicative factor $1/2$ is an adjustment without which the threshold would be too high and the number of unique models too low. For comparison, the 5% level used in our previous analysis now corresponds to $\nu = 200$. Here we summarize the steps that we took to select the final models. These steps are essentially the same as those of our previous analysis in @Durech2016 so we leave out the details; they are shown in a flow chart in Fig. \[fig:flow\_chart\]. 1. The period interval 2–100h was scanned independently with convex models with two shape resolutions $n = 3$ and $n = 6$ and with ellipsoids. 2. For each periodogram, we found the period with the lowest $\chi^2_\mathrm{min}$. We defined this period as unique if all other periods outside the uncertainty interval had $\chi^2$ higher than the threshold $\chi^2_\mathrm{tr}$ defined above. 3. The unique periods for two different resolutions of the convex model had to be the same within the error interval. 4. If the unique period was longer than 50h, we checked if there is no deeper minimum for periods that were longer than the original interval of 2–100h: we ran the period search again with an interval of 100–1000h. 5. If there were more than two pole solutions defined again by the $\chi^2_\mathrm{tr}$, we reported such models as partial if they had constrained $\beta$ (see Sect. \[sec:partial\_models\]). 6. If there were two possible poles, the difference in $\beta$ had to be smaller than $50^\circ$ and the difference in $\lambda$ 120–240$^\circ$ – this corresponds to the $\lambda \pm 180^\circ$ ambiguity for observations restricted to regions near the ecliptic plane [@Kaasalainen2006]. 7. Check if the rotation is along the shortest axis. 8. Visual check of the fit, residuals, and the shape. 
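The uniqueness test of steps 1–2 can be summarized in a few lines of Python; the sketch below assumes the periodogram is available as arrays of trial periods and $\chi^2$ values, and it uses $\Delta P \approx 0.5\,P^2/T$ (with $T$ the time span of the photometry) as the period uncertainty interval, which is a common rule of thumb in light curve inversion and an assumption here rather than the exact prescription of our pipeline.

```python
import numpy as np

def unique_period(periods, chi2, nu, time_span):
    """Return the best-fitting period if it passes the uniqueness criterion
    chi2_tr = (1 + 0.5*sqrt(2/nu)) * chi2_min, otherwise None.

    periods, chi2 : arrays describing the periodogram (same length)
    nu            : number of degrees of freedom of the fit
    time_span     : total time span of the photometry (same units as periods)
    """
    periods = np.asarray(periods, dtype=float)
    chi2 = np.asarray(chi2, dtype=float)

    chi2_min = chi2.min()
    chi2_tr = (1.0 + 0.5 * np.sqrt(2.0 / nu)) * chi2_min
    p_best = periods[np.argmin(chi2)]

    dp = 0.5 * p_best**2 / time_span          # assumed period uncertainty interval
    outside = np.abs(periods - p_best) > dp
    if np.all(chi2[outside] > chi2_tr):
        return p_best                         # unique solution
    return None                               # ambiguous periodogram
```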
[Fig. \[fig:flow\_chart\]: flow chart of the model selection procedure (rendered from TikZ in the original): the period searches with convex shapes ($n = 6$, $n = 3$) and with ellipsoids feed the unique-period checks; consistent periods pass the period $> 50\,$h check and the pole search; solutions with one or two poles go through the $\beta, \lambda$ check, inertia check, and visual check to become models, while solutions with more than two poles go through the $\Delta \beta$ check, inertia check, and visual check to become partial models.]

Results {#sec:results}
=======

By processing data for all $\sim 75,000$ asteroids for which we had enough observations, we ended up with 908 reconstructed unique models. Out of these, 246 were already known from inversion of other data and served as an independent check of reliability and error estimation. The efficiency is low because of poor photometric accuracy of sparse photometry, but still significantly higher than when processing Lowell sparse data alone [@Dur.ea:16].

Comparison with independent models {#sec:models_comparison}
----------------------------------

The models of 246 asteroids that were already reconstructed from other data – not necessarily fully independent because many of them were based on Lowell sparse data [@Durech2016] – and made available through the Database of Asteroid Models from Inversion Techniques [DAMIT, @Durech2010] were used for various tests. We used this subset mainly to check the frequency of false positive results. Out of 246 models, five had periods that were slightly different from published values. The two different periods corresponded to different local minima; the relative difference between periods was of the order of only $10^{-4}$, but in most cases two slightly different periods led to largely different pole directions. Periods for four other asteroids were completely different. In the remaining 237 cases, the period was determined correctly (or at least in agreement with the DAMIT value) and for such cases the difference between the poles was mainly less than $30^\circ$, with only a few cases having a pole difference up to $60^\circ$. The distribution of pole differences was similar to that presented by @Durech2016. The mean pole difference was $12^\circ$ and the median value was $9^\circ$. We also compared the semiaxis ratios $a/b$ and $b/c$ (computed from a dynamically equivalent ellipsoid) of our models and those in DAMIT. The mean value of $(a/b)_\text{DAMIT} / (a/b)_\text{our}$ was about 1.06 with a standard deviation of 0.13.
For $(b/c)_\text{DAMIT} / (b/c)_\text{our}$, the mean value was also 1.06, while the standard deviation was higher: 0.18. Therefore, on average, our models are slightly less elongated than their counterparts in DAMIT. This test showed us that, with the current setup, the majority of models we derive are “correct” in the sense that they agree with models based on different data sets often containing mainly dense light curves. The number of false positive solutions is a few percent. We expect that the number of incorrect period/pole solutions among the new models in the following section is about the same, that is, a few percent. ![image](beta_vs_size.eps){width="\textwidth"} New models ---------- In total, we derived new models for 662 asteroids (169 using convex shape period search, 513 using ellipsoids, and 20 overlapping). These models and their parameters are listed in Table \[tab:models\]. All models are available in DAMIT, from where not only the shape and spin can be downloaded but also all data points that were used for the inversion. We compared the derived sidereal rotation periods $P$ with those reported in the Asteroid Lightcurve Database (LCDB) of @Warner2009; version from November 12, 2017. In most cases, they agreed, which can be taken as another independent verification of the reliability of the model. In some cases however, the periods were different and we checked again if this was likely due to an erroneous model or an incorrect period in the LCDB. In only one case was it clear that our model was wrong – we rejected asteroid (227) Philosophia from the list of our results because our period was not consistent with dense light curves [@Mar.ea:18]. For other cases that were not consistent with LCDB, we checked the LCDB entries and sometimes concluded that the LCDB period is likely wrong because it was not supported by enough quality light curves (usually the uncertainty code was $< 2$). In other cases, both the LCDB entry and our model looked right and we were not able to decide if our model or the LCDB entry was wrong – these cases are marked with an asterisk in Table \[tab:models\]. Partial models {#sec:partial_models} -------------- Apart from the models described above, we also derived 789 so-called partial models [@Hanus2011]. These are asteroids for which the rotation period was determined uniquely but for which there were more than two possible pole solutions satisfying the $\chi^2_\mathrm{tr}$ criterion. Although we do not take such results as unique solutions of the inverse problem, they still carry important information about the rotation state. In these cases, there are more than two pole solutions that fit the data equally well, but are usually not distributed randomly. On the contrary, their $\beta$ values are often limited to one hemisphere, clearly distinguishing between prograde and retrograde rotation. This is a valuable constraint that can be used in the analysis of the spin axes distribution in the following section. These partial models are listed in Table \[tab:models\_partial\]. Because the pole direction is not known, we list the mean pole latitude $\beta$ of all acceptable solutions and their dispersion $\Delta$ defined as $|\beta_\mathrm{max} - \beta_\mathrm{min}| / 2$. We list only such asteroids for which the $\beta$ values were limited within $50^\circ$, so $\Delta \leq 25^\circ$. We also individually checked those asteroids for which our period was different from that in LCDB. 
For some of them, we concluded that the LCDB period was not reliable; for others the LCDB period seemed reliable but so did our model, so we marked such inconsistent models with an asterisk in Table \[tab:models\_partial\]. We rejected asteroid (6199) Yoshiokayayoi because it was in strong disagreement with reliable LCDB data and was likely a false positive solution in our sample. Spin distribution ================= The new models we derived significantly increased the total number of asteroids for which the spin orientation is known. Although there are also other sources of asteroid models and spin parameters [the radar models, see, e.g., @Benner2015], we limited our analysis to models from DAMIT and the new models we derived here. DAMIT contains models for 943 asteroids (as of January 2018), so the total number of available models is now $\sim 1600$. There are other 789 partial models, which means that the total number of asteroids for which we have at least some information about the spin axis direction is $\sim 2400$. In what follows, we concentrate on the analysis of how the spin ecliptic latitude $\beta$ is distributed in the population. Other physical parameters like the shape or the rotation period are likely to be biased by the selection effects – elongated asteroids are more likely to be successfully modeled than spheroidal asteroids because they have larger light curve amplitudes and their signal is not lost in the noise [@Mar.ea:18]. To draw any reliable conclusions about the distribution of the shapes or periods, we would need to carefully de-bias our sample, which is outside the scope of this paper. On the other hand, if there was any bias affecting $\beta$, it should be symmetric with respect to $\pm \beta$ and not dependent on the size, so we can readily examine and interpret the latitude distribution. The $\beta$ values are related to the ecliptic plane. However, a more “physical” value is the pole obliquity $\gamma$, defined as the angle between the spin axis and the direction perpendicular to the orbital plane. The conversion between these two parameters is trivial for zero orbital inclination because in this case $\gamma = 90^\circ - \beta$ and the prograde/retrograde rotation exactly corresponds to the sign of $\beta$. For nonzero inclination, the conversion depends also on the ecliptic longitude $\lambda$ of the pole and on the orbital elements $I$ (inclination) and $\Omega$ (longitude of the ascending node). Because $\lambda$ is not known for partial models, we assume the simple zero inclination conversion. For full models, there are often two possible pole solutions with, in general, different $\beta$ and also $\gamma$ values. Averaging $\gamma$ or $\beta$ values of two models would lead to smearing of the extreme values, so for the following plots we randomly selected one of them with the corresponding $\beta$. For partial models, the value of $\beta$ in the plots is taken as an arithmetic mean of the values for all acceptable poles. The orbital elements were taken from the AstOrb[^1] database, the diameters were mainly from the NEOWISE database [@Mainzer2011a] with some values taken also from Akari [@Usui2011] and IRAS [@Tedesco2004] catalogs. ![image](beta_in_MB.eps){width="\textwidth"} Pole latitude versus size ------------------------- The distribution of the pole latitude $\beta$ for main-belt asteroids as a function of asteroid size is shown in Fig. \[fig:beta\_vs\_size\]. 
The distribution is strongly bimodal for asteroids smaller than 20–30km, which can be satisfactorily explained as an effect of a YORP-driven evolution [@Hanus2011; @Hanus2013a]. Due to the YORP effect, small asteroids evolve towards the extreme values of obliquity. In the lower panel of Fig. \[fig:beta\_vs\_size\], we show the fraction $N_\mathrm{p} / N_\mathrm{total}$ of the number of prograde $(\beta > 0)$ rotators in a running box over $N_\mathrm{total} = 100$ asteroids as a function of size. For asteroids larger than 100km, the number of prograde rotators is statistically higher ($N_\mathrm{p} = 68$, $N_\mathrm{total} = 108$, probability of the null hypothesis that $N_\mathrm{p} = N_\mathrm{total} / 2$ is $p \simeq 0.7\%$ assuming binomial distribution) in accordance with the model of [@Johansen2010] who suggested that the preferentially prograde rotation of large asteroids is a result of accretion of pebbles on planetesimals in a gaseous environment. On the other hand, for asteroids in the size range 1–10km, there is an excess of retrograde rotators ($N_\mathrm{p} = 520$, $N_\mathrm{total} = 1276$, $p \simeq 2 \times 10^{-11}$). For asteroids between 10 and 100km, the number of prograde and retrograde rotators is statistically the same ($N_\mathrm{p} = 441$, $N_\mathrm{total} = 919$, $p \simeq 22\%$). Because most asteroids smaller than about 30km have large absolute values of $\beta$, the prograde/retrograde analysis is not sensitive to asteroids with $|\beta| < 30^\circ$. The ratio is almost the same even if we restrict ourselves to $|\beta| > 30^\circ$ where the distinction between prograde and retrograde rotation is unambiguous even for nonzero inclination. This is not true for asteroids larger than 100km, where for $|\beta| > 30^\circ$ we have $N_\mathrm{p} = 34$, $N_\mathrm{total} = 55$, and $p \simeq 8\%$. The excess of small retrograde rotators in Fig. \[fig:beta\_vs\_size\] is statistically significant, however, it is not clear if the reconstructed distribution of $\beta$ is the same as the real one. Although we are not aware of any bias in the observations or the method that could cause the asymmetry in $\pm \beta$, there could still be some nontrivial systematic effect that we have not taken into account. Pole latitude distribution across the main belt ----------------------------------------------- The distribution of pole latitude $\beta$ is not only dependent on the size, but also on the proper semimajor axis, namely on the proximity to resonances. For asteroids in a collisional family, $\beta$ depends on the relative position with respect to the center of the family. The color-coded distribution of $\beta$ across the main belt is shown in Fig. \[fig:beta\_in\_MB\]. Similarly to how the YORP effect is responsible for clustering of poles around extreme values of obliquity for small asteroids, fingerprints of the Yarkovsky effect are clearly visible in some asteroid families (e.g., Eunomia, Koronis, Eos, and Themis) where retrograde family members are concentrated to the left (smaller semimajor axis $a$) of the family center while prograde are concentrated to the right (larger $a$). This is shown as a color dichotomy in Fig. \[fig:beta\_in\_MB\]b and is in agreement with the theoretical prediction that prograde rotators migrate to a higher semimajor axis, the opposite to retrograde rotators [@Vokrouhlicky2015; @Hanus2013c; @Hanus2018a]. 
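As an aside, the following is a minimal Python sketch (not the authors' code) of three computations that appear in the analysis above: the angular difference between two pole directions used in Sect. \[sec:models\_comparison\], the conversion from an ecliptic pole $(\lambda, \beta)$ and orbital elements $(I, \Omega)$ to the obliquity $\gamma$ (assuming the standard orbit-normal convention), and the binomial test of the null hypothesis $N_\mathrm{p} = N_\mathrm{total}/2$. The numerical inputs in the example calls are either hypothetical or taken from the numbers quoted above; the `binomtest` call assumes a recent SciPy version.

```python
import numpy as np
from scipy.stats import binomtest

def unit_vector(lon_deg, lat_deg):
    """Unit vector from ecliptic longitude/latitude given in degrees."""
    lon, lat = np.radians([lon_deg, lat_deg])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def pole_difference(lam1, beta1, lam2, beta2):
    """Angular distance (deg) between two spin-pole directions."""
    c = np.clip(unit_vector(lam1, beta1) @ unit_vector(lam2, beta2), -1.0, 1.0)
    return np.degrees(np.arccos(c))

def obliquity(lam, beta, incl, node):
    """Obliquity gamma (deg): angle between the spin axis (lam, beta) and the
    normal of an orbit with inclination `incl` and ascending node `node`."""
    i, om = np.radians([incl, node])
    orbit_normal = np.array([np.sin(i) * np.sin(om),
                             -np.sin(i) * np.cos(om),
                             np.cos(i)])
    c = np.clip(unit_vector(lam, beta) @ orbit_normal, -1.0, 1.0)
    return np.degrees(np.arccos(c))

print(pole_difference(120.0, 45.0, 135.0, 40.0))  # two hypothetical poles, ~12 deg apart
print(obliquity(100.0, 60.0, 0.0, 0.0))           # zero inclination: gamma = 90 - beta = 30
# Prograde excess among asteroids larger than 100 km (N_p = 68 of N_total = 108):
print(binomtest(68, 108, 0.5, alternative="two-sided").pvalue)
```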
The excess of retrograde rotators in the right “wing” of the Flora family might be caused by contamination with Baptistina family members [@M-D.ea:05; @Bot.ea:07]. However, any detailed check of family membership or a deeper study of the distribution of spins in families are outside the scope of this paper. Another correlation that we can see in Fig. \[fig:beta\_in\_MB\] is the one between the sense of rotation and the location with respect to the mean-motion and secular resonances. As shown by @Hanus2011, an area to the left of a resonance contains more prograde rotators because they move towards the resonance and become scattered with only a small probability of crossing the resonance. For the same reason, there are more retrograde rotators to the right of the resonance. This separation due to resonances can be seen in Fig. \[fig:beta\_in\_MB\]c. In Fig. \[fig:beta\_in\_MB\]d, we plot the running mean of $\beta$ over 20 asteroids as a function of $a$. We can see a general behavior that to the left of a resonance the mean $\beta$ is high, meaning more prograde rotators. To the right of a resonance it drops to negative values meaning retrograde rotation. At the inner end of the main belt, the $\nu_6$ resonance cuts the belt and this area contains mainly retrograde rotators. At the opposite end, the 2:1 mean-motion resonance defines the edge and this area is populated mainly by prograde rotators. Finally, there are also other features in Fig. \[fig:beta\_in\_MB\] that seem to be significant but for which we have no simple explanation. Namely, these are the excess of prograde rotators at $\sim 2.24\,$AU and the excess of retrograde rotators at 3.10AU. The former might be related to the proximity to the inner edge of the main belt and the $\nu_6$ resonance. The latter might be directly related to the 9:4 resonance, which filters out prograde rotators to the right of the resonance. All prograde rotators between 9:4 resonance and 3.1AU might be Eos family members, some of them not identified as belonging to the family. Conclusions {#sec:conclusion} =========== The combination of optical photometry with thermal data turned out to be an efficient way to enlarge the sample of asteroids with shape models and spin parameters. Although the success rate of deriving a unique physical model from Lowell and WISE data is low, and the derived models are probably a very biased sample of the whole population (there is a strong bias in favor of elongated asteroids – their light curve amplitude is larger and the signal is less likely to be lost in the noise than for spherical objects), the asymmetry and anisotropy of the pole latitude $\beta$ corresponding roughly to the difference between prograde and retrograde rotation seems to be significant. The potential of this kind of data mining is really huge, because apart from the continuously growing number of asteroid light curves, data from other surveys like ATLAS, PTF, Gaia, or LSST are or will become available. With the models we derive here, the next step could be the derivation of thermophysical parameters in the same way as by @Hanus2018b. We also plan to investigate in more detail the rotation states in collisional families. We would not be able to process data for hundreds of thousands of asteroids without the help of the tens of thousands of volunteers who joined the Asteroids@home BOINC project and provided their computing resources. We greatly appreciate their contribution. 
The work of JĎ and JH was supported by the grants 15-04816S and 18-04514J of the Czech Science Foundation. VAL has received funding from the European Union’s Horizon 2020 Research and Innovation Programme, under Grant Agreement no. 687378. This publication also makes use of data products from NEOWISE, which is a project of the Jet Propulsion Laboratory/California Institute of Technology, funded by the Planetary Science Division of the National Aeronautics and Space Administration. The list of new models ====================== [^1]: <ftp://ftp.lowell.edu/pub/elgb/astorb.html>
---
abstract: 'Using data from world stock exchange indices prior to and during periods of global financial crises, clusters and networks of indices are built for different thresholds and diverse periods of time, so that it is then possible to analyze how clusters are formed according to correlations among indices and how they evolve in time, particularly during times of financial crises. Further analysis is made on the eigenvectors corresponding to the second highest eigenvalues of the correlation matrices, revealing a structure peculiar to markets that operate in different time zones.'
author:
- |
    Leonidas Sandoval Junior\
    \
    Insper, Instituto de Ensino e Pesquisa
title: Cluster formation and evolution in networks of financial market indices
---

Introduction
============

This work uses a distance measure based on Spearman's rank correlation in order to build three dimensional networks based on hierarchies. This approach makes it possible to show how clusters form at different levels of correlation, and how the formation of those networks evolves in time. The data used are the time series of international stock exchange indices taken from many countries in the world, so as to represent both developed and developing economies in a variety of continents. The series were taken both prior to and during some of the severest financial crises of the past decades, namely the 1987 Black Monday, the 1997 Asian Financial Crisis, the 1998 Russian Crisis, the burst of the dot-com bubble in 2001, and the crisis after September 11, 2001, so that different regimes of volatility are represented. The idea is to be able to study how clusters behave when going from periods of low volatility to more turbulent times. The Subprime Mortgage crisis of 2008 is analyzed in a companion article, along with the years ranging from 2007 to 2010.

The data consist of some of the benchmark indices of a diversity of stock exchanges around the world. The number goes from 16 indices (1986) to 79 indices (2001). The choice of indices was mainly based on availability of data, since some of the stock exchanges being studied didn't develop or had no recorded indices until quite recently. As there are differences in the days on which certain stock markets operate, some of the operation days had to be deleted and others duplicated. The rule was the following: when more than 30% of the markets didn't operate on a certain day, that day was deleted. When that number was below 30%, we repeated the previous day's value of the index for the markets that didn't open. Special care had to be given to markets whose weekends didn't correspond to the usual Western weekend days, as in some Arab countries. For those markets, we adjusted the weekends in order to match those of the majority of markets. All that was done in order to ensure that we minimized the number of days with missing data, so that we could measure the log-returns of two consecutive days of a market's operation whenever that was possible.

The correlation matrix of the time series of financial data encodes a large amount of information, and an even greater amount of noise. That information and noise must be filtered if one is to try to understand how the elements (in our case, indices) relate to each other and how that relation evolves in time.
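Before turning to the filtering itself, the day-alignment rule described above can be sketched as follows. This is an illustration only, not the original procedure's code; `prices` is a hypothetical pandas DataFrame indexed by date, with one column per index and missing values on days a market did not operate (the weekend realignment is not handled here).

```python
import pandas as pd

def align_trading_days(prices: pd.DataFrame, max_missing: float = 0.30) -> pd.DataFrame:
    """Delete days on which more than `max_missing` of the markets did not
    operate; on the remaining days, repeat the previous day's value for the
    markets that did not open."""
    missing_fraction = prices.isna().mean(axis=1)
    kept = prices.loc[missing_fraction <= max_missing]
    return kept.ffill()

# Usage: cleaned = align_trading_days(prices)
```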
One of the most common filtering procedures is to represent those relations using a [*Minimum Spanning Tree*]{} [@mst01]-[@mst15], which is a graph containing all indices, each connected by at least one edge, such that the sum of the edges is minimum and no loops are formed. Another type of representation is that of [*Maximally Planar Filtered Graphs*]{} [@pmfg01]-[@pmfg06], which admit loops but must be representable in two dimensions without crossings. Yet another type of representation is obtained by establishing a number which defines how many connections (edges) are to be represented in a graph of the correlations between nodes. There is no limitation with respect to crossing of edges or to the formation of loops, and if the number is high enough, then one has a graph where all nodes are connected to one another. These are usually called [*Asset Trees*]{} or [*Asset Graphs*]{} [@asset01]-[@asset08], even though they are not trees in the network sense. Another way to build asset graphs is to establish a value (threshold) such that distances above it are not considered. This eliminates connections (edges) as well as indices (nodes), but it also makes the diagrams more understandable by filtering both information and noise. Some previous works using graphic representations of correlations between international assets (indices or otherwise) can be found in [@mst05], [@pmfg05], and [@asset07].

In the present work, we shall build asset trees of indices based on threshold values for a distance measure between them that is based on the Spearman rank correlation among indices. By establishing increasing values for the thresholds, we shall be able to follow the formation of clusters among indices and how they evolve in time. Details of the way that is done are given in Section 2. A second topic is pursued in Section 3, where an analysis is made of the eigenvector corresponding to the second highest eigenvalue of the correlation matrix of each group of data being studied. It is well-known ([@leocorr] and references therein) that the eigenvector corresponding to the highest eigenvalue of a correlation matrix of financial assets corresponds to a “market mode” that resembles the common movement of the market as a whole, and that often the eigenvectors corresponding to the second and sometimes the third highest eigenvalues also carry some information on the internal structure of the market being studied. What is implied in Section 3 is that this internal structure reflects the fact that stock markets operate in different time zones, which is a characteristic unique to a global market.

Depiction and evolution of clusters
===================================

In this section, we use data from some key periods of time for financial markets in order to build clusters based on hierarchy. The procedure is to consider the daily log-return for each index, given by
$$\label{logreturn} R_t=\ln P_t-\ln P_{t-1}\ ,$$
where $P_t$ is the value of the index on day $t$ and $P_{t-1}$ is the value of the same index on day $t-1$. The log-returns are then used in order to create a correlation matrix $C$ based on Spearman's rank correlation, and then a distance is defined as
$$\label{distance} d_{ij}=1-c_{ij}\ ,$$
where $d_{ij}$ is the distance between indices $i$ and $j$ and $c_{ij}$ is the correlation between both indices.
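As a concrete sketch of this construction (an illustration, not the original code), the log-returns, the Spearman correlation matrix, the distances $d_{ij} = 1 - c_{ij}$, and the edges of an asset graph for a given threshold $T$ could be computed as follows; `prices` is a hypothetical array of index values with one column per market.

```python
import numpy as np
from scipy.stats import spearmanr

def asset_graph(prices, threshold):
    """Log-returns, Spearman correlation, distances d = 1 - c, and the list of
    edges (i, j, d_ij) with d_ij below the chosen threshold.
    `prices` is an (n_days, n_indices) array of index values."""
    returns = np.diff(np.log(prices), axis=0)   # R_t = ln P_t - ln P_{t-1}
    corr, _ = spearmanr(returns)                # Spearman rank correlation matrix
    dist = 1.0 - corr                           # d_ij = 1 - c_ij
    n = dist.shape[0]
    edges = [(i, j, dist[i, j])
             for i in range(n) for j in range(i + 1, n)
             if dist[i, j] < threshold]
    return dist, edges

# The eigenvector structure mentioned for Section 3 (market mode and the
# second highest eigenvalue) can be inspected with np.linalg.eigh(corr).
```

Increasing `threshold` step by step reproduces the growth of the clusters discussed in the remainder of this section.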
Spearman rank correlation is used instead of the usual Pearson correlation, for it better captures nonlinear relations between indices, and the distance measure was chosen as a linear realization of the correlation. The distance measure satisfies all conditions for a Euclidean measure. In order to represent the true distances between indices in a graph as faithfully as possible, we used three-dimensional maps based on principal component analysis, which minimizes the differences between the true distances and the distances represented in the graph. Those three-dimensional graphs were then used to represent networks based on threshold values of the distances between nodes (indices).

By using simulations (1000 for each period) with randomized data, where the log-returns of each index were randomly shifted so as to eliminate any temporal correlation between them while maintaining their frequency distributions, we established thresholds above which random noise starts to interfere severely with the connections among indices. Those thresholds are discussed for each of the semesters depicted in this article. A compromise had to be established between considering a good amount of data, so as to minimize noise in the measurements of correlation, and considering a small enough time interval, so as to capture the relations between indices at a certain instant of time. This compromise was achieved by choosing intervals of one semester, mainly because the crises being considered tended to happen in the second half of each year.

1987 - Black Monday
-------------------

The first asset graphs to be built are based on data for the years 1986 and 1987, the year that preceded and the one that witnessed the crisis known as Black Monday, whose peak occurred on Monday, October 19, 1987. Not yet completely explained, the crisis made stock markets worldwide drop up to 45% in less than two weeks, and it represented the worst financial crisis since 1929. The networks for the two semesters of 1986 were built using the indices of 16 markets: S&P 500 from the New York Stock Exchange (S&P) and Nasdaq (Nasd), both from the USA, S&P TSX from Canada (Cana), Ibovespa from Brazil (Braz), FTSE 100 from the United Kingdom (UK), DAX from Germany (Germ), West Germany at the time, ATX from Austria (Autr), AEX from the Netherlands (Neth), SENSEX from India (Indi), Colombo All Share from Sri Lanka (SrLa), Nikkei 225 from Japan (Japa), Hang Seng from Hong Kong (HoKo), TAIEX from Taiwan (Taiw), Kospi from South Korea (SoKo), Kuala Lumpur Composite from Malaysia (Mala), and JCI from Indonesia (Indo). The number of countries is small, mainly due to lack of data, but it offers a relatively wide variety of nations and cultures across three continents. The networks for the two semesters of 1987 were built using 23 indices, adding to those of 1986 the indices ISEQ from Ireland (Irel), OMX from Sweden (Swed), OMX Helsinki from Finland (Finl), IBEX 35 from Spain (Spai), ASE General Index from Greece (Gree), and PSEi from the Philippines (Phil).

### First semester, 1986

Figure 1 shows the three dimensional view of the asset trees for the first semester of 1986, with threshold ranging from $T=0.3$ to $T=0.8$. Since we are using only the indices that have connections below $T=0.8$, not all indices are displayed in the following graphs.
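The noise thresholds mentioned above can be estimated from the randomized simulations; the following is a minimal sketch of one plausible implementation (a circular shift of each log-return series, which is one way to realize the random shifting described above; this is an illustration, not the original code).

```python
import numpy as np
from scipy.stats import spearmanr

def randomized_minimum_distances(returns, n_sim=1000, seed=0):
    """For each simulation, roll every index's log-return series by an
    independent random offset (removing temporal correlations while keeping
    each series' distribution) and record the smallest distance d = 1 - c."""
    rng = np.random.default_rng(seed)
    n_days, n_idx = returns.shape
    smallest = []
    for _ in range(n_sim):
        shifted = np.column_stack(
            [np.roll(returns[:, k], rng.integers(n_days)) for k in range(n_idx)])
        corr, _ = spearmanr(shifted)
        smallest.append((1.0 - corr)[np.triu_indices(n_idx, k=1)].min())
    return np.array(smallest)

# Distances well below the randomized minima are unlikely to be produced by
# noise alone; thresholds in or above that range start to admit spurious edges.
```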
[Fig. 1. Three dimensional view of the asset trees for the first semester of 1986, with threshold ranging from $T=0.3$ to $T=0.8$ (panels for $T=0.3$, $T=0.5$, $T=0.7$, and $T=0.8$).]

At $T=0.3$, the first cluster is formed, with the connection between S&P and Nasdaq, both indices from the USA. For $T=0.5$, Canada is added to the North American cluster, and a new cluster is formed when Germany and the Netherlands connect with each other. For $T=0.6$, a connection is formed between S&P and Canada, strengthening the North American cluster. At $T=0.7$, random noise starts to interfere with the results, but the connection formed between the UK and Nasdaq seems genuine. For $T=0.8$, the two clusters merge, and Austria, Brazil, Japan, and Hong Kong join the new single cluster.
Noise overwhelms the results for $T>0.9$, and although many more connections are made for higher thresholds, they cannot be trusted. All the remaining indices connect with the resulting cluster between $T=0.9$ and $T=1.0$, and the last connections occur for $T=1.3$.

### Second semester, 1986

Figure 2 shows the three dimensional view of the asset trees for the second semester of 1986, with threshold ranging from $T=0.3$ to $T=0.8$.

[Fig. 2. Three dimensional view of the asset trees for the second semester of 1986, with threshold ranging from $T=0.3$ to $T=0.8$ (panels for $T=0.3$, $T=0.5$, $T=0.6$, and $T=0.8$).]

The first connection is formed at $T=0.3$, between S&P and Nasdaq. This cluster grows only at $T=0.5$, with the addition of Canada. At $T=0.6$, the UK joins the North American cluster, which also connects with the Netherlands and, through it, with Germany. For $T=0.7$, the Netherlands establishes itself as a major hub, and Japan joins, via Germany, at $T=0.8$.
Random noise starts to have more effect at $T=0.7$, and by $T=0.9$ it becomes strong, so the many connections that are formed above $T=0.9$ cannot be trusted. The last connections occur at $T=1.3$.

### First semester, 1987

Figure 3 shows the three dimensional view of the asset trees for the first semester of 1987, with threshold ranging from $T=0.3$ to $T=0.8$. The first connection, again between S&P and Nasdaq, forms at $T=0.3$. At $T=0.4$, Canada joins the emerging North American cluster. For $T=0.5$, a new cluster is formed, between Germany and the Netherlands. At $T=0.6$, a connection is formed between Canada and S&P, and for $T=0.7$, the UK connects with the North American cluster through Nasdaq. For $T=0.8$, random noise starts having strong effects. We have connections of the UK with Ireland and Hong Kong, and more connections among the members of the emerging European cluster, now comprising Germany, the Netherlands, Sweden, Finland, and Spain. South Korea connects with Ireland, and Sri Lanka connects with Malaysia. These last connections are probably due to random noise.

[Fig. 3. Three dimensional view of the asset trees for the first semester of 1987, with threshold ranging from $T=0.3$ to $T=0.8$ (panels for $T=0.3$, $T=0.4$, $T=0.6$, and $T=0.8$).]

### Second semester, 1987

Figure 4 shows the three dimensional view of the asset trees for the second semester of 1987, with threshold ranging from $T=0.4$ to $T=0.7$. At $T=0.3$, we have a cluster formed by the North American indices, S&P, Nasdaq, and Canada. For $T=0.4$, the North American cluster becomes fully connected, and a new cluster, composed of Germany and the Netherlands, is formed. At $T=0.5$, many connections are established: the UK and the Netherlands connect with the North American cluster, the UK with Ireland, and Sweden with other European indices. At $T=0.6$, Germany connects fully with the North American cluster, and Austria and Spain join the European cluster, while Japan connects with Sweden. Random noise becomes much more influential for $T=0.7$. We now have connections of India, Hong Kong, and Malaysia with the North American and European clusters. More connections are formed for higher thresholds, until all indices are connected for $T=1.3$. Note that the clusters shrank in size in the second semester of 1987, a consequence of the increase in correlation among the indices.

[Fig. 4. Three dimensional view of the asset trees for the second semester of 1987, with threshold ranging from $T=0.4$ to $T=0.7$ (panels for $T=0.4$, $T=0.5$, $T=0.6$, and $T=0.7$).]

1997 and 1998 - Asian Financial Crisis and Russian Crisis
---------------------------------------------------------

We now jump some years ahead to the next two crises we are going to analyze. The first one is the Asian Financial Crisis of 1997, which began with the devaluation of the Thai currency and spread to other Pacific Asian markets. The second was almost a consequence of the first one, as prices of commodities fell worldwide, affecting the Russian economy with particular acuteness. The networks for 1997 are built using 57 indices, adding the indices IPC from Mexico (Mexi), BCT Corp from Costa Rica (CoRi), Bermuda SX Index from Bermuda (Bermu), Jamaica SX from Jamaica (Jama), Merval from Argentina (Arge), IPSA from Chile (Chil), IBC from Venezuela (Vene), IGBVL from Peru (Peru), CAC 40 from France (Fran), SMI from Switzerland (Swit), BEL 20 from Belgium (Belg), OMX Copenhagen 20 from Denmark (Denm), OBX from Norway (Norw), OMX Iceland from Iceland (Icel), PSI 20 from Portugal (Port), PX (or PX50) from the Czech Republic (CzRe), SAX from Slovakia (Slok), Budapest SX from Hungary (Hung), WIG from Poland (Pola), OMXT from Estonia (Esto), ISE National 100 from Turkey (Turk), Tel Aviv 25 from Israel (Isra), BLOM from Lebanon (Leba), TASI from Saudi Arabia (SaAr), MSM 30 from Oman (Ohma), Karachi 100 from Pakistan (Paki), Shanghai SE Composite from China (Chin), SET from Thailand (Thai), S&P/ASX 200 from Australia (Aust), CFG25 from Morocco (Moro), Ghana All Share from Ghana (Ghan), NSE 20 from Kenya (Keny), FTSE/JSE Africa All Share from South Africa (SoAf), and SEMDEX from Mauritius (Maur). For 1997, we have no data for Russia, which is added as the index MICEX from Russia (Russ) to the data used for building the networks for 1998 (which then have 58 indices).

### First semester, 1997

Figure 5 shows the three dimensional view of the asset trees for the first semester of 1997, with threshold ranging from $T=0.4$ to $T=0.7$. At threshold $T=0.3$, the only connection is between S&P and Nasdaq. For $T=0.4$, Canada joins the North American cluster, and a European cluster appears, formed by France, Germany, Switzerland, Belgium, the Netherlands, Sweden, Finland, and Norway.
At $T=0.5$, nothing changes in the North American cluster, but the European cluster becomes denser, with more connections formed among its indices. The UK, Ireland, Austria, Denmark, Spain, and Portugal join that cluster. Something to notice is that strong ties are formed among the Scandinavian indices, and among the Central European ones. For $T=0.6$, Brazil and Argentina connect with the North American cluster, which thus becomes an American cluster. There is also further consolidation in the European cluster. Strange connections are formed, like the two connections of Peru, with Denmark and Portugal. Less strange are the connections of Australia with Germany, and of Hong Kong with Austria. Noise starts to become important at this threshold, so one should be careful not to consider all connections now as true information. At $T=0.7$, noise becomes a strong effect, but we analyze this case anyway, because it also carries some important information. We now have connections between the American and the European clusters, formed via Canada and Chile. Mexico connects with Argentina, and thus to the American cluster. Poland joins the European cluster, and Hong Kong, Australia, and South Africa establish more connections with Europe (South Africa, for the first time). There is also a small, separate cluster, formed by Malaysia and Indonesia. Peru connects with Norway, and Iceland forms a cluster with Mauritius, the latter almost certainly an effect of random noise. More connections are formed for higher thresholds, but many of them are the effect of random connections, and they are indistinguishable from connections containing true information.

[Fig. 5. Three dimensional view of the asset trees for the first semester of 1997, with threshold ranging from $T=0.4$ to $T=0.7$ (panels for $T=0.4$, $T=0.5$, $T=0.6$, and $T=0.7$).]

### Second semester, 1997

Figure 6 shows the three dimensional view of the asset trees for the second semester of 1997, with threshold ranging from $T=0.3$ to $T=0.6$.
[Figure 6: three dimensional plot data omitted (panels at thresholds $T=0.3$, $T=0.4$, $T=0.5$, and $T=0.6$); nodes labelled by market.]
Fig. 6. Three dimensional view of the asset trees for the second semester of 1997, with threshold ranging from $T=0.3$ to $T=0.6$.

At $T=0.2$, two seeds of clusters appear: one comprised of S&P and Nasdaq, and the other made of Switzerland and the Netherlands. The European cluster grows at $T=0.3$, with the addition of the UK, France, Germany, Austria, Belgium, Sweden, Finland, and Spain. At $T=0.4$, an American cluster is formed, with Canada, Mexico, Brazil, and Argentina joining S&P and Nasdaq. The European cluster is joined by Ireland and Denmark, and becomes denser. At $T=0.5$, connections are established between the two existing clusters, and the European cluster is joined by Chile, which does not connect with the American cluster. Norway and Portugal also join the European cluster. At $T=0.6$, there is a massive number of connections between the American and the European clusters. Chile connects further with Europe and also with America, Peru also connects with both clusters, and Hungary and South Africa become fully integrated with the European cluster, with Hungary also connecting with the American cluster. Poland, Israel, Hong Kong, and Australia also make connections with the European cluster. Japan connects with Australia, and Hong Kong connects with Malaysia. For increasing values of the threshold, many more connections are made, but random noise becomes so strong that we cannot separate meaningful connections from random ones.
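The clusters described above are simply the connected components of the thresholded graph. The following is a minimal sketch of that construction, assuming the usual convention that two markets $i$ and $j$ are linked whenever their correlation-based distance $d_{ij}=\sqrt{2(1-\rho_{ij})}$ does not exceed $T$; the market labels and correlation values below are invented purely for illustration.

```python
import numpy as np

def asset_graph_clusters(corr, labels, T):
    """Clusters (connected components) of the asset graph at threshold T.

    Assumed convention: markets i and j are linked whenever the
    correlation-based distance d_ij = sqrt(2 * (1 - corr_ij)) is at most T.
    """
    d = np.sqrt(2.0 * (1.0 - corr))        # distances lie in [0, 2]
    n = len(labels)
    parent = list(range(n))                # union-find over the n markets

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if d[i, j] <= T:               # add the link and merge components
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(labels[i])
    return list(clusters.values())

# Toy example: correlations are invented, only for illustration.
labels = ["S&P", "Nasd", "Germ", "Fran"]
corr = np.array([[1.00, 0.95, 0.30, 0.25],
                 [0.95, 1.00, 0.28, 0.22],
                 [0.30, 0.28, 1.00, 0.90],
                 [0.25, 0.22, 0.90, 1.00]])
print(asset_graph_clusters(corr, labels, T=0.5))
# [['S&P', 'Nasd'], ['Germ', 'Fran']]
```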
### First semester, 1998

Figure 7 shows the three dimensional view of the asset trees for the first semester of 1998, with threshold ranging from $T=0.3$ to $T=0.6$.

Connections are not formed until $T=0.3$. At this threshold, we have connections between S&P and Nasdaq, and between the UK, France, Switzerland, Belgium, the Netherlands, Sweden, Finland, and Spain, forming the North American and the European clusters. At $T=0.4$, the North American cluster stays unaltered, while the European cluster becomes denser, with more connections among its members and the joining of Germany and Portugal. For $T=0.5$, Canada, Mexico, Argentina, and Brazil join the American cluster, which connects with the European one via Canada. Ireland, Denmark, and Norway join the European cluster, which becomes even denser and establishes connections with Hong Kong and South Africa. Hungary connects with Hong Kong, and we see the emergence of a Pacific Asian cluster, comprised of Hong Kong, Thailand, and Malaysia. At $T=0.6$, just below the area dominated by noise, we have an American cluster very much integrated with Europe, and Chile also connected with the European cluster. South Africa establishes a connection with America and more connections with Europe, along with a connection with the Pacific Asian cluster via Hong Kong. The European cluster, now connected with Hungary, also has connections with Israel, India, and Australia. Hong Kong now has more connections with Europe and also forms connections with Taiwan, Thailand, and the Philippines, the latter also forming connections with Thailand and Malaysia, strengthening the Pacific Asian cluster. Above this threshold, noise starts to dominate; although many more meaningful connections are formed, they are indistinguishable from the randomly made ones. The last connection occurs at $T=1.4$, between Iceland and Portugal.
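The threshold of the last connection can be read off mechanically: the graph first becomes fully connected at a threshold equal to the largest link of the minimal spanning tree. The sketch below illustrates this under the same assumed distance convention as above, using Prim's algorithm on a made-up correlation matrix; the numbers are illustrative only.

```python
import numpy as np

def last_connection_threshold(d):
    """Smallest threshold T at which the asset graph built on the distance
    matrix d is fully connected; this equals the largest link of the
    minimal spanning tree (computed here with Prim's algorithm)."""
    n = d.shape[0]
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = d[0].copy()                     # cheapest link from the tree to each node
    largest = 0.0
    for _ in range(n - 1):
        j = int(np.argmin(np.where(in_tree, np.inf, best)))
        largest = max(largest, float(best[j]))
        in_tree[j] = True
        best = np.minimum(best, d[j])
    return largest

# Toy example: the weakly correlated market ("Icel") is the last to connect.
corr = np.array([[1.00, 0.95, 0.30, 0.02],
                 [0.95, 1.00, 0.28, 0.01],
                 [0.30, 0.28, 1.00, 0.00],
                 [0.02, 0.01, 0.00, 1.00]])
d = np.sqrt(2.0 * (1.0 - corr))            # assumed distance convention
print(round(last_connection_threshold(d), 2))   # 1.4
```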
[Figure 7: three dimensional plot data omitted (panels at thresholds $T=0.3$, $T=0.4$, $T=0.5$, and $T=0.6$); nodes labelled by market.]

Fig. 7. Three dimensional view of the asset trees for the first semester of 1998, with threshold ranging from $T=0.3$ to $T=0.6$.

### Second semester, 1998

Figure 8 shows the three dimensional view of the asset trees for the second semester of 1998, with threshold ranging from $T=0.3$ to $T=0.6$.
[Figure 8: three dimensional plot data omitted ($T=0.3$, $T=0.4$, and $T=0.5$ panels); nodes labelled by market.]
(-0.3305,-0.0075,-0.1436)(-0.2749,-0.0742,-0.1526) (-0.3305,-0.0075,-0.1436)(-0.2747,-0.1440,-0.0422) (-0.3305,-0.0075,-0.1436)(-0.3052,-0.1485,-0.0688) (-0.3305,-0.0075,-0.1436)(-0.2451,-0.1011,0.0355) (-0.2870,0.0817,-0.1253)(-0.2901,-0.0219,-0.1636) (-0.2870,0.0817,-0.1253)(-0.3184,0.0201,-0.1077) (-0.2870,0.0817,-0.1253)(-0.2691,-0.0242,-0.0653) (-0.2870,0.0817,-0.1253)(-0.2607,0.0878,-0.1453) (-0.2870,0.0817,-0.1253)(-0.2749,-0.0742,-0.1526) (-0.2870,0.0817,-0.1253)(-0.3052,-0.1485,-0.0688) (-0.2870,0.0817,-0.1253)(-0.2747,-0.1440,-0.0422) (-0.2870,0.0817,-0.1253)(-0.2451,-0.1011,0.0355) (-0.2901,-0.0219,-0.1636)(-0.3184,0.0201,-0.1077) (-0.2901,-0.0219,-0.1636)(-0.2691,-0.0242,-0.0653) (-0.2901,-0.0219,-0.1636)(-0.2607,0.0878,-0.1453) (-0.2901,-0.0219,-0.1636)(-0.2749,-0.0742,-0.1526) (-0.2901,-0.0219,-0.1636)(-0.3052,-0.1485,-0.0688) (-0.2901,-0.0219,-0.1636)(-0.2747,-0.1440,-0.0422) (-0.2901,-0.0219,-0.1636)(-0.2451,-0.1011,0.0355) (-0.3184,0.0201,-0.1077)(-0.2691,-0.0242,-0.0653) (-0.3184,0.0201,-0.1077)(-0.2607,0.0878,-0.1453) (-0.3184,0.0201,-0.1077)(-0.2749,-0.0742,-0.1526) (-0.3184,0.0201,-0.1077)(-0.3052,-0.1485,-0.0688) (-0.3184,0.0201,-0.1077)(-0.2747,-0.1440,-0.0422) (-0.3184,0.0201,-0.1077)(-0.1555,-0.2599,0.0008) (-0.3184,0.0201,-0.1077)(-0.1479,0.1116,-0.0505) (-0.3184,0.0201,-0.1077)(-0.2451,-0.1011,0.0355) (-0.2691,-0.0242,-0.0653)(-0.2607,0.0878,-0.1453) (-0.2691,-0.0242,-0.0653)(-0.2749,-0.0742,-0.1526) (-0.2691,-0.0242,-0.0653)(-0.0978,-0.1960,-0.1329) (-0.2691,-0.0242,-0.0653)(-0.3052,-0.1485,-0.0688) (-0.2691,-0.0242,-0.0653)(-0.2747,-0.1440,-0.0422) (-0.2691,-0.0242,-0.0653)(-0.2451,-0.1011,0.0355) (-0.2607,0.0878,-0.1453)(-0.2749,-0.0742,-0.1526) (-0.2607,0.0878,-0.1453)(-0.3052,-0.1485,-0.0688) (-0.2607,0.0878,-0.1453)(-0.2747,-0.1440,-0.0422) (-0.2607,0.0878,-0.1453)(-0.2451,-0.1011,0.0355) (-0.2749,-0.0742,-0.1526)(-0.3052,-0.1485,-0.0688) (-0.2749,-0.0742,-0.1526)(-0.2747,-0.1440,-0.0422) (-0.2749,-0.0742,-0.1526)(-0.1555,-0.2599,0.0008) (-0.2749,-0.0742,-0.1526)(-0.2451,-0.1011,0.0355) (-0.0978,-0.1960,-0.1329)(-0.3052,-0.1485,-0.0688) (-0.0978,-0.1960,-0.1329)(-0.2747,-0.1440,-0.0422) (-0.0978,-0.1960,-0.1329)(-0.1555,-0.2599,0.0008) (-0.0978,-0.1960,-0.1329)(-0.1362,-0.2591,0.1413) (-0.0978,-0.1960,-0.1329)(-0.2451,-0.1011,0.0355) (-0.3052,-0.1485,-0.0688)(-0.2747,-0.1440,-0.0422) (-0.3052,-0.1485,-0.0688)(-0.1555,-0.2599,0.0008) (-0.3052,-0.1485,-0.0688)(-0.2049,-0.2213,0.2777) (-0.3052,-0.1485,-0.0688)(-0.2451,-0.1011,0.0355) (-0.2747,-0.1440,-0.0422)(-0.1555,-0.2599,0.0008) (-0.2747,-0.1440,-0.0422)(-0.2451,-0.1011,0.0355) (-0.1555,-0.2599,0.0008)(-0.1362,-0.2591,0.1413) (-0.1555,-0.2599,0.0008)(-0.2451,-0.1011,0.0355) (-0.2049,-0.2213,0.2777)(-0.1362,-0.2591,0.1413) (-0.2049,-0.2213,0.2777)(-0.2451,-0.1011,0.0355) (0.0356,-0.1619,0.3789)(-0.1362,-0.2591,0.1413) (-0.1362,-0.2591,0.1413)(-0.2451,-0.1011,0.0355) (-0.1669,0.4519,0.0158) (-0.1645,0.4154,-0.0394) (-0.2323,0.2469,0.0204) (-0.1182,0.3346,0.2266) (-0.0841,0.3790,0.2074) (-0.1578,0.3537,0.2720) (-0.1411,0.3688,0.1580) (-0.3692,-0.0022,-0.0791) (-0.2736,-0.0825,-0.0341) (-0.3108,0.0697,-0.1704) (-0.3282,0.0347,-0.1103) (-0.3192,0.0407,-0.1145) (-0.2624,-0.0365,-0.1415) (-0.3039,-0.0562,-0.1226) (-0.3305,-0.0075,-0.1436) (-0.2870,0.0817,-0.1253) (-0.2901,-0.0219,-0.1636) (-0.3184,0.0201,-0.1077) (-0.2691,-0.0242,-0.0653) (-0.2607,0.0878,-0.1453) (-0.2749,-0.0742,-0.1526) (-0.0978,-0.1960,-0.1329) (-0.3052,-0.1485,-0.0688) (-0.2747,-0.1440,-0.0422) (-0.1555,-0.2599,0.0008) 
(-0.1901,-0.0515,0.0161) (-0.1479,0.1116,-0.0505) (-0.2049,-0.2213,0.2777) (0.0356,-0.1619,0.3789) (-0.1362,-0.2591,0.1413) (-0.2451,-0.1011,0.0355) (-0.1669,0.4519,0.0458)[S&P]{} (-0.1645,0.4154,-0.0794)[Nasd]{} (-0.2323,0.2469,0.0504)[Cana]{} (-0.1182,0.3346,0.2566)[Mexi]{} (-0.0841,0.3790,0.2374)[Braz]{} (-0.1578,0.3537,0.3020)[Arge]{} (-0.1411,0.3688,0.1880)[Chil]{} (-0.3692,-0.0022,-0.1091)[UK]{} (-0.2736,-0.0825,-0.0641)[Irel]{} (-0.3108,0.0697,-0.2004)[Fran]{} (-0.3282,0.0347,-0.1403)[Germ]{} (-0.3192,0.0407,-0.1445)[Swit]{} (-0.2624,-0.0365,-0.1715)[Autr]{} (-0.3039,-0.0562,-0.1526)[Belg]{} (-0.3305,-0.0075,-0.1736)[Neth]{} (-0.2870,0.0817,-0.1553)[Swed]{} (-0.2901,-0.0219,-0.1936)[Denm]{} (-0.3184,0.0201,-0.1377)[Finl]{} (-0.2691,-0.0242,-0.0953)[Norw]{} (-0.2607,0.0878,-0.1753)[Spai]{} (-0.2749,-0.0742,-0.1826)[Port]{} (-0.0978,-0.1960,-0.1629)[Gree]{} (-0.3052,-0.1485,-0.0988)[CzRe]{} (-0.2747,-0.1440,-0.0722)[Hung]{} (-0.1555,-0.2599,0.0308)[Pola]{} (-0.1901,-0.0515,0.0461)[Turk]{} (-0.1479,0.1116,-0.0805)[Isra]{} (-0.2049,-0.2213,0.3077)[HoKo]{} (0.0356,-0.1619,0.4089)[Phil]{} (-0.1362,-0.2591,0.1713)[Aust]{} (-0.2451,-0.1011,0.0655)[SoAf]{} (-8,-0.4)(1,3.5) (-4.1,2.9)[T=0.6]{} (-4.9,-2.2)(3.5,-2.2)(3.5,3.3)(-4.9,3.3)(-4.9,-2.2) (-0.1669,0.4519,0.0158)(-0.1645,0.4154,-0.0394) (-0.1669,0.4519,0.0158)(-0.2323,0.2469,0.0204) (-0.1669,0.4519,0.0158)(-0.1182,0.3346,0.2266) (-0.1669,0.4519,0.0158)(-0.0841,0.3790,0.2074) (-0.1669,0.4519,0.0158)(-0.1578,0.3537,0.2720) (-0.1669,0.4519,0.0158)(-0.1411,0.3688,0.1580) (-0.1669,0.4519,0.0158)(-0.3692,-0.0022,-0.0791) (-0.1669,0.4519,0.0158)(-0.3108,0.0697,-0.1704) (-0.1669,0.4519,0.0158)(-0.3282,0.0347,-0.1103) (-0.1669,0.4519,0.0158)(-0.3192,0.0407,-0.1145) (-0.1669,0.4519,0.0158)(-0.3305,-0.0075,-0.1436) (-0.1669,0.4519,0.0158)(-0.2870,0.0817,-0.1253) (-0.1669,0.4519,0.0158)(-0.3184,0.0201,-0.1077) (-0.1669,0.4519,0.0158)(-0.2607,0.0878,-0.1453) (-0.1645,0.4154,-0.0394)(-0.2323,0.2469,0.0204) (-0.1645,0.4154,-0.0394)(-0.1182,0.3346,0.2266) (-0.1645,0.4154,-0.0394)(-0.0841,0.3790,0.2074) (-0.1645,0.4154,-0.0394)(-0.1578,0.3537,0.2720) (-0.1645,0.4154,-0.0394)(-0.1411,0.3688,0.1580) (-0.1645,0.4154,-0.0394)(-0.3692,-0.0022,-0.0791) (-0.1645,0.4154,-0.0394)(-0.3108,0.0697,-0.1704) (-0.1645,0.4154,-0.0394)(-0.3192,0.0407,-0.1145) (-0.1645,0.4154,-0.0394)(-0.3305,-0.0075,-0.1436) (-0.1645,0.4154,-0.0394)(-0.2870,0.0817,-0.1253) (-0.1645,0.4154,-0.0394)(-0.3184,0.0201,-0.1077) (-0.2323,0.2469,0.0204)(-0.1182,0.3346,0.2266) (-0.2323,0.2469,0.0204)(-0.1578,0.3537,0.2720) (-0.2323,0.2469,0.0204)(-0.1411,0.3688,0.1580) (-0.2323,0.2469,0.0204)(-0.3692,-0.0022,-0.0791) (-0.2323,0.2469,0.0204)(-0.3108,0.0697,-0.1704) (-0.2323,0.2469,0.0204)(-0.3282,0.0347,-0.1103) (-0.2323,0.2469,0.0204)(-0.3192,0.0407,-0.1145) (-0.2323,0.2469,0.0204)(-0.2624,-0.0365,-0.1415) (-0.2323,0.2469,0.0204)(-0.3039,-0.0562,-0.1226) (-0.2323,0.2469,0.0204)(-0.3305,-0.0075,-0.1436) (-0.2323,0.2469,0.0204)(-0.2870,0.0817,-0.1253) (-0.2323,0.2469,0.0204)(-0.2901,-0.0219,-0.1636) (-0.2323,0.2469,0.0204)(-0.3184,0.0201,-0.1077) (-0.2323,0.2469,0.0204)(-0.2607,0.0878,-0.1453) (-0.2323,0.2469,0.0204)(-0.2749,-0.0742,-0.1526) (-0.2323,0.2469,0.0204)(-0.3052,-0.1485,-0.0688) (-0.2323,0.2469,0.0204)(-0.2747,-0.1440,-0.0422) (-0.2323,0.2469,0.0204)(-0.2451,-0.1011,0.0355) (-0.1182,0.3346,0.2266)(-0.0841,0.3790,0.2074) (-0.1182,0.3346,0.2266)(-0.1578,0.3537,0.2720) (-0.1182,0.3346,0.2266)(-0.1411,0.3688,0.1580) (-0.1182,0.3346,0.2266)(-0.1391,0.1214,0.1404) 
(-0.1182,0.3346,0.2266)(-0.3192,0.0407,-0.1145) (-0.1182,0.3346,0.2266)(-0.2451,-0.1011,0.0355) (-0.0841,0.3790,0.2074)(-0.1578,0.3537,0.2720) (-0.0841,0.3790,0.2074)(-0.1411,0.3688,0.1580) (-0.1578,0.3537,0.2720)(-0.1411,0.3688,0.1580) (-0.1578,0.3537,0.2720)(-0.1391,0.1214,0.1404) (-0.1578,0.3537,0.2720)(-0.3692,-0.0022,-0.0791) (-0.1578,0.3537,0.2720)(-0.2607,0.0878,-0.1453) (-0.1411,0.3688,0.1580)(-0.1391,0.1214,0.1404) (-0.1411,0.3688,0.1580)(-0.0761,-0.0451,0.2027) (-0.1411,0.3688,0.1580)(-0.3692,-0.0022,-0.0791) (-0.1411,0.3688,0.1580)(-0.3108,0.0697,-0.1704) (-0.1411,0.3688,0.1580)(-0.3192,0.0407,-0.1145) (-0.1411,0.3688,0.1580)(-0.3039,-0.0562,-0.1226) (-0.1411,0.3688,0.1580)(-0.3305,-0.0075,-0.1436) (-0.1411,0.3688,0.1580)(-0.2870,0.0817,-0.1253) (-0.1411,0.3688,0.1580)(-0.2607,0.0878,-0.1453) (-0.1391,0.1214,0.1404)(-0.3692,-0.0022,-0.0791) (-0.1391,0.1214,0.1404)(-0.3108,0.0697,-0.1704) (-0.1391,0.1214,0.1404)(-0.3282,0.0347,-0.1103) (-0.1391,0.1214,0.1404)(-0.3192,0.0407,-0.1145) (-0.1391,0.1214,0.1404)(-0.3305,-0.0075,-0.1436) (-0.1391,0.1214,0.1404)(-0.2870,0.0817,-0.1253) (-0.1391,0.1214,0.1404)(-0.2901,-0.0219,-0.1636) (-0.1391,0.1214,0.1404)(-0.2607,0.0878,-0.1453) (-0.1391,0.1214,0.1404)(-0.2451,-0.1011,0.0355) (-0.0761,-0.0451,0.2027)(-0.2736,-0.0825,-0.0341) (-0.0761,-0.0451,0.2027)(-0.3192,0.0407,-0.1145) (-0.0761,-0.0451,0.2027)(-0.2691,-0.0242,-0.0653) (-0.0761,-0.0451,0.2027)(-0.3052,-0.1485,-0.0688) (-0.0761,-0.0451,0.2027)(-0.2451,-0.1011,0.0355) (-0.3692,-0.0022,-0.0791)(-0.2736,-0.0825,-0.0341) (-0.3692,-0.0022,-0.0791)(-0.3108,0.0697,-0.1704) (-0.3692,-0.0022,-0.0791)(-0.3282,0.0347,-0.1103) (-0.3692,-0.0022,-0.0791)(-0.3192,0.0407,-0.1145) (-0.3692,-0.0022,-0.0791)(-0.2624,-0.0365,-0.1415) (-0.3692,-0.0022,-0.0791)(-0.3039,-0.0562,-0.1226) (-0.3692,-0.0022,-0.0791)(-0.3305,-0.0075,-0.1436) (-0.3692,-0.0022,-0.0791)(-0.2870,0.0817,-0.1253) (-0.3692,-0.0022,-0.0791)(-0.2901,-0.0219,-0.1636) (-0.3692,-0.0022,-0.0791)(-0.3184,0.0201,-0.1077) (-0.3692,-0.0022,-0.0791)(-0.2691,-0.0242,-0.0653) (-0.3692,-0.0022,-0.0791)(-0.2607,0.0878,-0.1453) (-0.3692,-0.0022,-0.0791)(-0.2749,-0.0742,-0.1526) (-0.3692,-0.0022,-0.0791)(-0.0978,-0.1960,-0.1329) (-0.3692,-0.0022,-0.0791)(-0.3052,-0.1485,-0.0688) (-0.3692,-0.0022,-0.0791)(-0.2747,-0.1440,-0.0422) (-0.3692,-0.0022,-0.0791)(-0.1555,-0.2599,0.0008) (-0.3692,-0.0022,-0.0791)(-0.1901,-0.0515,0.0161) (-0.3692,-0.0022,-0.0791)(-0.1479,0.1116,-0.0505) (-0.3692,-0.0022,-0.0791)(-0.1351,-0.1516,0.1391) (-0.3692,-0.0022,-0.0791)(-0.2049,-0.2213,0.2777) (-0.3692,-0.0022,-0.0791)(-0.1362,-0.2591,0.1413) (-0.3692,-0.0022,-0.0791)(-0.2451,-0.1011,0.0355) (-0.2736,-0.0825,-0.0341)(-0.3108,0.0697,-0.1704) (-0.2736,-0.0825,-0.0341)(-0.3282,0.0347,-0.1103) (-0.2736,-0.0825,-0.0341)(-0.3192,0.0407,-0.1145) (-0.2736,-0.0825,-0.0341)(-0.2624,-0.0365,-0.1415) (-0.2736,-0.0825,-0.0341)(-0.3039,-0.0562,-0.1226) (-0.2736,-0.0825,-0.0341)(-0.3305,-0.0075,-0.1436) (-0.2736,-0.0825,-0.0341)(-0.2870,0.0817,-0.1253) (-0.2736,-0.0825,-0.0341)(-0.2901,-0.0219,-0.1636) (-0.2736,-0.0825,-0.0341)(-0.3184,0.0201,-0.1077) (-0.2736,-0.0825,-0.0341)(-0.2691,-0.0242,-0.0653) (-0.3692,-0.0022,-0.0791)(-0.2607,0.0878,-0.1453) (-0.3692,-0.0022,-0.0791)(-0.2749,-0.0742,-0.1526) (-0.2736,-0.0825,-0.0341)(-0.0978,-0.1960,-0.1329) (-0.2736,-0.0825,-0.0341)(-0.3052,-0.1485,-0.0688) (-0.2736,-0.0825,-0.0341)(-0.2747,-0.1440,-0.0422) (-0.2736,-0.0825,-0.0341)(-0.1555,-0.2599,0.0008) (-0.2736,-0.0825,-0.0341)(-0.1901,-0.0515,0.0161) 
(-0.2736,-0.0825,-0.0341)(-0.1351,-0.1516,0.1391) (-0.2736,-0.0825,-0.0341)(-0.2049,-0.2213,0.2777) (-0.2736,-0.0825,-0.0341)(-0.1362,-0.2591,0.1413) (-0.2736,-0.0825,-0.0341)(-0.2451,-0.1011,0.0355) (-0.3108,0.0697,-0.1704)(-0.3282,0.0347,-0.1103) (-0.3108,0.0697,-0.1704)(-0.3192,0.0407,-0.1145) (-0.3108,0.0697,-0.1704)(-0.2624,-0.0365,-0.1415) (-0.3108,0.0697,-0.1704)(-0.3039,-0.0562,-0.1226) (-0.3108,0.0697,-0.1704)(-0.3305,-0.0075,-0.1436) (-0.3108,0.0697,-0.1704)(-0.2870,0.0817,-0.1253) (-0.3108,0.0697,-0.1704)(-0.2901,-0.0219,-0.1636) (-0.3108,0.0697,-0.1704)(-0.3184,0.0201,-0.1077) (-0.3108,0.0697,-0.1704)(-0.2691,-0.0242,-0.0653) (-0.3108,0.0697,-0.1704)(-0.2607,0.0878,-0.1453) (-0.3108,0.0697,-0.1704)(-0.2749,-0.0742,-0.1526) (-0.3108,0.0697,-0.1704)(-0.0978,-0.1960,-0.1329) (-0.3108,0.0697,-0.1704)(-0.3052,-0.1485,-0.0688) (-0.3108,0.0697,-0.1704)(-0.2747,-0.1440,-0.0422) (-0.3108,0.0697,-0.1704)(-0.1901,-0.0515,0.0161) (-0.3108,0.0697,-0.1704)(-0.1479,0.1116,-0.0505) (-0.3108,0.0697,-0.1704)(-0.2451,-0.1011,0.0355) (-0.3282,0.0347,-0.1103)(-0.3192,0.0407,-0.1145) (-0.3282,0.0347,-0.1103)(-0.2624,-0.0365,-0.1415) (-0.3282,0.0347,-0.1103)(-0.3039,-0.0562,-0.1226) (-0.3282,0.0347,-0.1103)(-0.3305,-0.0075,-0.1436) (-0.3282,0.0347,-0.1103)(-0.2870,0.0817,-0.1253) (-0.3282,0.0347,-0.1103)(-0.2901,-0.0219,-0.1636) (-0.3282,0.0347,-0.1103)(-0.3184,0.0201,-0.1077) (-0.3282,0.0347,-0.1103)(-0.2691,-0.0242,-0.0653) (-0.3282,0.0347,-0.1103)(-0.2607,0.0878,-0.1453) (-0.3282,0.0347,-0.1103)(-0.2749,-0.0742,-0.1526) (-0.3282,0.0347,-0.1103)(-0.0978,-0.1960,-0.1329) (-0.3282,0.0347,-0.1103)(-0.3052,-0.1485,-0.0688) (-0.3282,0.0347,-0.1103)(-0.2747,-0.1440,-0.0422) (-0.3282,0.0347,-0.1103)(-0.1555,-0.2599,0.0008) (-0.3282,0.0347,-0.1103)(-0.1901,-0.0515,0.0161) (-0.3282,0.0347,-0.1103)(-0.1479,0.1116,-0.0505) (-0.3282,0.0347,-0.1103)(-0.2049,-0.2213,0.2777) (-0.3282,0.0347,-0.1103)(-0.2451,-0.1011,0.0355) (-0.3192,0.0407,-0.1145)(-0.2624,-0.0365,-0.1415) (-0.3192,0.0407,-0.1145)(-0.3039,-0.0562,-0.1226) (-0.3192,0.0407,-0.1145)(-0.3305,-0.0075,-0.1436) (-0.3192,0.0407,-0.1145)(-0.2870,0.0817,-0.1253) (-0.3192,0.0407,-0.1145)(-0.2901,-0.0219,-0.1636) (-0.3192,0.0407,-0.1145)(-0.3184,0.0201,-0.1077) (-0.3192,0.0407,-0.1145)(-0.2691,-0.0242,-0.0653) (-0.3192,0.0407,-0.1145)(-0.2607,0.0878,-0.1453) (-0.3192,0.0407,-0.1145)(-0.2749,-0.0742,-0.1526) (-0.3192,0.0407,-0.1145)(-0.0978,-0.1960,-0.1329) (-0.3192,0.0407,-0.1145)(-0.3052,-0.1485,-0.0688) (-0.3192,0.0407,-0.1145)(-0.2747,-0.1440,-0.0422) (-0.3192,0.0407,-0.1145)(-0.1555,-0.2599,0.0008) (-0.3192,0.0407,-0.1145)(-0.1901,-0.0515,0.0161) (-0.3192,0.0407,-0.1145)(-0.1479,0.1116,-0.0505) (-0.3192,0.0407,-0.1145)(-0.2049,-0.2213,0.2777) (-0.3192,0.0407,-0.1145)(-0.1362,-0.2591,0.1413) (-0.3192,0.0407,-0.1145)(-0.2451,-0.1011,0.0355) (-0.2624,-0.0365,-0.1415)(-0.3039,-0.0562,-0.1226) (-0.2624,-0.0365,-0.1415)(-0.3305,-0.0075,-0.1436) (-0.2624,-0.0365,-0.1415)(-0.2870,0.0817,-0.1253) (-0.2624,-0.0365,-0.1415)(-0.2901,-0.0219,-0.1636) (-0.2624,-0.0365,-0.1415)(-0.3184,0.0201,-0.1077) (-0.2624,-0.0365,-0.1415)(-0.2691,-0.0242,-0.0653) (-0.2624,-0.0365,-0.1415)(-0.2607,0.0878,-0.1453) (-0.2624,-0.0365,-0.1415)(-0.2749,-0.0742,-0.1526) (-0.2624,-0.0365,-0.1415)(-0.0978,-0.1960,-0.1329) (-0.2624,-0.0365,-0.1415)(-0.3052,-0.1485,-0.0688) (-0.2624,-0.0365,-0.1415)(-0.2747,-0.1440,-0.0422) (-0.2624,-0.0365,-0.1415)(-0.1555,-0.2599,0.0008) (-0.2624,-0.0365,-0.1415)(-0.1901,-0.0515,0.0161) (-0.2624,-0.0365,-0.1415)(-0.1479,0.1116,-0.0505) 
(-0.2624,-0.0365,-0.1415)(-0.2451,-0.1011,0.0355) (-0.3039,-0.0562,-0.1226)(-0.3305,-0.0075,-0.1436) (-0.3039,-0.0562,-0.1226)(-0.2870,0.0817,-0.1253) (-0.3039,-0.0562,-0.1226)(-0.2901,-0.0219,-0.1636) (-0.3039,-0.0562,-0.1226)(-0.3184,0.0201,-0.1077) (-0.3039,-0.0562,-0.1226)(-0.2691,-0.0242,-0.0653) (-0.3039,-0.0562,-0.1226)(-0.2607,0.0878,-0.1453) (-0.3039,-0.0562,-0.1226)(-0.2749,-0.0742,-0.1526) (-0.3039,-0.0562,-0.1226)(-0.0978,-0.1960,-0.1329) (-0.3039,-0.0562,-0.1226)(-0.3052,-0.1485,-0.0688) (-0.3039,-0.0562,-0.1226)(-0.2747,-0.1440,-0.0422) (-0.3039,-0.0562,-0.1226)(-0.1555,-0.2599,0.0008) (-0.3039,-0.0562,-0.1226)(-0.1901,-0.0515,0.0161) (-0.3039,-0.0562,-0.1226)(-0.1479,0.1116,-0.0505) (-0.3039,-0.0562,-0.1226)(-0.2049,-0.2213,0.2777) (-0.3039,-0.0562,-0.1226)(-0.1362,-0.2591,0.1413) (-0.3039,-0.0562,-0.1226)(-0.2451,-0.1011,0.0355) (-0.3305,-0.0075,-0.1436)(-0.2870,0.0817,-0.1253) (-0.3305,-0.0075,-0.1436)(-0.2901,-0.0219,-0.1636) (-0.3305,-0.0075,-0.1436)(-0.3184,0.0201,-0.1077) (-0.3305,-0.0075,-0.1436)(-0.2691,-0.0242,-0.0653) (-0.3305,-0.0075,-0.1436)(-0.2607,0.0878,-0.1453) (-0.3305,-0.0075,-0.1436)(-0.2749,-0.0742,-0.1526) (-0.3305,-0.0075,-0.1436)(-0.0978,-0.1960,-0.1329) (-0.3305,-0.0075,-0.1436)(-0.3052,-0.1485,-0.0688) (-0.3305,-0.0075,-0.1436)(-0.2747,-0.1440,-0.0422) (-0.3305,-0.0075,-0.1436)(-0.1555,-0.2599,0.0008) (-0.3305,-0.0075,-0.1436)(-0.1901,-0.0515,0.0161) (-0.3305,-0.0075,-0.1436)(-0.1479,0.1116,-0.0505) (-0.3305,-0.0075,-0.1436)(-0.2049,-0.2213,0.2777) (-0.3305,-0.0075,-0.1436)(-0.1362,-0.2591,0.1413) (-0.3305,-0.0075,-0.1436)(-0.2451,-0.1011,0.0355) (-0.2870,0.0817,-0.1253)(-0.2901,-0.0219,-0.1636) (-0.2870,0.0817,-0.1253)(-0.3184,0.0201,-0.1077) (-0.2870,0.0817,-0.1253)(-0.2691,-0.0242,-0.0653) (-0.2870,0.0817,-0.1253)(-0.2607,0.0878,-0.1453) (-0.2870,0.0817,-0.1253)(-0.2749,-0.0742,-0.1526) (-0.2870,0.0817,-0.1253)(-0.0978,-0.1960,-0.1329) (-0.2870,0.0817,-0.1253)(-0.3052,-0.1485,-0.0688) (-0.2870,0.0817,-0.1253)(-0.2747,-0.1440,-0.0422) (-0.2870,0.0817,-0.1253)(-0.1901,-0.0515,0.0161) (-0.2870,0.0817,-0.1253)(-0.1479,0.1116,-0.0505) (-0.2870,0.0817,-0.1253)(-0.2451,-0.1011,0.0355) (-0.2901,-0.0219,-0.1636)(-0.3184,0.0201,-0.1077) (-0.2901,-0.0219,-0.1636)(-0.2691,-0.0242,-0.0653) (-0.2901,-0.0219,-0.1636)(-0.2607,0.0878,-0.1453) (-0.2901,-0.0219,-0.1636)(-0.2749,-0.0742,-0.1526) (-0.2901,-0.0219,-0.1636)(-0.0978,-0.1960,-0.1329) (-0.2901,-0.0219,-0.1636)(-0.3052,-0.1485,-0.0688) (-0.2901,-0.0219,-0.1636)(-0.2747,-0.1440,-0.0422) (-0.2901,-0.0219,-0.1636)(-0.1555,-0.2599,0.0008) (-0.2901,-0.0219,-0.1636)(-0.1901,-0.0515,0.0161) (-0.2901,-0.0219,-0.1636)(-0.2049,-0.2213,0.2777) (-0.2901,-0.0219,-0.1636)(-0.1362,-0.2591,0.1413) (-0.2901,-0.0219,-0.1636)(-0.2451,-0.1011,0.0355) (-0.3184,0.0201,-0.1077)(-0.2691,-0.0242,-0.0653) (-0.3184,0.0201,-0.1077)(-0.2607,0.0878,-0.1453) (-0.3184,0.0201,-0.1077)(-0.2749,-0.0742,-0.1526) (-0.3184,0.0201,-0.1077)(-0.0978,-0.1960,-0.1329) (-0.3184,0.0201,-0.1077)(-0.3052,-0.1485,-0.0688) (-0.3184,0.0201,-0.1077)(-0.2747,-0.1440,-0.0422) (-0.3184,0.0201,-0.1077)(-0.1555,-0.2599,0.0008) (-0.3184,0.0201,-0.1077)(-0.1901,-0.0515,0.0161) (-0.3184,0.0201,-0.1077)(-0.1479,0.1116,-0.0505) (-0.3184,0.0201,-0.1077)(-0.1351,-0.1516,0.1391) (-0.3184,0.0201,-0.1077)(-0.2049,-0.2213,0.2777) (-0.3184,0.0201,-0.1077)(-0.1362,-0.2591,0.1413) (-0.3184,0.0201,-0.1077)(-0.2451,-0.1011,0.0355) (-0.2691,-0.0242,-0.0653)(-0.2607,0.0878,-0.1453) (-0.2691,-0.0242,-0.0653)(-0.2749,-0.0742,-0.1526) 
(-0.2691,-0.0242,-0.0653)(-0.0978,-0.1960,-0.1329) (-0.2691,-0.0242,-0.0653)(-0.3052,-0.1485,-0.0688) (-0.2691,-0.0242,-0.0653)(-0.2747,-0.1440,-0.0422) (-0.2691,-0.0242,-0.0653)(-0.1555,-0.2599,0.0008) (-0.2691,-0.0242,-0.0653)(-0.1479,0.1116,-0.0505) (-0.2691,-0.0242,-0.0653)(-0.1362,-0.2591,0.1413) (-0.2691,-0.0242,-0.0653)(-0.2451,-0.1011,0.0355) (-0.2607,0.0878,-0.1453)(-0.2749,-0.0742,-0.1526) (-0.2607,0.0878,-0.1453)(-0.0978,-0.1960,-0.1329) (-0.2607,0.0878,-0.1453)(-0.3052,-0.1485,-0.0688) (-0.2607,0.0878,-0.1453)(-0.2747,-0.1440,-0.0422) (-0.2607,0.0878,-0.1453)(-0.1901,-0.0515,0.0161) (-0.2607,0.0878,-0.1453)(-0.1479,0.1116,-0.0505) (-0.2607,0.0878,-0.1453)(-0.2451,-0.1011,0.0355) (-0.2749,-0.0742,-0.1526)(-0.0978,-0.1960,-0.1329) (-0.2749,-0.0742,-0.1526)(-0.3052,-0.1485,-0.0688) (-0.2749,-0.0742,-0.1526)(-0.2747,-0.1440,-0.0422) (-0.2749,-0.0742,-0.1526)(-0.1555,-0.2599,0.0008) (-0.2749,-0.0742,-0.1526)(-0.1901,-0.0515,0.0161) (-0.2749,-0.0742,-0.1526)(-0.2451,-0.1011,0.0355) (-0.0978,-0.1960,-0.1329)(-0.3052,-0.1485,-0.0688) (-0.0978,-0.1960,-0.1329)(-0.2747,-0.1440,-0.0422) (-0.0978,-0.1960,-0.1329)(-0.1555,-0.2599,0.0008) (-0.0978,-0.1960,-0.1329)(-0.1362,-0.2591,0.1413) (-0.0978,-0.1960,-0.1329)(-0.2451,-0.1011,0.0355) (-0.3052,-0.1485,-0.0688)(-0.2747,-0.1440,-0.0422) (-0.3052,-0.1485,-0.0688)(-0.1555,-0.2599,0.0008) (-0.3052,-0.1485,-0.0688)(-0.1901,-0.0515,0.0161) (-0.3052,-0.1485,-0.0688)(-0.1479,0.1116,-0.0505) (-0.3052,-0.1485,-0.0688)(-0.1351,-0.1516,0.1391) (-0.3052,-0.1485,-0.0688)(-0.2049,-0.2213,0.2777) (-0.3052,-0.1485,-0.0688)(-0.1362,-0.2591,0.1413) (-0.3052,-0.1485,-0.0688)(-0.2451,-0.1011,0.0355) (-0.2747,-0.1440,-0.0422)(-0.1555,-0.2599,0.0008) (-0.2747,-0.1440,-0.0422)(-0.1901,-0.0515,0.0161) (-0.2747,-0.1440,-0.0422)(-0.1479,0.1116,-0.0505) (-0.2747,-0.1440,-0.0422)(-0.1351,-0.1516,0.1391) (-0.2747,-0.1440,-0.0422)(0.0381,-0.3938,0.0302) (-0.2747,-0.1440,-0.0422)(0.0393,-0.3813,0.1645) (-0.2747,-0.1440,-0.0422)(-0.1362,-0.2591,0.1413) (-0.2747,-0.1440,-0.0422)(-0.2451,-0.1011,0.0355) (-0.1555,-0.2599,0.0008)(-0.1362,-0.2591,0.1413) (-0.1555,-0.2599,0.0008)(-0.1351,-0.1516,0.1391) (-0.1555,-0.2599,0.0008)(-0.2049,-0.2213,0.2777) (-0.1555,-0.2599,0.0008)(-0.0283,-0.2730,0.1376) (-0.1555,-0.2599,0.0008)(-0.2451,-0.1011,0.0355) (-0.1901,-0.0515,0.0161)(-0.2451,-0.1011,0.0355) (-0.1351,-0.1516,0.1391)(-0.1362,-0.2591,0.1413) (-0.1351,-0.1516,0.1391)(-0.2451,-0.1011,0.0355) (-0.2049,-0.2213,0.2777)(0.0330,-0.2011,0.3824) (-0.2049,-0.2213,0.2777)(0.0356,-0.1619,0.3789) (-0.2049,-0.2213,0.2777)(-0.1362,-0.2591,0.1413) (-0.2049,-0.2213,0.2777)(-0.2451,-0.1011,0.0355) (0.0330,-0.2011,0.3824)(0.0393,-0.3813,0.1645) (0.0330,-0.2011,0.3824)(-0.1362,-0.2591,0.1413) (0.0330,-0.2011,0.3824)(-0.2451,-0.1011,0.0355) (0.0356,-0.1619,0.3789)(-0.1362,-0.2591,0.1413) (-0.1362,-0.2591,0.1413)(-0.2451,-0.1011,0.0355) (-0.1669,0.4519,0.0158) (-0.1645,0.4154,-0.0394) (-0.2323,0.2469,0.0204) (-0.1182,0.3346,0.2266) (-0.0841,0.3790,0.2074) (-0.1578,0.3537,0.2720) (-0.1411,0.3688,0.1580) (-0.1391,0.1214,0.1404) (-0.0761,-0.0451,0.2027) (-0.3692,-0.0022,-0.0791) (-0.2736,-0.0825,-0.0341) (-0.3108,0.0697,-0.1704) (-0.3282,0.0347,-0.1103) (-0.3192,0.0407,-0.1145) (-0.2624,-0.0365,-0.1415) (-0.3039,-0.0562,-0.1226) (-0.3305,-0.0075,-0.1436) (-0.2870,0.0817,-0.1253) (-0.2901,-0.0219,-0.1636) (-0.3184,0.0201,-0.1077) (-0.2691,-0.0242,-0.0653) (-0.2607,0.0878,-0.1453) (-0.2749,-0.0742,-0.1526) (-0.0978,-0.1960,-0.1329) (-0.3052,-0.1485,-0.0688) (-0.2747,-0.1440,-0.0422) 
[Figure 8 panels: asset tree networks at thresholds $T=0.4$, $T=0.5$, and $T=0.6$; nodes are labeled with index abbreviations (S&P, Nasd, ..., SoAf). Plot coordinate data omitted.]

Fig. 8. Three dimensional view of the asset trees for the second semester of 1998, with threshold ranging from $T=0.3$ to $T=0.6$.

At $T=0.2$, two clusters appear: the North American one, formed by S&P and Nasdaq, and the European cluster, comprised of the UK, France, Germany, Switzerland, Belgium, the Netherlands, Sweden, Finland, and Spain. At $T=0.3$, a third cluster, comprised of Brazil and Argentina, is formed. The European cluster strengthens its ties and gains Denmark, Norway, Portugal, the Czech Republic, and Hungary. For $T=0.4$, Canada joins the North American cluster, and Mexico and Chile join the South American one. The European cluster becomes even denser and absorbs Ireland, Austria, and Greece. South Africa makes connections with most of the European indices, effectively joining the European cluster. For $T=0.5$, the North and South American clusters merge, and Canada makes many connections with Europe. Poland joins the European cluster, which now makes connections with Turkey, Israel, Hong Kong, and Australia. Hong Kong connects with South Africa, and Australia connects with Hong Kong, the Philippines, and South Africa. At $T=0.6$, there is strong integration between the American and the European clusters, now with the addition of Venezuela and Peru. Turkey, Israel, Hong Kong, and Australia fully integrate with Europe, and Japan connects both with the European cluster and with a Pacific Asian one, comprised of Japan, Hong Kong, Thailand, Indonesia, and Australia. South Korea and Malaysia connect with Europe but not with the Pacific Asian cluster. For $T>0.6$, the connections between clusters strengthen, and the Pacific Asian cluster becomes more self-connected, but noise becomes strong from here onwards, so connections cannot be trusted anymore. The last connections occur at $T=1.3$.
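To make the threshold construction behind these figures concrete, the following is a minimal sketch (not the authors' code) of how such an asset graph can be built from a correlation matrix and its clusters read off as connected components. It assumes the correlation-based distance $d_{ij}=\sqrt{2\,(1-c_{ij})}$ and that an edge is drawn between indices $i$ and $j$ whenever $d_{ij}\le T$; the index names and correlation values below are purely illustrative.

```python
import numpy as np

def asset_graph_clusters(corr, names, T):
    """Connected components of the graph with an edge (i, j) whenever
    sqrt(2 * (1 - corr[i, j])) <= T."""
    n = len(names)
    dist = np.sqrt(2.0 * (1.0 - corr))
    adj = {i: [] for i in range(n)}           # adjacency list of the threshold graph
    for i in range(n):
        for j in range(i + 1, n):
            if dist[i, j] <= T:
                adj[i].append(j)
                adj[j].append(i)
    seen, clusters = set(), []
    for start in range(n):                    # collect components by depth-first traversal
        if start in seen:
            continue
        seen.add(start)
        stack, comp = [start], []
        while stack:
            u = stack.pop()
            comp.append(names[u])
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        if len(comp) > 1:                     # report only non-trivial clusters
            clusters.append(comp)
    return clusters

# Illustrative example: four indices with made-up correlations.
names = ["S&P", "Nasd", "Fran", "Germ"]
corr = np.array([[1.00, 0.98, 0.60, 0.58],
                 [0.98, 1.00, 0.57, 0.55],
                 [0.60, 0.57, 1.00, 0.92],
                 [0.58, 0.55, 0.92, 1.00]])
for T in (0.3, 0.4, 0.5, 0.6):
    print(T, asset_graph_clusters(corr, names, T))
```

Sweeping $T$ upward in this way reproduces the qualitative behaviour described above: strongly correlated indices link first, regional clusters grow denser, and the clusters eventually merge as the threshold approaches the noise level.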
2001 - Burst of the dot-com Bubble and September 11
---------------------------------------------------

Two very distinct crises arose in 2001: the first was the burst of a bubble caused by the overvaluing of the new internet-based companies, and the second was the fear caused by the most severe terrorist attack a country has ever faced, when the USA was attacked with commercial airplanes hijacked by terrorists. Both crises were very different in origin and in duration, and their analysis might shed some light on how crises may develop.

For the networks concerning the year 2000, we now add the indices Bolsa de Panama General from Panama (Pana), FTSE MIB (or MIB-30) from Italy (Ital), Malta SX from Malta (Malt), LuxX from Luxembourg (Luxe), BET from Romania (Roma), OMXR from Latvia (Latv), OMXV from Lithuania (Lith), PFTS from Ukraine (Ukra), Al Quds from Palestine (Pale), ASE General from Jordan (Jorda), QE (or DSM 20) from Qatar (Qata), MSE Top 20 from Mongolia (Mongo), Straits Times from Singapore (Sing), TUNINDEX from Tunisia (Tuni), EGX 30 from Egypt (Egyp), and Nigeria SX All Share from Nigeria (Nige), for a total of 74 indices. For the networks of 2001, we also add the indices SOFIX from Bulgaria (Bulga), KASE from Kazakhstan (Kaza), VN-Index from Vietnam (Viet), NZX 50 from New Zealand (NeZe), and Gaborone from Botswana (Bots), for a total of 79 indices.

### First semester, 2000

Figure 9 shows the three dimensional view of the asset trees for the first semester of 2000, with threshold ranging from $T=0.3$ to $T=0.6$.

[Figure 9 panels: asset tree networks at thresholds $T=0.3$, $T=0.4$, $T=0.5$, and $T=0.6$; nodes are labeled with index abbreviations. Plot coordinate data omitted.]
Fig. 9. Three dimensional view of the asset trees for the first semester of 2000, with threshold ranging from $T=0.3$ to $T=0.6$.

For $T=0.2$, the only connection is between S&P and Nasdaq. For $T=0.3$, a second cluster, comprised of European indices, is formed, with France, Germany, Italy, the Netherlands, Sweden, Finland, Spain, and Portugal. At $T=0.4$, the North American cluster is fully connected, with the addition of Canada and Mexico. The UK joins the European cluster, which also becomes denser as more connections are established between its members. For $T=0.5$, Brazil and Argentina join the American cluster and also connect with one another. Canada connects with Germany in the European cluster, which now grows even denser and has Denmark added to it. At $T=0.6$, many more connections are formed, both inside clusters and between them. Israel connects with both America and Europe; Hong Kong, Singapore, Australia, and South Africa connect with Europe. Switzerland, Norway, the Czech Republic, Hungary, and Russia also join the European cluster. A Pacific Asian cluster is formed, comprised of Japan, Hong Kong, South Korea, and Singapore, strongly connected with Australia and lightly connected with Europe. For larger thresholds, noise starts to dominate, and many connections are made, some of them apparently at random. At $T=1.3$, all indices are connected, with the last connection being between Jamaica and Lebanon.

### Second semester, 2000

Figure 10 shows the three dimensional view of the asset trees for the second semester of 2000, with threshold ranging from $T=0.3$ to $T=0.6$.

Once more, at $T=0.2$, a single connection is formed between S&P and Nasdaq. For $T=0.3$, there is also a European cluster comprised of the UK, France, Germany, Italy, the Netherlands, Sweden, Finland, and Spain. At $T=0.4$, the North American cluster, now comprised of S&P, Nasdaq, Canada, and Mexico, joins the European cluster via Germany. The only addition to the European cluster is Portugal, but it becomes denser, with more connections formed among its members. At $T=0.5$, Argentina joins the American cluster, which makes stronger connections with Europe. Switzerland, Norway, and Russia join the European cluster. At $T=0.6$, a myriad of connections is made, with Brazil and Chile joining the American cluster, and Ireland, Austria, Belgium, the Czech Republic, Hungary, and Poland joining the European cluster. The connections between the two clusters grow stronger, and Israel, Hong Kong, South Korea, Singapore, Australia, and South Africa establish connections with one or both of them. There is also a new cluster of Pacific Asian indices, comprised of Japan, Hong Kong, South Korea, and Singapore, with connections to Australia and to South Africa. At larger thresholds, new connections are made, subject to a lot of random noise, and the last connection is formed at $T=1.4$, between Ireland and Lebanon.
[Figure 10 panels: asset tree networks at thresholds $T=0.3$ to $T=0.6$; nodes are labeled with index abbreviations. Plot coordinate data omitted.]
(-0.4483,-0.0002,0.0071)(-0.3423,0.1394,-0.0176) (-0.4483,-0.0002,0.0071)(-0.2300,-0.1152,0.1511) (-0.4483,-0.0002,0.0071)(-0.1998,-0.1644,-0.0280) (-0.4483,-0.0002,0.0071)(-0.2441,-0.3293,-0.0370) (-0.4483,-0.0002,0.0071)(-0.1725,-0.3789,0.0721) (-0.4483,-0.0002,0.0071)(-0.2380,-0.1616,-0.0303) (-0.4483,-0.0002,0.0071)(-0.2602,-0.0687,-0.0884) (-0.4327,0.1368,0.0159)(-0.3743,0.0417,0.0536) (-0.4327,0.1368,0.0159)(-0.3331,-0.0481,-0.1432) (-0.4327,0.1368,0.0159)(-0.3578,0.1557,0.0036) (-0.4327,0.1368,0.0159)(-0.3423,0.1394,-0.0176) (-0.4327,0.1368,0.0159)(-0.2824,-0.0553,-0.1019) (-0.4327,0.1368,0.0159)(-0.2602,-0.0687,-0.0884) (-0.2500,-0.0978,0.0209)(-0.3331,-0.0481,-0.1432) (-0.2500,-0.0978,0.0209)(-0.2441,-0.3293,-0.0370) (-0.3743,0.0417,0.0536)(-0.3423,0.1394,-0.0176) (-0.3743,0.0417,0.0536)(-0.3331,-0.0481,-0.1432) (-0.3743,0.0417,0.0536)(-0.3578,0.1557,0.0036) (-0.3743,0.0417,0.0536)(-0.2300,-0.1152,0.1511) (-0.3743,0.0417,0.0536)(-0.2602,-0.0687,-0.0884) (-0.3331,-0.0481,-0.1432)(-0.3578,0.1557,0.0036) (-0.3331,-0.0481,-0.1432)(-0.3423,0.1394,-0.0176) (-0.3331,-0.0481,-0.1432)(-0.2441,-0.3293,-0.0370) (-0.3331,-0.0481,-0.1432)(-0.2300,-0.1152,0.1511) (-0.3331,-0.0481,-0.1432)(-0.1998,-0.1644,-0.0280) (-0.3331,-0.0481,-0.1432)(-0.2824,-0.0553,-0.1019) (-0.3331,-0.0481,-0.1432)(-0.2602,-0.0687,-0.0884) (-0.3578,0.1557,0.0036)(-0.3423,0.1394,-0.0176) (-0.3578,0.1557,0.0036)(-0.2602,-0.0687,-0.0884) (-0.2300,-0.1152,0.1511)(-0.1998,-0.1644,-0.0280) (-0.2300,-0.1152,0.1511)(-0.1905,-0.2449,0.2364) (-0.2300,-0.1152,0.1511)(-0.2441,-0.3293,-0.0370) (-0.2300,-0.1152,0.1511)(-0.2602,-0.0687,-0.0884) (-0.1998,-0.1644,-0.0280)(-0.2441,-0.3293,-0.0370) (-0.1998,-0.1644,-0.0280)(-0.1606,-0.3229,-0.0570) (-0.1998,-0.1644,-0.0280)(-0.2602,-0.0687,-0.0884) (-0.1905,-0.2449,0.2364)(-0.1725,-0.3789,0.0721) (-0.1905,-0.2449,0.2364)(-0.1073,-0.3379,0.0185) (-0.2441,-0.3293,-0.0370)(-0.1725,-0.3789,0.0721) (-0.2441,-0.3293,-0.0370)(-0.1606,-0.3229,-0.0570) (-0.2441,-0.3293,-0.0370)(-0.1073,-0.3379,0.0185) (-0.1149,-0.2784,0.0225)(-0.1725,-0.3789,0.0721) (-0.1149,-0.2784,0.0225)(-0.1073,-0.3379,0.0185) (-0.1725,-0.3789,0.0721)(-0.1606,-0.3229,-0.0570) (-0.1725,-0.3789,0.0721)(-0.2380,-0.1616,-0.0303) (-0.1725,-0.3789,0.0721)(-0.1073,-0.3379,0.0185) (-0.1725,-0.3789,0.0721)(-0.2602,-0.0687,-0.0884) (-0.1606,-0.3229,-0.0570)(-0.2380,-0.1616,-0.0303) (-0.1073,-0.3379,0.0185)(-0.2602,-0.0687,-0.0884) (-0.3293,0.3307,-0.1048) (-0.2767,0.3087,-0.0883) (-0.3053,0.1377,-0.1069) (-0.2434,0.2850,0.1640) (-0.1590,0.2962,0.0355) (-0.2008,0.5136,0.0513) (-0.2137,0.0764,-0.0031) (-0.4497,-0.0240,-0.0150) (-0.1861,-0.2205,0.1529) (-0.4143,0.1411,-0.0345) (-0.4211,0.1692,-0.0737) (-0.2692,0.1314,0.0996) (-0.2457,0.0421,-0.0528) (-0.4098,0.1700,-0.0147) (-0.1193,0.0304,-0.1252) (-0.4483,-0.0002,0.0071) (-0.4327,0.1368,0.0159) (-0.2500,-0.0978,0.0209) (-0.3743,0.0417,0.0536) (-0.3331,-0.0481,-0.1432) (-0.3578,0.1557,0.0036) (-0.3423,0.1394,-0.0176) (-0.2300,-0.1152,0.1511) (-0.1998,-0.1644,-0.0280) (-0.1905,-0.2449,0.2364) (-0.2441,-0.3293,-0.0370) (-0.2824,-0.0553,-0.1019) (-0.1149,-0.2784,0.0225) (-0.1725,-0.3789,0.0721) (-0.1606,-0.3229,-0.0570) (-0.2380,-0.1616,-0.0303) (-0.1073,-0.3379,0.0185) (-0.2602,-0.0687,-0.0884) (-0.3293,0.3307,-0.1348)[S&P]{} (-0.2767,0.3087,-0.1183)[Nasd]{} (-0.3053,0.1377,-0.1369)[Cana]{} (-0.2434,0.2850,0.1940)[Mexi]{} (-0.1590,0.2962,0.0655)[Braz]{} (-0.2008,0.5136,0.0813)[Arge]{} (-0.2137,0.0764,-0.0331)[Chil]{} (-0.4497,-0.0240,-0.0450)[UK]{} (-0.1861,-0.2205,0.1829)[Irel]{} 
Fig. 10. Three-dimensional view of the asset trees for the second semester of 2000, with the threshold ranging from $T=0.3$ to $T=0.6$.

### First semester, 2001

Figure 11 shows the three-dimensional view of the asset trees for the first semester of 2001, with the threshold ranging from $T=0.3$ to $T=0.6$. At $T=0.2$, there are two small clusters: one formed by S&P and Nasdaq, and the other formed by France, Italy, and Netherlands. At $T=0.3$, Canada joins the North American cluster, and the UK, Germany, Sweden, and Spain join the European cluster. At $T=0.4$, the European cluster becomes more interwoven and gains Switzerland, Belgium, Finland, Norway, and Portugal. At $T=0.5$, the North American cluster is fully formed with the addition of Mexico, and a South American cluster forms, composed of Brazil and Argentina. Europe becomes even more densely connected and receives Ireland, Denmark, the Czech Republic, and Hungary, the last two considered part of Eastern Europe. Israel and South Africa also connect with Europe. A new cluster is born, made of Pacific Asian countries, namely Japan, Hong Kong, and South Korea. The North American and European clusters start to become connected. At $T=0.6$, North and South America are integrated with each other and with Europe. Newcomers are Chile to South America; Austria, Luxembourg, Greece, Poland, and Russia to the European cluster; Singapore to the Pacific Asian cluster; and New Zealand, which also connects with the Pacific Asian cluster. Through Singapore and Hong Kong, the Pacific Asian cluster connects with the European one. For $T>0.6$, we see further integration and more markets joining the fold, but we then enter a region where noise dominates and connections cannot be fully trusted. By $T=1.3$, all indices are connected with one another.
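The cluster structure described above can be reproduced from a correlation matrix alone. The following is a minimal sketch, not the exact procedure used for the figures: it assumes the correlation-based distance $d_{ij}=\sqrt{2(1-c_{ij})}$ commonly used for asset trees and draws an edge whenever $d_{ij}\le T$, then lists the connected components (the clusters) at each threshold. The index labels and correlation values in the example are invented for illustration.

```python
# Minimal sketch (not the exact procedure used for the figures): build
# threshold graphs from a correlation matrix and list the resulting clusters.
# Assumes the distance d_ij = sqrt(2 * (1 - c_ij)); labels and correlation
# values are hypothetical, chosen only so that small clusters appear.
import numpy as np

def threshold_clusters(corr, labels, T):
    """Connect i and j whenever sqrt(2 * (1 - c_ij)) <= T and return the
    connected components (clusters) as lists of labels."""
    d = np.sqrt(2.0 * (1.0 - corr))
    n = len(labels)
    parent = list(range(n))

    def find(i):                          # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if d[i, j] <= T:
                parent[find(i)] = find(j)  # merge the two components

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(labels[i])
    return list(groups.values())

if __name__ == "__main__":
    labels = ["S&P", "Nasd", "Fran", "Germ", "Neth"]          # hypothetical
    corr = np.array([[1.00, 0.95, 0.70, 0.68, 0.69],
                     [0.95, 1.00, 0.66, 0.65, 0.67],
                     [0.70, 0.66, 1.00, 0.93, 0.96],
                     [0.68, 0.65, 0.93, 1.00, 0.90],
                     [0.69, 0.67, 0.96, 0.90, 1.00]])         # hypothetical
    for T in (0.3, 0.4, 0.5, 0.6):
        print(f"T={T}:", threshold_clusters(corr, labels, T))
```

For real data one would replace the toy matrix with the estimated correlation matrix of the index returns for the semester in question; the union-find bookkeeping keeps the clustering step fast even for the full set of indices.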
[Figure 11: three-dimensional asset tree panels for $T=0.3$, $T=0.4$, $T=0.5$, and $T=0.6$, with nodes labelled by market index.]
Fig. 11. Three-dimensional view of the asset trees for the first semester of 2001, with the threshold ranging from $T=0.3$ to $T=0.6$.

### Second semester, 2001

Figure 12 shows the three-dimensional view of the asset trees for the second semester of 2001, with the threshold ranging from $T=0.3$ to $T=0.6$. For $T=0.1$, there is already a connection between France and Netherlands. At $T=0.2$, two small clusters are formed: S&P and Nasdaq (the North American cluster) and a European cluster comprising the UK, France, Germany, Switzerland, Italy, Netherlands, Sweden, and Spain. At $T=0.3$, Canada joins the North American cluster, the European cluster becomes more densely knit, and Belgium and Finland join the fold. At $T=0.4$, the North American cluster joins the European one via Canada, and the European cluster receives Portugal. At $T=0.5$, many more connections form: Mexico joins the North American cluster, and more connections appear between this cluster and the European one. Europe receives Ireland, Denmark, Norway, Greece, the Czech Republic, and Estonia. Strange connections form, such as one between Sweden and Peru and one between Estonia and Hong Kong, which suggests that random noise already has some effect here. At this same threshold, a Pacific Asian cluster is formed, already linked with Europe, comprising Hong Kong, Taiwan, South Korea, and Singapore. Australia connects both with the Asian cluster and with the European cluster. At $T=0.6$, the threshold we consider the limit of the noise-dominated region, Brazil connects with the North American cluster via Canada and also connects with Chile, which establishes connections with Europe, while Peru connects with many European indices. More connections are formed between America and Europe, and the European cluster now adds Luxembourg, Hungary, and Poland. Russia, Turkey, Israel, and South Africa connect with the European cluster, and Japan and India join the Pacific Asian one. At higher thresholds, an explosion of connections occurs, some reinforcing already-formed clusters, some establishing more connections between them, and others connecting more indices to those clusters, many of them apparently at random.
[Figure 12: three-dimensional asset tree panels for $T=0.3$, $T=0.4$, $T=0.5$, and $T=0.6$, with nodes labelled by market index.]
Fig. 12. Three dimensional view of the asset trees for the second semester of 2001, with threshold ranging from $T=0.3$ to $T=0.6$.

Overview
--------

By considering all the time periods being studied, the first fact that stands out is the presence of two clusters throughout all periods: the American and the European clusters. The connections between the S&P 500 and Nasdaq are present at all times and at the smallest thresholds. What we call a European cluster also appears early on, but its core varies with time, beginning with the pair Germany-Netherlands and then evolving to a new core consisting of the UK, France, Germany, Switzerland, Italy, the Netherlands, Sweden, and Spain. It is interesting to notice that the UK, which was more connected to the American cluster in the 80’s, moves to the European cluster later on. At higher thresholds, the American cluster is joined by Canada and some South American indices, and the European cluster grows with the addition of more Scandinavian and other Western European indices. As the thresholds grow, Europe is joined by Eastern European countries, and only then do we witness the formation of a Pacific Asian cluster, which starts to solidify after 1997, coinciding with the Asian Financial Crisis. South Africa and, to a lesser extent, Israel connect with Europe rather than with their neighbouring countries. Australia and New Zealand are more connected with Pacific Asia than with Europe. Indices from the Caribbean, from most islands, from the majority of Africa, and from the Arab countries connect only at much higher values of the threshold, where noise reigns.

What is also clear from the graphics is that the financial indices tend to group according to the geographic proximity of their countries, and also according to cultural similarities, as is the case of South Africa and Israel with Europe. Another striking feature is that the American and European clusters are fairly independent and only connect at higher thresholds, when both clusters have already grown larger and denser. It is good to remind ourselves that networks built from correlation matrices are not directed networks, so we cannot deduce causal effects between the indices from the correlation matrix alone (an alternative would be partial correlation [@Dror1]-[@Dror4]). Also to be noticed is that networks shrink in size in times of crises, reflecting the growth in correlation between markets in those times [@leocorr].

Information in the second largest eigenvalue
============================================

It is well documented in the literature that the eigenvalues of the correlation matrix of the time series of assets may be used to identify part of the information carried by those time series, separating it from noise. This is usually done using Random Matrix Theory [@rmt1], which is based on the analysis of the eigenvalue frequency distribution of a matrix obtained from the correlation between $N$ time series of random numbers generated from a Gaussian distribution with zero mean and standard deviation $\sigma$.
If $L$ is the number of elements of each time series, then, in the limit $L\to \infty $ and $N\to \infty $ such that $Q=L/N$ is constant, finite, and greater than one, the probability distribution function of the eigenvalues of such a matrix is given by the expression $$\label{dist} \rho(\lambda )=\frac{Q}{2\pi \sigma ^2}\frac{\sqrt{(\lambda_+-\lambda )(\lambda -\lambda_-)}}{\lambda }\ ,$$ where $$\lambda_-=\sigma ^2\left( 1+\frac{1}{Q}-2\sqrt{\frac{1}{Q}}\right) \ \ ,\ \ \lambda_+=\sigma ^2\left( 1+\frac{1}{Q}+2\sqrt{\frac{1}{Q}}\right) \ ,$$ and $\lambda $ is restricted to the interval $\left[ \lambda_-,\lambda_+\right] $. This probability distribution function is called the Marčenko-Pastur distribution [@rmt2], and it establishes limits for the eigenvalues generated from random data. So, in theory, any eigenvalue falling outside the interval $\left[ \lambda_-,\lambda_+\right] $ has a good chance of representing true information about the system.

As stated before, the Marčenko-Pastur distribution is valid only in the limit of an infinite amount of available data, and only for random time series generated from a Gaussian. Since this is not the case for real time series of financial data, which are finite and usually have non-Gaussian frequency distributions, an alternative is to randomize the data of a collection of time series and then analyze the minimum and maximum eigenvalues of the correlation matrix thus generated. The result is similar to the one obtained from the theoretical distribution, but not quite the same. In this section, we use the results obtained from randomized data in order to establish regions where noise can be a major concern, and then analyze the eigenvectors of some eigenvalues that lie outside those regions.

Figures 13 to 15 show the eigenvalues (represented as lines) and the values associated with noise (shaded region) for the years being studied in this article. The first feature that is clearly visible is that the highest eigenvalue always stands out from the others. Each eigenvalue has a corresponding eigenvector with as many components as there are indices used in creating the correlation matrix. Each entry in an eigenvector corresponds to the participation of that index in the building of a “portfolio” of indices. It is well known that the eigenvectors of the highest eigenvalues are usually built in such a way that every index has a similar participation in such a portfolio, and that this particular combination emulates the general behavior of a world market index [@leocorr].
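To make the noise-filtering step concrete, the following is a minimal sketch of how the Marčenko-Pastur band and the randomized-data band can be computed and compared with the empirical eigenvalues. It assumes the log-returns are stored in a plain $L\times N$ array; the synthetic data, variable names, and number of shuffles are our own illustrative choices, not part of the original analysis.

```python
import numpy as np

def marchenko_pastur_bounds(L, N, sigma2=1.0):
    """Theoretical noise band [lambda_-, lambda_+] for Q = L/N > 1
    (sigma2 = 1 for standardized returns)."""
    Q = L / N
    lam_minus = sigma2 * (1 + 1 / Q - 2 * np.sqrt(1 / Q))
    lam_plus = sigma2 * (1 + 1 / Q + 2 * np.sqrt(1 / Q))
    return lam_minus, lam_plus

def correlation_eigenvalues(returns):
    """Eigenvalues (descending) of the correlation matrix of an (L, N) array."""
    corr = np.corrcoef(returns, rowvar=False)   # N x N correlation matrix
    return np.sort(np.linalg.eigvalsh(corr))[::-1]

def shuffled_noise_band(returns, n_shuffles=100, seed=0):
    """Empirical noise band: shuffle each series independently (destroying
    cross-correlations) and record the extreme eigenvalues obtained."""
    rng = np.random.default_rng(seed)
    lo, hi = np.inf, -np.inf
    for _ in range(n_shuffles):
        shuffled = np.column_stack([rng.permutation(col) for col in returns.T])
        eig = correlation_eigenvalues(shuffled)
        lo, hi = min(lo, eig[-1]), max(hi, eig[0])
    return lo, hi

# Synthetic stand-in for roughly one semester of daily data on 58 indices.
L, N = 125, 58
returns = np.random.default_rng(1).standard_normal((L, N))
eigs = correlation_eigenvalues(returns)
print("Marchenko-Pastur band:", marchenko_pastur_bounds(L, N))
print("Shuffled-data band:   ", shuffled_noise_band(returns))
print("Eigenvalues above the theoretical band:",
      eigs[eigs > marchenko_pastur_bounds(L, N)[1]])
```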
Fig. 13. Eigenvalues for the first and second semesters of 1986 and 1987 in order of magnitude. The shaded area corresponds to the eigenvalues predicted for randomized data.
Fig. 14. Eigenvalues for the first and second semesters of 1997 and 1998 in order of magnitude. The shaded area corresponds to the eigenvalues predicted for randomized data.
Fig. 15. Eigenvalues for the first and second semesters of 2000 and 2001 in order of magnitude. The shaded area corresponds to the eigenvalues predicted for randomized data.

To illustrate this, if one compares the log-returns of portfolios consisting of the indices weighted as prescribed by the eigenvectors corresponding to the highest eigenvalues with the log-returns of the world index calculated by MSCI (Morgan Stanley Capital International), one obtains the correlations stated in Table 1.

$$\begin{array}{|c|cccc|cccc|} \hline \text{Semester/Year} & 01/1986 & 02/1986 & 01/1987 & 02/1987 & 01/1997 & 02/1997 & 01/1998 & 02/1998 \\ \hline \text{Correlation} & 0.69 & 0.60 & 0.40 & 0.87 & 0.64 & 0.81 & 0.83 & 0.82 \\ \hline \end{array}$$

$$\begin{array}{|c|cccc|} \hline \text{Semester/Year} & 01/2000 & 02/2000 & 01/2001 & 02/2001 \\ \hline \text{Correlation} & 0.71 & 0.83 & 0.77 & 0.82 \\ \hline \end{array}$$

Table 1. Correlations between the index built on the eigenvector corresponding to the highest eigenvalue and the MSCI world index, by semester and year.

Although the correlation is weak for the early years, probably due to the small number of indices considered for them, it is strong for the subsequent years. So the eigenvector corresponding to the highest eigenvalue is associated with a “market mode”, that is, with the general oscillations common to all indices.

We now turn our attention to the second highest eigenvalue. Figures 13 to 15 show that this eigenvalue is also detached from the noisy region, although less so than the highest one. When dealing with assets from a single market, the second highest eigenvalue has been connected with internal structures of that market. For stock market indices, it has a different meaning, one that is peculiar to systems that operate at different times. Figures 16 to 21 show representations of the eigenvectors corresponding to the second highest eigenvalues for each of the intervals of time studied in this article. White rectangles correspond to positive values and dark rectangles to negative values of the elements of the eigenvectors.
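As a concrete illustration of how the quantities behind Table 1 and Figures 16 to 21 can be obtained, the sketch below builds the portfolio whose weights are given by the eigenvector of the largest eigenvalue, computes its log-return series, correlates it with a benchmark series standing in for the MSCI world index, and also extracts the second eigenvector. This is a minimal sketch under assumed data layout (an $L\times N$ array of log-returns and a length-$L$ benchmark); the weight normalization and the synthetic data are our own choices for illustration.

```python
import numpy as np

def eigen_portfolio_returns(returns, which=0):
    """Log-returns of the portfolio whose weights are the components of the
    eigenvector associated with the `which`-th largest eigenvalue."""
    corr = np.corrcoef(returns, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)      # eigenvalues in ascending order
    vec = eigvecs[:, -(which + 1)]               # largest, second largest, ...
    weights = vec / np.abs(vec).sum()            # illustrative normalization
    return returns @ weights, vec

# Synthetic stand-ins for one semester of daily log-returns and a world index.
rng = np.random.default_rng(2)
L, N = 125, 58
common = rng.standard_normal(L)                      # a common "market" factor
returns = 0.6 * common[:, None] + rng.standard_normal((L, N))
msci_like = common + 0.3 * rng.standard_normal(L)    # hypothetical benchmark series

market_mode, e1 = eigen_portfolio_returns(returns, which=0)
second_mode, e2 = eigen_portfolio_returns(returns, which=1)

corr_with_benchmark = np.corrcoef(market_mode, msci_like)[0, 1]
print("Correlation of the first eigen-portfolio with the benchmark:",
      round(corr_with_benchmark, 2))
print("Signs of the components of e2 (cf. the white/gray bars):", np.sign(e2))
```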
Fig. 16. Contributions of the stock market indices to eigenvector $e_{2}$, corresponding to the second largest eigenvalue of the correlation matrix. White bars indicate positive values, and gray bars indicate negative values, corresponding to the first and second semesters of 1986.

Fig. 17. Contributions of the stock market indices to eigenvector $e_{2}$, corresponding to the second largest eigenvalue of the correlation matrix. White bars indicate positive values, and gray bars indicate negative values, corresponding to the first and second semesters of 1987.
Fig. 18. Contributions of the stock market indices to eigenvector $e_{2}$, corresponding to the second largest eigenvalue of the correlation matrix. White bars indicate positive values, and gray bars indicate negative values, corresponding to the first and second semesters of 1997. The indices are aligned in the following way: [**S&P**]{}, Nasd, Cana, Mexi, [**CoRi**]{}, Berm, Jama, Bra, Arg, [**Chil**]{}, Ven, Peru, UK, Irel, [**Fran**]{}, Germ, Swit, Autr, Belg, [**Neth**]{}, Swed, Denm, Finl, Norw, [**Icel**]{}, Spai, Port, Gree, CzRe, [**Slok**]{}, Hung, Pola, Esto, Russ, [**Turk**]{}, Isra, Leba, SaAr, Ohma, [**Paki**]{}, Indi, SrLa, Bang, Japa, [**HoKo**]{}, Chin, Taiw, SoKo, Thai, [**Mala**]{}, Indo, Phil, Aust, Moro, [**Ghan**]{}, Keny, SoAf, [**Maur**]{}.

Fig. 19. Contributions of the stock market indices to eigenvector $e_{2}$, corresponding to the second largest eigenvalue of the correlation matrix. White bars indicate positive values, and gray bars indicate negative values, corresponding to the first and second semesters of 1998. The indices are aligned in the same way as in Fig. 18.
[Bar-chart data omitted; panel titles in the source: $e_{2}\ (01/2000)$ and $e_{2}\ (02/2000)$.]

Fig. 20. Contributions of the stock market indices to eigenvector $e_{2}$, corresponding to the second largest eigenvalue of the correlation matrix. White bars indicate positive values, and gray bars indicate negative values, corresponding to the first and second semesters of 2000. The indices are aligned in the following way: [**S&P**]{}, Nasd, Cana, Mexi, [**Pana**]{}, CoRi, Berm, Jama, Braz, [**Arge**]{}, Chil, Vene, Peru, UK, [**Irel**]{}, Fran, Germ, Swit, Autr, [**Ita**]{}, Malt, Belg, Neth, Luxe, [**Swed**]{}, Denm, Finl, Norw, Icel, [**Spai**]{}, Port, Gree, CzRe, Slok, [**Hung**]{}, Pola, Roma, Esto, Latv, [**Lith**]{}, Ukra, Russ, Turk, Isra, [**Pale**]{}, Leba, Jord, SaAr, Qata, [**Ohma**]{}, Paki, Indi, SrLa, Bang, [**Japa**]{}, HoKo, Chin, Mong, Taiw, [**SoKo**]{}, Thai, Mala, Sing, Indo, [**Phil**]{}, Aust, Moro, Tuni, Egyp, [**Ghan**]{}, Nige, Keny, SoAf, [**Maur**]{}.
[Bar-chart data omitted; panel titles in the source: $e_{2}\ (01/2001)$ and $e_{2}\ (02/2001)$.]

Fig. 21. Contributions of the stock market indices to eigenvector $e_{2}$, corresponding to the second largest eigenvalue of the correlation matrix. White bars indicate positive values, and gray bars indicate negative values, corresponding to the first and second semesters of 2001. The indices are aligned in the following way: [**S&P**]{}, Nasd, Cana, Mexi, [**Pana**]{}, CoRi, Berm, Jama, Braz, [**Arge**]{}, Chil, Vene, Peru, UK, [**Irel**]{}, Fran, Germ, Swit, Autr, [**Ital**]{}, Malt, Belg, Neth, Luxe, [**Swed**]{}, Denm, Finl, Norw, Icel, [**Spai**]{}, Port, Gree, CzRe, Slok, [**Hung**]{}, Pola, Roma, Bulg, Esto, [**Latv**]{}, Lith, Ukra, Russ, Kaza, [**Turk**]{}, Isra, Pale, Leba, Jord, [**SaAr**]{}, Qata, Ohma, Paki, Indi, [**SrLa**]{}, Bang, Japa, HoKo, Chin, [**Mong**]{}, Taiw, SoKo, Thai, Viet, [**Mala**]{}, Sing, Indo, Phil, Aust, [**NeZe**]{}, Moro, Tuni, Egyp, Ghan, [**Nige**]{}, Keny, Bots, SoAf, [**Maur**]{}.

What is not clear at the beginning, but becomes more evident in later times and with more data, is that there are two main blocks that move together as a second approximation to the market movement, and that those blocks are related to time zones, which reflect the operation hours of the stock exchanges. Usually, from Eastern Europe to Pacific Asia, indices belong to the second group, which appears with negative values in the eigenvectors. This is a characteristic peculiar to data related to international financial indices, and it does not appear in ordinary correlations between assets in the same stock exchange. A certain contamination exists from other internal structures, such as the European Union, quite clear in the data prior to the first semester of 1998, but further studies into the future (2007 to 2010) reveal that this separation into two blocks persists and even increases in time. The third highest eigenvalue sometimes also stands out of the noisy region, but contamination by random noise makes it very difficult to analyze any internal structure revealed by its corresponding eigenvector.
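To make the block structure concrete, the following minimal sketch (not the author's code) shows how the two blocks can be read off the eigenvector associated with the second largest eigenvalue of a correlation matrix; the returns matrix and the index labels are placeholders and are not taken from the data used in the article.

```python
# A minimal sketch (not the author's code) of how the two blocks can be read off
# the eigenvector of the second largest eigenvalue of a correlation matrix.
# The matrix `returns` (T days x N indices) and the index labels are placeholders.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.standard_normal((500, 6))        # placeholder log-returns
labels = ["S&P", "Fran", "Japa", "HoKo", "Mala", "Maur"]

C = np.corrcoef(returns, rowvar=False)         # N x N correlation matrix
eigvals, eigvecs = np.linalg.eigh(C)           # eigenvalues in ascending order
e2 = eigvecs[:, -2]                            # eigenvector of the second largest eigenvalue

# The sign of each entry decides which of the two blocks an index falls into.
positive_block = [lab for lab, v in zip(labels, e2) if v >= 0]
negative_block = [lab for lab, v in zip(labels, e2) if v < 0]
print("positive block:", positive_block)
print("negative block:", negative_block)
```

With real index returns in place of the placeholder data, this sign split is exactly what separates the white and gray bars in the figures above.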
Conclusion ========== As we have seen in this article, correlation matrices between international stock market indices may be used in the construction of networks based on distance thresholds between the indices. By varying the thresholds, one may examine the cluster structure of those networks at different levels, revealing two strong and persistent clusters, one of American and the other of European indices, the formation of a Pacific Asian cluster in the 1990s, and the slow integration of some of the indices. One could also see that those networks tend to shrink in size in times of crises, and, by studying randomized data, that there are values of the thresholds above which noise takes over. It was also shown that the eigenvectors of the correlation matrix hold some important information about the networks formed from it. In particular, the eigenvector corresponding to the second highest eigenvalue shows a structure based on the difference in operation hours between markets. Still, this study does not give us information about causality between networks, and this shall indeed be a very interesting topic for future research.

[**Acknowledgements**]{}

The author acknowledges the support of this work by a grant from Insper, Instituto de Ensino e Pesquisa. I am also grateful to Siew Ann Cheong for useful discussions, and to the attendees and organizers of the Econophysics Colloquium 2010. This article was written using LaTeX, all figures were made using PSTricks, and the calculations were made using Matlab and Excel. All data are freely available upon request at leonidassj@insper.edu.br.
--- abstract: 'The paper studies the problem of distributed parameter estimation in multi-agent networks with exponential family observation statistics. A certainty-equivalence type distributed consensus + innovations estimator is proposed, which, under global observability of the networked sensing model and mean connectivity of the inter-agent communication network, is shown to yield consistent parameter estimates at each network agent. Further, it is shown that the distributed estimator is asymptotically efficient, in that, the asymptotic covariances of the agent estimates coincide with that of the optimal centralized estimator, i.e., the inverse of the centralized Fisher information rate. From a technical viewpoint, the proposed distributed estimator leads to non-Markovian mixed time-scale stochastic recursions and the analytical methods developed in the paper contribute to the general theory of distributed stochastic approximation.' bibliography: - 'IEEEabrv.bib' - 'CentralBib.bib' title: Asymptotically Efficient Distributed Estimation With Exponential Family Statistics --- Distributed estimation, exponential family, consistency, asymptotic efficiency, stochastic approximation. Introduction {#sec:introduction} ============ Motivation {#subsec:mot} ---------- Motivated by applications in multi-agent networked information processing, we revisit the problem of distributed sequential parameter estimation. The setup considered is a highly non-classical distributed information setting, in which each network agent samples over time an independent and identically distributed (i.i.d.) time-series with exponential family statistics[^1] parameterized by the (vector) parameter of interest. Further, in the spirit of typical agent-networking and wireless sensing applications with limited agent communication and computation capabilities, we restrict ourselves to scenarios in which each agent is only aware of its local observation statistics and, assuming slotted-discrete time, may only communicate (collaborate) with its agent-neighborhood (possibly dynamic and random) once per epoch of new observation acquisition, i.e., we consider scenarios in which the inter-agent communication rate is at most as high as the observation sampling rate. Broadly speaking, the goal of distributed parameter estimation in such multi-agent scenarios is to update over time the local agent estimates by effectively processing local observation samples and exchanging information with neighboring agents. To this end, the paper presents a distributed estimation approach of the consensus + innovations type, which, among other things, accomplishes the following: [**Consistency under distributed observability**]{}: Under *global observability*[^2] of the multi-agent sensing model and *mean connectivity* of the inter-agent communication-collaboration network, our distributed estimation approach is shown to yield strongly consistent parameter estimates at each agent. Conversely, it may be readily seen that the conditions of global observability and mean network connectivity are in fact necessary for obtaining consistent parameter estimates in our distributed information-collaboration setup. Indeed, global observability is the minimal requirement for consistency even in centralized estimation, whereas, in the absence of network connectivity, there may be locally unobservable agent-network components which, under no circumstance, will be able to generate consistent parameter estimates. 
Interestingly, the above leads to the following characterization of *distributed observability*: distributed observability, i.e., the minimal structural conditions on the sensing and communication models such that there exists a distributed estimation scheme leading to consistent parameter estimates at each network agent, is equivalent to global observability and mean network connectivity. [**Asymptotic efficiency**]{}: Under the same conditions of distributed observability, the proposed distributed estimation approach is shown to be asymptotically efficient. In other words, in terms of asymptotic convergence rate, the local agent estimates are as good as the optimal centralized, i.e., they all achieve asymptotic covariance equal to the inverse of the centralized Fisher information rate. The key point to note here is that the above optimality holds as long as the mean communication network is connected irrespective of how sparse the link realizations are. Conforming to the sensing and communication architecture, our distributed estimation approach is of the *consensus* + *innovations* type, in which at every observation sampling epoch the local agent estimate refinement step embeds a single round of local neighborhood estimate mixing, the consensus or agreement potential [@Bertsekas-survey; @olfatisaberfaxmurray07; @Dimakis-Gossip-SPM-2011; @jadbabailinmorse03], with local processing of the sampled new observation, the innovation potential. Multi-agent stochastic recursive algorithms of the above type have been proposed in prior work – see, for example, early work [@tsitsiklisphd84; @tsitsiklisbertsekasathans86; @Bertsekas-survey; @Kushner-dist] on parallel stochastic gradient and stochastic approximation; consensus + innovation approaches for nonlinear distributed estimation [@KarMouraRamanan-Est-2008], detection [@bajovicjakoveticxaviresinopolimoura-11; @jakovetic2012distributed; @Kar-Tandon-ISIT-2011], adaptive control [@Kar-Bandit-CDC-2011] and learning [@Kar-QD-learning-TSP-2012]; diffusion approaches for network inference and optimization [@Sayed-LMS; @chen2012diffusion]; networked LMS and variants [@Sayed-LMS; @Stankovic-parameter; @Giannakis-LMS; @Nedic-parameter; @sundhar2010distributed]. The key distinction between the above and the current work is that, in the former the focus has been mainly on consistency (or minimizing the asymptotic error residual between the estimated and the true parameter), but not on asymptotic efficiency. The requirement of asymptotic efficiency complicates the construction of such distributed algorithms non-trivially and necessitates the use of time-varying consensus and innovation gains in the update process; further these time-varying gains driving the persistent consensus and innovation potentials need to decay at strictly different rates in order for the distributed scheme to achieve the asymptotic covariance of the optimal centralized estimator. Such mixed time-scale construction for asymptotically efficient distributed parameter estimation in linear statistical models was obtained in [@KarMoura-LinEst-JSTSP-2011; @Kar-AdaptiveDistEst-SICON-2012]. However, in contrast to optimal estimation in linear statistical models [@KarMoura-LinEst-JSTSP-2011; @Kar-AdaptiveDistEst-SICON-2012], in the nonlinear non-Gaussian setting, the local innovation gains that achieve asymptotic efficiency are necessarily dependent on the true value of the parameter to be estimated, and on the statistics of the global sensing model. 
Since the value of the parameter (and hence the optimal estimator gains) is not available in advance, our proposed distributed estimation approach involves a distributed online gain learning procedure that proceeds in conjunction with the sequential estimation task. As a result, a closed-loop interaction occurs between the gain learning and parameter estimation, which is reminiscent of the certainty-equivalence approach for adaptive estimation and control – although the analysis methodology is significantly different from classical techniques used in adaptive processing (see, for example, [@Lai-Wei; @Lai-nonlinlsad] and the references therein, in the context of parameter estimation), primarily due to the distributed nature of our problem. Specifically, in our approach, each agent runs simultaneously three local time recursions: (1) an auxiliary distributed consensus + innovations estimator driven by non-adaptive innovation gains; (2) an online distributed learning procedure that uses the auxiliary distributed local estimators to generate a sequence of optimal adaptive innovations gains; and (3) the desired distributed consensus + innovations estimator whose innovations are weighted by the optimal adaptive innovations gains, thus achieving asymptotic efficiency. We note in this context that the idea of recovering asymptotically efficient estimates from consistent (but suboptimal) auxiliary estimates, although novel from a distributed estimation standpoint, has been investigated in prior work on (centralized) recursive estimation, see, for example, [@Hasminskii1974estimation; @Fabian1978efficient]. In summary, in contrast to existing work, the current paper presents a principled development of distributed parameter estimation as applicable to the general and important class of multi-agent statistical exponential families; paralleling the classical development of centralized parameter estimation, it quantifies notions of distributed observability, performance metrics, information measures and algorithmic optimality. Due to the mixed time-scale behavior and the non-Markovianity (induced by the learning process), the stochastic procedure does not fall under the purview of standard stochastic approximation (see, for example, [@Nevelson]) or distributed stochastic approximation (see, for example, [@tsitsiklisbertsekasathans86; @Bertsekas-survey; @Kushner-dist; @KarMouraRamanan-Est-2008; @Stankovic-parameter; @Li-Feng; @Huang; @KarMoura-DistCons-TSP-2009; @sundhar2010distributed]) procedures. In fact, some of the intermediate results on the pathwise convergence rates of mixed time-scale stochastic procedures obtained in the paper are more broadly applicable and contribute to the general theory of distributed stochastic approximation. In this context, we note the study of mixed time-scale stochastic procedures that arise in algorithms of the simulated annealing type (see, for example, [@Gelfand-Mitter]). Apart from being distributed, our scheme technically differs from [@Gelfand-Mitter] in that, whereas the additive perturbation in [@Gelfand-Mitter] is a martingale difference sequence, ours is a network-dependent consensus potential manifesting past dependence. In fact, intuitively, a key step in the analysis is to derive pathwise strong approximation results to characterize the rate at which the consensus term/process converges to a martingale difference process.
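As a rough illustration of the update structure described above, the following sketch implements a single consensus + innovations recursion for a toy linear-Gaussian instance of the sensing model. The communication graph, the observation matrices, the gain constants `a`, `b`, `tau`, and the use of identity (non-adaptive) innovation gains are all assumptions made here for concreteness; the online learning of the optimal adaptive gains (recursions (2) and (3) above) is omitted.

```python
# A schematic sketch of one consensus + innovations recursion for a toy
# linear-Gaussian instance of the sensing model, y_n(t) = H_n theta* + w_n(t).
# The graph, the matrices H_n, the gain constants a, b, tau, and the identity
# (non-adaptive) innovation gains are illustrative assumptions; the online
# learning of the optimal adaptive gains is omitted.
import numpy as np

rng = np.random.default_rng(1)
N, M = 4, 2                                    # number of agents, parameter dimension
theta_star = np.array([1.0, -0.5])
H = []
for _ in range(N):
    row = rng.standard_normal((1, M))
    H.append(row / np.linalg.norm(row))        # each agent is locally unobservable (rank 1)
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a connected path graph

x = np.zeros((N, M))                           # local estimates x_n(t)
a, b, tau = 1.0, 0.5, 0.6
for t in range(20000):
    alpha = a / (t + 1)                        # innovation gain, decays like 1/t
    beta = b / (t + 1) ** tau                  # consensus gain, decays more slowly
    x_new = x.copy()
    for n in range(N):
        y = H[n] @ theta_star + rng.standard_normal(1)      # new local observation
        consensus = sum(x[n] - x[l] for l in neighbors[n])  # agreement potential
        innovation = H[n].T @ y - H[n].T @ (H[n] @ x[n])    # g_n(y) - h_n(x_n)
        x_new[n] = x[n] - beta * consensus + alpha * innovation
    x = x_new

print(np.round(x, 3))   # all local estimates should be in the vicinity of theta_star
```

The point of the sketch is only the form of the update: a consensus term weighted by a slowly decaying gain and an innovation term weighted by a faster-decaying gain, one plausible instance of the mixed time-scale behavior discussed above.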
We also emphasize that our notion of mixed time-scale is different from that of stochastic algorithms with coupling (see [@Borkar-stochapp; @Yin-book]), where a quickly switching parameter influences the relatively slower dynamics of another state, leading to *averaged* dynamics. Mixed time-scale procedures of this latter type arise in multi-scale distributed information diffusion problems, see, in particular, the paper [@Krishnamurthy-Yin-consensus], which studies interactive consensus formations in Markov-modulated switching networks. We comment briefly on the organization of the rest of the paper. Section \[notgraph\] sets up notation. The multi-agent sensing model is formalized in Section \[subsec:sensmod\], whereas, preliminary facts pertaining to the model and assumptions are summarized in Section \[subsec:prel-exp\]. Section \[subsec:alg\] describes the distributed estimation approach, and the main results of the paper (concerning consistency and asymptotic efficiency of the proposed approach) are stated in Section \[subsec:mainres\]. The major technical developments are presented in Section \[sec:genconsest\] culminating in the proofs of the main results in Section \[sec:proof\_main\_res\]. Finally, Section \[sec:conclusions\] concludes the paper. Notation {#notgraph} -------- We denote by $\mathbb{R}$ the set of reals, $\mathbb{R}_{+}$ the set of non-negative reals, and by $\mathbb{R}^{k}$ the $k$-dimensional Euclidean space. For $a,b\in\mathbb{R}$, we use $a\vee b$ and $a\wedge b$ to denote the maximum and minimum of $a$ and $b$ respectively. For deterministic $\mathbb{R}_{+}$-valued sequences $\{a_{t}\}$ and $\{b_{t}\}$, the notation $a_{t}=O(b_{t})$ denotes the existence of a constant $c>0$ such that $a_{t}\leq cb_{t}$ for all $t$ sufficiently large. Further, the notation $a_{t}=o(b_{t})$ is used to indicate that $a_{t}/b_{t}\rightarrow 0$ as $t\rightarrow\infty$. For $\mathbb{R}_{+}$-valued stochastic processes $\{a_{t}\}$ and $\{b_{t}\}$, the corresponding order notations are to be interpreted to hold pathwise almost surely (a.s.). The set of $k\times k$ real matrices is denoted by $\mathbb{R}^{k\times k}$. The corresponding subspace of symmetric matrices is denoted by $\mathbb{S}^{k}$. The cone of positive semidefinite matrices is denoted by $\mathbb{S}_{+}^{k}$, whereas $\mathbb{S}_{++}^{k}$ denotes the subset of positive definite matrices. The $k\times k$ identity matrix is denoted by $I_{k}$, while $\mathbf{1}_{k}$ and $\mathbf{0}_{k}$ denote respectively the column vector of ones and zeros in $\mathbb{R}^{k}$. Often the symbol $\mathbf{0}$ is used to denote the $k\times p$ zero matrix, the dimensions being clear from the context. The symbol $\top$ denotes matrix transpose, whereas, for a finite set of matrices $A_{n}\in\mathbb{R}^{k_{n}\times p}$, $n=1,\cdots,N$, the quantity ${\boldsymbol{\operatorname{Vec}}}(A_{n})$ denotes the $(k_{1}+\cdots+k_{N})\times p$ matrix $[A_{1}^{\top},\cdots,A_{N}^{\top}]^{\top}$ obtained as the (column-wise) stack of the matrices $A_{n}$. The operator $\left\|\cdot\right\|$ applied to a vector denotes the standard Euclidean $\mathcal{L}_{2}$ norm, while applied to matrices it denotes the induced $\mathcal{L}_{2}$ norm, which is equivalent to the matrix spectral radius for symmetric matrices.
Also, for $\mathbf{a}\in\mathbb{R}^{k}$ and $\Vap>0$, we will use $\mathbb{B}_{\Vap}(\mathbf{a})$ to denote the closed $\Vap$-neighborhood of $\mathbf{a}$, i.e., $$\label{notgraph1} \mathbb{B}_{\Vap}(\mathbf{a})=\left\{\mathbf{b}\in\mathbb{R}^{k}~:~\|\mathbf{b}-\mathbf{a}\|\leq\Vap\right\}.$$ The notation $A\otimes B$ is used to denote the Kronecker product of two matrices $A$ and $B$. The following notion of consensus subspace and its complement will be used: \[def:consspace\] Let $N$ and $M$ be positive integers and consider the Euclidean space $\mathbb{R}^{NM}$. The consensus or agreement subspace $\mathcal{C}$ of $\mathbb{R}^{NM}$ is then defined as $$\label{def:consspace1}\mathcal{C}=\left\{\mathbf{z}\in\mathbb{R}^{NM}~:~\mathbf{z}=\mathbf{1}_{N}\otimes\mathbf{a}~\mbox{for some $\mathbf{a}\in\mathbb{R}^{M}$}\right\}.$$ The orthogonal complement of $\mathcal{C}$ in $\mathbb{R}^{NM}$ is denoted by $\PC$. Finally, for a given vector $\mathbf{z}\in\mathbb{R}^{NM}$, its projection on the consensus subspace $\C$ is to be denoted by $\mathbf{z}_{\C}$, whereas, $\mathbf{z}_{\PC}=\mathbf{z}-\mathbf{z}_{\C}$ denotes the projection on the orthogonal complement $\PC$. Also, for $\mathbf{z}\in\mathcal{C}$, we will denote by $\mathbf{z}^{a}$ the vector $\mathbf{a}\in\mathbb{R}^{M}$ such that $\mathbf{z}=\mathbf{1}_{N}\otimes\mathbf{a}$. Time is assumed to be discrete or slotted throughout the paper. The symbols $t$ and $s$ denote time, and $\mathbb{T}_{+}$ is the discrete index set $\{0,1,2,\cdots\}$. The parameter to be estimated belongs to a subset $\Theta$ (generally open) of the Euclidean space $\mathbb{R}^{M}$. We reserve the symbol $\btheta$ to denote a canonical element of the parameter space $\Theta$, whereas, the true (but unknown) value of the parameter (to be estimated) is denoted by $\abtheta$. The symbol $\mathbf{x}_{n}(t)$ is used to denote the $\mathbb{R}^{M}$-valued estimate of $\abtheta$ at time $t$ at agent $n$. Without loss of generality, the initial estimate, $\mathbf{x}_{n}(0)$, at time $0$ at agent $n$ is assumed to be a non-random quantity. **Spectral graph theory**: The inter-agent communication topology at a given time instant may be described by an *undirected* graph $G=(V,E)$, with $V=\left[1\cdots N\right]$ and $E$ denoting the set of agents (nodes) and inter-agent communication links (edges) respectively. The unordered pair $(n,l)\in E$ if there exists an edge between nodes $n$ and $l$. We consider simple graphs, i.e., graphs devoid of self-loops and multiple edges. A graph is connected if there exists a path[^3], between each pair of nodes. The neighborhood of node $n$ is $$\label{def:omega} \Omega_{n}=\left\{l\in V\,|\,(n,l)\in E\right\}. $$ Node $n$ has degree $d_{n}=|\Omega_{n}|$ (the number of edges with $n$ as one end point.) The structure of the graph can be described by the symmetric $N\times N$ adjacency matrix, $A=\left[A_{nl}\right]$, $A_{nl}=1$, if $(n,l)\in E$, $A_{nl}=0$, otherwise. Let the degree matrix be the diagonal matrix $D=\mbox{diag}\left(d_{1}\cdots d_{N}\right)$. By definition, the positive semidefinite matrix $L=D-A$ is called the graph Laplacian matrix. The eigenvalues of $L$ can be ordered as $0=\lambda_{1}(L)\leq\lambda_{2}(L)\leq\cdots\leq\lambda_{N}(L)$, the eigenvector corresponding to $\lambda_{1}(L)$ being $(1/\sqrt{N})\mathbf{1}_{N}$. The multiplicity of the zero eigenvalue equals the number of connected components of the network; for a connected graph, $\lambda_{2}(L)>0$. 
This second eigenvalue is the algebraic connectivity or the Fiedler value of the network; see [@FanChung] for detailed treatment of graphs and their spectral theory. Multi-agent sensing model {#sec:sensmod} ========================= Let $\btheta^{\ast}\in\mathbb{R}^{M}$ be an $M$-dimensional (vector) parameter that is to be estimated by a network of $N$ agents. Throughout, we assume that all the random objects are defined on a common measurable space $\left(\Omega,\mathcal{F}\right)$ equipped with a filtration $\{\mathcal{F}_{t}\}$. Probability and expectation, when the true (but unknown) parameter value $\abtheta$ is in force, are denoted by $\Past(\cdot)$ and $\East[\cdot]$ respectively. All inequalities involving random variables are to be interpreted a.s. Sensing Model {#subsec:sensmod} ------------- Each network agent $n$ sequentially observes an independent and identically distributed (i.i.d.) time-series $\{\mathbf{y}_{n}(t)\}$ of noisy measurements of $\abtheta$, where the distribution $\bmu_{n}^{\abtheta}$ of $\mathbf{y}_{n}(t)$ belongs to a $\btheta$-parameterized exponential family, formalized as follows: \[ass:sensmod\] For each $n$, let $\bnu_{n}$ be a $\sigma$-finite measure on $\mathbb{R}^{M_{n}}$. Let $g_{n}:\mathbb{R}^{M_{n}}\mapsto\mathbb{R}^{M}$ be a Borel function such that for all $\btheta\in\mathbb{R}^{M}$ the following expectation exists: $$\label{ass:sensmod1}\lambda_{n}(\btheta)=\int_{\mathbb{R}^{M_{n}}}e^{\btheta^{\top}g_{n}(\mathbf{y}_{n})}d\bnu_{n}(\mathbf{y}_{n})<\infty.$$ Finally, let $\left\{\bmu_{n}^{\btheta}\right\}$, for $\btheta\in\mathbb{R}^{M}$, be the corresponding $\btheta$-parameterized exponential family of distributions on $\mathbb{R}^{M_{n}}$, i.e., for each $\btheta\in\mathbb{R}^{M}$ the probability measure $\bmu_{n}^{\btheta}$ on $\mathbb{R}^{M_{n}}$ is given by the Radon-Nikodym derivative $$\label{ass:sensmod2}\frac{d\bmu_{n}^{\btheta}}{d\bnu_{n}}(\mathbf{y}_{n})=e^{\left(\btheta^{\top}g_{n}(\mathbf{y}_{n})-\psi_{n}(\btheta)\right)}$$ for all $\mathbf{y}_{n}\in\mathbb{R}^{M_{n}}$, where $\psi_{n}(\cdot)$ denotes the function $\psi_{n}(\btheta)=\log\lambda_{n}(\btheta)$. We assume that each network agent $n$ obtains an $\{\mathcal{F}_{t+1}\}$-adapted independent and identically distributed (i.i.d.) sequence $\{\mathbf{y}_{n}(t)\}$ of observations of the (true) parameter $\btheta^{\ast}$ with distribution $\bmu_{n}(\btheta^{\ast})$, and, for each $t$, $\mathbf{y}_{n}(t)$ is independent of $\mathcal{F}_{t}$. Further, we assume that the observation sequences $\{\mathbf{y}_{n}(t)\}$ and $\{\mathbf{y}_{l}(t)\}$ at any two agents $n$ and $l$ are mutually independent. We will also denote by $\mathbf{y}_{t}$ the totality of agent observations at a given time $t$, i.e., $\mathbf{y}_{t}={\boldsymbol{\operatorname{Vec}}}(\mathbf{y}_{n}(t))=\left[\mathbf{y}_{1}^{\top}(t),\cdots,\mathbf{y}_{N}^{\top}(t)\right]^{\top}$. For $\btheta\in\mathbb{R}^{M}$ let $\bmu^{\btheta}$ denote the product measure $\bmu_{1}^{\btheta}\otimes\cdots\otimes\bmu_{N}^{\btheta}$ on the product space $\mathbb{R}^{M_{1}}\otimes\cdots\otimes\mathbb{R}^{M_{N}}$; it is readily seen that $\{\bmu^{\btheta}\}$ is a $\btheta$-parameterized exponential family with respect to (w.r.t.) 
the product measure $\bnu = \bnu_{1}\otimes\cdots\otimes\bnu_{N}$ and given by the Radon-Nikodym derivatives $$\label{sensmod3} \frac{d\bmu^{\btheta}}{d\bnu}(\mathbf{y})=e^{\left(\btheta^{\top}g(\mathbf{y})-\psi(\btheta)\right)},$$ where $\mathbf{y}={\boldsymbol{\operatorname{Vec}}}(\mathbf{y}_{n})$ denotes a generic element of the product space and the functions $g(\cdot)$ and $\psi(\cdot)$ are given by $$\label{sensmod5}g(\mathbf{y})=\sum_{n=1}^{N}g_{n}(\mathbf{y}_{n})~~~\mbox{and}~~~\psi(\btheta)=\sum_{n=1}^{N}\psi_{n}(\btheta)$$ respectively. It is readily seen that under Assumption \[ass:sensmod\] the global observation sequence $\{\mathbf{y}_{t}\}$ is $\{\mathcal{F}_{t+1}\}$-adapted, with $\mathbf{y}_{t}$ being independent of $\mathcal{F}_{t}$ and distributed as $\bmu^{\btheta^{\ast}}$ for all $t$. For most practical agent network applications, each agent observes only a small subset of the components of the parameter vector, so that the local observation dimension satisfies $M_{n}\ll M$. It is then necessary for the agents to collaborate by means of occasional local inter-agent message exchanges to achieve a reasonable estimate of the parameter $\btheta^{\ast}$. To formalize, while we do not require *local observability* for $\btheta^{\ast}$, we assume that the network sensing model is *globally observable* as follows: \[ass:globobs\] The network sensing model is globally observable, i.e., we assume $\mathfrak{K}(\btheta,\btheta^{\prime})>0$ and $\mathfrak{K}(\btheta^{\prime},\btheta)>0$ for each pair $(\btheta,\btheta^{\prime})$ of parameter values, where $\mathfrak{K}(\btheta,\btheta^{\prime})$ denotes the Kullback-Leibler divergence between the distributions $\bmu^{\btheta}$ and $\bmu^{\btheta^{\prime}}$, i.e., $$\label{ass:globobs1} \mathfrak{K}(\btheta,\btheta^{\prime})=\int_{\mathbf{y}}\log\left(\frac{d\bmu^{\btheta}}{d\bmu^{\btheta^{\prime}}}(\mathbf{y})\right)d\bmu^{\btheta}(\mathbf{y}).$$ Some preliminaries {#subsec:prel-exp} ------------------ We state some useful analytical properties associated with the multi-agent sensing model, in particular, the implications of the global observability condition (see Assumption \[ass:globobs\]). Most of the listed properties are direct consequences of standard analytical arguments involving statistical exponential families, see, for example, [@Brown:expfam]. \[prop:analytic\] Let Assumption \[ass:sensmod\] hold. Then, - For each $n$, the function $\psi_{n}(\cdot)$ is infinitely differentiable on $\mathbb{R}^{M}$. - For each $n$, let $h_{n}:\mathbb{R}^{M}\mapsto\mathbb{R}^{M}$ be the gradient of $\psi_{n}(\cdot)$, i.e., $h_{n}(\btheta)=\nabla_{\btheta}\psi_{n}(\btheta)$ for all $\btheta\in\mathbb{R}^{M}$.
Then[^4] $$\label{prop:analytic1}h_{n}(\btheta)=\int_{\mathbf{y}_{n}\in\mathbb{R}^{M_{n}}}g_{n}(\mathbf{y}_{n})d\bmu_{n}^{\btheta}(\mathbf{y}_{n})~~~\forall\btheta\in\mathbb{R}^{M}$$ and the following inequality (monotonicity) holds for each pair $\left(\btheta,\btheta^{\prime}\right)$ in $\mathbb{R}^{M}$: $$\label{prop:analytic2}\left(\btheta-\btheta^{\prime}\right)^{\top}\left(h_{n}(\btheta)-h_{n}(\btheta^{\prime})\right)\geq 0.$$ - If, in addition, Assumption \[ass:globobs\] holds, denoting by $h(\cdot)$ the gradient of $\psi(\cdot)$, see , we have the following strict monotonicity $$\label{prop:analytic3} \left(\btheta-\btheta^{\prime}\right)^{\top}\left(h(\btheta)-h(\btheta^{\prime})\right)=\sum_{n=1}^{N}\left(\btheta-\btheta^{\prime}\right)^{\top}\left(h_{n}(\btheta)-h_{n}(\btheta^{\prime})\right)>0$$ for each pair $\left(\btheta,\btheta^{\prime}\right)$ in $\mathbb{R}^{M}$ such that $\btheta\neq\btheta^{\prime}$. The first assertion is an immediate consequence of the fact that the function $\psi_{n}(\btheta)$ associated with the exponential family $\{\bmu_{n}^{\btheta}\}$ is infinitely differentiable on the interior of the natural parameter space (the set on which the expectation in  exists), see Theorem 2.2 in [@Brown:expfam]. The second assertion constitutes a well-known property of statistical exponential families (see Corollary 2.5 in [@Brown:expfam]). The same corollary in [@Brown:expfam] asserts that the inequality in  is strict whenever the measures $\bmu^{\btheta}$ and $\bmu^{\btheta^{\prime}}$ are different for $\btheta\neq\btheta^{\prime}$, the latter being ensured by the positivity of the Kullback-Leibler divergences as in Assumption \[ass:globobs\]. The next proposition characterizes the information matrices (or Fisher matrices) associated with the sensing model and may be stated as follows (see [@Brown:expfam] for a proof): \[prop:inf\] Let Assumption \[ass:sensmod\] hold. Then, - For each $n$ and $\btheta\in\mathbb{R}^{M}$, let $I_{n}(\btheta)$ denote the Fisher information matrix associated with the exponential family $\{\bmu_{n}^{\btheta}\}$, i.e., $$\label{prop:inf1} I_{n}(\btheta)=-\int_{\mathbf{y}_{n}}\left(\nabla^{2}_{\btheta}\log\frac{d\bmu_{n}^{\btheta}}{d\bnu_{n}}(\mathbf{y}_{n})\right)d\bmu_{n}^{\btheta}(\mathbf{y}_{n}),$$ where the expectation integral is to be interpreted entry-wise. Then, $I_{n}(\btheta)$ is positive semidefinite and satisfies $I_{n}(\btheta)=\nabla_{\btheta}\left(h_{n}(\btheta)\right)$ for all $\btheta$, with $h_{n}(\cdot)$ denoting the function in . - If, in addition, Assumption \[ass:globobs\] holds, the global Fisher information matrix $I(\btheta)$, given by $$\label{prop:inf2}I(\btheta)=-\int_{\mathbf{y}}\left(\nabla^{2}_{\btheta}\log\frac{d\bmu^{\btheta}}{d\bnu}(\mathbf{y})\right)d\bmu^{\btheta}(\mathbf{y}),$$ is positive definite and satisfies $$\label{prop:inf3} I(\btheta)=\nabla_{\btheta}h(\btheta)=\sum_{n=1}^{N}\nabla_{\btheta}h_{n}(\btheta)=\sum_{n=1}^{N}I_{n}(\btheta)$$ for all $\btheta\in\mathbb{R}^{M}$. For the multi-agent statistical exponential families under consideration, the well-known Cramér-Rao characterization holds, and it may be shown that the mean-squared estimation error of any (centralized) estimator based on $t$ sets of observation samples from all the agents is lower bounded by the quantity $t^{-1}I^{-1}(\abtheta)$, where $\abtheta$ denotes the true value of the parameter.
Making $t$ tend to $\infty$, the class of asymptotically efficient (optimal) estimators is defined as follows: \[def:asyeff\] An asymptotically efficient estimator of $\abtheta$ is an $\{\mathcal{F}_{t}\}$-adapted sequence $\{\wtheta_{t}\}$ such that $\{\wtheta_{t}\}$ is asymptotically normal with asymptotic covariance $I^{-1}(\abtheta)$, i.e., $$\label{def:asyeff1}\sqrt{t+1}\left(\wtheta_{t}-\abtheta\right)\Longrightarrow\mathcal{N}\left(\mathbf{0},I^{-1}(\abtheta)\right),$$ where $\Longrightarrow$ and $\mathcal{N}(\cdot,\cdot)$ denote convergence in distribution and the normal distribution respectively. \[rem:mle\] We emphasize that the centralized (recursive or batch) estimators, as discussed above, are based on the availability of the entire set of agent observations at a centralized resource at all times; they further require the global model information (the statistics of the agent exponential families $\{\bmu^{\btheta}_{n}\}$ for all $n$) so that the nonlinear innovation gains driving the recursive estimators may be designed appropriately to achieve asymptotic efficiency. In contrast, the goal of this paper is to develop collaborative distributed asymptotically efficient estimators of $\abtheta$ at each agent $n$ of the network, in which the information is distributed: at a given instant of time $t$, each agent $n$ has access to its local sensed data $\mathbf{y}_{n}(t)$ only; to start with, each agent $n$ is aware of its local sensing model $\{\bmu^{\btheta}_{n}\}$ only; and the agents may only collaborate by exchanging information over a (sparse) pre-defined communication network, where inter-agent communication and observation sampling occur at the same rate, i.e., each agent $n$ may only exchange one round of messages with its designated communication neighbors per sampling epoch. To this end, the proposed estimators consist of simultaneous distributed local estimate update and distributed local gain refinement (learning) at each network agent $n$, with closed-loop interaction between the estimation and learning processes. From a technical point of view, in contrast to centralized stochastic approximation based estimators, the estimators developed in the paper are of the distributed nonlinear stochastic approximation type with necessarily mixed time-scale dynamics; the mixed time-scale dynamics arise as a result of suitably crafting the relative intensities of the potentials for local collaboration and local innovation, necessary for achieving asymptotic efficiency. Distributed estimators of mixed time-scale dynamics have been introduced and studied in [@KarMouraRamanan-Est-2008; @KarMoura-LinEst-JSTSP-2011]; we refer to them as *consensus + innovations* estimators. Asymptotically Efficient Distributed Estimator {#sec:asefdistest} ============================================== In this section, we provide distributed sequential estimators for $\abtheta$ that are not only consistent but asymptotically optimal, in that the local asymptotic covariances at each agent coincide with the inverse of the centralized Fisher information rate $I^{-1}(\abtheta)$ associated with the exponential observation statistics under consideration.
Other than challenges encountered in the distributed implementation, a major difficulty in obtaining such asymptotically efficient distributed estimators concerns the design of the local estimator or innovation gains (to be made precise later); in particular, in contrast to optimal estimation in linear statistical models [@KarMoura-LinEst-JSTSP-2011; @Kar-AdaptiveDistEst-SICON-2012], in the nonlinear non-Gaussian setting, the innovation gains that achieve asymptotic efficiency are necessarily dependent on the true value $\abtheta$ of the parameter to be estimated. Since the value of $\abtheta$ (and hence the optimal estimator gains) is not available in advance, we propose a distributed estimation approach that involves a distributed online gain learning procedure that proceeds in conjunction with the sequential estimation task. As a result, a closed-loop interaction occurs between the gain learning and the parameter estimation that is reminiscent of the certainty equivalence approach to adaptive estimation and control – although the analysis methodology is significantly different from classical techniques used in adaptive processing, primarily due to the distributed nature of our problem and its mixed time-scale dynamics. Specifically, the main idea in the proposed distributed estimation methodology is to generate simultaneously two distributed estimators $\{\bx_{n}(t)\}$ and $\{\mathbf{x}_{n}(t)\}$ at each agent $n$; the former, the auxiliary estimate sequences $\{\bx_{n}(t)\}$, are driven by constant (non-adaptive) innovation gains and, while consistent for $\abtheta$, are suboptimal in the sense of asymptotic covariance. The consistent auxiliary estimates are used to generate the sequence of optimal adaptive innovation gains through another online distributed learning procedure; the resulting adaptive gain process is in turn used to drive the evolution of the desired estimate sequences $\{\mathbf{x}_{n}(t)\}$ at each agent $n$, which will be shown to be asymptotically efficient from the asymptotic covariance viewpoint. We emphasize that the construction of the auxiliary estimate sequences, the adaptive gain refinement, and the generation of the optimal estimators are all executed simultaneously. Algorithms and Assumptions {#subsec:alg} -------------------------- The proposed optimal distributed estimation methodology consists of the following three simultaneous update processes at each agent $n$: auxiliary estimate sequence $\{\bx_{n}(t)\}$ generation; adaptive gain refinement; and optimal estimate sequence $\{\mathbf{x}_{n}(t)\}$ generation. Formally: **Auxiliary Estimate Generation**: Each agent $n$ maintains an $\{\mathcal{F}_{t}\}$-adapted $\mathbb{R}^{M}$-valued estimate sequence $\{\bx_{n}(t)\}$ for $\abtheta$, recursively updated in a distributed fashion as follows: $$\label{aux:1} \bx_{n}(t+1)=\bx_{n}(t)-\beta_{t}\sum_{l\in\Omega_{n}(t)}\left(\bx_{n}(t)-\bx_{l}(t)\right)+\alpha_{t}\left(g_{n}(\mathbf{y}_{n}(t))-h_{n}(\bx_{n}(t))\right),$$ where $\{\beta_{t}\}$ and $\{\alpha_{t}\}$ correspond to appropriate time-varying weighting factors for the agreement (consensus) and innovation (new observation) potentials, respectively, whereas $\Omega_{n}(t)$ denotes the $\{\mathcal{F}_{t+1}\}$-adapted time-varying random neighborhood of agent $n$ at time $t$.
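For concreteness, the following is a minimal numerical sketch of the auxiliary update above, under an *assumed* Gaussian instantiation of the sensing model in which $g_{n}(\mathbf{y}_{n})=H_{n}^{\top}\mathbf{y}_{n}$ and $h_{n}(\btheta)=H_{n}^{\top}H_{n}\btheta$ for local observation matrices $H_{n}$; the matrices $H_{n}$, the random Erdős–Rényi communication graphs, and the step-size constants are illustrative choices and are not prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 10, 4                       # number of agents, parameter dimension
theta_star = rng.normal(size=M)    # true (unknown) parameter theta*

# Illustrative local sensing matrices: agent n sees y_n(t) = H_n theta* + noise,
# an exponential family with g_n(y) = H_n^T y and h_n(theta) = H_n^T H_n theta.
# Each agent alone cannot identify theta*; global observability corresponds to
# sum_n H_n^T H_n being positive definite.
H = [rng.normal(size=(1, M)) for _ in range(N)]
assert np.linalg.matrix_rank(sum(h.T @ h for h in H)) == M

def random_laplacian(p_edge=0.4):
    """Laplacian of an i.i.d. Erdos-Renyi communication graph (connected on average)."""
    A = np.triu((rng.random((N, N)) < p_edge).astype(float), 1)
    A = A + A.T
    return np.diag(A.sum(axis=1)) - A

b, tau2 = 2.0, 0.45                # beta_t = b/(t+1)^tau2 with 0 < tau2 < 1/2
x = np.zeros((N, M))               # auxiliary estimates, one row per agent

for t in range(20000):
    alpha = 1.0 / (t + 1.0)        # innovation weight alpha_t
    beta = b / (t + 1.0) ** tau2   # consensus weight beta_t
    L = random_laplacian()
    y = [H[n] @ theta_star + rng.normal(size=1) for n in range(N)]
    x_next = np.empty_like(x)
    for n in range(N):
        consensus = L[n] @ x                               # sum over l in Omega_n of (x_n - x_l)
        innovation = H[n].T @ y[n] - H[n].T @ H[n] @ x[n]  # g_n(y_n(t)) - h_n(x_n(t))
        x_next[n] = x[n] - beta * consensus + alpha * innovation
    x = x_next

print("max estimation error across agents:", np.abs(x - theta_star).max())
```

The refined update introduced next has the same structure; it differs only in that the innovation term is premultiplied by an adaptive gain $K_{n}(t)$.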
**Optimal Estimate Generation**: In addition, each agent $n$ generates an optimal (or refined) estimate sequence $\{\mathbf{x}_{n}(t)\}$, which is also $\{\mathcal{F}_{t}\}$-adapted and evolves as $$\label{opt:1} \mathbf{x}_{n}(t+1)=\mathbf{x}_{n}(t)-\beta_{t}\sum_{l\in\Omega_{n}(t)}\left(\mathbf{x}_{n}(t)-\mathbf{x}_{l}(t)\right)+\alpha_{t}K_{n}(t)\left(g_{n}(\mathbf{y}_{n}(t))-h_{n}(\mathbf{x}_{n}(t))\right).$$ Note that the key difference between the estimate updates in  and  is in the use of adaptive (time-varying) gains $K_{n}(t)$ in the innovation part of the latter, as opposed to static gains in the former. Specifically, the adaptive gain sequence $\{K_{n}(t)\}$ at an agent $n$ is an $\{\mathcal{F}_{t}\}$-adapted $\mathbb{R}^{M\times M}$-valued process, generated according to the following distributed learning procedure. **Adaptive Gain Refinement**: The $\{\mathcal{F}_{t}\}$-adapted gain sequence $\{K_{n}(t)\}$ at an agent $n$ is driven by the auxiliary estimates $\{\bx_{n}(t)\}$ obtained in  and is given by $$\label{gain1}K_{n}(t)=\left(G_{n}(t)+\varphi_{t}I_{M}\right)^{-1}~~~\forall n,$$ where $\{\varphi_{t}\}$ is a deterministic sequence of positive numbers such that $\varphi_{t}\rightarrow 0$ as $t\rightarrow\infty$, and each agent $n$ maintains another $\{\mathcal{F}_{t}\}$-adapted $\mathbb{S}_{+}^{M}$-valued process $\{G_{n}(t)\}$ evolving in a distributed fashion as $$\label{gain2}G_{n}(t+1)=G_{n}(t)-\beta_{t}\sum_{l\in\Omega_{n}(t)}\left(G_{n}(t)-G_{l}(t)\right)+\alpha_{t}\left(I_{n}(\bx_{n}(t))-G_{n}(t)\right)$$ for all $t$, with some positive semidefinite initial condition $G_{n}(0)$ and $I_{n}(\cdot)$ denoting the local Fisher information matrix, see . Before discussing further, we formalize the assumptions on the inter-agent stochastic communication and the algorithm weight sequences $\{\alpha_{t}\}$ and $\{\beta_{t}\}$ in the following: \[ass:conn\] The $\{\mathcal{F}_{t+1}\}$-adapted sequence $\{L_{t}\}$ of communication network Laplacians (modeling the agent communication neighborhoods $\Omega_{n}(t)$-s at each time $t$) is temporally i.i.d. with $L_{t}$ being independent of $\mathcal{F}_{t}$ for each $t$. Further, the sequence $\{L_{t}\}$ is connected on the average, i.e., $\lambda_{2}(\overline{L})>0$, where $\overline{L}=\East[L_{t}]$ denotes the mean Laplacian. \[ass:weight\] The weight sequences $\{\beta_{t}\}$ and $\{\alpha_{t}\}$ satisfy $$\label{weight} \alpha_{t}=\frac{1}{(t+1)}~~~\mbox{and}~~~\beta_{t}=\frac{b}{(t+1)^{\tau_{2}}},$$ where $b>0$ and $0<\tau_{2}< 1/2$. Further, the sequence $\{\varphi_{t}\}$ in  satisfies $$\label{ass:weight3} \lim_{\tri}(t+1)^{\mu_{2}}\varphi_{t}=0$$ for some positive constant $\mu_{2}$. The following weak linear growth condition on the functions $h_{n}(\cdot)$ driving the (nonlinear) innovations in - will be assumed: \[ass:lingrowth\] For each $\btheta\in\mathbb{R}^{M}$, there exist positive constants $c_{1}^{\btheta}$ and $c_{2}^{\btheta}$, such that, for each $n$, the function $h_{n}(\cdot)$ in  satisfies the local linear growth condition, $$\label{ass:lingrowth1} \left\|h_{n}(\btheta^{\prime})-h_{n}(\btheta)\right\|\leq c_{1}^{\btheta}\left\|\btheta^{\prime}-\btheta\right\|+c_{2}^{\btheta},$$ for all $\btheta^{\prime}\in\mathbb{R}^{M}$. Main Results {#subsec:mainres} ------------ We formally state the main results of the paper, the proofs appearing in Section \[sec:proof\_main\_res\]. \[th:estcons\] Let Assumptions \[ass:globobs\],\[ass:conn\],\[ass:lingrowth\] and \[ass:weight\] hold.
Then, for each $n$, the estimate sequence $\{\mathbf{x}_{n}(t)\}$ is strongly consistent. In particular, we have $$\label{th:estcons1} \Past\left(\lim_{t\rightarrow\infty}(t+1)^{\tau}\left\|\mathbf{x}_{n}(t)-\abtheta\right\|=0\right)=1$$ for each $n$ and $\tau\in [0,1/2)$. The consistency in Theorem \[th:estcons\] is order optimal in that  fails to hold with an exponent $\tau\geq 1/2$ for any (including centralized) estimation procedure. The next result concerns the asymptotic efficiency of the estimates generated by the proposed distributed scheme. \[th:estn\] Let Assumptions \[ass:globobs\],\[ass:conn\],\[ass:lingrowth\] and \[ass:weight\] hold. Then, for each $n$ we have $$\label{th:estn200} \sqrt{t+1}\left(\mathbf{x}_{n}(t)-\abtheta\right)\Longrightarrow\mathcal{N}\left(\mathbf{0},I^{-1}(\abtheta)\right),$$ where $\mathcal{N}(\cdot,\cdot)$ and $\Longrightarrow$ denote the Gaussian distribution and weak convergence, respectively. A Generic Consistent Distributed Estimator {#sec:genconsest} ========================================== With a view to understanding the asymptotic behavior of the auxiliary estimate processes $\{\bx_{n}(t)\}$, $n=1,\cdots,N$, introduced in Section \[subsec:alg\], see , we study a somewhat more general class of distributed estimate processes with time-varying local innovation gains. Other than establishing consistency of these estimates (see Theorem \[th:genest\]), we obtain pathwise convergence rate asymptotics of the estimate processes to $\abtheta$ (see Theorem \[th:genrate\]). These latter convergence rate results will be used to analyze the impact of the auxiliary estimates in the adaptive gain computation -. \[th:genest\] For each $n$, let $\{\mathbf{z}_{n}(t)\}$ be an $\mathbb{R}^{M}$-valued $\{\mathcal{F}_{t}\}$-adapted process (estimator) evolving as follows: $$\label{th:genest1}\mathbf{z}_{n}(t+1)=\mathbf{z}_{n}(t)-\beta_{t}\sum_{l\in\Omega_{n}(t)}\left(\mathbf{z}_{n}(t)-\mathbf{z}_{l}(t)\right)+\alpha_{t}K_{n}(t)\left(g_{n}(\mathbf{y}_{n}(t))-h_{n}(\mathbf{z}_{n}(t))\right).$$ Suppose Assumptions \[ass:sensmod\],\[ass:lingrowth\] and \[ass:conn\] on the network system model hold, and the weight sequences $\{\beta_{t}\}$ and $\{\alpha_{t}\}$ satisfy Assumption \[ass:weight\]. Additionally, let the matrix gain processes $\{K_{n}(t)\}$ be $\mathbb{S}_{+}^{M}$-valued $\{\mathcal{F}_{t}\}$-adapted, and there exist a positive definite matrix $\mathcal{K}$ and a constant $\tau_{3}>0$, such that the gain processes $\{K_{n}(t)\}$ converge uniformly to $\mathcal{K}$ at rate $\tau_{3}$, i.e., for each $\delta>0$, there exists a deterministic time $t_{\delta}$, such that for all $n$ $$\label{lm:bg1}\Past\left(\sup_{t\geq t_{\delta}}(t+1)^{\tau_{3}}\left\|K_{n}(t)-\mathcal{K}\right\|\leq\delta\right)=1.$$ Then, for each $n$, $\{\mathbf{z}_{n}(t)\}$ is a consistent estimator of $\btheta^{\ast}$, i.e., $\mathbf{z}_{n}(t)\rightarrow\btheta^{\ast}$ as $t\rightarrow\infty$ a.s. The proof of Theorem \[th:genest\] is accomplished in steps, the key intermediate ingredients being Lemma \[lm:bg\] and Proposition \[prop:Lg\] concerning the boundedness of the processes $\{\mathbf{z}_{n}(t)\}$, $n=1,\cdots,N$, and a Lyapunov-type construction, respectively. \[lm:bg\] Let the hypotheses of Theorem \[th:genest\] hold.
Then, for each $n$, the process $\{\mathbf{z}_{n}(t)\}$ is bounded a.s., i.e., $$\label{lm:bg2}\Past\left(\sup_{t\geq 0}\left\|\mathbf{z}_{n}(t)\right\|<\infty\right)=1.$$ Let $\wz_{n}(t)=\mathbf{z}_{n}(t)-\abtheta$ and denote by $\mathbf{z}_{t}$, $\wz_{t}$ and $\abbtheta$ the $\mathbb{R}^{NM}$-valued ${\boldsymbol{\operatorname{Vec}}}\left(\mathbf{z}_{n}(t)\right)$, ${\boldsymbol{\operatorname{Vec}}}\left(\wz_{n}(t)\right)$, and $\mathbf{1}_{N}\otimes\abtheta$, respectively. Noting that $\left(L_{t}\otimes I_{M}\right)\left(\mathbf{1}_{N}\otimes\abtheta\right)=\mathbf{0}$, the process $\{\wz_{t}\}$ is seen to satisfy $$\label{lm:bg3} \wz_{t+1}=\wz_{t}-\beta_{t}\left(L_{t}\otimes I_{M}\right)\wz_{t}-\alpha_{t}\bK_{t}\left(\bh(\mathbf{z}_{t})-\bh(\abbtheta)\right)+\alpha_{t}\bK_{t}\left(\bg(\mathbf{y}_{t})-\bh(\abbtheta)\right),$$ where $$\label{lm:bg4} \bh(\mathbf{z}_{t})={\boldsymbol{\operatorname{Vec}}}\left(h_{n}(\mathbf{z}_{n}(t))\right),~~\bh(\abbtheta)={\boldsymbol{\operatorname{Vec}}}\left(h_{n}(\abtheta)\right),~~\bg(\mathbf{y}_{t})={\boldsymbol{\operatorname{Vec}}}\left(g_{n}(\mathbf{y}_{n}(t))\right),$$ and $\bK_{t}={\boldsymbol{\operatorname{Diag}}}(K_{n}(t))$. Note that, by hypothesis, $\bK_{t}\in\mathbb{S}_{++}^{NM}$ and define the $\mathbb{R}_{+}$-valued $\{\mathcal{F}_{t}\}$-adapted process $\{V_{t}\}$ by $$\label{lm:bg5}V_{t}=\wz_{t}^{\top}\left(I_{N}\otimes\mathcal{K}^{-1}\right)\wz_{t}$$ for all $t$. Note that by  we obtain $$\begin{aligned} \label{lm:bg84} \left(I_{N}\otimes\mathcal{K}\right)^{-1}\wz_{t+1}=\left(I_{N}\otimes\mathcal{K}^{-1}\right)\wz_{t}-\beta_{t}\left(L_{t}\otimes\mathcal{K}^{-1}\right)\wz_{t}\\-\alpha_{t}\left(I_{N}\otimes\mathcal{K}\right)^{-1}\bK_{t}\left(\bh(\mathbf{z}_{t})-\bh(\abbtheta)\right) + \alpha_{t}\left(I_{N}\otimes\mathcal{K}\right)^{-1}\bK_{t}\left(\bg(\mathbf{y}_{t})-\bh(\abbtheta)\right).\end{aligned}$$ By  we have for all $t\geq 0$ $$\label{lm:bg6}\East\left[\bg(\mathbf{y}_{t})-\bh(\abbtheta)\right]=\mathbf{0},$$ and using the temporal independence of the Laplacian sequence we obtain $$\begin{aligned} \nonumber \East\left[V_{t+1}~|~\mathcal{F}_{t}\right] &=V_{t}-2\beta_{t}\wz_{t}^{\top}\left(\OL\otimes\mathcal{K}^{-1}\right)\wz_{t}-2\alpha_{t}\wz_{t}^{\top}\left(I_{N}\otimes\mathcal{K}^{-1}\right) K_{t}\left(\bh(\mathbf{z}_{t})-\bh(\abbtheta)\right) \\ \nonumber & +\beta_{t}^{2}\wz_{t}^{\top}\East\left[\left(\OL\otimes I_{M}\right)\left(I_{N}\otimes\mathcal{K}^{-1}\right)\left(\OL\otimes I_{M}\right)\right]\wz_{t} \\ \nonumber & +2\alpha_{t}\beta_{t}\wz_{t}^{\top}\left(\OL\otimes I_{M}\right)\left(I_{N}\otimes\mathcal{K}^{-1}\right)K_{t}\left(\bh(\mathbf{z}_{t}-\bh(\abbtheta)\right)\\ & +\alpha_{t}^{2}\left(\bh(\mathbf{z}_{t}-\bh(\abtheta)\right)^{\top}K_{t}\left(I_{N}\otimes\mathcal{K}^{-1}\right)K_{t} \left(\bh(\mathbf{z}_{t}-\bh(\abbtheta)\right) \\ \label{lm:bg7} & +\alpha_{t}^{2}\East\left[\left(\bg(\mathbf{y}_{t})-\bh(\abbtheta)\right)^{\top}K_{t}\left(I_{N}\otimes\mathcal{K}^{-1}\right) K_{t}\left(\bg(\mathbf{y}_{t})-\bh(\abbtheta)\right)\right]\end{aligned}$$ for all $t\geq 0$. 
Recall the definition of consensus subspace in Definition \[def:consspace\] and note that by using the properties of the Laplacian $\OL$ and matrix Kronecker products we have $$\label{lm:bg201} \wz_{t}^{\top}\left(\OL\otimes\mathcal{K}^{-1}\right)\wz_{t}=\left(\wz_{t}\right)_{\PC}^{\top}\left(\OL\otimes\mathcal{K}^{-1}\right)\left(\wz_{t}\right)_{\PC}\geq\lambda_{2}(\OL)\lambda_{1}\left(\mathcal{K}^{-1}\right)\left\|\left(\wz_{t}\right)_{\PC}\right\|^{2}$$ for all $t\geq 0$, where $\lambda_{1}\left(\mathcal{K}^{-1}\right)>0$ denotes the smallest eigenvalue of the positive definite matrix $\mathcal{K}^{-1}$. Now consider the inequality $$\begin{aligned} \label{lm:bg8}\wz_{t}^{\top}\left(\bh(\mathbf{z}_{t})-\bh(\abbtheta)\right) = \sum_{n=1}^{N}\left(\mathbf{z}_{n}(t)-\abtheta\right)^{\top}\left(h_{n}(\mathbf{z}_{n}(t))-h_{n}(\abtheta)\right)\geq 0\end{aligned}$$ (where the non-negativity of the terms in the summation follows from Proposition \[prop:analytic\]), and note that, by Assumption \[ass:lingrowth\] and hypothesis , there exist positive constants $c_{1}$ and $t_{1}$ large enough such that $$\begin{aligned} \label{lm:bg9}\wz_{t}^{\top}\left(I_{N}\otimes\mathcal{K}^{-1}\right)K_{t}\left(\bh(\mathbf{z}_{t})-\bh(\abbtheta)\right)\\ \geq \wz_{t}^{\top}\left(\bh(\mathbf{z}_{t})-\bh(\abbtheta)\right)-\left|\wz_{t}^{\top}\left(I_{N}\otimes\mathcal{K}^{-1}\right)\left(K_{t}-\left(I_{N}\otimes\mathcal{K}^{-1}\right)\right)\left(\bh(\mathbf{z}_{t})-\bh(\abbtheta)\right)\right|\\ \geq -\left\|\wz_{t}\right\|\left\|I_{N}\otimes\mathcal{K}^{-1}\right\|\left\|K_{t}-\left(I_{N}\otimes\mathcal{K}^{-1}\right)\right\|\left\|\bh(\mathbf{z}_{t})-\bh(\abbtheta)\right\|\\ \geq -c_{1}\left(1/(t+1)^{\tau_{3}}\right)\left(1+\left\|\wz_{t}\right\|^{2}\right)\end{aligned}$$ for all $t\geq t_{1}$, where we also use the inequality $\|\wz_{t}\|\leq \|\wz_{t}\|^{2}+1$. Similarly, by invoking the boundedness of the matrices involved and the linear growth condition on the $h_{n}(\cdot)$-s and making $c_{1}$ and $t_{1}$ larger if necessary, we obtain the following sequence of inequalities for all $t\geq t_{1}$: $$\begin{aligned} \label{lm:bg10} \wz_{t}^{\top}\East\left[\left(\OL\otimes I_{M}\right)\left(I_{N}\otimes\mathcal{K}^{-1}\right)\left(\OL\otimes I_{M}\right)\right]\wz_{t}\\ =\left(\wz_{t}\right)_{\PC}^{\top}\East\left[\left(\OL\otimes I_{M}\right)\left(I_{N}\otimes\mathcal{K}^{-1}\right)\left(\OL\otimes I_{M}\right)\right]\left(\wz_{t}\right)_{\PC}\leq c_{1}\left\|\left(\wz_{t}\right)_{\PC}\right\|^{2},\end{aligned}$$ $$\label{lm:bg11}\wz_{t}^{\top}\left(\OL\otimes I_{M}\right)\left(I_{N}\otimes\mathcal{K}^{-1}\right)K_{t}\left(\bh(\mathbf{z}_{t})-\bh(\abbtheta)\right)\leq c_{1}\left(1+\left\|\wz_{t}\right\|^{2}\right),$$ $$\label{lm:bg12}\left(\bh(\mathbf{z}_{t}-\bh(\abtheta)\right)^{\top}K_{t}\left(I_{N}\otimes\mathcal{K}^{-1}\right)K_{t}\left(\bh(\mathbf{z}_{t}-\bh(\abbtheta)\right)\leq c_{1}\left(1+\left\|\wz_{t}\right\|^{2}\right),$$ and $$\label{lm:bg13}\East\left[\left(\bg(\mathbf{y}_{t})-\bh(\abbtheta)\right)^{\top}K_{t}\left(I_{N}\otimes\mathcal{K}^{-1}\right)K_{t}\left(\bg(\mathbf{y}_{t})-\bh(\abbtheta)\right)\right]\leq c_{1},$$ where the last inequality uses the fact that $\bg(\mathbf{y}_{t})$ possesses moments of all orders due to the exponential statistics. 
Noting that there exist positive constants $c_{2}$ and $c_{3}$ such that $$\label{lm:bg14} c_{2}\left\|\wz_{t}\right\|^{2}\leq \wz_{t}^{\top}\left(I_{N}\otimes\mathcal{K}^{-1}\right)\wz_{t}=V_{t}\leq c_{3}\left\|\wz_{t}\right\|^{2}$$ for all $t$, by - we have for all $t\geq t_{1}$ $$\begin{aligned} \label{lm:bg15} \East\left[V_{t+1}~|~\mathcal{F}_{t}\right]\leq\left(1+c_{4}\alpha_{t}\left(\frac{1}{(t+1)^{\tau_{3}}}+\beta_{t}+ \alpha_{t}\right)\right)V_{t} \\ -c_{5}\left(\beta_{t}-\beta_{t}^{2}\right)\left\|\left(\wz_{t}\right)_{\PC}\right\|^{2}+c_{6}\left(\frac{\alpha_{t}}{(t+1)^{\tau_{3}}}+\alpha_{t}\beta_{t}+\alpha_{t}^{2}\right)\end{aligned}$$ for some positive constants $c_{4}$, $c_{5}$ and $c_{6}$. Since $\beta_{t}\rightarrow 0$ as $t\rightarrow\infty$ by , we may choose $t_{2}$ large enough (larger than $t_{1}$) such that $\left(\beta_{t}-\beta_{t}^{2}\right)\geq 0$ for all $t\geq t_{2}$. Further, the hypotheses on the weight sequences  confirm the existence of constants $\tau_{4}$ and $\tau_{5}$ strictly greater than 1, and positive constants $c_{7}$ and $c_{8}$, such that $$\label{lm:bg16} c_{4}\alpha_{t}\left(\frac{1}{(t+1)^{\tau_{3}}}+\beta_{t}+\alpha_{t}\right)\leq \frac{c_{7}}{(t+1)^{\tau_{4}}}=\gamma_{t}$$ and $$\label{lm:bg17} c_{6}\left(\frac{\alpha_{t}}{(t+1)^{\tau_{3}}}+\alpha_{t}\beta_{t}+\alpha_{t}^{2}\right)\leq \frac{c_{8}}{(t+1)^{\tau_{5}}}=\gamma^{\prime}_{t}$$ for all $t\geq t_{2}$ (by making $t_{2}$ larger if necessary). By the above construction we then obtain $$\label{lm:bg18}\East\left[V_{t+1}~|~\mathcal{F}_{t}\right]\leq\left(1+\gamma_{t}\right)V_{t}+\gamma^{\prime}_{t}$$ for all $t\geq t_{2}$ with the positive weight sequences $\{\gamma_{t}\}$ and $\{\gamma^{\prime}_{t}\}$ being summable, i.e., $$\label{lm:bg19}\sum_{t\geq 0}\gamma_{t}<\infty~~\mbox{and}~~\sum_{t\geq 0}\gamma^{\prime}_{t}<\infty.$$ Note that, by , the product $\prod_{s=t}^{\infty}(1+\gamma_{s})$ exists for all $t$, and define by $\{W_{t}\}$ the $\mathbb{R}_{+}$-valued $\{\mathcal{F}_{t}\}$-adapted process such that $$\label{lm:bg20} W_{t}=\left(\prod_{s=t}^{\infty}(1+\gamma_{s})\right)V_{t}+\sum_{s=t}^{\infty}\gamma^{\prime}_{s},~~~\forall t.$$ By , the process $\{W_{t}\}$ may be shown to satisfy $$\label{lm:bg21}\East\left[W_{t+1}~|~\mathcal{F}_{t}\right]\leq W_{t}$$ for all $t\geq t_{2}$. Being a non-negative supermartingale the process $\{W_{t}\}$ converges a.s. to a bounded random variable $W^{\ast}$ as $t\rightarrow\infty$. It then follows readily by  that $V_{t}\rightarrow W^{\ast}$ a.s. as $t\rightarrow\infty$. In particular, we conclude that the process $\{V_{t}\}$ is bounded a.s., which establishes the desired boundedness of the sequences $\{\mathbf{z}_{n}(t)\}$ for all $n$. The following useful convergence may be extracted as a corollary to Lemma \[lm:bg\]. \[corr:bg\] Under the hypotheses of Lemma \[lm:bg\], there exists a finite random variable $V^{\ast}$ such that $V_{t}\rightarrow V^{\ast}$ a.s. as $t\rightarrow\infty$, where $V_{t}=\wz_{t}^{\top}\left(I_{N}\otimes\mathcal{K}^{-1}\right)\wz_{t}$ as in . The following Lyapunov-type construction, whose proof is relegated to Appendix \[sec:app1\], will be critical to the subsequent development. 
\[prop:Lg\] Let $\Vap\in (0,1)$ and $\Gamma_{\Vap}$ denote the set $$\label{prop:Lg1} \Gamma_{\Vap}=\left\{\mathbf{z}\in\mathbb{R}^{NM}~:~\Vap\leq\left\|\mathbf{z}-\abbtheta\right\|\leq 1/\Vap\right\}.$$ For each $t\geq 0$, denote by $\mathcal{H}_{t}:\mathbb{R}^{NM}\mapsto\mathbb{R}$ the function given by $$\label{prop:Lg3} \mathcal{H}_{t}(\mathbf{z})=\frac{b_{\beta}\beta_{t}}{\alpha_{t}}\left(\mathbf{z}-\abbtheta\right)^{\top}\left(\OL\otimes\mathcal{K}^{-1}\right)\left(\mathbf{z}-\abbtheta\right)+\left(\mathbf{z}-\abbtheta\right)^{\top}\left(\bh(\mathbf{z})-\bh(\abbtheta)\right)$$ for all $\mathbf{z}\in\mathbb{R}^{NM}$, where the matrix $\mathcal{K}^{-1}\in\mathbb{S}_{++}^{M}$ and $b_{\beta}>0$ is a constant. Then, there exist $t_{\Vap}>0$ and a constant $\bc_{\Vap}>0$ such that for all $t\geq t_{\Vap}$ $$\label{prop:Lg4}\mathcal{H}_{t}(\mathbf{z})\geq\bc_{\Vap}\left\|\mathbf{z}-\abbtheta\right\|^{2},~~~\forall \mathbf{z}\in\Gamma_{\Vap}.$$ We now complete the proof of Theorem \[th:genest\]. In what follows we use the notation and definitions formulated in the proof of Lemma \[lm:bg\]. Let us consider $\Vap\in (0,1)$ and let $\rho_{\Vap}$ denote the $\{\mathcal{F}_{t}\}$ stopping time $$\label{th:genest10} \rho_{\Vap}=\inf\left\{t\geq 0~:~\mathbf{z}_{t}\notin\Gamma_{\Vap}\right\},$$ where $\Gamma_{\Vap}$ is defined in . Let $\{V_{t}\}$ be the $\{\mathcal{F}_{t}\}$-adapted process defined in  and denote by $\{V^{\Vap}_{t}\}$ the stopped process $$\label{th:genest11}V^{\Vap}_{t}=V_{t\wedge\rho_{\Vap}},~~~\forall t,$$ which is readily seen to be $\{\mathcal{F}_{t}\}$ adapted. Noting that $$\label{th:genest12}V^{\Vap}_{t+1}=V_{t+1}\mathbb{I}\left(\rho_{\Vap}>t\right)+V_{\rho_{\Vap}}\mathbb{I}\left(\rho_{\Vap}\leq t\right)$$ and the fact that the indicator function $\mathbb{I}\left(\rho_{\Vap}>t\right)$ and the random variable $V_{\rho_{\Vap}}\mathbb{I}\left(\rho_{\Vap}\leq t\right)$ are adapted to $\mathcal{F}_{t}$ for all $t$ ($\rho_{\Vap}$ being an $\{\mathcal{F}_{t}\}$ stopping time), we have $$\label{th:genest13}\East\left[V^{\Vap}_{t+1}~|~\mathcal{F}_{t}\right]=\East\left[V_{t+1}~|~\mathcal{F}_{t}\right]\mathbb{I}\left(\rho_{\Vap}>t\right)+V_{\rho_{\Vap}}\mathbb{I}\left(\rho_{\Vap}\leq t\right)$$ for all $t$. 
Recall the function $\mathcal{H}_{t}(\cdot)$ defined in ; setting $b_{\beta}=1/2$ in the definition of $\mathcal{H}_{t}(\cdot)$ we obtain $$\begin{aligned} \label{th:genest200}2\beta_{t}\wz_{t}^{\top}\left(\OL\otimes\mathcal{K}^{-1}\right)\wz_{t}+2\alpha_{t}\wz_{t}^{\top}\left(I_{N}\otimes\mathcal{K}^{-1}\right)K_{t}\left(\bh(\mathbf{z}_{t})-\bh(\abbtheta)\right)\\ =2\alpha_{t}\mathcal{H}_{t}(\mathbf{z}_{t})+\beta_{t}\wz_{t}^{\top}\left(\OL\otimes\mathcal{K}^{-1}\right)\wz_{t}\\+2\alpha_{t}\wz_{t}^{\top}\left(I_{N}\otimes\mathcal{K}^{-1}\right)\left(K_{t}-\left(I_{N}\otimes\mathcal{K}^{-1}\right)\right)\left(\bh(\mathbf{z}_{t})-\bh(\abbtheta)\right).\end{aligned}$$ A slight rearrangement of the terms in the expansion  then yields $$\begin{aligned} \label{th:genest14} \East\left[V_{t+1}~|~\mathcal{F}_{t}\right]=V_{t}-2\alpha_{t}\mathcal{H}_{t}(\mathbf{z}_{t})-\beta_{t}\wz_{t}^{\top}\left(\OL\otimes\mathcal{K}^{-1}\right)\wz_{t}\\-2\alpha_{t}\wz_{t}^{\top}\left(I_{N}\otimes\mathcal{K}^{-1}\right)\left(K_{t}-\left(I_{N}\otimes\mathcal{K}^{-1}\right)\right)\left(\bh(\mathbf{z}_{t})-\bh(\abbtheta)\right)\\ +\beta_{t}^{2}\wz_{t}^{\top}\East\left[\left(\OL\otimes I_{M}\right)\left(I_{N}\otimes\mathcal{K}^{-1}\right)\left(\OL\otimes I_{M}\right)\right]\wz_{t}\\+2\alpha_{t}\beta_{t}\wz_{t}^{\top}\left(\OL\otimes I_{M}\right)\left(I_{N}\otimes\mathcal{K}^{-1}\right)K_{t}\left(\bh(\mathbf{z}_{t}-\bh(\abbtheta)\right)\\ +\alpha_{t}^{2}\left(\bh(\mathbf{z}_{t}-\bh(\abtheta)\right)^{\top}K_{t}\left(I_{N}\otimes\mathcal{K}^{-1}\right)K_{t}\left(\bh(\mathbf{z}_{t}-\bh(\abbtheta)\right)\\+\alpha_{t}^{2}\East\left[\left(\bg(\mathbf{y}_{t})-\bh(\abbtheta)\right)^{\top}K_{t}\left(I_{N}\otimes\mathcal{K}^{-1}\right)K_{t}\left(\bg(\mathbf{y}_{t})-\bh(\abbtheta)\right)\right]\end{aligned}$$ for all $t\geq 0$, where $\mathcal{H}_{t}(\cdot)$ is defined in . The inequalities in - then show that there exist positive constants $b_{1}$, $b_{2}$ and $b_{3}$, and a deterministic time $t_{1}$ (large enough), such that, $$\begin{aligned} \label{th:genest15}\East\left[V_{t+1}~|~\mathcal{F}_{t}\right]\leq\left(1+b_{1}\left(\alpha_{t}(t+1)^{-\tau_{3}}+\alpha^{2}_{t}+\alpha_{t}\beta_{t}\right)\right)V_{t}-2\alpha_{t}\mathcal{H}_{t}(\mathbf{z}_{t})\\ -b_{2}\left(\beta_{t}-\beta_{t}^{2}\right)\left\|(\wz_{t})_{\PC}\right\|^{2}+b_{3}\left(\alpha_{t}(t+1)^{-\tau_{3}}+\alpha^{2}_{t}+\alpha_{t}\beta_{t}\right)\end{aligned}$$ for all $t\geq t_{1}$. Note that, by definition, on the event $\{\rho_{\Vap}>t\}$ we have $\mathbf{z}_{t}\in\Gamma_{\Vap}$, and hence, an immediate application of Proposition \[prop:Lg\] establishes the existence of a positive constant $\bc_{\Vap}$ and a large enough deterministic time $t_{\Vap}>0$, such that, $$\label{th:genest16}\mathcal{H}_{t}(\mathbf{z}_{t})\mathbb{I}\left(\rho_{\Vap}>t\right)\geq\bc_{\Vap}\|\wz_{t}\|^{2}\mathbb{I}\left(\rho_{\Vap}>t\right)$$ for all $t\geq t_{\Vap}$. By  and - and making $t_{\Vap}$ larger if necessary, it then follows that there exist a constant $b_{4}(\Vap)>0$ such that $$\begin{aligned} \label{th:genest17}\East\left[V_{t+1}~|~\mathcal{F}_{t}\right]\mathbb{I}\left(\rho_{\Vap}>t\right)\leq\left[\left(1-b_{4}(\Vap)\alpha_{t}+b_{1}\left(\alpha_{t}(t+1)^{-\tau_{3}}+\alpha^{2}_{t}+\alpha_{t}\beta_{t}\right)\right)V_{t}\right.\\ \left.-b_{2}\left(\beta_{t}-\beta_{t}^{2}\right)\left\|(\wz_{t})_{\PC}\right\|^{2}+b_{3}\left(\alpha_{t}(t+1)^{-\tau_{3}}+\alpha^{2}_{t}+\alpha_{t}\beta_{t}\right)\right]\mathbb{I}\left(\rho_{\Vap}>t\right)\end{aligned}$$ for all $t\geq t_{\Vap}$. 
Since $\alpha_{t}\rightarrow 0$ and $\beta_{t}\rightarrow 0$ as $t\rightarrow\infty$, by choosing $t_{\Vap}$ large enough we may assert $$\label{th:genest20}\beta_{t}-\beta_{t}^{2}\geq 0,~~\forall t\geq t_{\Vap},$$ $$\label{th:genest18} b_{4}(\Vap)\alpha_{t}-b_{1}\left(\alpha_{t}(t+1)^{-\tau_{3}}+\alpha^{2}_{t}+\alpha_{t}\beta_{t}\right)\geq (b_{4}(\Vap)/2)\alpha_{t},~~\forall t\geq t_{\Vap},$$ and the existence of positive constants $b_{5}$ and $\tau_{4}$ such that $$\label{th:genest19}b_{3}\left(\alpha_{t}(t+1)^{-\tau_{3}}+\alpha^{2}_{t}+\alpha_{t}\beta_{t}\right)\leq b_{5}\alpha_{t}(t+1)^{-\tau_{4}},~~\forall t\geq t_{\Vap}.$$ We thus obtain for $t\geq t_{\Vap}$ $$\label{th:genest21}\East\left[V_{t+1}~|~\mathcal{F}_{t}\right]\mathbb{I}\left(\rho_{\Vap}>t\right)\leq\left[\left(1-(b_{4}(\Vap)/2)\alpha_{t}\right)V_{t}+b_{5}\alpha_{t}(t+1)^{-\tau_{4}}\right]\mathbb{I}\left(\rho_{\Vap}>t\right).$$ Note that, by definition of $\Gamma_{\Vap}$, $$\label{th:genest22}\|\wz_{t}\|^{2}\geq\Vap^{2}~~\mbox{on $\{\wz_{t}\in\Gamma_{\Vap}\}$},$$ and, hence, by  we conclude that there exists a constant $b_{6}(\Vap)>0$ such that $$\label{th:genest23}V_{t}\geq b_{6}(\Vap)~~\mbox{on $\{\rho_{\Vap}>t\}$}.$$ By  we then have for all $t\geq t_{\Vap}$ $$\label{th:genest24}\East\left[V_{t+1}~|~\mathcal{F}_{t}\right]\mathbb{I}\left(\rho_{\Vap}>t\right)\leq\left[V_{t}-b_{7}(\Vap)\alpha_{t}+b_{5}\alpha_{t}(t+1)^{-\tau_{4}}\right]\mathbb{I}\left(\rho_{\Vap}>t\right)$$ with $b_{7}(\Vap)$ being another positive constant. Finally, the observation that $\left(b_{7}(\Vap)/2\right)\alpha_{t}\geq b_{5}\alpha_{t}(t+1)^{-\tau_{4}}$ eventually leads to $$\begin{aligned} \label{th:genest25}\East\left[V_{t+1}~|~\mathcal{F}_{t}\right]\mathbb{I}\left(\rho_{\Vap}>t\right)\leq\left[V_{t}-\left(b_{7}(\Vap)/2\right)\alpha_{t}\right]\mathbb{I}\left(\rho_{\Vap}>t\right)\\=V_{t}\mathbb{I}\left(\rho_{\Vap}>t\right)-b_{8}(\Vap)\alpha_{t}\mathbb{I}\left(\rho_{\Vap}>t\right)\end{aligned}$$ for all $t\geq t_{\Vap}$ (making $t_{\Vap}$ larger if necessary), where $b_{8}(\Vap)=b_{7}(\Vap)/2$. By  we then obtain $$\begin{aligned} \label{th:genest26}\East\left[V^{\Vap}_{t+1}~|~\mathcal{F}_{t}\right]\leq V_{t}\mathbb{I}\left(\rho_{\Vap}>t\right)+V_{t_{\Vap}}\mathbb{I}\left(\rho_{\Vap}\leq t\right)-b_{8}(\Vap)\alpha_{t}\mathbb{I}\left(\rho_{\Vap}>t\right)\\ = V^{\Vap}_{t}-b_{8}(\Vap)\alpha_{t}\mathbb{I}\left(\rho_{\Vap}>t\right)\end{aligned}$$ for all $t\geq t_{\Vap}$. Note that the $\{\mathcal{F}_{t}\}$-adapted process $\{V^{\Vap}_{t}\}_{t\geq t_{\Vap}}$ satisfies $\East[V^{\Vap}_{t+1}|\mathcal{F}_{t}]\leq V^{\Vap}_{t}$ for all $t\geq t_{\Vap}$; hence, being a (non-negative) supermartingale it converges, i.e., there exists a finite random variable $V_{\Vap}^{\ast}$ such that $V^{\Vap}_{t}\rightarrow V^{\ast}_{\Vap}$ a.s. as $t\rightarrow\infty$. Now consider the $\{\mathcal{F}_{t}\}$-adapted $\mathbb{R}_{+}$-valued process $\{W^{\Vap}_{t}\}$ given by $$\label{th:genest27} W_{t}^{\Vap}=V_{t}^{\Vap}+b_{8}(\Vap)\sum_{s=0}^{t-1}\alpha_{s}\mathbb{I}\left(\rho_{\Vap}>s\right),$$ and note that, by  we obtain $$\label{th:genest28}\East\left[W^{\Vap}_{t+1}~|~\mathcal{F}_{t}\right]\leq V^{\Vap}_{t}-b_{8}(\Vap)\alpha_{t}\mathbb{I}\left(\rho_{\Vap}>t\right)+b_{8}(\Vap)\sum_{s=0}^{t}\alpha_{s}\mathbb{I}\left(\rho_{\Vap}>s\right)=W^{\Vap}_{t}$$ for all $t\geq t_{\Vap}$; hence $\{W^{\Vap}_{t}\}_{t\geq t_{\Vap}}$ is a non-negative supermartingale and there exists a finite random variable $W_{\Vap}^{\ast}$ such that $W^{\Vap}_{t}\rightarrow W^{\ast}_{\Vap}$ a.s. 
as $t\rightarrow\infty$. We then conclude by  that the following limit exists: $$\label{th:genest29}\lim_{t\rightarrow\infty}b_{8}(\Vap)\sum_{s=0}^{t-1}\alpha_{s}\mathbb{I}\left(\rho_{\Vap}>s\right)=W^{\ast}_{\Vap}-V^{\ast}_{\Vap}<\infty~~\mbox{a.s.}$$ Given that $\sum_{s=0}^{t-1}\alpha_{s}\rightarrow\infty$ as $t\rightarrow\infty$, the limit condition in  is fulfilled only if the summation terminates at a finite time a.s., i.e., we must have $\rho_{\Vap}<\infty$ a.s. To summarize, we have for each $\Vap\in (0,1)$, $\rho_{\Vap}<\infty$ a.s., i.e., the process $\{\wz_{t}\}$ exits the set $\Gamma_{\Vap}$ in finite time a.s. In particular, for each positive integer $r>1$, let $\rho_{1/r}$ be the stopping time obtained by choosing $\Vap=1/r$ and consider the sequence $\{\wz_{\rho_{1/r}}\}$ (which is well defined due to the a.s. finiteness of each $\rho_{1/r}$) and note that, by definition, $$\label{th:genest30}\left\|\wz_{\rho_{1/r}}\right\|\in [0,1/r)\cup (r,\infty)~~\mbox{a.s.}$$ However, the a.s. boundedness of the sequence $\{\wz_{t}\}$ (see Lemma \[lm:bg\]) implies that $$\label{th:genest31}\Past\left(\left\|\wz_{\rho_{1/r}}\right\|>r~~\mbox{i.o.}\right)=0,$$ where i.o. stands for infinitely often as $r\rightarrow\infty$. Hence, by  we conclude that there exists a finite random integer valued random variable $r^{\ast}$ such that $\|\wz_{\rho_{1/r}}\|<1/r$ for all $r\geq r^{\ast}$. This, in turn implies that $\|\wz_{\rho_{1/r}}\|\rightarrow 0$ as $r\rightarrow\infty$ a.s., and, in particular, we obtain $$\label{th:genest32}\Past\left(\liminf_{t\rightarrow\infty}\left\|\wz_{t}\right\|=0\right)=1.$$ By  we may also conclude that $\liminf_{t\rightarrow\infty}V_{t}=0$ a.s. Noting that the limit of $\{V_{t}\}$ exists a.s. (see Corollary \[corr:bg\]) we further obtain $V_{t}\rightarrow 0$ as $t\rightarrow\infty$ a.s., from which, by another application of , we conclude that $\wz_{t}\rightarrow 0$ as $t\rightarrow\infty$ a.s. and the desired consistency assertion follows. The other major result of this section concerns the pathwise convergence rate of the processes $\{\mathbf{z}_{n}(t)\}$ to $\abtheta$, stated as follows: \[th:genrate\] Let the processes $\{\mathbf{z}_{n}(t)\}$ be defined as in  and the assumptions and hypotheses of Theorem \[th:genest\] hold. Then, there exists a constant $\mu>0$ such that for all $n$ we have $$\label{th:genrate1}\Past\left(\lim_{t\rightarrow\infty}(t+1)^{\mu}\left\|\mathbf{z}_{n}(t)-\abtheta\right\|=0\right)=1.$$ In order to obtain Theorem \[th:genrate\], we will first quantify the rate of agreement among the individual agent estimates. Specifically, we have the following (see Appendix \[sec:app1\] for a proof): \[lm:consrate\] Let the hypotheses of Lemma \[lm:bg\] hold. Then, for each pair of agents $n$ and $l$, we have $$\label{lm:consrate1}\Past\left(\lim_{t\rightarrow\infty}(t+1)^{\tau}\|\mathbf{z}_{n}(t)-\mathbf{z}_{l}(t)\|=0\right)=1,$$ for all $\tau\in (0,1-\tau_{2})$. We now complete the proof of Theorem \[th:genrate\]. In what follows we reuse the notation and intermediate processes constructed in the proofs of Lemma \[lm:bg\] and Theorem \[th:genest\]. Recall $\{V_{t}\}$ to be the $\{\mathcal{F}_{t}\}$-adapted process defined in . 
By  (and the development preceding it) we note that there exist positive constants $b_{1}$, $b_{2}$, and $b_{3}$, and a deterministic time $t_{1}$ (large enough), such that, $$\begin{aligned} \label{th:genrate15}\East\left[V_{t+1}~|~\mathcal{F}_{t}\right]\leq\left(1+b_{1}\left(\alpha_{t}(t+1)^{-\tau_{3}}+\alpha^{2}_{t}+\alpha_{t}\beta_{t}\right)\right)V_{t}-2\alpha_{t}\mathcal{H}_{t}(\mathbf{z}_{t})\\ -b_{2}\left(\beta_{t}-\beta_{t}^{2}\right)\left\|(\wz_{t})_{\PC}\right\|^{2}+b_{3}\left(\alpha_{t}(t+1)^{-\tau_{3}}+\alpha^{2}_{t}+\alpha_{t}\beta_{t}\right)\end{aligned}$$ for all $t\geq t_{1}$, where the function $\mathcal{H}_{t}(\cdot)$ is defined in . By  we obtain $$\begin{aligned} \label{th:genrate16}\mathcal{H}_{t}(\mathbf{z})\geq \left(\mathbf{z}_{\C}-\abbtheta\right)^{\top}\left(\bh(\mathbf{z}_{\C})-\bh(\abbtheta)\right)+\left(\mathbf{z}_{\PC}\right)^{\top}\left(\bh(\mathbf{z})-\bh(\abbtheta)\right)\\+\left(\mathbf{z}_{\C}-\abbtheta\right)^{\top}\left(\bh(\mathbf{z})-\bh(\mathbf{z}_{\C})\right)\\ =\left(\mathbf{z}^{a}-\abtheta\right)^{\top}\left(h(\mathbf{z}^{a})-h(\abtheta)\right)+\left(\mathbf{z}_{\PC}\right)^{\top}\left(\bh(\mathbf{z})-\bh(\abbtheta)\right)\\+\left(\mathbf{z}_{\C}-\abbtheta\right)^{\top}\left(\bh(\mathbf{z})-\bh(\mathbf{z}_{\C})\right)\end{aligned}$$ for all $\mathbf{z}\in\mathbb{R}^{NM}$. Note that, by Proposition \[prop:analytic\], $h(\cdot)$ is continuously differentiable with positive definite gradient $\nabla_{\btheta}h(\abtheta)=I(\abtheta)$ at $\abtheta$; hence, by the mean-value theorem, there exists $\Vap_{0}>0$ such that for all $\btheta\in\mathbb{B}_{\Vap_{0}}(\abtheta)$ we have $$\label{th:genrate17} h(\btheta)-h(\abtheta)=\left(I(\abtheta)+R(\btheta,\abtheta)\right)\left(\btheta-\abtheta\right),$$ where $R(\cdot,\abtheta)$ is a measurable $\mathbb{R}^{M\times M}$-valued function of $\btheta$ such that $$\label{th:genrate18}\left\|R(\btheta,\abtheta)\right\|\leq \frac{\lambda_{1}(I(\abtheta))}{2}~~~\forall~\btheta\in\mathbb{B}_{\Vap_{0}}(\abtheta),$$ with $\lambda_{1}(I(\abtheta))>0$ denoting the smallest eigenvalue of $I(\abtheta)$. Also, observing that the function $\bh(\cdot)$ is locally Lipschitz, we may conclude that there exists a constant $\ell_{\Vap_{0}}$ such that $$\label{th:genrate1000} \left\|\bh(\mathbf{z})-\bh(\pz)\right\|\leq\ell_{\Vap_{0}}\left\|\mathbf{z}-\pz\right\|~~~\forall~\mathbf{z},\pz\in\mathbb{B}_{\Vap_{0}}(\abbtheta).$$ Now note that, by Theorem \[th:genest\], $\wz_{t}\rightarrow\mathbf{0}$ a.s. 
as $t\rightarrow\infty$, and, by Lemma \[lm:consrate\], there exists a constant $\tau>0$ such that $$\label{th:genrate19}\Past\left(\lim_{t\rightarrow\infty}(t+1)^{\tau}\left\|\left(\mathbf{z}_{t}\right)_{\PC}\right\|=0\right)=1.$$ Now consider $\delta>0$ (arbitrarily small) and note that by Egorov’s theorem there exists a (deterministic) time $t_{\delta}>0$ (chosen to be larger than $t_{1}$ in ), such that $\Past\left(\mathcal{A}_{\delta}\right)\geq 1-\delta$, where $\mathcal{A}_{\delta}$ denotes the event $$\label{th:genrate20} \mathcal{A}_{\delta}=\left\{\sup_{t\geq t_{\delta}}\left\|\mathbf{z}_{t}-\abbtheta\right\|\leq\Vap_{0}\right\}\bigcup\left\{\sup_{t\geq t_{\delta}}(t+1)^{\tau}\left\|\left(\mathbf{z}_{t}\right)_{\PC}\right\|\leq\Vap_{0}\right\}.$$ Consequently, denoting by $\rho_{\delta}$ the $\{\mathcal{F}_{t}\}$ stopping time $$\label{th:genrate21} \rho_{\delta}=\inf\left\{t\geq t_{\delta}~:~\mbox{$\left\|\mathbf{z}_{t}-\abbtheta\right\|>\Vap_{0}$ or $(t+1)^{\tau}\left\|\left(\mathbf{z}_{t}\right)_{\PC}\right\|>\Vap_{0}$}\right\},$$ we have that $$\label{th:genrate22}\Past\left(\rho_{\delta}=\infty\right)\geq 1-\delta.$$ Now consider $t\in [t_{\delta},\rho_{\delta})$; noting that $$\label{th:genrate23} \|\mathbf{z}^{a}_{t}-\abtheta\|\leq\|\mathbf{z}_{t}-\abbtheta\|\leq \Vap_{0},$$ we have by the construction in - $$\begin{aligned} \label{th:genrate24} & \left(\left(\mathbf{z}_{t}\right)_{\C}-\abbtheta\right)^{\top}\left(\bh\left(\left(\mathbf{z}_{t}\right)_{\C}\right) -\bh\left(\abbtheta\right)\right)=\left(\mathbf{z}^{a}_{t}-\abtheta\right)^{\top}\left(h(\mathbf{z}^{a}_{t}-h(\abtheta)\right)\\ &\geq\left(\|I(\abtheta)\|-\|R(\mathbf{z}^{a}_{t},\abtheta)\|\right)\left\|\mathbf{z}^{a}_{t}-\abtheta\right\|^{2}\geq (1/2)\lambda_{1}(I(\abtheta))\left\|\mathbf{z}^{a}_{t}-\abtheta\right\|^{2}\\ &\geq b_{4}V_{t}\end{aligned}$$ for some constant $b_{4}>0$. Similarly, using , we have the following inequalities for $t\in [t_{\delta},\rho_{\delta})$: $$\begin{aligned} \label{th:genrate25} \left(\mathbf{z}_{t}\right)_{\PC}^{\top}\left(\bh(\mathbf{z}_{t})-\bh(\abbtheta)\right)\leq\left\|\left(\mathbf{z}_{t}\right)_{\PC}\right\|\left\|\bh(\mathbf{z}_{t})-\bh(\abbtheta)\right\|\\ \leq\Vap_{0}(t+1)^{-\tau}\ell_{\Vap_{0}}\|\mathbf{z}_{t}-\abbtheta\|\leq\Vap_{0}^{2}(t+1)^{-\tau}\ell_{\Vap_{0}},\end{aligned}$$ and $$\begin{aligned} \label{th:genrate26} \left(\left(\mathbf{z}_{t}\right)_{\C}-\abbtheta\right)^{\top}\left(\bh\left(\mathbf{z}_{t}\right)-\bh\left(\left(\mathbf{z}_{t}\right)_{\C}\right)\right)\leq\left\|\left(\mathbf{z}_{t}\right)_{\C}-\abbtheta\right\|.\ell_{\Vap_{0}}\left\|\mathbf{z}_{t}-\left(\mathbf{z}_{t}\right)_{\C}\right\|\\ \leq\Vap_{0}\ell_{\Vap_{0}}\left\|\left(\mathbf{z}_{t}\right)_{\PC}\right\|\leq\Vap_{0}^{2}\ell_{\Vap_{0}}(t+1)^{-\tau}.\end{aligned}$$ Hence, from  and -, we conclude that for $t\in [t_{\delta},\rho_{\delta})$ we have $$\label{th:genrate27} \mathcal{H}_{t}(\mathbf{z}_{t})\geq b_{4}V_{t}-2\Vap_{0}^{2}\ell_{\Vap_{0}}(t+1)^{-\tau}.$$ Let $\{V^{\delta}_{t}\}$ be the $\mathbb{R}_{+}$-valued $\{\mathcal{F}_{t}\}$-adapted process such that $V^{\delta}_{t}=V_{t}\mathbb{I}(t<\rho_{\delta})$ for all $t$. 
Noting that $$\label{th:genrate28} V^{\delta}_{t+1}=V_{t+1}\mathbb{I}(t+1<\rho_{\delta})\leq V_{t+1}\mathbb{I}(t<\rho_{\delta}),$$ we have $$\label{th:genrate29}\East\left[V^{\delta}_{t+1}~|~\mathcal{F}_{t}\right]\leq\mathbb{I}(t<\rho_{\delta})\East\left[V_{t+1}~|~\mathcal{F}_{t}\right]~~~\forall t.$$ For $t\geq t_{\delta}$ we have by  $$\label{th:genrate30}\mathcal{H}_{t}(\mathbf{z}_{t})\mathbb{I}(t<\rho_{\delta})\geq \left(b_{4}V_{t}-2\Vap_{0}^{2}\ell_{\Vap_{0}}(t+1)^{-\tau}\right)\mathbb{I}(t<\rho_{\delta}),$$ hence, it follows from  and  that $$\begin{aligned} \label{th:genrate31} \East\left[V^{\delta}_{t+1}~|~\mathcal{F}_{t}\right]\leq\left(1+b_{1}\left(\alpha_{t}(t+1)^{-\tau_{3}}+\alpha^{2}_{t}+\alpha_{t}\beta_{t}\right)\right)V_{t}\mathbb{I}(t<\rho_{\delta})\\-2\alpha_{t}\left(b_{4}V_{t}-2\Vap_{0}^{2}\ell_{\Vap_{0}}(t+1)^{-\tau}\right)\mathbb{I}(t<\rho_{\delta})\\ -b_{2}\left(\beta_{t}-\beta_{t}^{2}\right)\left\|(\wz_{t})_{\PC}\right\|^{2}+b_{3}\left(\alpha_{t}(t+1)^{-\tau_{3}}+\alpha^{2}_{t}+\alpha_{t}\beta_{t}\right)\\ \leq\left(1-\alpha_{t}\left(2b_{4}-b_{1}(t+1)^{-\tau_{3}}-b_{1}\alpha_{t}-b_{1}\beta_{t}\right)\right)V_{t}^{\delta}-b_{2}\left(\beta_{t}-\beta_{t}^{2}\right)\left\|(\wz_{t})_{\PC}\right\|^{2}\\+\alpha_{t}\left(b_{3}(t+1)^{-\tau_{3}}+b_{3}\alpha_{t}+b_{3}\beta_{t}+4\Vap_{0}^{2}\ell_{\Vap_{0}}(t+1)^{-\tau}\right)\end{aligned}$$ for all $t\geq t_{\delta}$. Observing the decay rates of the various coefficients (see ), we conclude that there exist a deterministic time $t^{\prime}_{\delta}\geq t_{\delta}$, and positive constants (independent of $\delta$) $b_{5}$, $b_{6}$ and $\tau_{4}$ such that $$\label{th:genrate32} \East\left[V^{\delta}_{t+1}~|~\mathcal{F}_{t}\right]\leq\left(1-b_{5}\alpha_{t}\right)V_{t}^{\delta}+b_{6}\alpha_{t}(t+1)^{-\tau_{4}}$$ and $b_{5}\alpha_{t}<1$, for all $t\geq t^{\prime}_{\delta}$. Let us now choose a constant $\omu$ (independently of $\delta$) such that $\omu\in\left(0,b_{5}\wedge\tau_{4}\wedge 1\right)$. Then, using the inequality $$\label{th:genrate33} (t+1)^{\omu}-t^{\omu}\leq \omu t^{\omu-1}$$ we have for all $t> t_{\delta}$ $$\label{th:genrate34} (t+1)^{\omu}\left(1-b_{5}\alpha_{t-1}\right)\leq t^{\omu}\left(1+\omu.t^{-1}\right)\left(1-b_{5}.t^{-1}\right)\leq t^{\omu}\left(1-(b_{5}-\omu).t^{-1}\right)$$ and $$\label{th:genrate35} (t+1)^{\omu}t^{-1-\tau_{4}}=\left(1+t^{-1}\right)^{\omu}t^{-1-(\tau_{4}-\omu)}\leq\left(1+(t^{\prime}_{\delta})^{-1}\right)^{\omu}t^{-1-(\tau_{4}-\omu)}.$$ Thus, from  we obtain $$\begin{aligned} \label{th:genrate36} \East\left[(t+1)^{\omu}V^{\delta}_{t}~|~\mathcal{F}_{t-1}\right]\leq\left(1-(b_{5}-\omu).t^{-1}\right)t^{\omu}V_{t-1}^{\delta}+b_{6}\left(1+(t^{\prime}_{\delta})^{-1}\right)^{\omu}t^{-1-(\tau_{4}-\omu)}\\ \leq t^{\omu}V_{t-1}^{\delta}+b^{\delta}_{7}t^{-1-(\tau_{4}-\omu)}\end{aligned}$$ for all $t>t^{\prime}_{\delta}$ for some constant $b^{\delta}_{7}>0$. Since $\tau_{4}>\omu$, we have $\sum_{t}t^{-1-(\tau_{4}-\omu)}<\infty$; denoting by $\{W^{\delta}_{t}\}$ the non-negative $\{\mathcal{F}_{t}\}$-adapted process such that $$\label{th:genrate37}W^{\delta}_{t}=(t+1)^{\omu}V^{\delta}_{t}+b_{6}\left(1+(t^{\prime}_{\delta})^{-1}\right)^{\omu}\sum_{s=t+1}^{\infty}s^{-1-(\tau_{4}-\omu)},$$ we have that $\East[W^{\delta}_{t}|\mathcal{F}_{t-1}]\leq W^{\delta}_{t-1}$ for all $t>t^{\prime}_{\delta}$. Hence, the process $\{W^{\delta}_{t}\}_{t\geq t^{\prime}_{\delta}}$ is a non-negative supermartingale and converges a.s. to a finite non-negative random variable $W^{\delta}_{\ast}$. 
By  we further conclude that $(t+1)^{\omu}V^{\delta}_{t}\rightarrow W^{\delta}_{\ast}$ a.s. as $t\rightarrow\infty$. Now let $\mu\in (0,\omu)$ be another constant (chosen independently of $\delta$); noting that the limit $W^{\delta}_{\ast}$ is finite, the above convergence leads to $$\label{th:genrate38}\Past\left(\lim_{t\rightarrow\infty}(t+1)^{\mu}V^{\delta}_{t}=0\right)=1.$$ By  and the fact that $V^{\delta}_{t}=V_{t}\mathbb{I}(t<\rho_{\delta})$ for all $t$, we conclude that, $$\label{th:genrate39}\lim_{t\rightarrow\infty}(t+1)^{\mu}V_{t}=0~\mbox{a.s. on $\{\rho_{\delta}=\infty\}$}.$$ Hence, by  we obtain $$\label{th:genrate40}\Past\left(\lim_{t\rightarrow\infty}(t+1)^{\mu}V_{t}=0\right)\geq 1-\delta.$$ Since $\delta>0$ was chosen arbitrarily and $\mu>0$ is independent of $\delta$, we have, in fact, $(t+1)^{\mu}V_{t}\rightarrow 0$ a.s. as $t\rightarrow\infty$ by taking $\delta$ to zero. The desired assertion follows immediately by noting the correspondence between the processes $\{\wz_{t}\}$ and $\{V_{t}\}$ (see ). The assertions of Theorem \[th:genrate\] may readily be extended to the case of non-uniform (over sample paths) convergence of the matrix gain sequences $\{K_{n}(t)\}$ to their designated limit $\mathcal{K}$ as follows (see Appendix \[sec:app1\] for a proof): \[corr:genrate\] Let the sequences $\{\mathbf{z}_{n}(t)\}$ be defined as in . Let Assumptions \[ass:globobs\],\[ass:conn\], and \[ass:weight\] hold as in the hypotheses of Theorem \[th:genrate\] and the matrix gain sequences $\{K_{n}(t)\}$ be such that $(t+1)^{\tau_{3}}\|K_{n}(t)-\mathcal{K}\|\rightarrow 0$ a.s. as $t\rightarrow\infty$ for all $n$. Then, the assertions of Theorem \[th:genrate\] continue to hold, i.e., there exists $\mu>0$ such that $(t+1)^{\mu}\|\mathbf{z}_{n}(t)-\abtheta\|\rightarrow 0$ a.s. as $t\rightarrow\infty$ for all $n$. Note that Corollary \[corr:genrate\] is in fact a restatement of Theorem \[th:genrate\] under the relaxed assumption that the convergence of the matrix gain sequences $\{K_{n}(t)\}$ need not be uniform over sample paths. Proofs of Main Results {#sec:proof_main_res} ====================== Throughout this section, Assumption \[ass:globobs\] and Assumptions \[ass:conn\]-\[ass:lingrowth\] are assumed to hold. Convergence of Auxiliary Estimates and Adaptive Gains {#subsec:gain} ----------------------------------------------------- The first result concerns the consistency of the auxiliary estimate sequence $\{\bx_{n}(t)\}$ at each agent. To this end, noting that the evolution of the auxiliary estimates, see , corresponds to a specific instantiation of the generic estimator dynamics analyzed in Theorem \[th:genrate\] (with $K_{n}(t)=I_{M}$ for all $n$ and $t$), we immediately have the following: \[lm:auxcons\] For each $n$, the auxiliary estimate sequence $\{\bx_{n}(t)\}$ (see Section \[subsec:alg\]) is a strongly consistent estimate of $\abtheta$. In particular, there exists a positive constant $\mu_{0}$ such that $(t+1)^{\mu_{0}}\|\bx_{n}(t)-\abtheta\|\rightarrow 0$ as $t\rightarrow\infty$ a.s. for all $n$. Lemma \[lm:auxcons\] and local Lipschitz continuity of the functions $h_{n}(\cdot)$ lead to the following characterization of the adaptive gain sequences $\{K_{n}(t)\}$  driving the local innovation terms of the agent estimates $\{\mathbf{x}_{n}(t)\}$  (see Appendix \[sec:app2\] for a proof): \[lm:gainconv\] There exists a positive constant $\tau^{\prime}$ such that, for each $n$, the adaptive gain sequence $\{K_{n}(t)\}$, see , converges a.s. 
to $N.I^{-1}(\abtheta)$ at rate $\tau^{\prime}$, i.e., $$\label{lm:gainconv1}\mathbb{P}\left(\lim_{\tri}(t+1)^{\tau^{\prime}}\left\|K_{n}(t)-N.I^{-1}(\abtheta)\right\|=0\right)=1$$ where $I(\cdot)$ denotes the centralized Fisher information, see . As an immediate consequence of the above development, we have the following consistency of the distributed agent estimates $\{\mathbf{x}_{n}(t)\}$: \[corr:xconv\] For each $n$, the estimate sequence $\{\mathbf{x}_{n}(t)\}$ (see Section \[subsec:alg\]) is a strongly consistent estimate of $\abtheta$, i.e., $\mathbf{x}_{n}(t)\rightarrow\abtheta$ as $\tri$ a.s. Note that, by Lemma \[lm:gainconv\], there exists $\tau^{\prime}>0$ such that $(t+1)^{\tau^{\prime}}\|K_{n}(t)-N.I^{-1}(\abtheta)\|\rightarrow 0$ as $\tri$ a.s. Thus, the sequences $\{\mathbf{x}_{n}(t)\}$ fall under the purview of Theorem \[th:genrate\] (with $\mathcal{K}=N.I^{-1}(\abtheta)$) and the assertion follows. Proofs of Theorem \[th:estcons\] and Theorem \[th:estn\] {#subsec:mainproofs} -------------------------------------------------------- The key idea in proving Theorem \[th:estcons\] and Theorem \[th:estn\] consists of comparing the nonlinear estimate recursions, see , to a suitably linearized recursion. To this end, we consider the following result on distributed linear stochastic recursions developed in [@Kar-AdaptiveDistEst-SICON-2012] in the context of asymptotically efficient distributed parameter estimation in linear multi-agent models. The result to be stated below is somewhat less general than the development in [@Kar-AdaptiveDistEst-SICON-2012], but serves the current scenario. \[th:optlindist\] For each $n$, let $\{\mathbf{v}_{n}(t)\}$ be an $\mathbb{R}^{M}$-valued $\{\mathcal{F}_{t}\}$-adapted process evolving in a distributed fashion as follows: $$\label{th:optlindist1} \mathbf{v}_{n}(t+1)=\mathbf{v}_{n}(t)-\beta_{t}\sum_{l\in\Omega_{n}(t)}\left(\mathbf{v}_{n}(t)-\mathbf{v}_{l}(t)\right)+\alpha_{t}D_{n}(t)\left(B_{n}\left(\abtheta-\mathbf{v}_{n}(t)\right)+\bzeta_{n}(t)\right),$$ where $B_{n}$, for each $n$, is an $M_{n}\times M$ matrix for some positive integer $M_{n}$ such that - the matrix $A=\sum_{n=1}^{N}B^{\top}_{n}B_{n}$ is positive definite - for each $n$, the $M\times M_{n}$ matrix-valued process $\{D_{n}(t)\}$ is $\{\mathcal{F}_{t}\}$-adapted with $D_{n}(t)\rightarrow N.A^{-1}B^{\top}_{n}$ as $\tri$ a.s. - for each $n$, the $\{\mathcal{F}_{t+1}\}$-adapted sequence $\{\bzeta_{n}(t)\}$ is such that $\bzeta_{n}(t)$ is independent of $\mathcal{F}_{t}$ for all $t$, the sequence $\{\bzeta_{n}(t)\}$ is i.i.d. with zero mean and covariance $I_{M_{n}}$, and there exists a constant $\Vap>0$ such that $\mathbb{E}[\|\bzeta_{n}(t)\|^{2+\Vap}]<\infty$ - the Laplacian sequence $\{L_{t}\}$ representing the random communication neighborhoods $\Omega_{n}(t)$, $n=1,\cdots,N$, satisfies Assumption \[ass:conn\] and - the weight sequences $\{\alpha_{t}\}$ and $\{\beta_{t}\}$ satisfy Assumption \[ass:weight\]. Then the following hold for the processes $\{\mathbf{v}_{n}(t)\}$, $n=1,\cdots,N$ - for each $n$ and $\tau\in [0,1/2)$, we have $$\label{th:optlindist2}\mathbb{P}\left(\lim_{\tri}(t+1)^{\tau}\left\|\mathbf{v}_{n}(t)-\abtheta\right\|=0\right)=1;$$ - for each $n$, the sequence $\{\mathbf{v}_{n}(t)\}$, viewed as an estimate of $\abtheta$, is asymptotically normal with asymptotic covariance $A^{-1}$, i.e.
$$\label{th:optlindist3} \sqrt{t+1}\left(\mathbf{v}_{n}(t)-\abtheta\right)~\Longrightarrow~\mathcal{N}\left(\mathbf{0},A^{-1}\right).$$ The following corollary to Theorem \[th:optlindist\] will be used in the sequel. \[corr:linz\] For each $n$, let $\{\bv_{n}(t)\}$ be the $\{\mathcal{F}_{t}\}$-adapted $\mathbb{R}^{M}$-valued process evolving in a distributed fashion as $$\label{corr:linz1}\bv_{n}(t+1)=\bv_{n}(t)-\beta_{t}\sum_{l\in\Omega_{n}(t)}\left(\bv_{n}(t)-\bv_{l}(t)\right)+\alpha_{t}K_{n}(t)\left(I_{n}(\abtheta)\left(\abtheta-\bv_{n}(t)\right)+\mathbf{w}_{n}(t)\right),$$ where - for each $n$, $\{\mathbf{w}_{n}(t)\}$ is the $\{\mathcal{F}_{t+1}\}$-adapted sequence given by $\mathbf{w}_{n}(t)=g_{n}(\mathbf{y}_{n}(t))-h_{n}(\abtheta)$ for all $t$ - for each $n$, $\{K_{n}(t)\}$ denotes the $\{\mathcal{F}_{t}\}$-adapted innovation gain sequence defined as in  - the Laplacian sequence $\{L_{t}\}$ representing the random communication neighborhoods $\Omega_{n}(t)$, $n=1,\cdots,N$, satisfies Assumption \[ass:conn\] and - the weight sequences $\{\alpha_{t}\}$ and $\{\beta_{t}\}$ satisfy Assumption \[ass:weight\]. Then the following hold for the processes $\{\bv_{n}(t)\}$, $n=1,\cdots,N$ - for each $n$ and $\tau\in [0,1/2)$, we have $$\label{corr:linz3}\Past\left(\lim_{\tri}(t+1)^{\tau}\left\|\bv_{n}(t)-\abtheta\right\|=0\right)=1;$$ - for each $n$, the sequence $\{\bv_{n}(t)\}$, viewed as an estimate of $\abtheta$, is asymptotically efficient as per Definition \[def:asyeff\], i.e. $$\label{corr:linz4} \sqrt{t+1}\left(\bv_{n}(t)-\abtheta\right)~\Longrightarrow~\mathcal{N}\left(\mathbf{0},I^{-1}(\abtheta)\right).$$ Note that, by Proposition \[prop:inf\], for each $n$ the local Fisher information matrix $I_{n}(\abtheta)$ is positive semidefinite; hence, there exists (for example, by a Cholesky factorization) a positive integer $M_{n}$ and an $M_{n}\times M$ matrix $B_{n}$ such that $I_{n}(\abtheta)=B_{n}^{\top}B_{n}$. By Proposition \[prop:analytic\], for each $n$, the sequence $\{\mathbf{w}_{n}(t)\}$ possesses moments of all orders and is zero-mean with covariance $I_{n}(\abtheta)$. Since $I_{n}(\abtheta)=B_{n}^{\top}B_{n}$, there exists another $\{\mathcal{F}_{t+1}\}$-adapted sequence $\{\bzeta_{n}(t)\}$ (not necessarily unique, depending on the rank of the matrix $B_{n}$) satisfying condition (3) in the hypothesis of Theorem \[th:optlindist\], such that $B_{n}^{\top}\bzeta_{n}(t)=\mathbf{w}_{n}(t)$ for all $t$ a.s. Also, for each $n$, denote by $\{D_{n}(t)\}$ the $M\times M_{n}$ matrix-valued $\{\mathcal{F}_{t}\}$-adapted process such that $D_{n}(t)=K_{n}(t)B_{n}^{\top}$ for all $t$; since, by Lemma \[lm:gainconv\], $K_{n}(t)\rightarrow N.I^{-1}(\abtheta)$ as $\tri$ a.s., we have that $D_{n}(t)\rightarrow N.I^{-1}(\abtheta)B_{n}^{\top}$ as $\tri$ a.s. It is now clear that the evolution of the sequences $\{\bv_{n}(t)\}$ may be rewritten as follows in terms of the newly introduced variables: $$\label{corr:linz5} \bv_{n}(t+1)=\bv_{n}(t)-\beta_{t}\sum_{l\in\Omega_{n}(t)}\left(\bv_{n}(t)-\bv_{l}(t)\right)+\alpha_{t}D_{n}(t)\left(B_{n}\left(\abtheta-\bv_{n}(t)\right)+\bzeta_{n}(t)\right).$$ Finally, noting that, by construction and Proposition \[prop:inf\], $$\label{corr:linz6} I(\abtheta)=\sum_{n=1}^{N}I_{n}(\abtheta)=\sum_{n=1}^{N}B_{n}^{\top}B_{n},$$ we conclude that the evolution in  falls under the purview of Theorem \[th:optlindist\] (with the identification that $A=I(\abtheta)$) and the desired assertions follow.
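To illustrate the recursion analyzed above, the following R snippet is a minimal simulation sketch of the linearized consensus + innovations dynamics in  for a scalar parameter ($M=1$) and $N=3$ agents on a fixed path graph. The coefficients $B_{n}$, the weight exponents, and the initial condition are illustrative choices, not quantities from the paper; in particular, a fixed connected graph is used so that the connectivity assumption holds trivially.

```r
# Toy simulation (illustrative only; not the paper's code) of
#   v_n(t+1) = v_n(t) - beta_t * sum_{l in Omega_n} (v_n(t) - v_l(t))
#              + alpha_t * D_n * ( B_n * (theta - v_n(t)) + zeta_n(t) ),
# with alpha_t = 1/(t+1) and beta_t decaying more slowly, so beta_t/alpha_t -> infinity.
set.seed(1)
theta <- 2.0                      # true parameter
N     <- 3
B     <- c(1.0, 0.5, 0.8)         # local observation coefficients B_n (assumed values)
A     <- sum(B^2)                 # A = sum_n B_n^2  (positive, i.e. "positive definite" here)
D     <- N * B / A                # D_n = N * A^{-1} * B_n

# Laplacian of the path graph 1 - 2 - 3
L <- matrix(c( 1, -1,  0,
              -1,  2, -1,
               0, -1,  1), nrow = 3, byrow = TRUE)

T_max <- 50000
v <- matrix(5, nrow = T_max, ncol = N)   # all agents start at 5
for (t in 1:(T_max - 1)) {
  alpha_t <- 1 / (t + 1)
  beta_t  <- 0.25 * (t + 1)^(-0.3)
  zeta    <- rnorm(N)                    # i.i.d. standard normal innovations
  consensus  <- as.vector(L %*% v[t, ])  # sum over neighbors of (v_n - v_l)
  innovation <- D * (B * (theta - v[t, ]) + zeta)
  v[t + 1, ] <- v[t, ] - beta_t * consensus + alpha_t * innovation
}
print(v[T_max, ])                        # all three agents end up close to theta = 2
```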
The processes $\{\bv_{n}(t)\}$ as introduced and analyzed in Corollary \[corr:linz\] may, in fact, be viewed as linearizations of the nonlinear estimator dynamics, see , the linearizations being performed in the vicinity of the true parameter value $\abtheta$. Clearly, in order for such linearization to provide meaningful insight into the actual nonlinear dynamics of the estimators $\{\mathbf{x}_{n}(t)\}$’s, it is necessary that the latter stay close to $\abtheta$ (around which the linearization is carried out) asymptotically, which, in turn, is guaranteed by the consistency of the estimators $\{\mathbf{x}_{n}(t)\}$’s, see Corollary \[corr:xconv\]. The consistency allows us to obtain insight into the detailed dynamics of the estimators $\{\mathbf{x}_{n}(t)\}$’s by characterizing the pathwise deviations of the former from their linearized counterparts. These ideas are formalized below in Lemma \[lm:dev\] (see Appendix \[sec:app2\] for a proof) leading to the main results of the paper as presented in Section \[subsec:mainres\]. \[lm:dev\] For each $n$, let $\{\mathbf{x}_{n}(t)\}$ be the estimate sequence at agent $n$ as defined in , and $\{\bv_{n}(t)\}$ denote the process defined in  under the hypotheses of Corollary \[corr:linz\]. Then, there exists a constant $\overline{\tau}>1/2$ such that $$\label{lm:dev1}\Past\left(\lim_{\tri}(t+1)^{\overline{\tau}}\left\|\mathbf{x}_{n}(t)-\bv_{n}(t)\right\|=0\right)=1$$ for all $n$. With the above development, we may now complete the proofs of Theorem \[th:estcons\] and Theorem \[th:estn\] as follows. Let $\tau\in [0,1/2)$ and note that, for each $n$, $$\label{th:estcons2}(t+1)^{\tau}\left\|\mathbf{x}_{n}(t)-\abtheta\right\|\leq (t+1)^{\tau}\left\|\mathbf{x}_{n}(t)-\bv_{n}(t)\right\| + (t+1)^{\tau}\left\|\bv_{n}(t)-\abtheta\right\|,$$ where $\{\bv_{n}(t)\}$ is the (*linearized*) approximation introduced and analyzed in Corollary \[corr:linz\]. By Lemma \[lm:dev\] (first assertion) and Corollary \[corr:linz\], since $\tau<1/2$, we have $(t+1)^{\tau}\|\mathbf{x}_{n}(t)-\bv_{n}(t)\|\rightarrow 0$ and $(t+1)^{\tau}\|\bv_{n}(t)-\abtheta\|\rightarrow 0$ respectively as $\tri$ a.s. Hence, by  we obtain $(t+1)^{\tau}\|\mathbf{x}_{n}(t)-\abtheta\|\rightarrow 0$ as $\tri$ a.s., thus establishing Theorem \[th:estcons\]. Note that by Lemma \[lm:dev\] (first assertion), for each $n$ $$\begin{aligned} \label{th:estn3} \Past\left(\lim_{\tri}\left\|\sqrt{t+1}\left(\mathbf{x}_{n}(t)-\abtheta\right)-\sqrt{t+1}\left(\bv_{n}(t)-\abtheta\right)\right\|=0\right)\\=\Past\left(\lim_{\tri}\sqrt{t+1}\left\|\mathbf{x}_{n}(t)-\bv_{n}(t)\right\|=0\right)=1.\end{aligned}$$ Hence, in particular, the sequences $\{\sqrt{t+1}(\mathbf{x}_{n}(t)-\abtheta)\}$ and $\{\sqrt{t+1}(\bv_{n}(t)-\abtheta)\}$ possess the same weak limit (if the latter exists for one of the sequences); the asymptotic normality (efficiency) in Theorem \[th:estn\] then follows immediately from the corresponding result for the $\{\bv_{n}(t)\}$ sequence in Corollary \[corr:linz\] (second assertion). Conclusions {#sec:conclusions} =========== We have developed distributed estimators of the consensus + innovations type for multi-agent scenarios with general exponential family observation statistics that yield consistent and asymptotically efficient parameter estimates at all agents. Moreover, the above estimator properties and optimality hold as long as the aggregate or global sensing model is observable and the inter-agent communication network is connected in the mean (but otherwise irrespective of the network sparsity).
Along the way, we have characterized analogues of classical system- and information-theoretic notions, such as observability, in the distributed-information setting. An important future research question arises naturally: in this paper we have assumed that the parametrization is continuous and unconstrained, i.e., $\btheta$ may take values over the entire space $\mathbb{R}^{M}$. It would be of interest to extend the approach to account for constrained parametrization – the parameter $\btheta$ could belong to a restricted subset $\Theta\subset\mathbb{R}^{M}$ either because of direct physical constraints or due to constrained natural parameterizations of the local exponential families involved, i.e., the domains of definition of the functions $\lambda_{n}(\cdot)$ in  being strict subsets of $\mathbb{R}^{M}$. A specific instance is the class of finite classification or detection (hypothesis testing) problems in which $\btheta$ may only assume a finite set of values[^5]. The unconstrained estimation approach - may still be applicable to a subclass of such constrained cases by considering *suitable* analytical extensions of the various functions $\lambda_{n}(\cdot)$’s, $h_{n}(\cdot)$’s etc. over $\mathbb{R}^{M}$; provided such extensions exist[^6], the proposed algorithm will lead to asymptotically efficient estimates at the network agents although the intermediate iterates may not belong to $\Theta$. As a familiar example where such extension may be achievable by embedding, we may envision a binary hypothesis testing problem corresponding to the presence or absence of a signal observed in additive zero-mean Gaussian noise with known variance. In cases where such analytical extensions may not be obtained, other modifications of the proposed scheme, for example by supplementing the local estimate update processes with a projection step onto the set $\Theta$, may be helpful. In the interest of obtaining a unified distributed inference framework, it would be worthwhile to study such extensions and modifications of the proposed scheme.\ Proofs of Results in Section \[sec:genconsest\] {#sec:app1} =============================================== Let $\mathbf{z}\in\Gamma_{\Vap}$ and note that by reasoning along the lines of  we obtain $$\label{prop:Lg5}\left(\mathbf{z}-\abbtheta\right)^{\top}\left(\OL\otimes\mathcal{K}^{-1}\right)\left(\mathbf{z}-\abbtheta\right)=\left(\mathbf{z}_{\PC}\right)^{\top}\left(\OL\otimes\mathcal{K}^{-1}\right)\mathbf{z}_{\PC}\geq\lambda_{2}(\OL)\lambda_{1}(\mathcal{K}^{-1})\left\|\mathbf{z}_{\PC}\right\|^{2},$$ where $\mathbf{z}_{\PC}$ denotes the projection of $\mathbf{z}$ onto the orthogonal complement of the consensus subspace (see Definition \[def:consspace\]).
We thus obtain $$\begin{aligned} \label{prop:Lg6}\mathcal{H}_{t}(\mathbf{z})\geq\frac{b_{\beta}\beta_{t}}{\alpha_{t}}\lambda_{2}(\OL)\lambda_{1}(\mathcal{K}^{-1})\left\|\mathbf{z}_{\PC}\right\|^{2}+\left(\mathbf{z}-\abbtheta\right)^{\top}\left(\bh(\mathbf{z})-\bh(\abbtheta)\right)\\ \geq \frac{b_{\beta}\beta_{t}}{\alpha_{t}}\lambda_{2}(\OL)\lambda_{1}(\mathcal{K}^{-1})\left\|\mathbf{z}_{\PC}\right\|^{2} + \left(\mathbf{z}_{\C}-\abbtheta\right)^{\top}\left(\bh(\mathbf{z}_{\C})-\bh(\abbtheta)\right)\\+\left(\mathbf{z}_{\PC}\right)^{\top}\left(\bh(\mathbf{z})-\bh(\abbtheta)\right)+\left(\mathbf{z}_{\C}-\abbtheta\right)^{\top}\left(\bh(\mathbf{z})-\bh(\mathbf{z}_{\C})\right).\end{aligned}$$ In order to bound the last two terms in the above inequality, note that, for $\mathbf{z}\in\Gamma_{\Vap}$, by invoking Assumption \[ass:lingrowth\] we obtain $$\label{prop:Lg7}\left|\left(\mathbf{z}_{\PC}\right)^{\top}\left(\bh(\mathbf{z})-\bh(\abbtheta)\right)\right|\leq c_{1}\left\|\mathbf{z}_{\PC}\right\|\left(1+\left\|\mathbf{z}-\abbtheta\right\|\right)\leq c_{1}\left(1/\Vap + 1\right)\left\|\mathbf{z}_{\PC}\right\|,$$ where $c_{1}$ is a positive constant. Also, by Proposition \[prop:analytic\], the functions $h_{n}(\cdot)$ are infinitely continuously differentiable and hence locally Lipschitz; in particular, noting that the set $\Gamma_{\Vap}^{\prime}$ $$\label{prop:Lg8}\Gamma^{\prime}_{\Vap}=\left\{\mathbf{z}\in\mathbb{R}^{NM}~:~\left\|\mathbf{z}-\abbtheta\right\|\leq 1/\Vap\right\}$$ is compact, there exists a constant $\ell_{\Gamma_{\Vap}}>0$, such that, $$\label{prop:Lg9}\left\|\bh(\mathbf{z})-\bh(\mathbf{z}^{\prime})\right\|\leq\ell_{\Gamma_{\Vap}}\left\|\mathbf{z}-\mathbf{z}^{\prime}\right\|,~~~\forall\mathbf{z},\mathbf{z}^{\prime}\in\Gamma_{\Vap}^{\prime}.$$ By observing that for $\mathbf{z}\in\Gamma_{\Vap}\subset\Gamma_{\Vap}^{\prime}$ $$\label{prop:Lg10}\left\|\mathbf{z}_{\C}-\abbtheta\right\|=\left\|\mathbf{z}-\abbtheta\right\|-\left\|\mathbf{z}_{\PC}\right\|\leq \left\|\mathbf{z}-\abbtheta\right\|\leq 1/\Vap,$$ we obtain $\mathbf{z}_{\C}\in\Gamma^{\prime}_{\Vap}$; hence, by , we may conclude that $$\label{prop:Lg11}\left\|\bh(\mathbf{z})-\bh(\mathbf{z}_{\C})\right\|\leq\ell_{\Gamma^{\prime}_{\Vap}}\left\|\mathbf{z}-\mathbf{z}_{\C}\right\|=\ell_{\Gamma^{\prime}_{\Vap}}\left\|\mathbf{z}_{\PC}\right\|$$ for all $\mathbf{z}\in\Gamma_{\Vap}$. Thus, for $\mathbf{z}\in\Gamma_{\Vap}$, we have $$\label{prop:Lg12}\left|\left(\mathbf{z}_{\C}-\abbtheta\right)^{\top}\left(\bh(\mathbf{z})-\bh(\mathbf{z}_{\C})\right)\right|\leq\ell_{\Gamma^{\prime}_{\Vap}}\left\|\mathbf{z}_{\PC}\right\|\left\|\mathbf{z}-\abbtheta\right\|\leq\left(\ell_{\Gamma^{\prime}_{\Vap}}/\Vap\right)\left\|\mathbf{z}_{\PC}\right\|.$$ Combining - and  we then obtain $$\begin{aligned} \label{prop:Lg13}\mathcal{H}_{t}(\mathbf{z})\geq\left(\frac{b_{\beta}\beta_{t}}{\alpha_{t}}\lambda_{2}(\OL)\lambda_{1}(\mathcal{K}^{-1})\left\|\mathbf{z}_{\PC}\right\|-c_{1}\left(\frac{1}{\Vap}+1\right)-\frac{\ell_{\Gamma^{\prime}_{\Vap}}}{\Vap}\right)\left\|\mathbf{z}_{\PC}\right\|\\ +\left(\mathbf{z}_{\C}-\abbtheta\right)^{\top}\left(\bh(\mathbf{z}_{\C})-\bh(\abbtheta)\right)\end{aligned}$$ for all $\mathbf{z}\in\Gamma_{\Vap}$. 
By invoking standard properties of quadratic minimization, we note that there exist positive constants $\bar{c}_{\Vap}$, $c_{3}(\Vap)$ and $c_{4}(\Vap)$ such that for all $t\geq 0$ $$\label{prop:Lg14}\left(\frac{b_{\beta}\beta_{t}}{\alpha_{t}}\lambda_{2}(\OL)\lambda_{1}(\mathcal{K}^{-1})\left\|\mathbf{z}_{\PC}\right\|-c_{1}\left(\frac{1}{\Vap}+1\right)-\frac{\ell_{\Gamma^{\prime}_{\Vap}}}{\Vap}\right)\left\|\mathbf{z}_{\PC}\right\|> \bar{c}_{\Vap}$$ for all $\mathbf{z}$ with $\left\|\mathbf{z}_{\PC}\right\|> c_{3}(\Vap)\left(\alpha_{t}/\beta_{t}\right)$, and $$\label{prop:Lg15}\left(\frac{b_{\beta}\beta_{t}}{\alpha_{t}}\lambda_{2}(\OL)\lambda_{1}(\mathcal{K}^{-1})\left\|\mathbf{z}_{\PC}\right\|-c_{1}\left(\frac{1}{\Vap}+1\right)-\frac{\ell_{\Gamma^{\prime}_{\Vap}}}{\Vap}\right)\left\|\mathbf{z}_{\PC}\right\|\geq -c_{4}(\Vap)\left(\alpha_{t}/\beta_{t}\right)$$ for all $\mathbf{z}$, in particular, $\mathbf{z}\in\Gamma_{\Vap}$. Now, note that, by Proposition \[prop:analytic\] (third assertion), for all $\mathbf{z}\in\mathbb{R}^{NM}$ with $\mathbf{z}_{\C}\neq\abbtheta$ we have $$\label{prop:Lg16}\left(\mathbf{z}_{\C}-\abbtheta\right)^{\top}\left(\bh(\mathbf{z}_{\C})-\bh(\abbtheta)\right)=\sum_{n=1}^{N}\left(\mathbf{z}_{\C}^{a}-\abtheta\right)^{\top}\left(h_{n}(\mathbf{z}_{\C}^{a})-h_{n}(\abtheta)\right)>0$$ (see also Definition \[def:consspace\]). Let us choose $\Vap^{\prime}$ such that $\Vap^{\prime}\in (0,\Vap)$; noting that the functions $h_{n}(\cdot)$ are continuous and the set $\Phi_{\Vap,\Vap^{\prime}}$ $$\label{prop:Lg17}\Phi_{\Vap,\Vap^{\prime}}=\Gamma_{\Vap}\bigcap\left\{\mathbf{z}\in\mathbb{R}^{NM}~:~\left\|\mathbf{z}_{\C}-\abbtheta\right\|\geq\Vap^{\prime}\right\}$$ is compact, we conclude that there exists $\delta_{\Vap}>0$ such that $$\label{prop:Lg18}\inf_{\mathbf{z}\in\Phi_{\Vap,\Vap^{\prime}}}\left(\mathbf{z}_{\C}-\abbtheta\right)^{\top}\left(\bh(\mathbf{z}_{\C})-\bh(\abbtheta)\right)>\delta_{\Vap}.$$ Further, since $\alpha_{t}/\beta_{t}\rightarrow 0$ as $t\rightarrow\infty$ (by hypothesis), there exist $t_{\Vap}$ large enough and a constant $\bar{\delta}_{\Vap}>0$ such that $$\label{prop:Lg19} \Vap^{\prime}<\Vap-c_{3}(\Vap)\left(\alpha_{t}/\beta_{t}\right)~~~\mbox{and}~~~c_{4}(\Vap)\left(\alpha_{t}/\beta_{t}\right)<\delta_{\Vap}-\bar{\delta}_{\Vap}$$ for all $t\geq t_{\Vap}$. We now show that there exists $\delta_{\Vap}^{\prime}>0$ (independent of $t$) such that $$\label{prop:Lg20}\inf_{\mathbf{z}\in\Gamma_{\Vap}}\mathcal{H}_{t}(\mathbf{z})>\delta^{\prime}_{\Vap}$$ for all $t\geq t_{\Vap}$. To this end, for $t\geq t_{\Vap}$, let $\mathbf{z}\in\Gamma_{\Vap}$ and consider the two cases as to whether $\left\|\mathbf{z}_{\PC}\right\|> c_{3}(\Vap)\left(\alpha_{t}/\beta_{t}\right)$ or not. Noting that by Proposition \[prop:analytic\] $$\label{prop:Lg21}\left(\mathbf{z}_{\C}-\abbtheta\right)^{\top}\left(\bh(\mathbf{z}_{\C})-\bh(\abbtheta)\right)\geq 0,~~~\forall\mathbf{z}\in\mathbb{R}^{NM},$$ we have by - that $$\label{prop:Lg22} \mathcal{H}_{t}(\mathbf{z})>\bar{c}_{\Vap}$$ for all $\mathbf{z}\in\Gamma_{\Vap}$ with $\left\|\mathbf{z}_{\PC}\right\|> c_{3}(\Vap)\left(\alpha_{t}/\beta_{t}\right)$.
Now consider the other case, i.e., let $\mathbf{z}\in\Gamma_{\Vap}$ with $\left\|\mathbf{z}_{\PC}\right\|\leq c_{3}(\Vap)\left(\alpha_{t}/\beta_{t}\right)$; note that, since $t\geq t_{\Vap}$, we have for such $\mathbf{z}$ by  $$\label{prop:Lg23}\left\|\mathbf{z}_{\C}-\abbtheta\right\|=\left\|\mathbf{z}-\abbtheta\right\|-\left\|\mathbf{z}_{\PC}\right\|\geq\Vap-c_{3}(\Vap)\left(\alpha_{t}/\beta_{t}\right)>\Vap^{\prime}.$$ Hence, necessarily $\mathbf{z}\in\Phi_{\Vap,\Vap^{\prime}}$ and we have by ,, and - $$\label{prop:Lg24}\mathcal{H}_{t}(\mathbf{z})>\delta_{\Vap}-c_{4}(\Vap)\left(\alpha_{t}/\beta_{t}\right)>\bar{\delta}_{\Vap}.$$ From  and  we then obtain for all $\mathbf{z}\in\Gamma_{\Vap}$ and $t\geq t_{\Vap}$ $$\label{prop:Lg25} \mathcal{H}_{t}(\mathbf{z})>\delta_{\Vap}^{\prime}>0,$$ where $\delta_{\Vap}^{\prime}=\bar{c}_{\Vap}\wedge\bar{\delta}_{\Vap}$, thus establishing the assertion in . Finally, let $\bc_{\Vap}=\Vap^{2}\delta^{\prime}_{\Vap}$, and note that the desired claim follows by $$\label{prop:Lg26}\mathcal{H}_{t}(\mathbf{z})>\delta_{\Vap}^{\prime}\geq\delta_{\Vap}^{\prime}\left(\Vap^{2}\left\|\mathbf{z}-\abbtheta\right\|^{2}\right)=\bc_{\Vap}\left\|\mathbf{z}-\abbtheta\right\|^{2}$$ for all $\mathbf{z}\in\Gamma_{\Vap}$ and $t\geq t_{\Vap}$, where we used the fact that $\Vap^{2}\left\|\mathbf{z}-\abbtheta\right\|^{2}\leq 1$ on $\Gamma_{\Vap}$. [**Proof of Lemma \[lm:consrate\]**]{}: Before proceeding to the proof of Lemma \[lm:consrate\], we state the following approximation results obtained in [@Kar-AdaptiveDistEst-SICON-2012] on convergence estimates of stochastic recursions (Lemma \[lm:mean-conv\]) and certain attributes of time-varying stochastic Laplacian matrices (Lemma \[lm:conn\]), to be used as intermediate ingredients in the proof. \[lm:mean-conv\] Let $\{\mathbf{w}_{t}\}$ be an $\mathbb{R}_{+}$-valued $\{\mathcal{F}_{t}\}$ adapted process that satisfies $$\label{lm:mean-conv1} \mathbf{w}_{t+1}\leq \left(1-r_{1}(t)\right)\mathbf{w}_{t}+r_{2}(t)U_{t}\left(1+J_{t}\right).$$ In the above, $\{r_{1}(t)\}$ is an $\{\mathcal{F}_{t+1}\}$ adapted process, such that for all $t$, $r_{1}(t)$ satisfies $0\leq r_{1}(t)\leq 1$ and $$\label{lm:JSTSP2} \frac{a_{1}}{(t+1)^{\delta_{1}}}\leq\mathbb{E}\left[r_{1}(t)~|~\mathcal{F}_{t}\right]\leq 1$$ with $a_{1}>0$ and $0\leq \delta_{1}< 1$; $\{r_{2}(t)\}$ is a deterministic sequence satisfying $r_{2}(t)\leq a_{2}.(t+1)^{-\delta_{2}}$ for all $t\geq 0$, where $a_{2}$ and $\delta_{2}$ are positive constants. Further, let $\{U_{t}\}$ and $\{J_{t}\}$ be $\mathbb{R}_{+}$ valued $\{\mathcal{F}_{t}\}$ and $\{\mathcal{F}_{t+1}\}$ adapted processes respectively with $\sup_{t\geq 0}U_{t}<\infty$ a.s. The process $\{J_{t}\}$ is i.i.d. with $J_{t}$ independent of $\mathcal{F}_{t}$ for each $t$ and satisfies the moment condition $\mathbb{E}\left[J_{t}^{2+\varepsilon}\right]<\infty$ for a constant $\varepsilon>0$. Then, if $\delta_{2}>\delta_{1}+1/(2+\varepsilon)$, we have $(t+1)^{\delta_{0}}\mathbf{w}_{t}\rightarrow 0$ a.s. as $t\rightarrow\infty$ for all $\delta_{0}\in [0,\delta_{2}-\delta_{1}-1/(2+\varepsilon))$. \[lm:conn\] Let $\{\mathbf{w}_{t}\}$ be an $\mathbb{R}^{NM}$-valued $\{\mathcal{F}_{t}\}$ adapted process such that $\mathbf{w}_{t}\in\PC$ for all $t$, where $\PC$ denotes the orthogonal complement of the consensus subspace $\C$, see Definition \[def:consspace\]. Also, let $\{L_{t}\}$ be an $\{\mathcal{F}_{t}\}$-adapted sequence of Laplacians satisfying Assumption \[ass:conn\].
Then there exists an $\{\mathcal{F}_{t+1}\}$ adapted $\mathbb{R}_{+}$-valued process $\{r_{t}\}$ (depending on $\{\mathbf{w}_{t}\}$ and $\{L_{t}\}$), a deterministic time $t_{r}$ (large enough), and a constant $c_{r}>0$, such that $0\leq r_{t}\leq 1$ a.s. and $$\label{lm:conn20} \left\|\left(I_{NM}-\beta_{t}L_{t}\otimes I_{M}\right)\mathbf{w}_{t}\right\|\leq\left(1-r_{t}\right)\left\|\mathbf{w}_{t}\right\|$$ with $$\label{lm:conn2} \mathbb{E}\left[r_{t}~|~\mathcal{F}_{t}\right]\geq\frac{c_{r}}{(t+1)^{\tau_{2}}}~~\mbox{a.s.}$$ for all $t\geq t_{r}$, where the weight sequence $\{\beta_{t}\}$ and $\tau_{2}$ are defined in .\ Recall the $\{\mathcal{F}_{t}\}$-adapted process $\{\mathbf{z}_{t}\}$ with $\mz_{t}={\boldsymbol{\operatorname{Vec}}}(\mz_{n}(t))$ for all $t$, and note that by  we have $$\label{lm:consrate6} \mz_{t+1}=\left(I_{NM}-\beta_{t}L_{t}\otimes I_{M}\right)\mz_{t}-\alpha_{t}\bK_{t}\left(\bh(\mz_{t})-\bh(\abbtheta)\right)+\alpha_{t}\bK_{t}\left(\bg(\mathbf{y}_{t})-\bh(\abbtheta)\right),$$ the functions $\bh(\cdot)$ and $\bg(\cdot)$ being defined in . For each $n$, let $\bz_{n}(t)=\mz_{n}(t)-\mz^{a}_{t}$, and denote by $\{\bz_{t}\}$ the $\{\mathcal{F}_{t}\}$-adapted process where $\bz_{t}={\boldsymbol{\operatorname{Vec}}}(\bz_{n}(t))$ for all $t$. Using the fact $(L_{t}\otimes I_{M})(\mathbf{1}_{N}\otimes\mz^{a}_{t})=\mathbf{0}$, we have $$\label{lm:consrate7}\bz_{t+1}=\left(I_{NM}-\beta_{t}L_{t}\otimes I_{M}\right)\bz_{t}-\alpha_{t}\bU^{\prime}_{t}+\alpha_{t}\bJ^{\prime}_{t},$$ where $\{\bU^{\prime}_{t}\}$ and $\{\bJ^{\prime}_{t}\}$ are $\{\mathcal{F}_{t}\}$-adapted and $\{\mathcal{F}_{t+1}\}$-adapted processes given by $$\label{lm:consrate8}\bU^{\prime}_{t}=\left(I_{NM}-(\mathbf{1}_{N}\mathbf{1}_{N}^{\top})\otimes I_{M}\right)\bK_{t}\left(\bh(\mz_{t})-\bh(\abbtheta)\right),$$ and $$\label{lm:consrate9} \bJ^{\prime}_{t} = \left(I_{NM}-(\mathbf{1}_{N}\mathbf{1}_{N}^{\top})\otimes I_{M}\right)\bK_{t}\left(\bg(\mathbf{y}_{t})-\bh(\abbtheta)\right)$$ respectively. Note that by hypothesis $\sup_{t}\|\bK_{t}\|<\infty$ a.s., and, by Theorem \[th:genest\], $\sup_{t}\|\mz_{t}\|<\infty$ a.s. Hence, by the linear growth condition on $\bh(\cdot)$ (see Assumption \[ass:lingrowth\]), there exists an $\{\mathcal{F}_{t}\}$-adapted process $\{U^{\prime}_{t}\}$ such that, $\|\bU^{\prime}_{t}\|\leq U^{\prime}_{t}$ for all $t$ and $\sup_{t\geq 0}\|U^{\prime}_{t}\|<\infty$ a.s. Then, defining $U_{t}$ to be $$\label{lm:consrate10}U_{t}=U^{\prime}_{t}\bigvee\left\|\left(I_{NM}-(\mathbf{1}_{N}\mathbf{1}_{N}^{\top})\otimes I_{M}\right)\bK_{t}\right\|~~\forall t,$$ we have by - $$\label{lm:consrate11}\|\bU^{\prime}_{t}\|+\|\bJ^{\prime}_{t}\|\leq U_{t}\left(1+J_{t}\right),$$ with $\{U_{t}\}$ being $\{\mathcal{F}_{t}\}$-adapted and $\{J_{t}\}$ being the $\{\mathcal{F}_{t+1}\}$-adapted process, $J_{t}=\|\bg(\mathbf{y}_{t})-\bh(\abbtheta)\|$ for all $t$. Note that for every $\Vap>0$ we have $$\label{lm:consrate12}\East\left[J_{t}^{2+\Vap}\right]<\infty,$$ which follows from the fact that $\bg(\mathbf{y}_{t})$ possesses moments of all orders (see Proposition \[prop:analytic\]). 
Hence, by  we obtain $$\label{lm:consrate13} \left\|\bz_{t+1}\right\|\leq\left\|\left(I_{NM}-\beta_{t}L_{t}\otimes I_{M}\right)\bz_{t}\right\|+\alpha_{t}U_{t}\left(1+J_{t}\right)~~\forall t.$$ Observe that, by construction, $\bz_{t}\in\PC$ for all $t$, and hence, by Lemma \[lm:conn\], there exists an $\{\mathcal{F}_{t+1}\}$ adapted $\mathbb{R}_{+}$-valued process $\{r_{t}\}$ (depending on $\{\bz_{t}\}$ and $\{L_{t}\}$), a deterministic time $t_{r}$ (large enough), and a constant $c_{r}>0$, such that $0\leq r_{t}\leq 1$ a.s. and $$\label{lm:consrate14} \left\|\left(I_{NM}-\beta_{t}L_{t}\otimes I_{M}\right)\bz_{t}\right\|\leq\left(1-r_{t}\right)\left\|\bz_{t}\right\|$$ with $$\label{lm:consrate15} \mathbb{E}\left[r_{t}~|~\mathcal{F}_{t}\right]\geq\frac{c_{r}}{(t+1)^{\tau_{2}}}~~\mbox{a.s.}$$ for all $t\geq t_{r}$. We then have by - $$\label{lm:consrate16} \left\|\bz_{t+1}\right\|\leq\left(1-r_{t}\right)\left\|\bz_{t}\right\|+\alpha_{t}U_{t}\left(1+J_{t}\right)~~\forall t.$$ Now consider arbitrary $\Vap>0$ and note that, under the moment condition , the stochastic recursion in  falls under the purview of Lemma \[lm:mean-conv\] (by taking $\delta_{1}$ and $\delta_{2}$ in Lemma \[lm:mean-conv\] to be $\tau_{2}$ and 1 respectively), and we conclude that $(t+1)^{\tau}\|\bz_{t}\|\rightarrow 0$ as $t\rightarrow\infty$ a.s. for each $\tau\in (0, 1-\tau_{2}-1/(2+\Vap))$. Noting that $$\label{lm:consrate17} \left\|\mz_{n}(t)-\mz_{l}(t)\right\|\leq\left\|\mz_{n}(t)-\mz^{a}_{t}\right\|+\left\|\mz_{l}(t)-\mz^{a}_{t}\right\|\leq 2\left\|\bz_{t}\right\|$$ for each pair $n$ and $l$ of agents, we may further conclude that $$\label{lm:consrate18}\Past\left(\lim_{t\rightarrow\infty}(t+1)^{\tau}\|\mathbf{z}_{n}(t)-\mathbf{z}_{l}(t)\|=0\right)=1,$$ for all $\tau\in (0,1-\tau_{2}-1/(2+\Vap))$. Since the above holds for arbitrary $\Vap>0$, the desired assertion follows by making $\Vap$ tend to $\infty$. Since $(t+1)^{\tau_{3}}\|K_{n}(t)-\mathcal{K}\|\rightarrow 0$ a.s. as $t\rightarrow\infty$ for all $n$, by Egorov’s theorem, for each $\Vap>0$, there exist a deterministic $t_{\Vap}>0$ and a positive constant $c_{\Vap}$, such that $$\label{corr:genrate1}\Past\left(\sup_{t\geq t_{\Vap}}(t+1)^{\tau_{3}}\left\|K_{n}(t)-\mathcal{K}\right\|>c_{\Vap}\right)<\Vap$$ for all $n$. Now, for such an $\Vap>0$, define, for each $n$, the following $\{\mathcal{F}_{t}\}$-adapted sequence $\{K^{\Vap}_{n}(t)\}$: $$\label{corr:genrate2}K_{n}^{\Vap}(t)=\left\{\begin{array}{ll} K_{n}(t) & \mbox{if $t<t_{\Vap}$}\\ K_{n}(t) & \mbox{if $t\geq t_{\Vap}$ and $\|K_{n}(t)-\mathcal{K}\|\leq c_{\Vap}(t+1)^{-\tau_{3}}$}\\ \mathcal{K} & \mbox{otherwise}. \end{array}\right.$$ Note that, by the above construction, we have $\|K^{\Vap}_{n}(t)-\mathcal{K}\|\leq c_{\Vap}(t+1)^{-\tau_{3}}$ for all $t\geq t_{\Vap}$; hence, choosing $\tau^{\prime}\in (0,\tau_{3})$ to be a constant (independent of $\Vap$), we have that $$\label{corr:genrate3}(t+1)^{\tau^{\prime}}\left\|K^{\Vap}_{n}(t)-\mathcal{K}\right\|\leq c_{\Vap}(t+1)^{-(\tau_{3}-\tau^{\prime})}~~~\forall t\geq t_{\Vap}$$ for all $n$ and each $\Vap>0$. Thus, clearly, for each $\Vap>0$ and all $n$, the sequence $\{K^{\Vap}_{n}(t)\}$ converges a.s.
to $\mathcal{K}$ uniformly (over sample paths) at rate $\tau^{\prime}>0$, i.e., for each $\delta>0$, there exists (deterministic) $t_{\delta}>0$ such that $$\label{corr:genrate4}\Past\left(\sup_{t\geq t_{\delta}}(t+1)^{\tau^{\prime}}\left\|K_{n}^{\Vap}(t)-\mathcal{K}\right\|\leq\delta\right)=1.$$ Now, for each $\Vap>0$, let us define the $\{\mathcal{F}_{t}\}$-adapted sequences $\{\mathbf{z}^{\Vap}_{n}(t)\}$, $n=1,\cdots,N$, evolving as $$\label{corr:genrate5}\mathbf{z}_{n}^{\Vap}(t+1)=\mathbf{z}_{n}^{\Vap}(t)-\beta_{t}\sum_{l\in\Omega_{n}(t)}\left(\mathbf{z}_{n}^{\Vap}(t)-\mathbf{z}_{l}^{\Vap}(t)\right)+\alpha_{t}K_{n}^{\Vap}(t)\left(g_{n}(\mathbf{y}_{n}(t))-h_{n}(\mathbf{z}_{n}^{\Vap}(t))\right).$$ Noting that $$\label{corr:genrate6} \left\{\sup_{n,t}\left\|\mathbf{z}^{\Vap}_{n}(t)-\mathbf{z}_{n}(t)\right\|=0\right\}~~\mbox{on}~~\left\{\sup_{n,t}\left\|K_{n}^{\Vap}(t)-K_{n}(t)\right\|=0\right\},$$ we have $$\label{corr:genrate7}\Past\left(\sup_{n,t}\left\|\mathbf{z}^{\Vap}_{n}(t)-\mathbf{z}_{n}(t)\right\|=0\right)\geq 1-N\Vap$$ by -. The uniform convergence of the gain sequences $\{K^{\Vap}_{n}(t)\}$ to $\mathcal{K}$ at rate $\tau^{\prime}>0$ ensures that, for each $\Vap>0$, the processes $\{\mathbf{z}_{n}^{\Vap}(t)\}$ satisfy the hypotheses of Theorem \[th:genrate\] and, hence, there exists a positive constant $\mu$ (that depends on $\tau^{\prime}$ but not $\Vap$), such that $(t+1)^{\mu}\|\mathbf{z}_{n}^{\Vap}(t)-\abtheta\|\rightarrow 0$ as $t\rightarrow\infty$ a.s. for each $n$. Hence, by  we have $$\label{corr:genrate8}\Past\left(\lim_{t\rightarrow\infty}(t+1)^{\mu}\left\|\mathbf{z}_{n}(t)-\abtheta\right\|=0\right)\geq 1-N\Vap$$ for all $n$. Since $\Vap>0$ is arbitrary and $\mu$ does not depend on $\Vap$, we may further conclude from  that $(t+1)^{\mu}\|\mathbf{z}_{n}(t)-\abtheta\|\rightarrow 0$ as $t\rightarrow\infty$ a.s. for all $n$. Proofs in Section \[sec:proof\_main\_res\] {#sec:app2} ========================================== [**Proof of Lemma \[lm:gainconv\]**]{}: The proof of Lemma \[lm:gainconv\] is accomplished in two steps: first, we show that the gain sequences reach consensus, and subsequently demonstrate that the limiting consensus value is indeed $N.I^{-1}(\abtheta)$. To this end, consider the following: \[lm:gaincons\] Recall, for each $n$, the $\{\mathcal{F}_{t}\}$-adapted sequence $\{G_{n}(t)\}$ evolving as in , and denote by $\{G^{a}_{t}\}$ their instantaneous network averages, i.e., $G^{a}_{t}=(1/N)\sum_{n=1}^{N}G_{n}(t)$ for all $t$. Then, for each $n$ and $\tau\in [0,1-\tau_{2})$, we have $$\label{lm:gaincons1}\Past\left(\lim_{t\rightarrow\infty}(t+1)^{\tau}\left\|G_{n}(t)-G^{a}_{t}\right\|=0\right)=1,$$ where $\tau_{2}$ is the exponent associated with the weight sequence $\{\beta_{t}\}$, see Assumption \[ass:weight\]. We will show the desired convergence in the matrix Frobenius norm (denoted by $\|\cdot\|_{F}$ in the following), the convergence in the induced $\mathcal{L}_{2}$ sense following immediately. Note that, by Lemma \[lm:auxcons\], $\bx_{n}(t)\rightarrow\abtheta$ as $t\rightarrow\infty$ a.s. for all $n$, hence, for each $n$, by the continuity of the local Fisher information matrix $I_{n}(\cdot)$, we have that $I_{n}(\bx_{n}(t))\rightarrow I_{n}(\abtheta)$ as $\tri$ a.s. Let $\bG_{n}(t)=G_{n}(t)-G^{a}_{t}$ denote the deviation at agent $n$ from the instantaneous network average $G^{a}_{t}$ and $I^{a}_{t}=(1/N)\sum_{n=1}^{N}I_{n}(\bx_{n}(t))$ the network average of the $I_{n}(\bx_{n}(t))$’s.
Also, let $\bG_{t}$ and $\bI_{t}$ denote the matrices ${\boldsymbol{\operatorname{Vec}}}\left(\bG_{n}(t)\right)$ and ${\boldsymbol{\operatorname{Vec}}}\left(\bI_{n}(t)\right)$ respectively, where $\bI_{n}(t)=I_{n}(\bx_{n}(t))-I^{a}_{t}$ for all $n$. Using the following readily verifiable properties of the Laplacian $L_{t}$ $$\label{lm:gaincons3}\left(\mathbf{1}_{N}\otimes I_{M}\right)^{\top}\left(L_{t}\otimes I_{M}\right)=\mathbf{0}~~\mbox{and}~~\left(L_{t}\otimes I_{M}\right)\left(\mathbf{1}_{N}\otimes G^{a}_{t}\right)=\mathbf{0},$$ we have by  $$\label{lm:gaincons4}\bG_{t+1}=\left(I_{NM}-\beta_{t}\left(L_{t}\otimes I_{M}\right)-\alpha_{t}I_{NM}\right)\bG_{t}+\alpha_{t}\bI_{t}$$ for all $t\geq 0$. Since for all $n$, $I_{n}(\bx_{n}(t))\rightarrow I_{n}(\abtheta)$ as $\tri$ a.s., the sequences $\{I_{n}(\bx_{n}(t))\}$ are bounded a.s. and, in particular, there exists an $\{\mathcal{F}_{t}\}$-adapted a.s. bounded process $\{U_{t}\}$ such that $\|\bI_{t}\|_{F}\leq U_{t}$ for all $t$. For $m\in\{1,\cdots,M\}$, denote by $\bG_{m,t}$ the $m$-th column of $\bG_{t}$. Clearly, the process $\{\bG_{m,t}\}$ is $\{\mathcal{F}_{t}\}$-adapted and $\bG_{m,t}\in\PC$ for all $t$. Hence, by Lemma \[lm:conn\], there exist a $[0,1]$-valued $\{\mathcal{F}_{t+1}\}$-adapted process $\{r_{m,t}\}$ and a positive constant $c_{m,r}$ such that $$\label{lm:gaincons5}\left\|\left(I_{NM}-\beta_{t}L_{t}\otimes I_{M}\right)\bG_{m,t}\right\|\leq \left(1-r_{m,t}\right)\left\|\bG_{m,t}\right\|$$ and $\East[r_{m,t}|\mathcal{F}_{t}]\geq c_{m,r}/(t+1)^{\tau_{2}}$ a.s. for all $t\geq t_{0}$ sufficiently large. Noting that the square of the Frobenius norm is the sum of the squared column $\mathcal{L}_{2}$ norms, we have $$\label{lm:gaincons6}\left\|\left(I_{NM}-\beta_{t}L_{t}\otimes I_{M}\right)\bG_{t}\right\|^{2}_{F}\leq\sum_{m=1}^{M}\left(1-r_{m,t}\right)^{2}\left\|\bG_{m,t}\right\|^{2}\leq\left(1-r_{t}\right)^{2}\left\|\bG_{t}\right\|^{2}_{F},$$ where $\{r_{t}\}$ is the $\{\mathcal{F}_{t}\}$-adapted process given by $r_{t}=r_{1,t}\wedge\cdots\wedge r_{M,t}$ for all $t$. By the conditional Jensen’s inequality we obtain $$\label{lm:gaincons7}\East\left[r_{t}~|~\mathcal{F}_{t}\right]\geq\bigwedge_{m=1}^{M}\East\left[r_{m,t}~|~\mathcal{F}_{t}\right]\geq c_{r}/(t+1)^{\tau_{2}}$$ for some constant $c_{r}>0$ and all $t\geq t_{0}$. Since $\beta_{t}/\alpha_{t}\rightarrow\infty$ as $\tri$, by making $t_{0}$ larger if necessary, we obtain from  $$\begin{aligned} \label{lm:gaincons8}\left\|\left(I_{NM}-\beta_{t}L_{t}\otimes I_{M}-\alpha_{t}I_{NM}\right)\bG_{t}\right\|_{F}\leq\left\|\left(I_{NM}-\beta_{t}L_{t}\otimes I_{M}\right)\bG_{t}\right\|_{F}+\alpha_{t}\left\|\bG_{t}\right\|_{F}\\ \leq \left(1-r_{t}\right)\left\|\bG_{t}\right\|_{F}+\alpha_{t}\left\|\bG_{t}\right\|_{F}\leq\left(1-r_{t}/2\right)\left\|\bG_{t}\right\|_{F}\end{aligned}$$ for all $t\geq t_{0}$. It then follows from  and  that $$\label{lm:gaincons9}\left\|\bG_{t+1}\right\|_{F}\leq\left\|\left(I_{NM}-\beta_{t}L_{t}\otimes I_{M}-\alpha_{t}I_{NM}\right)\bG_{t}\right\|_{F}+\alpha_{t}U_{t}\leq \left(1-r_{t}/2\right)\left\|\bG_{t}\right\|_{F}+\alpha_{t}U_{t}$$ for all $t\geq t_{0}$. Clearly, the above recursion falls under the purview of Lemma \[lm:mean-conv\] (by setting $\delta_{1}$, $\delta_{2}$ and $J_{t}$ in Lemma \[lm:mean-conv\] to $\tau_{2}$, 1 and 0 respectively), and we conclude that $(t+1)^{\tau}\|\bG_{t}\|_{F}\rightarrow 0$ as $\tri$ a.s. for each $\tau\in [0,1-\tau_{2})$. The assertion in Lemma \[lm:gaincons\] follows immediately. 
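Lemma \[lm:mean-conv\] is invoked repeatedly in this appendix; for intuition, the following R snippet is a quick Monte Carlo sketch of the kind of recursion it covers. The constants `a1`, `a2`, `delta1`, `delta2`, and `delta0`, as well as the choices of $U_{t}$ and $J_{t}$, are illustrative and are not taken from the paper.

```r
# Quick Monte Carlo sketch of the recursion in Lemma (lm:mean-conv):
#   w_{t+1} <= (1 - r1(t)) * w_t + r2(t) * U_t * (1 + J_t),
# with E[r1(t) | F_t] >= a1 * (t+1)^(-delta1) and r2(t) <= a2 * (t+1)^(-delta2).
# With delta1 = 0.3, delta2 = 1, and J_t having all moments, the lemma predicts
# (t+1)^delta0 * w_t -> 0 for any delta0 < 1 - 0.3 = 0.7; we check delta0 = 0.5.
set.seed(2)
a1 <- 1; delta1 <- 0.3
a2 <- 1; delta2 <- 1
delta0 <- 0.5

T_max <- 1e5
w <- numeric(T_max); w[1] <- 10
for (t in 1:(T_max - 1)) {
  r1 <- rbinom(1, 1, min(1, a1 * (t + 1)^(-delta1)))  # E[r1 | F_t] = a1*(t+1)^(-delta1)
  r2 <- a2 * (t + 1)^(-delta2)
  J  <- abs(rnorm(1))                                  # i.i.d., all moments finite
  w[t + 1] <- (1 - r1) * w[t] + r2 * 1 * (1 + J)       # take U_t = 1
}
ts <- c(1e2, 1e3, 1e4, 1e5)
print(data.frame(t = ts, weighted = (ts + 1)^delta0 * w[ts]))  # trends toward 0
```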
We state another approximation result from [@Fabian-1] regarding deterministic recursions to be used in the sequel. \[prop:Fab-1\]Let $\{b_{t}\}$ be a scalar sequence satisfying $$\label{prop:Fab-11} b_{t+1}\leq\left(1-\frac{c}{t+1}\right)b_{t}+d_{t}(t+1)^{-\tau}$$ where $c>\tau$, $\tau>0$, and the sequence $\{d_{t}\}$ is summable. Then $\limsup_{t\rightarrow\infty}(t+1)^{\tau}b_{t}<\infty$. We now complete the proof of Lemma \[lm:gainconv\]. Following the notation in the proof of Lemma \[lm:gaincons\] and using properties of the graph Laplacian , the process $\{G^{a}_{t}\}$ (the instantaneous network average of the $G_{n}(t)$’s) may be shown to satisfy the following recursion for all $t$: $$\label{lm:gainconv3}G^{a}_{t+1}=\left(1-\alpha_{t}\right)G^{a}_{t}+\alpha_{t}I^{a}_{t}.$$ Noting that the local Fisher information matrices $I_{n}(\cdot)$ are locally Lipschitz in the argument and the fact that $\bx_{n}(t)\rightarrow\abtheta$ as $\tri$ a.s. (see Lemma \[lm:auxcons\]), we have that $$\label{lm:gainconv4} \left\|I^{a}_{t}-(1/N)I(\abtheta)\right\|=O\left(\vee_{n=1}^{N}\|\bx_{n}(t)-\abtheta\|\right).$$ Since, by Lemma \[lm:auxcons\], $(t+1)^{\mu_{0}}\|\bx_{n}(t)-\abtheta\|\rightarrow 0$ as $\tri$ a.s., we may further conclude from  that $$\label{lm:gainconv5}\left\|I^{a}_{t}-(1/N)I(\abtheta)\right\|=o\left((t+1)^{-\mu_{0}}\right).$$ Now let $\tau_{5}$ be a positive constant such that $\tau_{5}<(\mu_{0}\wedge 1)$. Noting that $\alpha_{t}=(t+1)^{-1}$ by definition, by  we may then conclude that there exists an $\mathbb{R}_{+}$-valued $\{\mathcal{F}_{t}\}$-adapted stochastic process $\{\wid_{t}\}$, such that, $$\label{lm:gainconv6}\alpha_{t}\left\|I^{a}_{t}-(1/N)I(\abtheta)\right\|\leq \wid_{t}(t+1)^{-\tau_{5}}$$ for all $t$, with $\{\wid_{t}\}$ satisfying $$\label{lm:gainconv7}\wid_{t}=o\left((t+1)^{-1-\mu_{0}+\tau_{5}}\right).$$ By  and  we then obtain $$\begin{aligned} \label{lm:gainconv8} \left\|G^{a}_{t+1}-(1/N)I(\abtheta)\right\| & \leq\left(1-(t+1)^{-1}\right)\left\|G^{a}_{t}-(1/N)I(\abtheta)\right\|\\ & +\alpha_{t}\left\|I^{a}_{t}-(1/N)I(\abtheta)\right\|\\ & \leq \left(1-(t+1)^{-1}\right)\left\|G^{a}_{t}-(1/N)I(\abtheta)\right\|+\wid_{t}(t+1)^{-\tau_{5}}\end{aligned}$$ for all $t$. Further, by , we have $\sum_{t}\wid_{t}<\infty$ a.s. (since $\tau_{5}<\mu_{0}$ by construction); also noting that $\tau_{5}<1$ (again by construction), a pathwise application of Proposition \[prop:Fab-1\] to the stochastic recursion  yields $$\label{lm:gainconv9}\limsup_{\tri}(t+1)^{\tau_{5}}\left\|G^{a}_{t}-(1/N)I(\abtheta)\right\|<\infty~~\mbox{a.s.},$$ from which we may further conclude that $(t+1)^{\tau_{6}}\left\|G^{a}_{t}-(1/N)I(\abtheta)\right\|\rightarrow 0$ as $\tri$ a.s., where $\tau_{6}$ is another positive constant such that $\tau_{6}<\tau_{5}$. Now introducing another constant $\tau_{7}$ such that $0<\tau_{7}<(1-\tau_{2})\wedge\tau_{6}$, by Lemma \[lm:gaincons\] it may be readily concluded that $$\label{lm:gainconv10} \Past\left(\lim_{\tri}(t+1)^{\tau_{7}}\left\|G_{n}(t)-(1/N)I(\abtheta)\right\|=0\right)=1$$ for all $n$. 
Finally, noting that matrix inversion is a locally Lipschitz operator in a neighborhood of an invertible argument, we have by , , and  that $$\begin{aligned} \label{lm:gainconv11} \left\|K_{n}(t)-N.I^{-1}(\abtheta)\right\|&=\left\|\left(G_{n}(t)+\varphi_{t}I_{M}\right)^{-1}-N.I^{-1}(\abtheta)\right\|\\ &=O\left(\left\|G_{n}(t)-(1/N)I(\abtheta)\right\|+\varphi_{t}\right) =O\left((t+1)^{-\tau_{7}}+(t+1)^{-\mu_{2}}\right)\\ & =o\left((t+1)^{-\tau^{\prime}}\right),\end{aligned}$$ where $\tau^{\prime}$ may be taken to be an arbitrary positive constant satisfying $\tau^{\prime}<\tau_{7}\wedge\mu_{2}$. Hence, the desired assertion follows. [**Proof of Lemma \[lm:dev\]**]{}: The following intermediate approximation will be used in the proof of Lemma \[lm:dev\]. \[lm:devcons\] For each $n$, let $\{\mathbf{x}_{n}(t)\}$ and $\{\bv_{n}(t)\}$ be as in the hypothesis of Lemma \[lm:dev\] and denote by $\{\bu_{n}(t)\}$ the $\{\mathcal{F}_{t}\}$-adapted process such that $\bu_{n}(t)=\mathbf{x}_{n}(t)-\bv_{n}(t)$ for all $t$. Then, for each $\gamma\in [0,1-\tau_{2})$ where $\tau_{2}$ is the exponent corresponding to $\{\beta_{t}\}$, see Assumption \[ass:weight\], we have $$\label{lm:devcons1}\Past\left(\lim_{\tri}(t+1)^{\gamma}\left\|\bu_{n}(t)-\bu_{l}(t)\right\|=0\right)=1$$ for all pairs $(n,l)$ of network agents. By  and , the process $\{\bu_{n}(t)\}$ is readily seen to satisfy the recursions $$\label{lm:devcons2} \bu_{n}(t+1)=\bu_{n}(t)-\beta_{t}\sum_{l\in\Omega_{n}(t)}\left(\bu_{n}(t)-\bu_{l}(t)\right)-\alpha_{t}K_{n}(t)\mathbf{U}^{\prime}_{n}(t),$$ where $$\label{lm:devcons3}\mathbf{U}_{n}^{\prime}(t)=h_{n}(\mathbf{x}_{n}(t))-h_{n}(\abtheta)-I_{n}(\abtheta)\left(\bv_{n}(t)-\abtheta\right)$$ for all $t$. Noting that the processes $\{\mathbf{x}_{n}(t)\}$ and $\{\bv_{n}(t)\}$ converge a.s. as $\tri$ (see Corollary \[corr:xconv\] and Corollary \[corr:linz\]), we conclude that the sequence $\{\mpu_{n}(t)\}$, thus defined, is bounded a.s. Denoting by $\mpu_{t}$ and $\bu_{t}$ the block-vectors ${\boldsymbol{\operatorname{Vec}}}\left(\mpu_{n}(t)\right)$ and ${\boldsymbol{\operatorname{Vec}}}\left(\bu_{n}(t)\right)$ respectively, from  we then have $$\label{lm:devcons4} \bu_{t+1}=\left(I_{NM}-\beta_{t}L_{t}\otimes I_{M}\right)\bu_{t}-\alpha_{t}\mathbf{u}_{t},$$ where $\{\mathbf{u}_{t}\}$ is the $\{\mathcal{F}_{t}\}$-adapted process given by $\mathbf{u}_{t}={\boldsymbol{\operatorname{Diag}}}\left(K_{n}(t)\right).\mpu_{t}$ for all $t$. Noting that $\{\mpu_{t}\}$ is bounded a.s. and the adaptive gain sequence $\{K_{n}(t)\}$ converges a.s. as $\tri$ for all $n$, the process $\{\mathbf{u}_{t}\}$ is readily seen to be bounded a.s. Further, denoting by $\{\wu_{t}\}$ and $\{\wU_{t}\}$ the processes such that $$\label{lm:devcons5} \wu_{t}=\left(I_{NM}-\mathbf{1}_{N}.\left(\mathbf{1}_{N}\otimes I_{M}\right)^{\top}\right)\bu_{t}~~\mbox{and}~~\wU_{t}=\left(I_{NM}-\mathbf{1}_{N}.\left(\mathbf{1}_{N}\otimes I_{M}\right)^{\top}\right)\mathbf{u}_{t}$$ for all $t$, we have (using standard properties of the Laplacian) $$\label{lm:devcons6} \wu_{t+1}=\left(I_{NM}-\beta_{t}L_{t}\otimes I_{M}\right)\wu_{t}-\alpha_{t}\wU_{t}$$ for all $t$. Clearly, $\wu_{t}\in\PC$ for all $t$, and we may note that, at this point, the evolution  resembles the dynamics analyzed in Lemma \[lm:consrate\] (for the process $\{\bz_{t}\}$, see ). Following essentially similar arguments as in -, we have $(t+1)^{\gamma}\|\wu_{t}\|\rightarrow 0$ as $\tri$ a.s. for all $\gamma\in [0,1-\tau_{2})$, from which the desired assertion follows.
In what follows we stick to the notation in the proof of Lemma \[lm:devcons\]. By  we have that $$\label{lm:dev8}\bau_{t+1}=\bau_{t}-(1/N)\alpha_{t}\sum_{n=1}^{N}K_{n}(t)\mpu_{n}(t),$$ where $\bau_{t}=(1/N)\sum_{n=1}^{N}\bu_{n}(t)$ for all $t$. Now note that, for each $n$, the function $h_{n}(\cdot)$ is twice continuously differentiable with gradient $I_{n}(\cdot)$ (see Proposition \[prop:analytic\] and Proposition \[prop:inf\]), and hence there exist positive constants $\oc$ and $R$, such that for each $n$, $$\label{lm:dev9}\left\|h_{n}(\mathbf{z})-h_{n}(\abtheta)-I_{n}(\abtheta)\left(\mathbf{z}-\abtheta\right)\right\|\leq\oc\left\|\mathbf{z}-\abtheta\right\|^{2}$$ for all $\mathbf{z}\in\mathbb{R}^{M}$ with $\|\mathbf{z}-\abtheta\|\leq R$. Since $\mathbf{x}_{n}(t)\rightarrow\abtheta$ as $\tri$ a.s. (see Corollary \[corr:xconv\]) for each $n$, there exists a finite random time $t_{R}$ such that $$\label{lm:dev10}\max_{n=1}^{N}\left\|\mathbf{x}_{n}(t)-\abtheta\right\|\leq R~~\mbox{$\forall t\geq t_{R}$ a.s.}$$ Hence, by  and -, we have that $$\begin{aligned} \label{lm:dev11} \mpu_{n}(t) &= h_{n}(\mathbf{x}_{n}(t))-h_{n}(\abtheta)-I_{n}(\abtheta)\left(\bv_{n}(t)-\abtheta\right) =I_{n}(\abtheta)\left(\mathbf{x}_{n}(t)-\bv_{n}(t)\right)+\mR_{n}(t)\\ &=I_{n}(\abtheta)\bu_{n}(t)+\mR_{n}(t),\end{aligned}$$ for all $n$ and $t$, where the residuals $\mR_{n}(t)$, $n=1,\cdots,N$ satisfy $$\label{lm:dev12} \left\|\mR_{n}(t)\right\|\leq\oc\left\|\mathbf{x}_{n}(t)-\abtheta\right\|^{2}~~\mbox{$\forall t\geq t_{R}$ a.s.}$$ Standard algebraic manipulations further yield $$\begin{aligned} \label{lm:dev14}\left\|\mR_{n}(t)\right\| & \leq\oc\left\|\mathbf{x}_{n}(t)-\abtheta\right\|^{2} \leq 2\oc\left\|\mathbf{x}_{n}(t)-\bv_{n}(t)\right\|^{2}+2\oc\left\|\bv_{n}(t)-\abtheta\right\|^{2}\\ & =2\oc\left\|\bu_{n}(t)\right\|^{2}+2\oc\left\|\bv_{n}(t)-\abtheta\right\|^{2} \leq 4\oc\left\|\bau_{t}\right\|^{2}+4\oc\left\|\bu_{n}(t)-\bau_{t}\right\|^{2}+2\oc\left\|\bv_{n}(t)-\abtheta\right\|^{2}\end{aligned}$$ for all $t\geq t_{R}$ a.s. Note that, the fact that $(t+1)^{\tau}\|\bv_{n}(t)-\abtheta\|\rightarrow 0$ as $\tri$ a.s. for all $n$ and $\tau\in [0,1/2)$ implies that there exists a constant $\gamma_{1}>1/2$ such that $$\label{lm:dev13}\max_{n=1}^{N}\left\|\bv_{n}(t)-\abtheta\right\|^{2}=o\left((t+1)^{-\gamma_{1}}\right)~~\mbox{a.s.}.$$ Also, by Lemma \[lm:devcons\] and the fact that $\tau_{2}<1/2$ (see Assumption \[ass:weight\]), we have that $$\label{lm:dev15} \max_{n=1}^{N}\left\|\bu_{n}(t)-\bau_{t}\right\|=o\left((t+1)^{-\gamma_{2}}\right)~~\mbox{a.s.}$$ for some constant $\gamma_{2}>1/2$. By the previous construction, the recursions for $\{\bau_{t}\}$ may be written as $$\label{lm:dev16} \bau_{t+1}=\bau_{t}-\alpha_{t}Q_{t}\bau_{t}-\alpha_{t}\wmR_{t},$$ where $$\label{lm:dev17} Q_{t}=(1/N)\sum_{n=1}^{N}K_{n}(t)I_{n}(\abtheta),$$ and $$\label{lm:dev18} \wmR_{t}=(1/N)\sum_{n=1}^{N}K_{n}(t)\left(I_{n}(\abtheta)\left(\bu_{n}(t)-\bau_{t}\right)+\mR_{n}(t)\right)$$ for all $t$. By  and we obtain $$\begin{aligned} \label{lm:dev19}\left\|\wmR_{t}\right\| & \leq (1/N)\sum_{n=1}^{N}\left\|K_{n}(t)I_{n}(\abtheta)\right\|\left\|\bu_{n}(t)-\bau_{t}\right\|\\ & +(4\oc/N)\sum_{n=1}^{N}\left\|K_{n}(t)I_{n}(\abtheta)\right\|\left\|\bau_{t}\right\|^{2} +(4\oc/N)\sum_{n=1}^{N}\left\|K_{n}(t)I_{n}(\abtheta)\right\|\left\|\bu_{n}(t)-\bau_{t}\right\|^{2}\\ & +(2\oc/N)\sum_{n=1}^{N}\left\|K_{n}(t)I_{n}(\abtheta)\right\|\left\|\bv_{n}(t)-\abtheta\right\|^{2}\end{aligned}$$ for $t\geq t_{R}$ a.s. 
Then, denoting by $\{\blambda_{t}\}$ the $\{\mathcal{F}_{t}\}$-adapted process such that $$\label{lm:dev20}\blambda_{t}=(4\oc/N)\sum_{n=1}^{N}\left\|K_{n}(t)I_{n}(\abtheta)\right\|\left\|\bau_{t}\right\|$$ for all $t$, and observing that, by  and , $$\label{lm:dev21} (2\oc/N)\sum_{n=1}^{N}\left\|K_{n}(t)I_{n}(\abtheta)\right\|\left\|\bv_{n}(t)-\abtheta\right\|^{2}=o\left((t+1)^{-\gamma_{1}}\right)~~\mbox{a.s.},$$ $$\label{lm:dev22} (1/N)\sum_{n=1}^{N}\left\|K_{n}(t)I_{n}(\abtheta)\right\|\left\|\bu_{n}(t)-\bau_{t}\right\|=o\left((t+1)^{-\gamma_{2}}\right)~~\mbox{a.s.},$$ and $$\label{lm:dev200} (4\oc/N)\sum_{n=1}^{N}\left\|K_{n}(t)I_{n}(\abtheta)\right\|\left\|\bu_{n}(t)-\bau_{t}\right\|^{2}=o\left((t+1)^{-\gamma_{2}}\right)~~\mbox{a.s.}$$ (note that the gain sequences $\{K_{n}(t)\}$’s converge a.s., hence, $K_{n}(t)=O(1)$ for all $n$), we obtain the following from : $$\label{lm:dev23}\left\|\wmR_{t}\right\|\leq \blambda_{t}\left\|\bau_{t}\right\|+o\left((t+1)^{-\gamma_{3}}\right)$$ for some constant $\gamma_{3}$ such that $1/2<\gamma_{3}<\gamma_{1}\wedge\gamma_{2}$. Thus, by - and , and by making $t_{R}$ larger if necessary, we conclude that there exists a positive constant $b$ such that $$\label{lm:dev24} \left\|\bau_{t+1}\right\|\leq\left\|I_{M}-\alpha_{t}Q_{t}+\alpha_{t}\blambda_{t}I_{M}\right\|\left\|\bau_{t}\right\|+b\alpha_{t}(t+1)^{-\gamma_{3}}$$ for all $t\geq t_{R}$ a.s. Since $K_{n}(t)\rightarrow N.I^{-1}(\abtheta)$ as $\tri$ a.s. for each $n$ and $\sum_{n=1}^{N}I_{n}(\abtheta)=I(\abtheta)$, we have $Q_{t}\rightarrow I_{M}$ as $\tri$ a.s.; similarly, since for all $n$ the sequences $\{\mathbf{x}_{n}(t)\}$ and $\{\bv_{n}(t)\}$ converge to $\abtheta$ a.s. as $\tri$ (see Corollary \[corr:xconv\] and Corollary \[corr:linz\]), it follows (from definition) that $\bu_{n}(t)\rightarrow 0$ as $\tri$ a.s. for all $n$, and hence $\blambda_{t}\rightarrow 0$ as $\tri$ a.s. The fact that, $Q_{t}\rightarrow I_{M}$ and $\blambda_{t}\rightarrow 0$ as $\tri$ a.s., ensures that, by making $t_{R}$ larger if necessary, the following holds $$\label{lm:dev25}\left\|I_{M}-\alpha_{t}Q_{t}+\alpha_{t}\blambda_{t}I_{M}\right\|\leq 1-(2/3).\alpha_{t}=1-(2/3).(t+1)^{-1}$$ for all $t\geq t_{R}$ a.s. Let $\gamma_{4}$ be a constant such that $1/2<\gamma_{4}<\gamma_{3}\wedge (2/3)$; then, by -, we have $$\label{lm:dev26} \left\|\bau_{t+1}\right\|\leq\left(1-(2/3).(t+1)^{-1}\right)\left\|\bau_{t}\right\|+d_{t}(t+1)^{-\gamma_{4}}$$ for all $t\geq t_{R}$ a.s., where $d_{t}=b\alpha_{t}(t+1)^{\gamma_{4}-\gamma_{3}}$. Since $\gamma_{4}<2/3$ and the sequence $\{d_{t}\}$ is summable, a pathwise application of Proposition \[prop:Fab-1\] yields $$\label{lm:dev27} \Past\left(\limsup_{\tri}(t+1)^{\gamma_{4}}\left\|\bau_{t}\right\|<\infty\right)=1.$$ Hence, by choosing $\overline{\tau}$, such that $1/2<\overline{\tau}<\gamma_{2}\wedge\gamma_{4}$ (where $\gamma_{2}$ is defined in ), we have that $(t+1)^{\overline{\tau}}\bu_{n}(t)\rightarrow 0$ as $\tri$ a.s. for all $n$ and the desired assertion follows. [^1]: Exponential families subsume most of the distributions encountered in practice, for example, Gaussian, gamma, beta etc. [^2]: Global observability means that for every pair of different parameter values, the corresponding probability measures induced on the aggregate or collective agent observation set are *distinguishable*. 
For setups involving exponential families, distinguishability is aptly captured by strict positivity of the Kullback-Leibler (KL) divergence between the corresponding measures, see Assumption \[ass:globobs\] for details. [^3]: A path between nodes $n$ and $l$ of length $m$ is a sequence $(n=i_{0},i_{1},\cdots,i_{m}=l)$ of vertices, such that $(i_{k},i_{k+1})\in E\:\forall~0\leq k\leq m-1$. [^4]: For a function $f(\btheta)$, $\nabla_{\btheta}f(\btheta)\in\mathbb{R}^{M}$ denotes the vector of partial derivatives, i.e., the $i$-th component of $\nabla_{\btheta}f(\btheta)$ is given by $\frac{\partial f(\btheta)}{\partial\btheta_{i}}$. The Hessian $\nabla^{2}_{\btheta}f(\btheta)\in\mathbb{R}^{M\times M}$ denotes the matrix of second-order partial derivatives, whose $i,j$-th entry corresponds to $\frac{\partial^{2} f(\btheta)}{\partial\btheta_{i}\partial\btheta_{j}}$. [^5]: We are somewhat abusing the notion of estimation, which, to be precise, corresponds to inferring *continuous* parameters as pursued in this paper. However, by considering constrained parametrization, we are essentially expanding its usage to general inference problems including detection and classification. [^6]: An idea related to such analytical extensions is that of embedding into an exponential family (see [@Rukhin1994recursive] for some discussion in a related but centralized context), in which, broadly speaking, the objective is to obtain an unconstrained exponential family whose restriction to $\Theta$ coincides with the given constrained family.
--- abstract: | We analyzed 2012 and 2016 YouGov pre-election polls in order to understand how different population groups voted in the 2012 and 2016 elections. We broke the data down by demographics and state and found: - The gender gap was an increasing function of age in 2016. - In 2016 most states exhibited a U-shaped gender gap curve with respect to education indicating a larger gender gap at lower and higher levels of education. - Older white voters with less education more strongly supported Donald Trump versus younger white voters with more education. - Women more strongly supported Hillary Clinton than men, with young and more educated women most strongly supporting Hillary Clinton. - Older men with less education more strongly supported Donald Trump. - Black voters overwhelmingly supported Hillary Clinton. - The gap between college-educated voters and non-college-educated voters was about 10 percentage points in favor of Hillary Clinton We display our findings with a series of graphs and maps. The R code associated with this project is available at <https://github.com/rtrangucci/mrp_2016_election/>. author: - 'Rob Trangucci[^1]' - Imad Ali - 'Andrew Gelman[^2]' - 'Doug Rivers[^3] [^4]' bibliography: - 'mrp\_election\_bib.bib' date: 01 February 2018 nocite: '[@*]' title: '**Voting patterns in 2016: Exploration using multilevel regression and poststratification (MRP) on pre-election polls** ' --- Introduction ============ After any election, we typically want to understand how the electorate voted. While national and state results give exact measures of aggregate voting, we may be interested in voting behavior that cuts across state lines, such as how different demographic groups voted. Exit polls provide one such measure, but without access to the raw data we cannot determine aggregates beyond the margins that are supplied by the exit poll aggregates. In pursuit of this goal, we can use national pre-election polls in which respondents are asked for whom they plan to vote and post-election polls in which respondents are asked if they participated in the election, both of which record demographic information and state residency of respondents. Using this data, we then build a statistical model that uses demographics and state information to predict the probability that an eligible voter voted in the election and which candidate a voter supports. A model that accurately predicts voting intentions for specific demographic groups (e.g. college-educated Hispanic men living in Georgia) will require deep interactions as outlined in [@ghitza2013deep]. In order to precisely learn the second- and third-order interactions, we require a large dataset that covers many disparate groups. Armed with our two models, we can use U.S. Census data to yield the number of people in each demographic group. For each group, we then predict the number of voters, and the number of votes for each candidate to yield a fine-grained dataset. We can then aggregate this dataset along any demographic axes we choose in order to investigate voting behavior. Data and methods ================ Data ---- We use YouGov’s daily tracking polls from 10/24/2016 through 11/6/2016 to train the 2016 voter preference model. We included 56,946 respondents in the final dataset after filtering out incomplete cases. To train the 2012 voter preference model we used 18,716 respondents polled on 11/4/2012 from YouGov’s daily tracking poll. 
In order to train the 2016 voter turnout model, we use the Current Population Survey (CPS) from 2016, which includes a voting supplement ([@ipums]). The model used 80,766 responses from voters as to whether they voted in the 2016 presidential election. We used the CPS from 2012 to train the 2012 voter turnout model, which comprises 81,017 voters. We decided to use the CPS to train our model because it is viewed as the gold standard in voter-turnout polling [@lei20162008]. We use a modified version of the 2012 Public Use Microdata Sample Census dataset (PUMS) to get a measure of the total number of eligible voters in the U.S. YouGov provided the PUMS dataset with ages and education adjusted to match the 2016 population.

Methods
-------

Our methodology follows that outlined in [@gelman1997poststratification], [@ghitza2013deep], and [@lei20162008]. For voter $i$ in group $g$, as defined by the values of a collection of categorical variables, we want to learn the voter’s propensity to vote and for whom they plan to vote, using a nonrandom sample from the population of interest. We assume that an individual voter’s response in group $g$ is modeled as follows: $$\begin{aligned} T_i \sim \text{Bernoulli}(\alpha_{g[i]})\end{aligned}$$ where $T_i$ is $1$ if the voter plans to vote for Trump, or $0$ otherwise, and $\alpha_{g[i]}$ is the probability of voting for Trump for voter $i$ in group $g$. In order to make inferences about $\alpha_{g[i]}$ without modeling the selection process, we need to stratify our respondents into small enough groups so that within a cell selection is random (i.e., the responses are Bernoulli random variables conditional only on $g$). We do so by generating multidimensional cells defined by demographic variables like age, ethnicity, and state of residence that categorize our respondents. This induces data sparsity even in large polls, so we must use Bayesian hierarchical models to partially pool cells along these demographic axes. Upon fitting our model, we can use the posterior mean of $\alpha_g$, denoted $\hat{\alpha}_g$, together with Census data to estimate an aggregate Trump vote proportion by calculating the weighted average $\sum_{g \in D}\tfrac{N_g \hat{\alpha}_g}{N_D}$ for whatever demographic category $D$ we like. We measure our electorate using six categorical variables:

- State residency
- Ethnicity
- Gender
- Marital status
- Age
- Education

Each variable $v$ has $L_v$ levels. State residency has fifty levels. Ethnicity has four levels: Black, Hispanic, Other, and White. Gender has two levels. Marital status has three levels: Never married, Married, Not married. Age has four levels, corresponding to the left-closed intervals of age: $[18,30), [30,45), [45,65), [65,99)$. Education has five levels: No High School, High School, Some College, College, Post Graduate. After binning our Census data by the six-way interaction of the above attributes, we generate table \[ps\_table\]. Each row of the table represents a specific group of the population, an intersection of six observable attributes. We refer to each row as a cell, and the full table as a six-way poststratification table. Our table has 33,561 cells, reflecting the fact that not all possible six-way groups exist in the U.S.
N $\phi_g$ $\alpha_g | \text{vote}$ $\mathbb{E}\left[\text{T}_g\right]$ ---------------- ------- ------- -------- --- -------------- ----- ---------- -------------------------- ------------------------------------- 1 AK Black Female … College 400 0.40 0.50 80 2 AK Black Female … High School 300 0.30 0.60 54 … … … … … … … … … … … … … … … … 33651 WY White Male … Some College 200 0.40 0.40 32 : Six-way poststratification table We then add columns to this dataset that represent the cell-by-cell probability of voting and the cell-by-cell probability of supporting Trump, which can be combined to yield the expected number of Trump voters, $\mathbb{E}\left[\text{T}_g\right]$, in each cell $g$: $\mathbb{E}\left[\text{T}_g\right] = N \times \phi_g \times \alpha_g | \text{vote} $ where $\phi_g$ is the expected probability of voting in cell $g$, and $\alpha_g | \text{vote}$ is the expected probability of voting for Trump for voters in cell $g$ In order to generate $\phi_g$ and $\alpha_g | \text{vote}$, we build two models: a voter turnout model and a vote preference model, respectively. Both models are hierarchical binomial logistic regression models of the form: $$\begin{aligned} T_g & \sim \text{Binomial}(V_g, \phi_g)\, , \, g \in \{1,\dots, G\}\\ \text{logit}\,\phi_g & = \mu + \sum_{v \, \in \, V} \beta^v_{\left[v[g]\right]} \\ \beta^v & \sim \text{Normal}(0, \tau_v)\, \forall v \, \in \, V \\ \tau_v & = \sqrt{\pi_v |V| S^2} \\ \boldsymbol{\pi} & \sim \text{Dirichlet}(\mathbf{1}) \\ S & \sim \text{Gamma}(1, 1)\end{aligned}$$ Each categorical predictor, $\beta^v$, is represented as a length-$L_v$ vector, where the elements of the vector map to the effect associated with the level $l_v$. $V$ denotes the set of all categorical predictors included in the model and $v[g]$ is a function that maps the $g$-th cell to the appropriate $l_v$-th level of the categorical predictor. For example, $\beta^{\text{state}}$ would be a 50-element vector, and $\text{state}[\,\,]$ is a length-$G$ list of integers with values between 1 and 50 indicating to which state the $g$-th cell belongs. Note that the model above can include one-way effects in $V$, as well as two-way and three-way interactions, like state $\times$ age. We use `rstanarm` to specify the voter turnout model and the voter preference model, which uses `lme4` syntax to facilitate building complex hierarchical generalized linear models like above. The full model specifications in $\texttt{lme4}$ syntax are given in the Appendix. `rstanarm` imposes more structure on the variance parameters $\tau_v$ than is typical. In our model, $\tau_v^2$ is the product of the square of a global scale parameter $S$ the $v$-th entry in the simplex parameter $\boldsymbol{\pi}$, and the cardinality of $V$, $|V|$. See [@rstanarm] for more details. Our voter preference model went through multiple iterations before we arrived at our final model. At first we intended to include past presidential vote. However, PUMS does not include past presidential vote, so we used YouGov’s imputed past presidential vote for each PUMS respondent. This induced too much sparsity in our poststratification frame. After training each of the models, and generating predictions for voter turnout by cell and two-party vote preference for each cell, we adjusted our turnout and vote proportions in each cell to match the actual state-by-state outcomes as outlined [@ghitza2013deep]. 
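To make the estimation and poststratification steps above concrete, the sketch below shows how a shortened version of the vote preference model could be fitted with `rstanarm` and how the resulting cell estimates could be aggregated. This is an illustrative sketch only, not the code used for the paper: the data frame names (`poll_cells`, `ps`), the column `phi_hat`, and the truncated formula are assumptions, and the actual model formulas are those given in Appendix A.

    library(rstanarm)  # provides stan_glmer()

    # Assumed inputs: `poll_cells` holds poll responses aggregated to cells
    # (columns `clinton` and `trump` count two-party responses, plus the
    # predictors), and `ps` is the poststratification table with one row per
    # cell and a population count N. Only a few of the Appendix A terms are
    # kept so the sketch stays short.
    fit_pref <- stan_glmer(
      cbind(clinton, trump) ~ 1 + female + state_pres_vote +
        (1 | state) + (1 | age) + (1 | educ) + (1 + state_pres_vote | eth),
      data = poll_cells, family = binomial(link = "logit")
    )

    # Posterior mean cell probabilities: the first response column is `clinton`,
    # so the fitted probability is P(Clinton); alpha_hat is its complement.
    p_clinton <- colMeans(posterior_linpred(fit_pref, newdata = ps, transform = TRUE))
    ps$alpha_hat <- 1 - p_clinton

    # `phi_hat` (cell-level turnout probabilities) would come from the analogous
    # turnout model fitted to the CPS; it is assumed to already exist in `ps`.
    ps$exp_voters      <- ps$N * ps$phi_hat             # expected voters per cell
    ps$exp_trump_votes <- ps$exp_voters * ps$alpha_hat  # E[T_g] from the table above

    # Poststratified two-party Trump share for any grouping, e.g. by state.
    by_state <- aggregate(cbind(exp_trump_votes, exp_voters) ~ state,
                          data = ps, FUN = sum)
    by_state$trump_share <- by_state$exp_trump_votes / by_state$exp_voters

Dropping the `phi_hat` factor recovers the unweighted average $\sum_{g\in D} N_g\hat{\alpha}_g/N_D$ described above; keeping it gives the turnout-weighted share implied by $\mathbb{E}[\text{T}_g]$.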
| Variable | Description | Type | Number of Groups |
|---|---|---|---|
| y | Vote choice | Outcome variable | – |
| 1 | Intercept | Global intercept | – |
| female | Fem.: 0.5, Male: -0.5 | Global slope | – |
| state\_pres\_vote | Pre-election poll average | Global slope | – |
| state | State of residence | Varying intercept | 50 |
| age | Age | Varying intercept | 4 |
| educ | Education attained | Varying intercept | 5 |
| 1 + state\_pres\_vote $\mid$ eth | Ethnicity | Varying intercept and slope | 4 |
| marstat | Marital status | Varying intercept | 3 |
| marstat:age | | Varying intercept | 3$\times$4 = 12 |
| marstat:state | | Varying intercept | 3$\times$50 = 150 |
| marstat:eth | | Varying intercept | 3$\times$4 = 12 |
| marstat:gender | | Varying intercept | 3$\times$2 = 6 |
| marstat:educ | | Varying intercept | 3$\times$5 = 15 |
| state:gender | | Varying intercept | 50$\times$2 = 100 |
| age:gender | | Varying intercept | 4$\times$2 = 8 |
| educ:gender | | Varying intercept | 5$\times$2 = 10 |
| eth:gender | | Varying intercept | 4$\times$2 = 8 |
| state:eth | | Varying intercept | 50$\times$4 = 200 |
| state:age | | Varying intercept | 50$\times$4 = 200 |
| state:educ | | Varying intercept | 50$\times$5 = 250 |
| eth:age | | Varying intercept | 4$\times$4 = 16 |
| eth:educ | | Varying intercept | 4$\times$5 = 20 |
| age:educ | | Varying intercept | 4$\times$5 = 20 |
| state:educ:age | | Varying intercept | 50$\times$4$\times$4 = 800 |
| educ:age:gender | | Varying intercept | 5$\times$4$\times$2 = 40 |

: Variables in the vote preference model (`stan_glmer()` specification)

Results
=======

This section presents plots at the county and state level, followed by charts and maps that illustrate the poststratification. In addition to vote intention, the charts and maps also illustrate voter turnout. The county and state level plots use 2016 and 2012 election results and 2010 US census data. The captions of the charts and maps identify which model is used to produce the data illustrated in the figure. The models are defined as follows:

Model 1
:   is described in *Section 2* above.

Model 2
:   is similar to *Model 1* but includes income as a factor variable and omits marital status. The 2016 vote turnout model for Model 2 was fitted to the 2012 CPS.

Election results graphs
-----------------------

The graphs that follow present actual election results by county and by state. They are not model-based, but rather an examination of the Republican vote proportion swing from 2012 to 2016 by county versus various demographic variables measured at the county level.

### County-level vote swings

*Notes:* The county-level Republican swing is computed as Donald Trump's 2016 two-party vote share minus Mitt Romney's 2012 two-party vote share. Positive values indicate Trump outperforming Romney, while negative values indicate Romney outperforming Trump. The area of each circle is proportional to the number of voters in each county. Overall, Trump outperformed Romney in counties with lower median income. While Trump mostly outperformed Romney in counties with lower voter turnout, Romney mostly outperformed Trump in counties with larger voter turnout.

*Notes:* The county-level Republican swing is computed as Donald Trump's 2016 two-party vote share minus Mitt Romney's 2012 two-party vote share. Positive values indicate Trump outperforming Romney, while negative values indicate Romney outperforming Trump. The area of each circle is proportional to the number of voters in each county. Overall, Trump outperformed Romney in counties with lower levels of college education.
While Trump mostly outperformed Romney in counties with lower voter turnout, Romney mostly outperformed Trump in counties with larger voter turnout. *Notes:* The county-level Republican swing is computed as Donald Trump's 2016 two-party vote share minus Mitt Romney's 2012 two-party vote share. Positive values indicate Trump outperforming Romney, while negative values indicate Romney outperforming Trump. The area of each circle is proportional to the number of voters in each county. Across all regions there is a trend of Trump outperforming Romney in low-income counties and counties with lower levels of college education. The trend of Trump performing well in counties with lower levels of college education is less apparent in western counties.

### State-level election results and vote swings

*Notes:* The state-level Republican share of the two-party vote. States are color coded according to the results of the 2012 election. States won by Mitt Romney are in red and states won by Barack Obama are in blue. The diagonal line indicates that the 2012 and 2016 Republican candidates received identical shares of the two-party vote. In most states Trump received a higher share of the two-party vote. Nationally, Trump got 2 percent more of the two-party vote than Romney. *Notes:* The state-level Republican swing. States are color coded according to the results of the 2012 election. States won by Mitt Romney are in red and states won by Barack Obama are in blue. Positive values indicate Trump outperforming Romney and negative values indicate Romney outperforming Trump. There is a lot of variation among states, with Trump outperforming Romney in most states. *Notes:* A state-level comparison between Donald Trump's actual two-party vote share and his forecasted vote share. States are color coded according to the results of the 2012 election. States won by Mitt Romney are in red and states won by Barack Obama are in blue. Values on the diagonal indicate that Trump's actual performance was in line with his forecast. In most states Trump outperformed his poll-based forecast. *Notes:* A state-level comparison of Donald Trump's actual vote share against his poll-based forecast. States are color coded according to the results of the 2012 election. States won by Mitt Romney are in red and states won by Barack Obama are in blue. Positive values indicate states in which Trump outperformed his forecast and negative values indicate states in which Trump's actual performance fell behind his forecast. Trump did better than predicted in states that Romney won in 2012.

Poststratification graphs
-------------------------

The graphs that follow are generated using the multilevel regression and poststratification method outlined in the Methods section.

### Gender gap

*Notes:* The gender gap is evaluated as men's probability of voting for Trump minus women's probability of voting for Trump for various education and age levels. Larger values indicate a greater divergence in vote preference between men and women.\ (Using *Model 1*.) *Notes:* The gender gap is evaluated as men's probability of voting for Romney minus women's probability of voting for Romney for various education and age levels.\ (Using *Model 1* with 2012 election results/turnout data.) *Notes:* The gender gap is evaluated as men's probability of voting for Trump minus women's probability of voting for Trump for various education levels. Larger values indicate a greater divergence in vote preference between women and men.
Interactions exist between age and education conditional on gender. Overall, the gender gap increases with age. Among voters under 45 the gender gap is lowest for those with a college education, and among voters 45 years or older the gender gap is lowest for those with a high school education.\ (Using *Model 1*.) *Notes:* The gender gap is evaluated as men's probability of voting for Romney minus women's probability of voting for Romney for various education levels. Larger values indicate a greater divergence in vote preference between women and men. Interactions exist between age and education conditional on gender.\ (Using *Model 1* with 2012 election results/turnout data.) *Notes:* The state-level gender gap is evaluated as men's probability of voting for Trump minus women's probability of voting for Trump for various education levels. Larger values indicate a greater divergence in vote preference between women and men. In most states, voters with a high school education level tend to have the lowest gender gap and voters with a post graduate education level tend to have the highest gender gap.\ (Using *Model 1*.) *Notes:* The state-level gender gap is evaluated as men's probability of voting for Trump minus women's probability of voting for Trump for various education levels. Larger values indicate a greater divergence in vote preference between women and men. The gender gap increases with age in most states, with larger variation in states that supported Clinton.\ (Using *Model 1*.)

### Vote by education

*Notes:* Republican share of the two-party vote against various education levels. Overall, the Republican share increases with age. The strongest support came from voters with a high school education in each age category, with the exception of 30-45 year olds.\ (Using *Model 1*.) *Notes:* Republican share of the two-party vote against various education levels. Overall, the Republican share increases with age.\ (Using *Model 1* with 2012 election results/turnout data.)

### Vote by income, age, education, and ethnicity

*Notes:* Republican share of the two-party vote for Whites (orange), Blacks (black), Hispanics (red), other ethnicities (green), and overall (blue). Trump's share of the vote is highest among white voters with a high school education level.\ (Using *Model 2* (left) and *Model 1* (right).) *Notes:* State-level Republican share of the two-party vote for Whites (orange), Blacks (black), Hispanics (red), other ethnicities (green), and overall (blue). In most states, white voters with high school education have the greatest support for Trump and those with post graduate education have the lowest support for Trump.\ (Using *Model 1*.) *Notes:* State-level Republican share of the two-party vote for Whites (orange), Blacks (black), Hispanics (red), other ethnicities (green), and overall (blue).\ (Using *Model 1* with 2012 election results/turnout data.) *Notes:* State-level Republican share of the two-party vote for Whites (orange), Blacks (black), Hispanics (red), other ethnicities (green), and overall (blue). Support for Trump increases with age. Support among Whites is consistently the strongest, followed by support among other races and Hispanics.\ (Using *Model 1*.) *Notes:* State-level Republican share of the two-party vote for Whites (orange), Blacks (black), Hispanics (red), other ethnicities (green), and overall (blue). Support for Trump increases with age.\ (Using *Model 1* with 2012 election results/turnout data.)
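The gender-gap and vote-share panels in this subsection are simple contrasts of poststratified estimates. As a rough illustration (reusing the hypothetical cell table `ps` with columns `N`, `phi_hat`, `alpha_hat`, `gender`, `educ`, and `age` from the earlier sketch, and assuming gender is coded `"Male"`/`"Female"`; this is not the paper's actual code), the education-by-age gender gap could be computed as:

    # Turnout-weighted Trump share within each gender x education x age group,
    # then the gap = men's share minus women's share.
    ps$w     <- ps$N * ps$phi_hat        # expected voters per cell
    ps$votes <- ps$w * ps$alpha_hat      # expected Trump votes per cell
    grp <- aggregate(cbind(votes, w) ~ gender + educ + age, data = ps, FUN = sum)
    grp$share <- grp$votes / grp$w

    # One row per education-by-age group, with one column of shares per gender.
    wide <- reshape(grp[, c("gender", "educ", "age", "share")],
                    idvar = c("educ", "age"), timevar = "gender", direction = "wide")
    wide$gender_gap <- wide$share.Male - wide$share.Female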
### Voter turnout

*Notes:* Voter turnout for Whites (orange), Blacks (black), Hispanics (red), other ethnicities (green), and overall (blue). Voter turnout increases with education. There is not much variation across states. Within states, Hispanics typically experienced low voter turnout compared to Whites and Blacks.\ (Using *Model 1*.) *Notes:* Voter turnout for Whites (orange), Blacks (black), Hispanics (red), other ethnicities (green), and overall (blue).\ (Using *Model 1* with 2012 election results/turnout data.) *Notes:* Voter turnout for Whites (orange), Blacks (black), Hispanics (red), other ethnicities (green), and overall (blue). Voter turnout increases with age. There is low voter turnout among Hispanics across age levels compared to Whites and Blacks.\ (Using *Model 1*.) *Notes:* Voter turnout for Whites (orange), Blacks (black), Hispanics (red), other ethnicities (green), and overall (blue).\ (Using *Model 1* with 2012 election results/turnout data.) *Notes:* Voter turnout for women (red), men (blue), and overall (grey). Voter turnout increases with education, with women turning out at higher rates than men.\ (Using *Model 1*.) *Notes:* Voter turnout for women (red), men (blue), and overall (grey).\ (Using *Model 1* with 2012 election results/turnout data.)

### Maps of vote preference

*Notes:* State-level gender gap evaluated as men's probability of voting for Trump minus women's probability of voting for Trump. Dark green/orange indicates a larger divergence in vote preference between men and women. The greatest divergence exists among older voters with post graduate education. The weakest support exists among young voters with a college education.\ (Using *Model 1*.) *Notes:* State-level vote intention by education and age. Dark red indicates stronger support for Donald Trump and dark blue indicates stronger support for Hillary Clinton. Overall, older voters with lower education have stronger support for Trump and younger voters with higher levels of education have stronger support for Clinton. In each age bracket, Trump has stronger support among voters with high school and some college education compared to voters with no high school education.\ (Using *Model 1*.) *Notes:* State-level vote intention by education and age for women. Dark red indicates stronger support for Donald Trump and dark blue indicates stronger support for Hillary Clinton. Overall, older women have stronger support for Trump. Women with a post graduate education have stronger support for Clinton, and women with a high school education and some college education have stronger support for Trump.\ (Using *Model 1*.) *Notes:* State-level vote intention by education and age for men. Dark red indicates stronger support for Donald Trump and dark blue indicates stronger support for Hillary Clinton. Older men have stronger support for Trump, whereas younger men have stronger support for Clinton. Overall, men with a post graduate education have stronger support for Clinton, while men with a high school education have stronger support for Trump.\ (Using *Model 1*.) *Notes:* State-level vote intention by education and age for Whites. Dark red indicates stronger support for Donald Trump and dark blue indicates stronger support for Hillary Clinton. Older voters with less education had stronger support for Trump, whereas younger voters with more education had stronger support for Clinton.
In terms of education, the strongest support for Clinton comes from voters with a post graduate education and the strongest support for Trump comes from voters with a high school education.\ (Using *Model 1*.) *Notes:* State-level vote intention by education and age for Blacks. Dark red indicates stronger support for Donald Trump and dark blue indicates stronger support for Hillary Clinton. Missing cells are denoted by diagonal lines. Overall, Blacks supported Clinton.\ (Using *Model 1*.) *Notes:* State-level vote intention by education and age for Hispanics. Dark red indicates stronger support for Donald Trump and dark blue indicates stronger support for Hillary Clinton. Missing cells are denoted by diagonal lines. A majority of young Hispanics have stronger support for Clinton. Support for Trump increases with age at all education levels. There is not much variation across education levels.\ (Using *Model 1*.) *Notes:* State-level vote intention by education and age for ethnicities (not including White, Black, or Hispanic). Dark red indicates stronger support for Donald Trump and dark blue indicates stronger support for Hillary Clinton. Support for Trump increases with age at all education levels. Support for Trump consistently decreases with education (with the exception of the 65+ age bracket).\ (Using *Model 1*.) *Notes:* State-level vote intention for white and non-white voters by education. No college education includes the categories "No High School", "High School", and "Some College". College education includes the categories "College" and "Post Graduate". Dark red indicates stronger support for Donald Trump and dark blue indicates stronger support for Hillary Clinton. White voters have stronger support for Trump compared to non-white voters, with white voters with no college education having the strongest support. There is little variation in vote preference across these categories for North Dakota, Wyoming, and Idaho, which consistently support Trump. There is also little variation in vote preference across education levels among non-white voters.\ (Using *Model 1*.) *Notes:* State-level vote intention for white and non-white voters by education. No college education includes the categories "No High School", "High School", and "Some College". College education includes the categories "College" and "Post Graduate". Dark red indicates stronger support for Mitt Romney and dark blue indicates stronger support for Barack Obama. White voters with no college education had the strongest support for Romney. Regardless of college education, non-White voters had the strongest support for Obama.\ (Using *Model 1* with 2012 election results/turnout data.) *Notes:* State-level vote intention for white and non-white women by education. Dark red indicates stronger support for Donald Trump among women and dark blue indicates stronger support for Hillary Clinton among women. Support for Trump among white women increases from no high school to high school education levels and declines from high school to post graduate education levels. White women with high school education have the strongest support for Trump. Overall, non-white women have stronger support for Clinton, with the exception of some Midwestern states (e.g., North Dakota and Wyoming).\ (Using *Model 1*.) *Notes:* State-level vote intention for white and non-white women by education. Dark red indicates stronger support for Mitt Romney among women and dark blue indicates stronger support for Barack Obama among women.
Support for Romney among White women decreased with education. Regardless of college education, Obama had strong support among non-White women.\ (Using *Model 1* with 2012 election results/turnout data.) *Notes:* State-level vote intention for women by education. Dark red indicates stronger support for Donald Trump among women and dark blue indicates stronger support for Hillary Clinton among women. In most states, women with high school education have stronger support for Trump and women with post graduate education have stronger support for Clinton.\ (Using *Model 1*.) *Notes:* State-level vote intention for women by education. Dark red indicates stronger support for Mitt Romney among women and dark blue indicates stronger support for Barack Obama among women. In most states, women with high school education had stronger support for Romney and women with post graduate education had stronger support for Obama.\ (Using *Model 1* with 2012 election results/turnout data.)

### Maps of voter turnout

*Notes:* State-level voter turnout by education and age. Yellow indicates low voter turnout and dark blue indicates high voter turnout. Younger individuals with less education were less likely to vote this election, whereas older individuals with more education were more likely to vote.\ (Using *Model 1*.) *Notes:* State-level voter turnout by education and age for women. Yellow indicates low voter turnout and dark blue indicates high voter turnout.\ (Using *Model 1*.) *Notes:* State-level voter turnout gender gap evaluated as voter turnout probability for men minus voter turnout probability for women. Dark green/orange indicates a large turnout gender gap.\ (Using *Model 1*.)

Discussion
==========

We keep the discussion short as we feel that our main contribution here is to present these graphs and maps, which others can interpret as they see best, and to share our code so that others can fit these and similar models on their own. Some of our findings comport with the broader media narrative developed in the aftermath of the election. We found that white voters with lower educational attainment supported Trump nearly uniformly. We did not find that income was a strong predictor of support for Trump, perhaps a continuation of a trend apparent in 2000 through 2012 election data. We found the gender gap to be about 10%, which was a bit lower than predicted by exit polls. The marital status gap we estimated was about 2$\times$ the figure estimated by exit polls. Most surprising to us was the strong age pattern in the gender gap. Older women were much more likely to support Clinton than older men, while younger women were mildly more likely to support Clinton compared to men of the same age. We are not sure what accounts for this difference. One area of future research is using age as a continuous predictor rather than binning ages and using the bins as categorical predictors. Our models predict that men in several state-by-education categories were more likely to support Clinton than women. We do not believe this to be true, but rather believe it to be a problem with poststratification table sparsity. In order to reduce the number of poststratification cells, in future analyses we could poststratify by region rather than state.
This would likely not have impacted our descriptive precision in this analysis due to the apparently strong regional patterns in voting behavior in this election. Appendix A - Model Code ======================= We specified our voter turnout model as below: cbind(vote, did_not_vote) ~ 1 + female + state_pres_vote + (1 | state) + (1 | age) + (1 | educ) + (1 + state_pres_vote | eth) + (1 | marstat) + (1 | marstat:age) + (1 | marstat:state) + (1 | marstat:eth) + (1 | marstat:gender) + (1 | marstat:educ) + (1 | state:gender) + (1 | age:gender) + (1 | educ:gender) + (1 | eth:gender) + (1 | state:eth) + (1 | state:age) + (1 | state:educ) + (1 | eth:age) + (1 | eth:educ) + (1 | age:educ) + (1 | state:educ:age) + (1 | educ:age:gender) We specified our voter preference model as below: cbind(clinton, trump) ~ 1 + female + state_pres_vote + (1 | state) + (1 | age) + (1 | educ) + (1 + state_pres_vote | eth) + (1 | marstat) + (1 | marstat:age) + (1 | marstat:state) + (1 | marstat:eth) + (1 | marstat:gender) + (1 | marstat:educ) + (1 | state:gender) + (1 | age:gender) + (1 | educ:gender) + (1 | eth:gender) + (1 | state:eth) + (1 | state:age) + (1 | state:educ) + (1 | eth:age) + (1 | eth:educ) + (1 | age:educ) + (1 | state:educ:age) + (1 | educ:age:gender) [^1]: University of Michigan [^2]: Columbia University [^3]: YouGov [^4]: Stanford University
--- abstract: | Given a positive integer $p$ and a graph $G$ with degree sequence $d_1,\ldots,d_n$, we define $e_p(G)=\sum_{i=1}^n d_i^p$. Caro and Yuster introduced a Turán-type problem for $e_p(G)$: Given a positive integer $p$ and a graph $H$, determine the function $\textup{ex}_p(n,H)$, which is the maximum value of $e_p(G)$ taken over all graphs $G$ on $n$ vertices that do not contain $H$ as a subgraph. Clearly, $\textup{ex}_1(n,H)=2\textup{ex}(n,H)$, where $\textup{ex}(n,H)$ denotes the classical Turán number. Caro and Yuster determined the function $\textup{ex}_p(n, P_\ell)$ for sufficiently large $n$, where $p\geq 2$ and $P_\ell$ denotes the path on $\ell$ vertices. In this paper, we generalise this result and determine $\textup{ex}_p(n,F)$ for sufficiently large $n$, where $p\geq 2$ and $F$ is a linear forest. We also determine $\textup{ex}_p(n,S)$, where $S$ is a star forest; and $\textup{ex}_p(n,B)$, where $B$ is a broom graph with diameter at most six.\ **Keywords:** degree power; Turán-type problem; $H$-free; forest\ \[2mm\] **AMS Subject Classification (2010):** 05C07, 05C35 author: - | Yongxin Lan$^1$, Henry Liu$^2$, Zhongmei Qin$^3$, Yongtang Shi$^1$[^1]\ \ $^1$Center for Combinatorics and LPMC\ Nankai University, Tianjin 300071, China\ yxlan0@126.com, shi@nankai.edu.cn\ \ $^2$School of Mathematics\ Sun Yat-sen University, Guangzhou 510275, China\ liaozhx5@mail.sysu.edu.cn\ \ $^3$College of Science\ Chang’an University, Xi’an, Shaanxi 710064, China\ qinzhongmei90@163.com\ date: 6 January 2018 title: Degree powers in graphs with a forbidden forest --- Introduction ============ For standard graph-theoretic notation and terminology, the reader is referred to [@B98]. All graphs considered here are finite, undirected, and have no loops or multiple edges. Let $G$ and $H$ be two graphs. The degree of a vertex $v\in V(G)$ and the maximum degree of $G$ are denoted by $d_G(v)$ and $\Delta(G)$. We use $G\cup H$ to denote the disjoint union of $G$ and $H$, and $G +H$ for the *join* of $G$ and $H$, i.e., the graph obtained from $G \cup H$ by adding all edges between $G$ and $H$. Let $kG$ denote $k$ vertex-disjoint copies of $G$. For $U\subset V(G)$, let $G[U]$ denote the subgraph of $G$ induced by $U$. Let $K_t, E_t$ and $P_t$ denote the complete graph, the empty graph, and the path on $t$ vertices, respectively. Let $S_r$ denote the star with maximum degree $r$. Let $M_t$ be the graph on $t$ vertices with a maximum matching (i.e., $\lfloor \frac{t}{2}\rfloor$ independent edges). Given a graph $H$, we say that a graph $G$ is *$H$-free* if $G$ does not contain $H$ as a subgraph. The classical *Turán number*, denote by $\textup{ex}(n,H)$, is the maximum number of edges in a $H$-free graph on $n$ vertices. Turán’s classical result [@T41] states that $\textup{ex}(n,K_{r+1})=e(T_r(n))$ for $n\ge r\ge 2$, where $T_r(n)$ denotes the *$r$-partite Turán graph* on $n$ vertices. Given a graph $G$ whose degree sequence is $d_1,\ldots,d_n$, and a positive integer $p$, let $e_p(G)=\sum_{i=1}^n {d_i^p}$. Caro and Yuster [@CY00] introduced a Turán-type problem for $e_p(G)$: Determine the function $\textup{ex}_p(n,H)$, which is the maximum value of $e_p(G)$ taken over all $H$-free graphs $G$ on $n$ vertices. Moreover, characterise the *extremal graphs*, i.e., the $H$-free graphs $G$ on $n$ vertices with $e_p(G)=\textup{ex}_p(n,H)$. Clearly, we have $\textup{ex}_1(n,H)=2\textup{ex}(n,H)$. This Turán-type problem has attracted significant interest from many researchers. 
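As a quick illustration of the quantity $e_p(G)$ (an example added here for concreteness, not taken from [@CY00]): for the star $S_{n-1}=K_1+E_{n-1}$, with one centre of degree $n-1$ and $n-1$ leaves of degree $1$, we have $$e_p(S_{n-1})=(n-1)^p+(n-1)\cdot 1^p=(n-1)^p+(n-1),$$ while for any $d$-regular graph $G$ on $n$ vertices, $e_p(G)=nd^p$. In particular, $e_1(G)=\sum_{i=1}^n d_i=2e(G)$ for every graph $G$, which is exactly the identity behind $\textup{ex}_1(n,H)=2\textup{ex}(n,H)$.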
Caro and Yuster [@CY00] proved that $\textup{ex}_p(n,K_{r+1})=e_p(T_r(n))$ for $p=1,2,3$. The same result does not hold if $r$ is fixed, and $p$ and $n$ are sufficiently large. For example, if $G$ is the complete bipartite graph with class sizes $\lfloor\frac{n}{2}\rfloor-1$ and $\lceil\frac{n}{2}\rceil+1$, then we have $e_4(G)>e_4(T_2(n))$. Hence, we see that the parameter $p$ does play a role in the value of $\textup{ex}_p(n,K_{r+1})$ and the extremal graphs. Bollobás and Nikiforov further studied the function $\textup{ex}_p(n,K_{r + 1})$, where they allowed $p>0$ to be real. In [@BN04], they proved that for $n$ sufficiently large, $\textup{ex}_p(n, K_{r+1})=e_p(T_r(n))$ for $0<p<r$, and $\textup{ex}_p(n, K_{r+1})>(1+\eps)e_p(T_r(n))$ for $p\ge r+\lceil\sqrt{2r}\rceil$ and some $\eps=\eps(r)>0$. In [@BN12], they proved a result which gives an extension of the Erdős-Stone Theorem by using $e_p(G)$ instead of the number of edges. When considering cycles as the forbidden subgraphs, Caro and Yuster [@CY00] proved that $\textup{ex}_2(n,\mathcal C)=e_2(F_n)$ for sufficiently large $n$, where $\mathcal C$ denotes the family of cycles with even length (notice the natural extension of the definition of $\ex_p$ to families of graphs), and $F_n$ is the *friendship graph* on $n$ vertices, i.e., $F_n$ is obtained by taking a star on $n$ vertices and adding a maximum matching on the set of leaves. They also showed that $F_n$ is the unique extremal graph, and remarked that the same result also holds for $p>2$. Nikiforov [@N09] proved that $\textup{ex}_p(n,C_{2k+2}) = (1 + o(1))kn^p$, where $C_t$ denotes the cycle of order $t$, and this settled a conjecture of Caro and Yuster. Gu et al. [@GLS15] proved that for $p\ge 1$, there exists a constant $c = c(p)$ such that the following holds: If $\textup{ex}_p(n,C_5) = e_p(G)$ for some $C_5$-free graph $G$ of order $n$, then $G$ is a complete bipartite graph with class sizes $cn + o(n)$ and $(1 - c)n + o(n)$. A *linear forest* (resp. *star forest*) is a forest whose connected components are paths (resp. stars). There are many known results about the function $\textup{ex}_p(n,F)$ where $F$ is a linear forest. For the case of the classical Turán number $\textup{ex}(n,F)$, one of the earliest results is the case when $F=P_\ell$ is a path. Erdős and Gallai [@EG59] proved in 1959 that $\textup{ex}(n,P_\ell)\leq (\frac{\ell}{2}-1)n$ for $\ell\ge 2$, and if $\ell-1$ divides $n$, then equality holds only for the graph with vertex-disjoint copies of $K_{\ell-1}$. Motivated by this result, Erdős and Sós [@E64] in 1963 made the conjecture that the same result holds for any tree, i.e., if $T$ is a tree on $t\ge 2$ vertices, then we have $\textup{ex}(n,T)\leq (\frac{t}{2}-1)n$. This long-standing conjecture remains open, and many partial results are known. The result of Erdős and Gallai was also sharpened by Faudree and Schelp [@FS75], when they determined the function $\textup{ex}(n,P_\ell)$ exactly as well as the extremal graphs. When $F$ has more components, Erdős and Gallai [@EG59] also proved that $\textup{ex}(n,kP_2)={k-1\choose 2}+(k-1)(n-k+1)$ for $k\ge 2$ and sufficiently large $n$, where the unique extremal graph is $K_{k-1}+E_{n-k+1}$. Very recently, this result was extended by Bushaw and Kettle [@BK11], who determined the function $\textup{ex}(n,kP_\ell)$ for $k\ge 2$, $\ell\ge 3$ and sufficiently large $n$. Their result was further generalised by Lidický et al. 
[@LLP13], who determined the function $\textup{ex}(n,F)$ for an arbitrary linear forest $F$ and sufficiently large $n$. In these two results, the extremal graph is unique. Lidický et al. [@LLP13] also determined the function $\textup{ex}(n,S)$ for an arbitrary star forest $S$ and sufficiently large $n$, and characterised the extremal graphs. On the other hand, Caro and Yuster [@CY00] determined the function $\textup{ex}_p(n, P_\ell)$ for $p\geq 2$, $\ell\ge 3$ and sufficiently large $n$. The extremal graph is again unique, and is significantly different to the extremal graphs of $\textup{ex}(n,P_\ell)$ obtained by Faudree and Schelp [@FS75]. They also determined the functions $\textup{ex}_p(n,S_r)$ and $\textup{ex}_p(n,S_r^\ast)$, and their extremal graphs, where $S_r^\ast$ is the graph obtained by attaching a pendent edge at a leaf of $S_r$. This paper will be organised as follows. In Section \[surveysect\], we will state precisely the previously known results about the function $\textup{ex}_p(n,F)$, for various forests $F$. In Sections \[linstarforestsect\] and \[broomsect\], we will determine the function $\ex_p(n,F)$ when $F$ is a linear forest, a star forest, and a broom with diameter at most $6$ (A *broom* is a path with a star attached at one end). Our results can be regarded as extensions to many of these previously known results from [@BK11; @CY00; @EG59; @LLP13]. Unless otherwise stated, we assume that $n$ is always sufficiently large, and we will make no serious attempt to minimise the lower bound on $n$. Without going into details, we remark that every large lower bound on $n$ depends only on the forest $F$, and not the parameter $p$. Known results {#surveysect} ============= In this section, we will review many of the known results about the function $\textup{ex}_p(n,F)$, for various forests $F$. Some of these results will also be helpful for us to present our results in Sections \[linstarforestsect\] and \[broomsect\]. First, we collect the results where $F$ is a single component. When $F$ is a path, Caro and Yuster [@CY00] observed that for $p\ge 1$, we have $$\label{CYobs1} \textup{ex}_p(n,P_2) = 0,\quad \textup{and} \quad \textup{ex}_{p}(n,P_{3})= \left\{ \begin{array}{ll} n-1 & \textrm{if $n$ is odd,}\\ n& \textrm{if $n$ is even.} \end{array} \right.$$ Moreover, the unique extremal graph for $\textup{ex}_{p}(n,P_{3})$ is $M_n$, the graph on $n$ vertices with a maximum matching. For $F=P_\ell$, Erdős and Gallai [@EG59] proved the following result, as we have mentioned in the introduction. \[EGthm1\] For $\ell\ge 2$, we have $\textup{ex}(n, P_\ell) \le (\frac{\ell}{2}-1)n$. Moreover, if $\ell-1$ divides $n$, then equality holds only for the graph with vertex-disjoint copies of $K_{\ell-1}$. Inspired by Theorem \[EGthm1\], Erdős and Sós [@E64] made the conjecture that the same result holds for any tree: If $T$ is a tree on $t\ge 2$ vertices, then $\textup{ex}(n,T)\leq (\frac{t}{2}-1)n$. This long-standing conjecture remains open, and many partial results are known. Theorem \[EGthm1\] was subsequently sharpened by Faudree and Schelp [@FS75], when they managed to determine $\textup{ex}(n,P_\ell)$ exactly, as well as all the extremal graphs. \[FSthm\] Let $\ell\ge 2$ and $n=a(\ell-1)+b$, where $a\ge 0$ and $0\le b<\ell-1$. 
We have $$\textup{ex}(n, P_\ell)=a{\ell-1 \choose 2}+{b\choose 2}.$$ Moreover, the extremal graphs are: - $aK_{\ell-1}\cup K_b$, - $a'K_{\ell-1}\cup \big(K_{\ell/2-1}+E_{\ell/2+(a-a'-1)(\ell-1)+b}\big)$, where $\ell$ is even, $a>0$, $b=\frac{\ell}{2}$ or $b=\frac{\ell}{2}-1$ and $0\le a'<a$. Caro and Yuster [@CY00] determined the function $\textup{ex}_p(n,P_\ell)$ for $p\ge 2$, $\ell\ge 4$, and sufficiently large $n$, and they showed that the extremal graph is unique. To state their result, we define the graph $H(n,\ell)$ as follows. Let $b=\lfloor\frac{\ell}{2}\rfloor-1$. Then $H(n,\ell)=K_b+E_{n-b}$ if $\ell$ is even, and $H(n,\ell)$ is $K_b+E_{n-b}$ with an edge added to $E_{n-b}$ if $\ell$ is odd. \[CYthm1\] Let $p\ge 2$, $\ell\ge 4$, and $n \ge n_0(\ell)$ be sufficiently large. Then $$\begin{aligned} \textup{ex}_p(n, P_\ell) &= e_p(H(n,\ell))\\ &=\left\{ \begin{array}{ll} b(n-1)^{p}+(n-b-2)b^{p}+2(b+1)^p & \textup{\emph{if $\ell$ is odd,}}\\ b(n-1)^{p}+(n-b)b^p & \textup{\emph{if $\ell$ is even,}} \end{array} \right.\end{aligned}$$ where $b=\lfloor\frac{\ell}{2}\rfloor-1$. Moreover, $H(n,\ell)$ is the unique extremal graph. They remarked that the extremal graph $H(n,\ell)$ for $\textup{ex}_p(n,P_\ell)$, with $p\ge 2$, is very different from the extremal graphs for $\textup{ex}(n,P_\ell)$ in Theorem \[FSthm\]. This is because $H(n,\ell)$ has large maximum degree, which plays a role in making the value of $e_p(H(n,\ell))$ large, when $p\ge 2$. When $F=S_r$ is a star, Caro and Yuster [@CY00] made the observation that $\textup{ex}_p(n,S_r)$ is attained by a graph $L$ on $n$ vertices which is an extremal graph for $\textup{ex}(n,S_r)$. Clearly if $n\le r-1$, we have $L=K_n$. For $n\ge r$, we have $L$ is an $(r-1)$-regular graph if $(r-1)n$ is even, and $L$ has $n-1$ vertices of degree $r-1$ and one vertex of degree $r-2$ if $(r-1)n$ is odd. We call such a graph $L$ a *near $(r-1)$-regular graph*, since $L$ is as close to being $(r-1)$-regular as possible. It is well-known and easy to show that such graphs $L$ exist. Note that we have $e(L)=\big\lfloor\frac{(r-1)n}{2}\big\rfloor$. Thus, the observation of Caro and Yuster is the following. \[CYprp1\] Let $p\ge 1$, and let $S_r$ be the star with maximum degree $r\ge 1$. 1. If $n\leq r-1$, then $\textup{ex}_{p}(n,S_r)=n(n-1)^{p}$. Moreover, the unique extremal graph is $K_n$. 2. If $n\ge r$, then $$\ex_p(n,S_r)= \left\{ \begin{array}{ll} (n-1)(r-1)^{p}+(r-2)^{p} & \textup{\emph{if $(r-1)n$ is odd,}}\\ n(r-1)^{p} & \textup{\emph{if $(r-1)n$ is even.}} \end{array} \right.$$ Moreover, the extremal graphs are the near $(r-1)$-regular graphs on $n$ vertices. For $\ell\ge 4$ and $s\ge 0$, let $B_{\ell,s}$ be the graph on $\ell+s$ vertices, obtained by adding $s$ pendent edges to a penultimate vertex $v$ of $P_\ell$. 
Such a graph $B_{\ell,s}$ is a *broom*, and $v$ is the *centre* of the broom.

[Figure 1. The broom graph $B_{\ell,s}$: a path on $\ell$ vertices together with $s$ pendent vertices attached to the penultimate vertex $v$.]

It is interesting to study Turán-type problems for brooms, because a broom may be considered as a generalisation of both a path and a star. Sun and Wang [@SW11] determined the function $\ex(n, B_{4,s})$ for $s\ge 1$, as follows.

\[SWthm1\] Let $s\ge 1$ and $n\ge s+4$. Let $n=a(s+3)+b$, where $a\ge 1$ and $0\le b<s+3$. We have $$\ex(n, B_{4,s})= \left\{ \begin{array}{ll} \displaystyle (a-1){s+3\choose 2}+\Big\lfloor\frac{(s+1)(s+3+b)}{2}\Big\rfloor & \textup{\emph{if $s\ge 3$ and $2\le b\le s$,}}\\[2ex] \displaystyle a{s+3\choose 2}+{b\choose 2} & \textup{\emph{otherwise.}} \end{array} \right.$$

Roughly speaking, in Theorem \[SWthm1\], the value of $\ex(n,B_{4,s})$ is attained as follows. If $b$ is close to either $0$ or $s+3$, then we would take the graph $aK_{s+3}\cup K_b$. Otherwise, we would take a graph $(a-1)K_{s+3}\cup L$, where $L$ is a near $(s+1)$-regular graph on $s+3+b$ vertices. Sun and Wang also determined the function $\ex(n,B_{5,s})$ for $s\ge 1$ and $n\ge s+5$. However, their result is complicated to state in full. A key result that they proved is the following.

\[SWthm2\] Let $s\ge 1$ and $n\ge s+5$. Let $n=a(s+4)+b$, where $a\ge 1$ and $0\le b<s+4$. We have $$\ex(n, B_{5,s})= \left\{ \begin{array}{ll} \displaystyle (a-1){s+4\choose 2}+\ex(s+4+b, B_{5,s}) & \textup{\emph{if $1\le b\le s$,}}\\[2ex] \displaystyle a{s+4\choose 2}+{b\choose 2} & \textup{\emph{if $b\in\{0,s+1,s+2,s+3\}$.}} \end{array} \right.$$

Similarly, in Theorem \[SWthm2\], the value of $\ex(n,B_{5,s})$ is attained by $aK_{s+4}\cup K_b$ if $b$ is either $0$ or close to $s+4$. Otherwise, we would take a graph $(a-1)K_{s+4}\cup L$, where $L$ is an extremal graph for $B_{5,s}$ on $s+4+b$ vertices. Caro and Yuster [@CY00] determined the function $\ex_p(n,B_{4,s})$, for $p\ge 2$ and sufficiently large $n$. They remarked that the result is very different to Proposition \[CYprp1\], even though $B_{4,s}$ is very close to being a star.

\[CYprp2\] Let $p\ge 2$, $s\ge 1$, and $n > 2(s+4)$. Then $\textup{ex}_{p}(n, B_{4,s}) = e_p(S_{n-1}) = (n - 1)^p + (n - 1)$. Moreover, $S_{n-1}$ is the unique extremal graph.

Now we consider the case when the forest $F$ has more than one component. When $F=kP_2$, the classical Turán number $\textup{ex}(n,kP_2)$ was determined by Erdős and Gallai [@EG59].

\[EGthm2\] Let $k\ge 2$ and $n> \frac{5k}{2}-1$. We have $\ex(n,kP_2)={k-1\choose 2}+(k-1)(n-k+1)$. Moreover, $K_{k-1}+E_{n-k+1}$ is the unique extremal graph.

For $n\le \frac{5k}{2}-1$, Erdős and Gallai also determined $\ex(n,kP_2)$ and the extremal graphs, which are different from those in Theorem \[EGthm2\].
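To illustrate Theorem \[EGthm2\] in its smallest case (an added example, not from [@EG59]): taking $k=2$ and $n>4$ gives $$\ex(n,2P_2)={1\choose 2}+1\cdot(n-1)=n-1,$$ attained uniquely by $K_1+E_{n-1}=S_{n-1}$. Indeed, a graph with no two independent edges is, up to isolated vertices, a triangle or a star, and for $n>4$ the star on $n$ vertices has the most edges.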
For the function $\textup{ex}(n,kP_3)$, Yuan and Zhang [@YZ17] obtained the following result. \[YZthm\] Let $k\ge 2$ and $n>5k-1$. We have $\ex(n,kP_3)={k-1 \choose 2}+(k-1)(n-k+1)+\lfloor \frac{n-k+1}{2}\rfloor$. Moreover, $K_{k-1}+M_{n-k+1}$ is the unique extremal graph. In fact, Yuan and Zhang completely determined $\ex(n,kP_3)$ and the extremal graphs for all $n$, which solved a conjecture of Gorgol [@G11]. Bushaw and Kettle [@BK11] had previously proved the case of Theorem \[YZthm\] for $n\ge 7k$. Next, there are results for the case when $F=\bigcup_{i=1}^kP_{\ell_i}$ is a linear forest, where $k\ge 2$, and we may assume that $\ell_1\ge \cdots \ge \ell_k\ge 2$. To describe the results, we define the graph $H(n,F)$ as follows. Let $b=\sum_{i=1}^{k}\lfloor\frac{\ell_{i}}{2}\rfloor-1$. Then, $H(n,F)$ is $K_b+E_{n-b}$ with a single edge added to $E_{n-b}$ if all $\ell_{i}$ are odd, and $H(n,F)=K_b+E_{n-b}$ otherwise. Note that $H(n,F)$ is $F$-free. Indeed, if $H(n,F)$ contains $F$, then the path in $F$ of order $\ell_i$ must use at least $\lfloor\frac{\ell_i}{2}\rfloor$ vertices of the $K_b$. But this cannot happen for every path in $F$, by the definition of $b$. In the case when $F=kP_\ell$, we write $H(n,k,\ell)$ for $H(n,F)$. We have already seen the results for $\textup{ex}(n,kP_\ell)$ when $\ell=2,3$ (Theorems \[EGthm2\] and \[YZthm\]). For $\ell\ge 4$, Bushaw and Kettle [@BK11] proved the following result. \[BKthm1\] Let $k\geq 2$, $\ell\geq 4$, and $n\ge 2\ell+2k\ell(\lceil\frac{\ell}{2}\rceil+1)\binom{\ell}{\lfloor\frac{\ell}{2}\rfloor}$. We have $$\textup{ex}(n,kP_{\ell}) = e(H(n,k,\ell))={k\lfloor\frac{\ell}{2}\rfloor-1 \choose 2}+\Big(k\Big\lfloor\frac{\ell}{2}\Big\rfloor-1\Big)\Big(n-k\Big\lfloor\frac{\ell}{2}\Big\rfloor+1\Big)+c,$$ where $c=1$ if $\ell$ is odd, and $c=0$ if $\ell$ is even. Moreover, $H(n,k,\ell)$ is the unique extremal graph. This result was extended by Lidický et al. [@LLP13], who determined $\textup{ex}(n, F)$ for an arbitrary linear forest $F\neq kP_3$. \[LLPthm1\] Let $k\ge 2$, and $F=\bigcup_{i=1}^kP_{\ell_i}$ be a linear forest, where $\ell_{1}\ge \ell_{2}\ge\cdots\ge \ell_{k}\ge 2$ and $\ell_{i}\neq 3$ for some $i$. Let $n\ge n_0(F)$ be sufficiently large. We have $$\textup{ex}(n,F)=e(H(n,F))={\sum_{i=1}^{k}\lfloor\frac{\ell_{i}}{2}\rfloor-1 \choose 2}+\Big(\sum_{i=1}^{k}\Big\lfloor\frac{\ell_{i}}{2}\Big\rfloor-1\Big)\Big(n-\sum_{i=1}^{k}\Big\lfloor\frac{\ell_{i}}{2}\Big\rfloor+1\Big)+c,$$ where $c=1$ if all $\ell_{i}$ are odd, and $c=0$ otherwise. Moreover, $H(n,F)$ is the unique extremal graph. Finally, Lidický et al. [@LLP13] determined the function $\textup{ex}(n,F)$, when $F$ is a star forest and $n$ is sufficiently large. Let $F=\bigcup_{i=1}^kS_{r_i}$, where $r_{1}\geq \cdots \geq r_{k}\ge 1$. To describe their result, we define a graph $G(n,F)$ as follows. Let $i,r\ge 1$, and $L$ be a graph on $n-i+1$ vertices which is an extremal graph for $S_{r}$. Thus $L$ is a near $(r-1)$-regular graph, and $e(L)=\lfloor\frac{r-1}{2}(n-i+1)\rfloor$. Let $G(n,i,r)=K_{i-1}+L$. Now, let $G(n,F)$ be any graph $G(n,i,r_i)$ where $e(G(n,i,r_i))$ is maximised over $1\le i\le k$. Note that each of $G(n,i,r_i)$ and $G(n,F)$ can be one of many possible graphs. Observe that $G(n,i,r_i)$ is $F$-free for all $1\le i\le k$. Indeed, if $G(n,i,r_i)=K_{i-1}+L$ as defined and contains a copy of $F$, then each star $S_{r_1},\dots,S_{r_{i-1}}$ must have at least one vertex from the $K_{i-1}$, and $S_{r_i}$ is not a subgraph of $L$. Lidický et al. 
[@LLP13] proved that the graphs $G(n,F)$ are extremal for $F$. \[LLPthm2\] Let $k\ge 2$, and $F=\bigcup_{i=1}^kS_{r_i}$ be a star forest, where $r_{1}\geq \cdots \geq r_{k}\ge 1$ are the maximum degrees of the components. Let $n\ge n_0(F)$ be sufficiently large. We have $$\begin{aligned} \textup{ex}(n,F) &= e(G(n,F))\\ &= \max_{1\leq i \leq k}\Big\{(i-1)(n-i+1)+\binom{i-1}{2}+\Big\lfloor\frac{r_{i}-1}{2}(n-i+1)\Big\rfloor\Big\}.\end{aligned}$$ Moreover, the extremal graphs are the graphs $G(n,F)$. Linear and star forests {#linstarforestsect} ======================= We now study the function $\textup{ex}_p(n,F)$, where $F$ is a linear forest or a star forest, $p\ge 2$, and $n$ is sufficiently large. We assume throughout this section that $F$ has at least two components, since the single component case is covered by (\[CYobs1\]), Theorem \[CYthm1\], and Proposition \[CYprp1\]. We first consider the case when $F$ is a star forest. Recall that $S_r$ is the star with maximum degree $r$. Let $F=\bigcup_{i=1}^kS_{r_i}$, where $r_{1}\geq \cdots \geq r_{k}\ge 1$. Our first result is the following. It turns out that $\textup{ex}_p(n,F)$ is attained by the graphs $G(n,k,r_k)$. \[SFthm\] Let $k,p\ge 2$, and $F=\bigcup_{i=1}^kS_{r_i}$ be a star forest, where $r_{1}\geq \cdots \geq r_{k}\ge 1$ are the maximum degrees of the components. Let $n\ge n_0(F)$ be sufficiently large. We have $$\textup{ex}_{p}(n,F)=e_{p}(G(n,k,r_k)).$$ Moreover, the extremal graphs are the graphs $G(n,k,r_k)$. Since $G(n,k,r_k)$ does not contain a copy of $F$, we have $\textup{ex}_{p}(n,F)\geq e_{p}(G(n,k,r_k))$. To prove the theorem, it suffices to show that any $F$-free graph $G$ on $n$ vertices with $G\neq G(n,k,r_k)$ satisfies $e_{p}(G)<e_{p}(G(n,k,r_k))$. It is easy to calculate that $$\begin{aligned} e_{p}(G(n,k,r_k)) &= \left\{ \begin{array}{l} (k-1)(n-1)^{p}+(n-k+1)(r_{k}+k-2)^{p}\\ \hspace{3cm}\textup{if one of $r_{k}-1$ and $n-k+1$ is even}\\[1ex] (k-1)(n-1)^{p}+(n-k)(r_{k}+k-2)^{p}+(r_{k}+k-3)^{p}\\ \hspace{3cm}\textup{if $r_{k}-1$ and $n-k+1$ are odd} \end{array} \right.\nonumber\\[1ex] &= (k-1)n^{p}+o(n^{p}).\label{epHnSasy}\end{aligned}$$ We may assume that there exists a subset $U\subset V(G)$ of order $k-1$ such that every vertex $v\in U$ has degree $d_G(v)\ge \sum_{i=1}^kr_i+k$. Otherwise, if $G$ has at most $k-2$ such vertices, then using $p\ge 2$ and (\[epHnSasy\]), we have $$\begin{aligned} e_{p}(G) &< (k-2)n^{p}+(n-k+2)\bigg(\sum_{i=1}^kr_i+k\bigg)^p<(k-1)n^{p}+o(n^{p})\\ &= e_{p}(G(n,k,r_k)).\end{aligned}$$ Now, we prove that $G \subset G(n,k,r_k)$, which implies that $e_{p}(G)<e_{p}(G(n,k,r_k))$. Recall that $G(n,k,r_k)=K_{k-1}+L$, where $L$ is a graph on $n-k+1$ vertices which is an extremal graph for $S_{r_k}$. Thus by identifying $U$ with $K_{k-1}$ and $G-U$ with $L$, we have $G \subset G(n,k,r_k)$ if we can show that $G-U$ is $S_{r_k}$-free. Suppose that there is a copy of $S_{r_k}$ in $G-U$. Then, using the fact that $d_G(v)\ge \sum_{i=1}^kr_i+k$ for every $v\in U$, we can find vertex-disjoint copies of $S_{r_1},\dots,S_{r_{k-1}}$, using vertices in $U$ as their centres, and with their neighbours in $G-U$, not using the vertices of the $S_{r_k}$, as leaves. This gives a copy of $F$ in $G$, a contradiction. Now, we consider the case when $F$ is an arbitrary linear forest. Let $F=\bigcup_{i=1}^kP_{\ell_i}$, where $k\ge 2$ and $\ell_1\ge\cdots\ge\ell_k\ge 2$. 
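Before treating linear forests in detail, we give a concrete illustration of the graphs $H(n,F)$ defined earlier (the example is ours and is included only for illustration). Take $F=P_5\cup P_4\cup P_2$. Then $b=\lfloor\frac{5}{2}\rfloor+\lfloor\frac{4}{2}\rfloor+\lfloor\frac{2}{2}\rfloor-1=4$, and since not all of the $\ell_i$ are odd, $H(n,F)=K_4+E_{n-4}$. Consequently $$e(H(n,F))={4\choose 2}+4(n-4)=4n-10 \qquad\textup{and}\qquad e_p(H(n,F))=4(n-1)^p+(n-4)4^p,$$ in agreement with Theorem \[LLPthm1\] and with (\[ep(HnF)eq\]) below.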
For the case when $F=kP_3$, we can set $r_1=\cdots=r_k=2$ in Theorem \[SFthm\] to obtain the following result, which can be considered as an extension to Theorem \[YZthm\]. \[SFcor\] Let $k,p\ge 2$ and $n\ge n_0(k)$ be sufficiently large. We have $\ex_p(n,kP_3)=e_p(K_{k-1}+M_{n-k+1})$. Moreover, $K_{k-1}+M_{n-k+1}$ is the unique extremal graph. Now, let $F\neq kP_3$. We shall prove the following result, which can be considered as an extension to Theorem \[LLPthm1\]. \[LFthm\] Let $k,p\ge 2$, and $F=\bigcup_{i=1}^kP_{\ell_i}$ be a linear forest, where $\ell_{1}\ge \ell_{2}\ge\cdots\ge \ell_{k}\ge 2$ and $\ell_i\neq 3$ for some $i$. Let $n\ge n_0(F)$ be sufficiently large. We have $$\textup{ex}_{p}(n,F)=e_{p}(H(n,F)).$$ Moreover, $H(n,F)$ is the unique extremal graph. In particular, if $F=kP_\ell$ and $\ell\neq 3$, then $$\textup{ex}_{p}(n,kP_{\ell})=e_{p}(H(n,k,\ell)).$$ Moreover, $H(n,k,\ell)$ is the unique extremal graph. Before we prove Theorem \[LFthm\], we first recall a lemma of Caro and Yuster [@CY00]. \[CYlem2\] Let $b\ge 1$ and $p\ge 2$ be integers. Let $G$ be a graph on $n$ vertices such that $e(G)\le (b+\frac{1}{2})n$. Let $d_1\ge \cdots\ge d_n$ be the degree sequence of $G$. Then, if $d_b\le 0.65n$, we have $e_p(G)\le cn^p+O(n^{p-1})$ for some constant $c$ with $0<c<b$. Although Lemma \[CYlem2\] is not stated explicitly in [@CY00], it can be seen easily in the proof of Lemma 3.5 in [@CY00]. Since $H(n,F)$ is $F$-free, we have $\textup{ex}_p(n,F)\ge e_p(H(n,F))$. Hence, it suffices to show that any $F$-free graph $G$ on $n$ vertices with $G\neq H(n,F)$ has $e_{p}(G)<e_{p}(H(n,F))$. Assume the contrary, and let $G$ be an $F$-free graph on $n$ vertices, that is maximal in the sense that $e_{p}(G)=\textup{ex}_{p}(n,F)\ge e_p(H(n,F))$ and $G \neq H(n,F)$. Let $b=\sum_{i=1}^{k}\lfloor\frac{\ell_{i}}{2}\rfloor-1\ge 1$. By the definition of $H(n,F)$, it is easy to calculate that $$\begin{aligned} e_{p}(H(n,F)) &= \left\{ \begin{array}{ll} b(n-1)^{p}+(n-b-2)b^p+2(b+1)^{p} & \textup{if all $\ell_i$ are odd}\\ b(n-1)^{p}+(n-b)b^p & \textup{otherwise} \end{array} \right.\nonumber\\[1ex] &= bn^{p}+o(n^{p}).\label{ep(HnF)eq}\end{aligned}$$ According to Theorem \[LLPthm1\], we have $$\label{kPleq} e(G)\leq {b\choose 2}+b(n-b)+1\le bn.$$ Let $d_1\ge\cdots\ge d_n$ be the degree sequence of $G$. Let $X\subset V(G)$ be the set of vertices with degrees $d_1,\dots,d_b$, and $Y=V(G)\setminus X$. By Lemma \[CYlem2\] and using (\[ep(HnF)eq\]) and (\[kPleq\]), we may assume that $d_{b}>0.65n$. Let $A\subset Y$ be the set of vertices that have a neighbour in $X$, and $B=Y\setminus A$. Note that any two vertices $u,v\in X$ have at least $2(0.65n-1)-(n-2)=0.3n$ common neighbours in $G$, and hence at least $0.3n-(b-2)>0.29n$ common neighbours in $A$. This means that for any set $Y'\subset Y$ with $|Y'|$ depending only on $\ell_1,\dots,\ell_k$ (and hence $|Y'|$ is much smaller than $n$), $u$ and $v$ have a common neighbour in $A\setminus Y'$. Likewise, any vertex of $X$ has a neighbour in $A\setminus Y'$. We now prove a series of claims. \[clm1\] If $\ell_i$ is odd for all $i$, then $G[Y]$ does not contain a copy of $P_3$ with an end-vertex in $A$. If $\ell_i$ is even for some $i$, then $G[Y]$ does not contain an edge with an end-vertex in $A$. Suppose first that $\ell_i$ is odd for all $i$, and that $G[Y]$ contains a path $c_1c_2a_1$ with $a_1\in A$. Let $y_1\in X$ be a neighbour of $a_1$, and $y_2\in X\setminus\{y_1\}$. Then, $y_1$ and $y_2$ have a common neighbour $a_2\in A\setminus\{c_1,c_2,a_1\}$. 
Repeating this procedure, we can obtain a path $c_1c_{2}a_{1}y_{1}a_{2}y_{2}\ldots y_{b-1}a_{b}y_{b}a_{b+1}$, where $X=\{y_1,\dots,y_b\}$ and $a_2,a_3,\dots,a_{b+1}\in A$. This path has $2b+3=\ell_1+\sum_{i=2}^k(\ell_i-1)$ vertices, and so it contains vertex-disjoint paths $P_{\ell_1}, P_{\ell_2-1},\dots, P_{\ell_k-1}$ with $c_1$ in the $P_{\ell_1}$. Note that each of the paths $P_{\ell_2-1},\dots, P_{\ell_k-1}$ has an end-vertex in $X$, and so we can extend each $P_{\ell_i-1}$ to $P_{\ell_i}$ by taking a neighbour of the end-vertex in $X$. By choosing the $k-1$ neighbours to be distinct vertices in $A\setminus\{c_1,c_2,a_1,\dots,a_{b+1}\}$, we obtain a copy of $F$ in $G$, a contradiction. Now, let $Q=\{1\le i\le k:\ell_i$ is even$\}\neq\emptyset$, and suppose that $G[Y]$ contains an edge $ca_1$ with $a_1\in A$. As before, we can obtain a path $ca_1y_1a_2y_2\dots y_{b-1}a_by_ba_{b+1}$, where $X=\{y_1,\dots,y_b\}$ and $a_2,a_3,\dots,a_{b+1}\in A\setminus\{c,a_1\}$. This path has $2b+2=\sum_{i\not\in Q}(\ell_i-1)+\sum_{i\in Q}\ell_i$ vertices. We obtain vertex-disjoint paths $P_{\ell_i-1}$ for $i\not\in Q$, and $P_{\ell_i}$ for $i\in Q$, such that the path using $c$ is $P_{\ell_j}$, for some $j\in Q$. Extending each $P_{\ell_i-1}$ to $P_{\ell_i}$ for $i\not\in Q$, we again have a copy of $F$ in $G$, a contradiction. \[clm2\] $G[B]$ does not contain a copy of $P_{\ell_k}$. Suppose that $G[B]$ contains a copy of $P_{\ell_k}$. Let $Q_0=\{1\le i\le k-1:\ell_i$ is even$\}$ and $Q_1=\{1\le i\le k-1:\ell_i$ is odd$\}$. We can find a path $a_{1}y_{1}a_{2}y_{2}\ldots a_{b}y_{b}$, where $X=\{y_1,\dots,y_b\}$ and $a_1,\dots,a_b\in A$. This path has $2b\ge\sum_{i\in Q_0}\ell_i+\sum_{i\in Q_1}(\ell_i-1)$ vertices, and hence contains vertex-disjoint paths $P_{\ell_i}$ for $i\in Q_0$, and $P_{\ell_i-1}$ for $i\in Q_1$. Extending each $P_{\ell_i-1}$ to $P_{\ell_i}$ for $i\in Q_1$, we have a copy of $F$ in $G$, a contradiction. \[clm3\] $B=\emptyset$. Suppose that $B\neq\emptyset$. If $\ell_k=2$, then note that Claims \[clm1\] and \[clm2\] imply that $G[Y]$ does not contain any edges. This means that $G$ is a subgraph of $H(n,F)$, and we have $e_p(G)<e_p(H(n,F))$, a contradiction. Now, let $\ell_k\ge 3$. We will derive a contradiction by constructing a new $F$-free graph $G'$ such that $e_{p}(G')>e_{p}(G)$. Note that by Claim \[clm1\], every vertex of $B$ has at most one neighbour in $G$ lying in $A$. Since Claim \[clm2\] implies that $G[B]$ is $P_{\ell_k}$-free, by Theorem \[EGthm1\], $G[B]$ contains at most $(\frac{\ell_k}{2}-1)|B|$ edges. Hence, there exists a vertex $v\in B$ with at most $\ell_k - 2$ neighbours in $G[B]$. Now in $G$, in view of Claim \[clm1\], one of the following holds. 1. $d_G(v)=1$, with the only neighbour of $v$, say $u$, lying in $A$. 2. $0\le d_G(v)\le \ell_k-2$, with all neighbours of $v$ lying in $B$. Delete all edges adjacent to $v$ in $G$, connect $v$ to all vertices of $X$, and denote the new graph by $G'$. We claim that $G'$ is also $F$-free. Indeed, if $G'$ contains a copy of $F$, then exactly one path of $F$, say $P_{\ell_j}$, must use an edge $vy_1$, for some $y_1\in X$. If $v$ is not an end-vertex of such a $P_{\ell_j}$, then the $P_{\ell_j}$ also contains another neighbour $y_2\in X$ of $v$. We can find a common neighbour $v'\in A$ of $y_1$ and $y_2$ in $G$ which is not used in the copy of $F$. If $v$ is an end-vertex of the $P_{\ell_j}$, then we take $v'\in A$ to be any neighbour of $y_1$ not in the copy of $F$. 
Replacing $v$ with $v'$ on the $P_{\ell_j}$, we obtain a copy of $F$ in $G$, a contradiction. We now show that $e_p(G') > e_p(G)$. Consider the effect of the transformation from $G$ to $G'$ on the degree sequence. The degrees of the vertices of $X$ have increased by one. The degree of $v$ has not decreased, since $d_{G'}(v)-d_G(v)\ge b-(\ell_k -2)\ge (2\lfloor\frac{\ell_k}{2}\rfloor-1)-(\ell_k-2)\ge 0$. The degrees of the neighbours of $v$ in $G$ have decreased by $1$. Since every vertex of $X$ has degree at least $0.65n$, the total increase in $e_p(G') - e_p(G)$ contributed by the vertices of $X$ is at least $$b((0.65n + 1)^p - (0.65n)^p) = bp(0.65n)^{p-1} + o(n^{p-1}).$$ Also, $d_b > 0.65n$ implies that $|B| < 0.35n$. If (i) holds, then Claim \[clm1\] implies that in $G$, $u$ has no neighbours in $A$, and hence $d_G(u)<0.35n+b<0.36n$. The decrease in $e_p(G') - e_p(G)$ contributed by $u$ is at most $$(0.36n)^p-(0.36n-1)^p=p(0.36n)^{p-1}+o(n^{p-1}).$$ Suppose that (ii) holds. Then in $G$, every neighbour of $v$ (lying in $B$) cannot have a neighbour in $A$, in view of Claim \[clm1\]. Hence, every neighbour of $v$ has degree at most $0.35n$ in $G$. The total decrease in $e_p(G') - e_p(G)$ contributed by the neighbours of $v$ is at most $$(\ell_k -2)((0.35n)^p - (0.35n - 1)^p) \le (\ell_k - 2)p(0.36n)^{p-1} + o(n^{p-1}).$$ Hence, $$e_p(G') - e_p(G)\ge p(b(1.8)^{p-1} - \ell_k + 2)(0.36)^{p-1}n^{p-1} + o(n^{p-1}) > 0.$$ By Claim \[clm3\], we may assume that $A=Y$ for the rest of the proof. \[clm4\] If $\ell_i$ is odd for all $i$, then $G[Y]$ contains at most one edge. If $\ell_i$ is even for some $i$, then $G[Y]$ does not contain an edge. The latter assertion follows immediately from Claim \[clm1\] and the fact that $A=Y$. Now, assume that $\ell_i$ is odd for all $i$. Then, since we do not have $\ell_1=\cdots=\ell_k=3$, we have $\ell_1\ge 5$. Assuming the contrary, Claim \[clm1\] implies that the subgraph $G[Y]$ is a set of at least two independent edges and isolated vertices. We consider three cases.\ *Case 1.* Let $q=\lfloor\frac{\ell_1}{2}\rfloor-1$. For $\ell_1=5$, let $c_2,c_3$ have a common neighbour $y_1\in X$, so that $c_1c_2y_1c_3c_4$ is a copy of $P_5$. For $\ell_1\ge 7$, let $c_2,c_3$ have distinct neighbours $y_1,y_q\in X$. Then, as before, we can find a copy of $P_{\ell_1}$ in the form $c_1c_2y_1a_2y_2\dots a_qy_qc_3c_4$, where $y_2,\dots,y_{q-1}\in X$ and $a_2,\dots,a_q\in Y$. In both cases, we can again find a path $a_{q+1}y_{q+1}a_{q+2}y_{q+2}\dots a_by_b$, where $X=\{y_1,\dots,y_b\}$ and $a_{q+1},a_{q+2},\dots,a_{b}\in Y\setminus\{c_1,c_2,c_3,c_4,a_2,\dots,a_q\}$. This path has $2(b-q)=\sum_{i=2}^k(\ell_i-1)$ vertices, and so contains vertex-disjoint copies of $P_{\ell_2-1},\dots,P_{\ell_k-1}$. As before, we can extend these to copies of $P_{\ell_2},\dots,P_{\ell_k}$ so that we have a copy of $F$ in $G$, a contradiction.\ *Case 2.* *$\ell_1=5$, and no two vertices from distinct edges in $G[Y]$ have a common neighbour in $X$.*\ We shall prove that $e_p(G)<e_p(H(n,F))$, which will contradict the choice of $G$. Recall that $H(n,F)$ is $K_b+E_{n-b}$ with an edge $uv$ added to the empty class, and note that $b\ge\lfloor\frac{5}{2}\rfloor+\lfloor\frac{3}{2}\rfloor-1=2$. Let $d_H(z)$ denote the degree of a vertex $z$ in $H(n,F)$. Clearly, for every $z\in X$ and every vertex $z'$ in the $K_b$, we have $d_G(z)\le n-1=d_H(z')$. 
Now, let $u_1u_2,\dots,u_{2s-1}u_{2s}$ be all the independent edges in $G[Y]$, for some $s\ge 2$, and let $\Gamma_i$ be the set of vertices in $X$ that are adjacent to at least one of $u_{2i-1}$ and $u_{2i}$, for $i=1,\dots,s$. We may assume that $|\Gamma_1|\ge\cdots\ge|\Gamma_s|\ge 1$. Note that the $\Gamma_i$ are pairwise disjoint subsets of $X$, so that $|\Gamma_2|\le \frac{b}{2}$. Also, since $u_3$ has a neighbour in $X$, we have $|\Gamma_1|\le b-1$. Hence, $d_G(u_j)\le b<d_H(u)=d_H(v)$ for $j=1,2$; $d_G(u_j)\le \frac{b}{2}+1\le b=d_H(z')$ for $j=3,\dots,2s$; and $d_G(z)\le b=d_H(z')$ for $z\in Y\setminus\{u_1,\dots,u_{2s}\}$ and $z'\neq u,v$ in the $E_{n-b}$. The degree sequence of $H(n,F)$ strictly majorises that of $G$, and therefore, we have $e_p(G)<e_p(H(n,F))$.\ *Case 3.* *$\ell_1\ge 7$, and all vertices of the edges of $G[Y]$ are connected to a single vertex of $X$.*\ Let $y_1\in X$ be this single vertex, and note that $\ell_1\ge 7$ implies that $b\ge 3$. We construct an $F$-free graph $G'$ on $n$ vertices such that $e_{p}(G')>e_{p}(G)$, which contradicts the choice of $G$. Let $y_{2} \in X\setminus\{y_1\}$, and let $Y^*$ denote the set of non-isolated vertices in $G[Y]$. Observe that $|Y^*|\geq 4$ and no vertex of $Y^*$ is adjacent to any vertex of $X\setminus\{y_1\}$. We construct $G'$ as follows: delete the $\frac{|Y^*|}{2}$ independent edges of $G[Y]$, and join $y_{2}$ to each vertex of $Y^*$. Similar to Claim \[clm3\], we see that $G'$ is an $F$-free graph. Indeed, if $G'$ contains a copy of $F$, then exactly one path, say $P_{\ell_j}$, must use an edge $vy_2$, for some $v\in Y^*$. If $v$ is not an end-vertex of such a $P_{\ell_j}$, then the other neighbour of $v$ in the $P_{\ell_j}$ is $y_1$. Now in $G$, we can find a common neighbour $v'\in Y$ of $y_1$ and $y_2$ which is not used in the copy of $F$. If $v$ is an end-vertex of the $P_{\ell_j}$, then we can take $v'\in Y$ to be any neighbour of $y_2$ not in the copy of $F$. Replacing $v$ with $v'$ on the $P_{\ell_j}$, we obtain a copy of $F$ in $G$, a contradiction. However, the degree sequence of $G'$ strictly majorises that of $G$, since the degree of $y_2$ has strictly increased, and all other degrees have not changed. Hence $e_p(G')>e_p(G)$, which is the required contradiction. By Claim \[clm4\], $G$ is a spanning subgraph of $H(n,F)$. Hence $e_{p}(G)<e_{p}(H(n,F))$, which contradicts the choice of $G$. The proof of Theorem \[LFthm\] is complete. Brooms {#broomsect} ====== In this section, we shall consider the function $\ex_p(n,B_{\ell,s})$, where $B_{\ell,s}$ is a broom graph, $p\ge 2$, $\ell\ge 4$, $s\ge 0$, and $n$ is sufficiently large. As we have already seen, Theorems \[SWthm1\] and \[SWthm2\] appear to suggest that the determination of the Turán function $\ex(n,B_{\ell,s})$ and the corresponding extremal graphs may be a complicated problem, in the sense that the potential results may be difficult to state. Somewhat surprisingly, we shall see here that the same problem for $\ex_p(n,B_{\ell,s})$, where $p\ge 2$, may possibly be more manageable. Since the case $\ell=4$ is covered in Proposition \[CYprp2\], we consider $\ell\ge 5$. Here, we will provide the answers for the cases $\ell=5,6,7$, and present a conjecture for the case of general $\ell$. The case $\ell=5$ turns out to be a rather special case. Although the case $s=0$ is covered by Theorem \[CYthm1\], we will include this case here since we will obtain some explicit lower bounds for $n$. \[B5sthm\] Let $p\ge 2$, $s\ge 0$, and $n>(2s+10)^2$. 
We have $$\ex_p(n,B_{5,s})= \left\{ \begin{array}{ll} e_p(H(n,5)) & \textup{\emph{if $s=0$,}}\\ e_p(K_1+M_{n-1}) & \textup{\emph{if $s\ge 1$.}} \end{array} \right.$$ Moreover, the unique extremal graph is $H(n,5)$ if $s=0$, and $K_1+M_{n-1}$ if $s\ge 1$. \[B6sthm\] Let $p\ge 2$, $s\ge 0$, and $n>(2s+12)^2$. We have $$\ex_p(n,B_{6,s})=e_p(H(n,6)).$$ Moreover, $H(n,6)$ is the unique extremal graph. \[B7sthm\] Let $p\ge 2$, $s\ge 0$, and $n>(3s+31)^2$. We have $$\ex_p(n,B_{7,s})=e_p(H(n,7)).$$ Moreover, $H(n,7)$ is the unique extremal graph. In view of Theorems \[B6sthm\] and \[B7sthm\], we believe that the following assertion may be true. \[Blsconj\] Let $p\ge 2$, $\ell\ge 6$, $s\ge 0$, and $n\ge n_0(\ell,s)$ be sufficiently large. We have $$\ex_p(n,B_{\ell,s})=e_p(H(n,\ell)).$$ Moreover, $H(n,\ell)$ is the unique extremal graph. That is, Conjecture \[Blsconj\] claims that if $n$ is sufficiently large, then $\ex_p(n,B_{\ell,s})$ is exactly the same as $\ex_p(n,P_{\ell})$, with the same unique extremal graph $H(n,\ell)$. If Conjecture \[Blsconj\] is true, then it can be considered as an extension to Theorem \[CYthm1\]. Before we prove Theorems \[B5sthm\] to \[B7sthm\], we first prove some auxiliary lemmas. We also prove a proposition which simplifies a possible proof of Conjecture \[Blsconj\]. \[Blslem1\] Let $p\ge 2$, $n_1,n_2\ge\ell$, and $n=n_1+n_2$. 1. If $\ell=5$, then $e_p(K_1+M_{n_1-1})+e_p(K_1+M_{n_2-1})<e_p(K_1+M_{n-1})$. 2. If $\ell\ge 5$, then $e_p(H(n_1,\ell))+e_p(H(n_2,\ell))<e_p(H(n,\ell))$. \(a) Let $\ell=5$. Then $$\begin{aligned} e_p(K_1+M_{n_1-1})+e_p(K_1+M_{n_2-1}) &\le (n_1-1)^p+(n_1-1)2^p\\ &\quad\quad\quad +(n_2-1)^p+(n_2-1)2^p\\ &< (n-1)^p+(n-2)2^p< e_p(K_1+M_{n-1}).\end{aligned}$$ (b) Let $\ell\ge 5$, and $b=\lfloor\frac{\ell}{2}\rfloor-1\ge 1$. Since $$\begin{aligned} e_p(H(n_1,\ell))+e_p(H(n_2,\ell)) &\le b(n_1-1)^p+(n_1-b-2)b^p+2(b+1)^p\\ &\quad\quad\quad +b(n_2-1)^p+(n_2-b-2)b^p+2(b+1)^p,\\ e_p(H(n,\ell)) &\ge b(n-1)^p+(n-b)b^p,\end{aligned}$$ it suffices to prove that $$b(n-1)^p>b[(n_1-1)^p+(n_2-1)^p]+4(b+1)^p-(b+4)b^p.\label{Blslemeq1}$$ Clearly, $n\ge 2\ell\ge 4b+4$. We have $$\begin{aligned} b(n-1)^p &>b(n-2)^p+bp(n-2)^{p-1}\\ &>b[(n_1-1)^p+(n_2-1)^p]+2b(4b+2)^{p-1},\end{aligned}$$ which implies (\[Blslemeq1\]), since it is easy to verify that $2b(4b+2)^{p-1}> 4(b+1)^p-(b+4)b^p$. \[Blslem2\] Let $p\ge 2$, $s\ge 0$, $\ell\ge 5$. Let $G^\ast$ be a graph on $h^\ast>0$ vertices with $\Delta(G^\ast)\le d=d(\ell,s)$. Let $n=h+h^\ast>(\ell+s+d)^2$ for some $h\ge \ell$. 1. If $\ell=5$, then $e_p(K_1+M_{h-1})+e_p(G^\ast)<e_p(K_1+M_{n-1})$. 2. If $\ell\ge 5$, then $e_p(H(h,\ell))+e_p(G^\ast)<e_p(H(n,\ell))$. \(a) Let $\ell=5$. We have $$\begin{aligned} e_p(K_1+M_{h-1})+e_p(G^\ast) &\le (h-1)^p+(h-1)2^p+h^\ast d^p,\\ e_p(K_1+M_{n-1}) &> (n-1)^p+(n-2)2^p.\end{aligned}$$ Since $(n-2)2^p\ge (h-1)2^p$, and $$(n-1)^p-(h-1)^p>(n-h)(n-1)^{p-1}>h^\ast d^{2p-2}\ge h^\ast d^p,$$ it follows that $e_p(K_1+M_{h-1})+e_p(G^\ast)<e_p(K_1+M_{n-1})$.\ (b) Let $\ell\ge 5$, and $b=\lfloor\frac{\ell}{2}\rfloor-1\ge 1$. Since $$\begin{aligned} e_p(H(h,\ell))+e_p(G^\ast) &\le b(h-1)^p+(h-b-2)b^p+2(b+1)^p+h^\ast d^p,\\ e_p(H(n,\ell)) &\ge b(n-1)^p+(n-b)b^p,\end{aligned}$$ and $(n-b)b^p>(h-b-2)b^p$, it suffices to prove that $$b[(n-1)^p-(h-1)^p]>2(b+1)^p+h^\ast d^p.$$ Clearly $\ell+s\ge\ell\ge 2b+2$. We have $$\begin{aligned} b[(n-1)^p-(h-1)^p] &>(n-h)(n-1)^{p-1}\ge h^\ast(\ell+s+d)^{2p-2}\\ &\ge h^\ast(2b+2+d)^p> h^\ast(2b+2)^p+h^\ast d^p\\ &>2(b+1)^p+h^\ast d^p,\end{aligned}$$ as required. 
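As a quick numerical sanity check of Lemma \[Blslem1\](b) (the computation is ours and is included only for illustration), take $\ell=7$ and $p=2$, so that $b=2$, and let $n_1=n_2=7$, so that $n=14$. Then $$e_2(H(7,7))=2\cdot 6^2+3\cdot 2^2+2\cdot 3^2=102 \qquad\textup{and}\qquad e_2(H(14,7))=2\cdot 13^2+10\cdot 2^2+2\cdot 3^2=396,$$ and indeed $2\,e_2(H(7,7))=204<396=e_2(H(14,7))$.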
Before we prove the next lemma, we make some definitions. Let $C$ be a connected graph, and $v,x\in V(C)$. - For $y\in V(C-\{v,x\})$, the edge $e=xy\in E(C)$ is an *$x$-pendent edge* if $x$ is the only neighbour of $y$ in $C$. - Let $y,y'\in V(C-\{v,x\})$ where $xy,xy',yy'\in E(C)$, and $y,y'$ do not have any other neighbours in $C$. The subgraph $T=C[\{x,y,y'\}]$ is an *$x$-pendent triangle*. - Let $z,y,y'\in V(C-\{v,x\})$ where $xy,xy',zy,zy',yy'\in E(C)$, and $z,y,y'$ do not have any other neighbours in $C$. The subgraph $D=C[\{x,z,y,y'\}]$ is an *$x$-pendent diamond*. - For some $t\ge 2$, let $z,y_1,\dots,y_t\in V(C-\{v,x\})$ where $xy_k,zy_k\in E(C)$ (resp. $xy_k,zy_k,$ $xz\in E(C)$) for every $1\le k\le t$, and $z,y_1,\dots,y_t$ do not have any other neighbours in $C$. The subgraph $S=C[\{x,z,y_1,\dots, y_t\}]$ (resp. $S^+=C[\{x,z,y_1,\dots, y_t\}]$) is an *$x$-pendent spindle* (resp. *$x$-pendent spindle$^+$*). \[Blslem3\] Let $p\ge 2$, $s\ge 0$, and $\ell\ge 5$. Let $C$ be a connected $B_{\ell,s}$-free graph, and $v\in V(C)$ where $d_C(v)=\Delta(C)\ge \ell+s-1$. Let $C'$ be a graph that can be obtained from $C$ with any of the following operations. 1. Delete an $x$-pendent edge $e=xy$, and add the edge $vy$. 2. Delete the three edges of an $x$-pendent triangle $T=C[\{x,y,y'\}]$, and add the edges $vy,vy'$. 3. Delete the five edges of an $x$-pendent diamond $D=C[\{x,z,y,y'\}]$, and add the edges $vz,vy,vy'$. 4. Delete the $2t$ edges of an $x$-pendent spindle $S=C[\{x,z,y_1,\dots, y_t\}]$ (for some $t\ge 2$), and add the edges $vz,vy_1,\dots,vy_t$. 5. Delete the $2t+1$ edges of an $x$-pendent spindle$^+$ $S^+=C[\{x,z,y_1,\dots, y_t\}]$ (for some $t\ge 2$), and add the edges $vz,vy_1,\dots,vy_t$. Then $C'$ is also $B_{\ell,s}$-free, $d_{C'}(v)=\Delta(C')\ge \ell+s-1$, and $e_p(C)<e_p(C')$. Clearly we have $d_{C'}(v)=\Delta(C')\ge \ell+s-1$, since in the transformation from $C$ to $C'$, the only vertex whose degree has increased is $v$. Next, let $V_1$ be the set of neighbours of $v$ in $C$. Suppose that $C'$ contains a copy of $B_{\ell,s}$, and we are in case (iv) or (v). Then for some $u_1,\dots,u_m\in \{z,y_1,\dots,y_t\}$ where $1\le m\le t+1$, the edges $vu_1,\dots,vu_m$ must be used by the $B_{\ell,s}$, with $u_1,\dots,u_m$ being leaves. Note that $|V_1\cup\{v,y_1,\dots,y_t,z\}|\ge \ell+s+t+1$, and this means that there are vertices $w_1,\dots,w_m\in V_1$ which are not used in the $B_{\ell,s}$. Thus we obtain a copy of $B_{\ell,s}$ in $C$ by replacing $vu_1,\dots,vu_m$ with $vw_1,\dots,vw_m$, a contradiction. Similar arguments hold if we are in the other three cases, in view of $|V_1\cup\{v,y\}|\ge \ell+s+1$; $|V_1\cup\{v,y,y'\}|\ge \ell+s+2$; $|V_1\cup\{v,z,y,y'\}|\ge \ell+s+3$ for (i), (ii), (iii), respectively. Therefore, $C'$ is $B_{\ell,s}$-free. It remains to prove that $e_p(C)<e_p(C')$ for each case.\ (i) Going from $C$ to $C'$, we see that the degree of $v$ is increased by $1$, and the degree of $x$ is decreased by $1$. Since $d_C(v)\ge d_C(x)\ge 2$, we have $$\begin{aligned} e_p(C')-e_p(C) &= (d_{C}(v)+1)^p-d_C(v)^p+(d_{C}(x)-1)^p-d_C(x)^p\\ &\ge\sum_{1\le j\le p\textup{, $j$ odd}}{p\choose j}(d_C(v)^{p-j}-d_C(x)^{p-j})+{p\choose 2}(d_C(v)^{p-2}+d_C(x)^{p-2})\\ &>0.\end{aligned}$$ (ii) Going from $C$ to $C'$, we see that the degree of $v$ is increased by $2$, the degree of $x$ is decreased by $2$, and the degrees of $y,y'$ are each decreased from $2$ to $1$.
Since $d_C(v)\ge d_C(x)\ge 3$ and $d_C(v)\ge \ell+s-1\ge 4$, we have $$\begin{aligned} e_p(C')-e_p(C) &= (d_{C}(v)+2)^p-d_C(v)^p+(d_{C}(x)-2)^p-d_C(x)^p+2(1^p-2^p)\\ &\ge\sum_{1\le j\le p\textup{, $j$ odd}}{p\choose j}(d_C(v)^{p-j}-d_C(x)^{p-j})2^j\\ &\quad\quad\quad+{p\choose 2}(d_C(v)^{p-2}+d_C(x)^{p-2})2^2+2(1-2^p)\\ &>4(4^{p-2}+3^{p-2})-2^{p+1}\ge 0.\end{aligned}$$ (iii) Going from $C$ to $C'$, we see that the degree of $v$ is increased by $3$, the degree of $x$ is decreased by $2$, the degree of $z$ is decreased from $2$ to $1$, and the degrees of $y,y'$ are each decreased from $3$ to $1$. Since $d_C(v)\ge d_C(x)\ge 3$ and $d_C(v)\ge \ell+s-1\ge 4$, we have $$\begin{aligned} e_p(C')-e_p(C) &= (d_{C}(v)+3)^p-d_C(v)^p+(d_{C}(x)-2)^p-d_C(x)^p\\ &\quad\quad\quad+(1^p-2^p)+2(1^p-3^p)\\ &\ge p(3d_C(v)^{p-1}-2d_C(x)^{p-1})+{p\choose 2}(9d_C(v)^{p-2}+4d_C(x)^{p-2})\\ &\quad\quad\quad+\sum_{3\le j\le p\textup{, $j$ odd}}{p\choose j}(d_C(v)^{p-j}3^j-d_C(x)^{p-j}2^j)+3-2^p-2\cdot 3^p\\ &\ge 2\cdot 4^{p-1}+9\cdot 4^{p-2}+4\cdot 3^{p-2}+3-2^p-2\cdot 3^p>0,\end{aligned}$$ since $2\cdot 4^{p-1}+9\cdot 4^{p-2}+3>4^p+3>2\cdot 3^p$ and $4\cdot 3^{p-2}\ge 2^p$.\ (v) Going from $C$ to $C'$, we see that the degree of $v$ is increased by $t+1$, the degree of $x$ is decreased by $t+1$, the degree of $z$ is decreased from $t+1$ to $1$, and the degrees of $y_1,\dots,y_t$ are each decreased from $2$ to $1$. Since $d_C(v)\ge d_C(x)\ge t+1$, we have $$\begin{aligned} e_p(C')-e_p(C) &= (d_{C}(v)+t+1)^p-d_C(v)^p+(d_{C}(x)-t-1)^p-d_C(x)^p\\ &\quad\quad\quad+1^p-(t+1)^p+t(1^p-2^p)\\ &>\sum_{1\le j\le p\textup{, $j$ odd}}{p\choose j}(d_C(v)^{p-j}-d_C(x)^{p-j})(t+1)^j\\ &\quad\quad\quad+{p\choose 2}(d_C(v)^{p-2}+d_C(x)^{p-2})(t+1)^2-(t+1)^p-t\cdot 2^p\\ &\ge 2(t+1)^p-(t+1)^p-t\cdot 2^p=(t+1)^p-t\cdot 2^p>0.\end{aligned}$$ (iv) This follows from (v), since we can obtain the graph $C''$ from $C$ by adding the edge $xz$, so that $e_p(C)<e_p(C'')<e_p(C')$. \[Blsprp\] Conjecture \[Blsconj\] holds if the following statement is true: Let $p\ge 2$, $\ell\ge 6$, and $s\ge 0$. Then there exists $d=d(\ell,s)\ge\ell+s$ such that, for all connected $B_{\ell,s}$-free graph $C$ with $c\ge d=d(\ell,s)$ vertices and $\Delta(C)\ge d-1$, we have $e_p(C)\le e_p(H(c,\ell))$, with equality if and only if $C=H(c,\ell)$. Similarly, Theorem \[B5sthm\] holds if the following statement is true: Let $p\ge 2$ and $s\ge 0$. Then for all connected $B_{5,s}$-free graph $C$ with $c\ge s+5$ vertices and $\Delta(C)\ge s+4$, we have $$\label{B5seq1} e_p(C)\le \left\{ \begin{array}{ll} e_p(H(c,5)) & \textup{\emph{if $s=0$,}}\\ e_p(K_1+M_{c-1}) & \textup{\emph{if $s\ge 1$,}} \end{array} \right.$$ with equality if and only if $C=H(c,5)$ for $s=0$, and $C=K_1+M_{c-1}$ for $s\ge 1$. Suppose that the first assertion in Proposition \[Blsprp\] holds. We prove that Conjecture \[Blsconj\] holds for $n>(\ell+s+d)^2$. Clearly the graph $H(n,\ell)$ is $B_{\ell,s}$-free. Now, let $G$ be a $B_{\ell,s}$-free graph on $n$ vertices and $G\neq H(n,\ell)$. Then the assertion of Conjecture \[Blsconj\] follows if we can prove that $e_p(G)<e_p(H(n,\ell))$. Suppose first that $\Delta(G)\le d-2$. Then since $n-1>(d+1)^2$, we have $$\begin{aligned} (n-1)^p &= (n-1)(n-1)^{p-1} > (n-1)(d+1)^{2p-2}\\ & > (n-1)[d^{2p-2}+(2p-2)d^{2p-3}]\\ & > (n-1)d^{2p-2}+2d^{2p-1}> n(d-2)^p,\end{aligned}$$ so that $$e_p(G)\le n(d-2)^p<(n-1)^p<e_p(H(n,\ell)).\label{Blsprpeq1}$$ Now, let $\Delta(G)\ge d-1$. 
Let $G^\ast\subset G$ be the subgraph consisting of the components with maximum degree at most $d-2$, so that $\Delta(G^\ast)\le d-2$. We have $G=C_1\cup\cdots\cup C_t\cup G^\ast$ for some $t\ge 1$, where $C_1,\dots,C_t$ are the components of $G$ with maximum degree at least $d-1$. Let $c_i=|V(C_i)|\ge d$. By the assertion in Proposition \[Blsprp\], for every $1\le i\le t$, we have $$e_p(C_i)\le e_p(H(c_i,\ell)),\label{Blsprpeq2}$$ with equality if and only if $C_i=H(c_i,\ell)$. We apply (\[Blsprpeq2\]) to every $C_i$, and then apply Lemma \[Blslem1\](b) repeatedly $t-1$ times, and finally Lemma \[Blslem2\](b), if $|V(G^\ast)|>0$. We find that $e_p(G)<e_p(H(n,\ell))$, since $G\neq H(n,\ell)$ by assumption. By a similar argument, using Lemmas \[Blslem1\](a) and \[Blslem2\](a), and setting $\ell=5$, $d=s+5$, we see that the second assertion implies Theorem \[B5sthm\] for $n>(2s+10)^2$. Note that the analogous inequality to (\[Blsprpeq1\]) would be $$e_p(G)\le n(s+3)^p<(n-1)^p<e_p(H(n,5))<e_p(K_1+M_{n-1}).$$ We are now ready to prove Theorems \[B5sthm\], \[B6sthm\] and \[B7sthm\]. The arguments in all three proofs are similar. In outline, it suffices to verify the statements in Proposition \[Blsprp\] for $\ell=5,6,7$. Let $C$ be a connected $B_{\ell,s}$-free graph on $c$ vertices as defined in the proposition. We may assume that $C$ does not contain any of the pendent subgraphs, otherwise we may apply Lemma \[Blslem3\] to obtain another $B_{\ell,s}$-free graph $C'$ with $e_p(C)<e_p(C')$, so that we could consider the argument for $C'$ instead of $C$. Under this assumption, we then show that $e_p(C)\le e_p(K_1+M_{c-1})$ for $\ell=5$, $s\ge 1$, and $e_p(C)\le e_p(H(c,\ell))$ otherwise. In each case, equality occurs if and only if $C$ is the corresponding extremal graph. It suffices to verify the second statement in Proposition \[Blsprp\]. Let $C$ be a $B_{5,s}$-free connected graph with $c\ge s+5$ vertices, and $v\in V(C)$ with $d_C(v)=\Delta(C)\ge s+4$. By Lemma \[Blslem3\], we may assume that $C$ does not contain an $x$-pendent edge $xy$ where $x,y\in V(C-v)$. Otherwise, we may delete $xy$ and add $vy$ to obtain the $B_{5,s}$-free graph $C'$ with $e_p(C)<e_p(C')$ and $d_{C'}(v)=\Delta(C')$, and consider the graph $C'$ instead of $C$. For $i\ge 1$, let $V_i$ be the set of vertices of $C$ at distance $i$ from $v$. Note that $|V_1|=d_C(v)\ge s+4$. Also, we have the following properties. 1. $V_i=\emptyset$ for $i\ge 3$. 2. $C[V_2]$ does not contain an edge. 3. Every vertex of $V_2$ has exactly one neighbour in $V_1$. 4. $C[V_1]$ contains at most one edge if $s=0$, and $C[V_1]$ does not contain a copy of the path $P_3$ if $s\ge 1$. Otherwise, suppose that (i) is false. Then we have a copy of $B_{5,s}$, where the path $P_5$ in $B_{5,s}$ is $x_3x_2x_1vy_1$ with $x_i\in V_i$ for $i=1,2,3$, $y_1\in V_1$, and the remaining $s$ vertices of the $B_{5,s}$ are all in $V_1\setminus\{x_1,y_1\}$. Properties (ii) to (iv) also hold for similar reasons. If $V_2\neq\emptyset$, then we must have an edge $xy\in E(C)$ with $x\in V_1$ and $y\in V_2$. It follows from (i) to (iii) that $xy$ is an $x$-pendent edge. Therefore, we may assume that $V_2=\emptyset$. From (iv), we can now easily see that $C\subset H(c,5)$ if $s=0$; and $C\subset K_1+M_{c-1}$ if $s\ge 1$, since $C[V_1]$ consists of independent edges and isolated vertices. Consequently (\[B5seq1\]) holds, as well as the cases of equality. It suffices to verify the first statement in Proposition \[Blsprp\] for $\ell=6$, with $d=s+6$. 
Let $C$ be a connected graph with $c\ge s+6$ vertices, and $v\in V(C)$ with $d_C(v)=\Delta(C)\ge s+5$. We may assume that $C$ does not contain an $x$-pendent edge or an $x$-pendent triangle, where $x\in V(C-v)$. Otherwise in either case, we may obtain the $B_{6,s}$-free graph $C'$ as described in Lemma \[Blslem3\] with $e_p(C)<e_p(C')$ and $d_{C'}(v)=\Delta(C')$, and consider the graph $C'$ instead of $C$. For $i\ge 1$, let $V_i$ be the set of vertices of $C$ at distance $i$ from $v$. Note that $|V_1|=d_C(v)\ge s+5$. Also, we have the following properties. 1. $V_i=\emptyset$ for $i\ge 4$. 2. $C[V_i]$ does not contain a copy of the path $P_{5-i}$, for $i=1,2,3$. 3. Every vertex of $V_3$ has exactly one neighbour in $V_2$. Otherwise if any of (i) to (iii) is false, then we can easily find a copy of $B_{6,s}$ with centre $v$. By (i) to (iii), we may assume that $V_3=\emptyset$, otherwise we have an $x$-pendent edge $xy\in E(C)$ where $x\in V_2$ and $y\in V_3$. Next, suppose that we have an edge $yy'\in C[V_2]$. If $y$ and $y'$ have distinct neighbours in $V_1$, then we can again easily find a copy of $B_{6,s}$ with centre $v$ in $C$. It follows from (ii) with $i=2$ that $y$ and $y'$ must each have exactly one neighbour in $V_1$, which is a common neighbour $x\in V_1$, and therefore $C[\{x,y,y'\}]$ is an $x$-pendent triangle. Thus, we may assume that $C[V_2]$ does not contain an edge. Since no $x$-pendent edge $xy$ exists where $x\in V_1$, $y\in V_2$, this means that every vertex of $V_2$ must have at least two neighbours in $V_1$. This implies that any two vertices $y,y'\in V_2$ cannot have a common neighbour in $V_1$, otherwise we can again easily find a copy of $B_{6,s}$ in $C$. Therefore, if $V_2\neq\emptyset$ with $V_2=\{y_1,\dots,y_q\}$ for some $q\ge 1$, then for $1\le k\le q$, if $\Gamma_k\subset V_1$ is the set of neighbours of $y_k$ in $V_1$, we have $|\Gamma_k|\ge 2$, and the sets $\Gamma_k$ must be disjoint. Let $X=V_1\setminus\bigcup_{k=1}^q\Gamma_k$. Note that if we have an edge $e$ in $C[V_1]$, then $e$ must either be within $X$, or $e$ connects the two vertices of some $\Gamma_k$ with $|\Gamma_k|=2$, otherwise we can again find a copy of $B_{6,s}$ in $C$. Together with (ii) with $i=1$, we see that $C[V_1\cup V_2]$ does not contain a copy of the path $P_4$ (whether $V_2\neq\emptyset$ or $V_2=\emptyset$). Therefore, we can easily deduce that $C[V_1\cup V_2]$ is a subgraph whose components are stars and triangles. Let $C^\ast$ be the graph obtained from $C$ by adding all edges from $v$ to $V_2$. Note that by replacing $C^\ast-v$ with the star of the same order, we obtain the graph $H(c,6)$. We shall show that this operation does not decrease the value of $e_p$. Consider the following operations.\ (A) Suppose that $C^\ast-v$ contains two star components, say with centres $x$ and $y$, and the leaves at $y$ are $y_1,\dots,y_m$ for some $m\ge 0$. We delete the edges $yy_1,\dots,yy_m$ and add the edges $xy,xy_1,\dots,xy_m$. The increase in the value of $e_p$ is $$(d_{C^\ast}(x)+m+1)^p-d_{C^\ast}(x)^p+2^p-(m+1)^p>2^p>0.$$ \(B) Suppose that $C^\ast-v$ contains at least two triangle components, say with vertices $x_1,\dots,x_{3m}$ for some $m\ge 2$. We delete the edges of the triangles, and connect $x_1$ to $x_2,\dots,x_{3m}$. The increase in the value of $e_p$ is $$(3m)^p-3^p+(3m-1)(2^p-3^p)=(m^p-3m)3^p+(3m-1)2^p>0.$$ \(C) Suppose that $C^\ast-v$ contains a star and a triangle component, exactly one of each. 
Let $x$ be the centre of the star, and note that since $|V(C^\ast-v)|=c-1\ge s+5\ge 5$, we have $d_{C^\ast}(x)\ge 2$. We delete the edges of the triangle and connect $x$ to its three vertices. The increase in the value of $e_p$ is $$(d_{C^\ast}(x)+3)^p-d_{C^\ast}(x)^p+3(2^p-3^p).$$ If $p=2$, then the increase is $6d_{C^\ast}(x)-6>0$. If $p\ge 3$ and $d_{C^\ast}(x)=2$, then the increase is $5^p+2^{p+1}-3^{p+1}>0$. Otherwise if $p\ge 3$ and $d_{C^\ast}(x)\ge 3$, then the increase is at least $$3pd_{C^\ast}(x)^{p-1}+3(2^p-3^p)\ge 3^{p+1}+3(2^p-3^p)>0.$$ Therefore where possible, we apply operation (B), followed by successive applications of operation (A), and finally operation (C). We obtain $e_p(C)\le e_p(C^\ast)\le e_p(H(c,6))$. Equality occurs if and only if $C=C^\ast$ and $C^\ast-v$ is itself a star. That is, if and only if $C=H(c,6)$. It suffices to verify the first statement in Proposition \[Blsprp\] for $\ell=7$, with $d=2s+24$. Let $C$ be a connected graph with $c\ge 2s+24$ vertices, and $v\in V(C)$ with $d_C(v)=\Delta(C)\ge 2s+23$. We may assume that $C$ does not contain an $x$-pendent edge, triangle, diamond, spindle, or spindle$^+$, where $x\in V(C-v)$. Otherwise, we may obtain the $B_{7,s}$-free graph $C'$ as described in Lemma \[Blslem3\] with $e_p(C)<e_p(C')$ and $d_{C'}(v)=\Delta(C')$, and consider the graph $C'$ instead of $C$. For $i\ge 1$, let $V_i$ be the set of vertices of $C$ at distance $i$ from $v$. Note that $|V_1|=d_C(v)\ge 2s+23$. Also, we have the following properties. 1. $V_i=\emptyset$ for $i\ge 5$. 2. $C[V_i]$ does not contain a copy of the path $P_{6-i}$, for $i=1,2,3,4$. 3. Every vertex of $V_4$ has exactly one neighbour in $V_3$. Otherwise if any of (i) to (iii) is false, then we can easily find a copy of $B_{7,s}$ with centre $v$. Proceeding exactly the same way as we did in Theorem \[B6sthm\], by avoiding a copy of $B_{7,s}$, or an $x$-pendent edge or triangle for some $x\in V(C-v)$, we can obtain the following facts. - We may assume that $V_4=\emptyset$. - We may assume that $C[V_3]$ does not contain an edge, and that every vertex of $V_3$ has at least two neighbours in $V_2$. If $V_3\neq\emptyset$, with $V_3=\{y_1,\dots,y_q\}$ for some $q\ge 1$, and $\Gamma_k\subset V_2$ is the set of neighbours of $y_k$ in $V_2$, we have $|\Gamma_k|\ge 2$, and the sets $\Gamma_k$ must be disjoint. For $X=V_2\setminus\bigcup_{k=1}^q\Gamma_k$, if we have an edge $e$ in $C[V_2]$, then $e$ must either be within $X$, or $e$ connects the two vertices of some $\Gamma_k$ with $|\Gamma_k|=2$. Now for any $\Gamma_k$, any two vertices $y,y'\in \Gamma_k$ cannot have two distinct neighbours in $V_1$, otherwise we can find a copy of $B_{7,s}$. Thus, the vertices of $\Gamma_k$ must have one common neighbour $x_k\in V_1$, so that $C[\Gamma_k\cup \{x_k,y_k\}]$ is either an $x_k$-pendent diamond or an $x_k$-pendent spindle. Therefore, we may further assume that $V_3=\emptyset$. By (ii) with $i=2$, we see that the components of $C[V_2]$ are stars and triangles. Suppose that we have a star component in $C[V_2]$ with centre $z$ and leaves $y_1,\dots,y_t$, for some $t\ge 2$. Then no two of the $y_k$ can have distinct neighbours in $V_1$, otherwise we can find a copy of $B_{7,s}$. Thus, the vertices $y_k$ must have one common neighbour $x\in V_1$. If $z$ has a neighbour $x'\in V_1\setminus\{x\}$, then we have a copy of $B_{7,s}$ with centre $v$, where the $P_7$ is $y_1xy_2zx'vx''$ for some $x''\in V_1\setminus\{x,x'\}$, and the $s$ leaves are in $V_1\setminus\{x,x',x''\}$. 
Therefore, $x$ must be the unique neighbour of $z$ in $V_1$, and $C[\{x,z,y_1,\dots,y_t\}]$ is an $x$-pendent spindle$^+$. Thus, we may assume that the components of $C[V_2]$ are triangles, single edges, and isolated vertices. We consider the behaviour of the edges that connect these components to $V_1$, keeping in mind that we should avoid creating a copy of $B_{7,s}$. - If $y_1,y_2,y_3\in V_2$ form a triangle in $C[V_2]$, then $y_1,y_2,y_3$ must have a unique common neighbour in $V_1$, and they do not have any other neighbours in $V_1$. - Let $y_1y_2$ be a single edge in $C[V_2]$. If $y_1,y_2$ have exactly one common neighbour $x\in V_1$, then exactly one of $y_1,y_2$ has at least one neighbour in $V_1\setminus\{x\}$, otherwise $C[\{x,y_1,y_2\}]$ is an $x$-pendent triangle or there is a copy of $B_{7,s}$. If $y_1,y_2$ have exactly two common neighbours $x_1,x_2\in V_1$, then both $y_1,y_2$ cannot have a neighbour in $V_1\setminus\{x_1,x_2\}$. Also, $y_1,y_2$ cannot have at least three common neighbours in $V_1$. The remaining possibility is that $y_1, y_2$ have no common neighbour in $V_1$. - If $y$ is an isolated vertex in $C[V_2]$, then $y$ must have at least two neighbours in $V_1$, otherwise there is an $x$-pendent edge $xy$, for some $x\in V_1$. Let $\tilde{C}=(C-v)-E(C[V_1])$, i.e., $\tilde{C}$ is the subgraph on $V_1\cup V_2$ with the edges of $C[V_1]$ deleted. The components of $\tilde{C}$ are then the subgraphs shown in Figure 2(a); we refer to the subgraphs illustrated as *Type 1* to *Type 5*.

*Figure 2(a). The possible components of $\tilde{C}$: Type 1 to Type 5*
\advance\hgt\yoff\rlap{\kern-4\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=-4.5\unit\advance\dx--4.5\unit\divide\dx by\psep \dy=1.85\unit\advance\dy-0.5\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.5\unit\dx=-4.5\unit\advance\dx--4.5\unit \divide\dx by\nd\dy=1.85\unit\advance\dy-0.5\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern-4.5\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}} {{{\dx=-4.3\unit\advance\dx--4\unit\divide\dx by\psep \dy=1.84\unit\advance\dy-0.5\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.5\unit\dx=-4.3\unit\advance\dx--4\unit \divide\dx by\nd\dy=1.84\unit\advance\dy-0.5\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern-4\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=-3.7\unit\advance\dx--4\unit\divide\dx by\psep \dy=1.84\unit\advance\dy-0.5\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.5\unit\dx=-3.7\unit\advance\dx--4\unit \divide\dx by\nd\dy=1.84\unit\advance\dy-0.5\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern-4\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}} {{\rlap{\kern-4.9\unit\raise0.4\unit\hbox{{\copy\ptbox}}}}}{{\rlap{\kern-5.7\unit\raise0.4\unit\hbox{{\copy\ptbox}}}}}{{\rlap{\kern-5.3\unit\raise0.6\unit\hbox{{\copy\ptbox}}}}} {{\rlap{\kern-5.3\unit\raise1.85\unit\hbox{{\copy\ptbox}}}}} {{{\dx=-5.7\unit\advance\dx--4.9\unit\divide\dx by\psep \dy=0.4\unit\advance\dy-0.4\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.4\unit\dx=-5.7\unit\advance\dx--4.9\unit \divide\dx by\nd\dy=0.4\unit\advance\dy-0.4\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern-4.9\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=-5.3\unit\advance\dx--4.9\unit\divide\dx by\psep \dy=0.6\unit\advance\dy-0.4\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.4\unit\dx=-5.3\unit\advance\dx--4.9\unit \divide\dx by\nd\dy=0.6\unit\advance\dy-0.4\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern-4.9\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=-5.3\unit\advance\dx--5.7\unit\divide\dx by\psep \dy=0.6\unit\advance\dy-0.4\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.4\unit\dx=-5.3\unit\advance\dx--5.7\unit \divide\dx by\nd\dy=0.6\unit\advance\dy-0.4\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern-5.7\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=-5.3\unit\advance\dx--4.9\unit\divide\dx by\psep \dy=1.85\unit\advance\dy-0.4\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy 
by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.4\unit\dx=-5.3\unit\advance\dx--4.9\unit \divide\dx by\nd\dy=1.85\unit\advance\dy-0.4\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern-4.9\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=-5.3\unit\advance\dx--5.7\unit\divide\dx by\psep \dy=1.85\unit\advance\dy-0.4\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.4\unit\dx=-5.3\unit\advance\dx--5.7\unit \divide\dx by\nd\dy=1.85\unit\advance\dy-0.4\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern-5.7\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=-5.3\unit\advance\dx--5.3\unit\divide\dx by\psep \dy=1.85\unit\advance\dy-0.6\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.6\unit\dx=-5.3\unit\advance\dx--5.3\unit \divide\dx by\nd\dy=1.85\unit\advance\dy-0.6\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern-5.3\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}} {\rlap{\kern-6.59\unit\raise1.88\unit\hbox{{\footnotesize$V_1$}}}}{\rlap{\kern-7.2\unit\raise2.28\unit\hbox{{\footnotesize (a)}}}} {\rlap{\kern-6.59\unit\raise0.38\unit\hbox{{\footnotesize$V_2$}}}}{\rlap{\kern-6.59\unit\raise-0.4\unit\hbox{{\footnotesize Type:}}}}{\rlap{\kern-5.36\unit\raise-0.4\unit\hbox{{\footnotesize$1$}}}}{\rlap{\kern-4.17\unit\raise-0.4\unit\hbox{{\footnotesize$2$}}}} {\rlap{\kern-3.12\unit\raise-0.4\unit\hbox{{\footnotesize$3$}}}}{\rlap{\kern-1.77\unit\raise-0.4\unit\hbox{{\footnotesize$4$}}}}{\rlap{\kern-0.27\unit\raise-0.4\unit\hbox{{\footnotesize$5$}}}} {{\setbox\dotb\hbox{{{\font\dotf=cmr10 scaled 400\dotf.}}}\xoff=-.5\wd\dotb \wd\dotb=0pt\yoff=-.5\ht\dotb\psep=.6\ht\dotb}}{{{\dx=6.7\unit\advance\dx-2.7\unit\divide\dx by\psep \dy=0\unit\advance\dy-0\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0\unit\dx=6.7\unit\advance\dx-2.7\unit \divide\dx by\nd\dy=0\unit\advance\dy-0\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern2.7\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=6.7\unit\advance\dx-2.7\unit\divide\dx by\psep \dy=1\unit\advance\dy-1\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=1\unit\dx=6.7\unit\advance\dx-2.7\unit \divide\dx by\nd\dy=1\unit\advance\dy-1\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern2.7\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}} {{{\dx=2.2\unit\advance\dx-2.7\unit\divide\dx by\psep \dy=0\unit\advance\dy-0\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}\ndx=\nd{\dx=2.2\unit\advance\dx-2.2\unit\divide\dx by\psep \dy=0.5\unit\advance\dy-0\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd 
sp\repeat} \ifnum\ndx>\nd\nd=\ndx\fi\dx=2.2\unit\advance\dx-2.7\unit\dy=0\unit \advance\dy-0\unit\dxx=2.2\unit\advance\dxx-2.7\unit\dyy=0.5\unit\advance \dyy-0\unit\advance\dxx-2\dx\advance\dyy-2\dy\divide\dxx by\nd\divide\dyy by\nd\advance\dx.25\dxx\advance\dy.25\dyy\divide\dx by\nd\divide\dy by\nd \multiply\nd by2\dx=100\dx\dy=100\dy\dxx=100\dxx\dyy=100\dyy\divide\dxx by\nd \divide\dyy by\nd\hgt=0\unit\raise\yoff\rlap{\kern2.7\unit\kern\xoff \raise\hgt\copy\dotb\loop\ifnum\nd>0\advance\nd-1\advance\hgt0.01\dy \kern0.01\dx\raise\hgt\copy\dotb\advance\dx\dxx\advance\dy\dyy\repeat}}}{{{\dx=2.2\unit\advance\dx-2.7\unit\divide\dx by\psep \dy=1\unit\advance\dy-1\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}\ndx=\nd{\dx=2.2\unit\advance\dx-2.2\unit\divide\dx by\psep \dy=0.5\unit\advance\dy-1\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat} \ifnum\ndx>\nd\nd=\ndx\fi\dx=2.2\unit\advance\dx-2.7\unit\dy=1\unit \advance\dy-1\unit\dxx=2.2\unit\advance\dxx-2.7\unit\dyy=0.5\unit\advance \dyy-1\unit\advance\dxx-2\dx\advance\dyy-2\dy\divide\dxx by\nd\divide\dyy by\nd\advance\dx.25\dxx\advance\dy.25\dyy\divide\dx by\nd\divide\dy by\nd \multiply\nd by2\dx=100\dx\dy=100\dy\dxx=100\dxx\dyy=100\dyy\divide\dxx by\nd \divide\dyy by\nd\hgt=1\unit\raise\yoff\rlap{\kern2.7\unit\kern\xoff \raise\hgt\copy\dotb\loop\ifnum\nd>0\advance\nd-1\advance\hgt0.01\dy \kern0.01\dx\raise\hgt\copy\dotb\advance\dx\dxx\advance\dy\dyy\repeat}}} {{{\dx=7.2\unit\advance\dx-6.7\unit\divide\dx by\psep \dy=0\unit\advance\dy-0\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}\ndx=\nd{\dx=7.2\unit\advance\dx-7.2\unit\divide\dx by\psep \dy=0.5\unit\advance\dy-0\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat} \ifnum\ndx>\nd\nd=\ndx\fi\dx=7.2\unit\advance\dx-6.7\unit\dy=0\unit \advance\dy-0\unit\dxx=7.2\unit\advance\dxx-6.7\unit\dyy=0.5\unit\advance \dyy-0\unit\advance\dxx-2\dx\advance\dyy-2\dy\divide\dxx by\nd\divide\dyy by\nd\advance\dx.25\dxx\advance\dy.25\dyy\divide\dx by\nd\divide\dy by\nd \multiply\nd by2\dx=100\dx\dy=100\dy\dxx=100\dxx\dyy=100\dyy\divide\dxx by\nd \divide\dyy by\nd\hgt=0\unit\raise\yoff\rlap{\kern6.7\unit\kern\xoff \raise\hgt\copy\dotb\loop\ifnum\nd>0\advance\nd-1\advance\hgt0.01\dy \kern0.01\dx\raise\hgt\copy\dotb\advance\dx\dxx\advance\dy\dyy\repeat}}}{{{\dx=7.2\unit\advance\dx-6.7\unit\divide\dx by\psep \dy=1\unit\advance\dy-1\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}\ndx=\nd{\dx=7.2\unit\advance\dx-7.2\unit\divide\dx by\psep \dy=0.5\unit\advance\dy-1\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat} \ifnum\ndx>\nd\nd=\ndx\fi\dx=7.2\unit\advance\dx-6.7\unit\dy=1\unit \advance\dy-1\unit\dxx=7.2\unit\advance\dxx-6.7\unit\dyy=0.5\unit\advance \dyy-1\unit\advance\dxx-2\dx\advance\dyy-2\dy\divide\dxx by\nd\divide\dyy by\nd\advance\dx.25\dxx\advance\dy.25\dyy\divide\dx by\nd\divide\dy by\nd \multiply\nd 
by2\dx=100\dx\dy=100\dy\dxx=100\dxx\dyy=100\dyy\divide\dxx by\nd \divide\dyy by\nd\hgt=1\unit\raise\yoff\rlap{\kern6.7\unit\kern\xoff \raise\hgt\copy\dotb\loop\ifnum\nd>0\advance\nd-1\advance\hgt0.01\dy \kern0.01\dx\raise\hgt\copy\dotb\advance\dx\dxx\advance\dy\dyy\repeat}}} {{{\dx=6.7\unit\advance\dx-2.7\unit\divide\dx by\psep \dy=1.5\unit\advance\dy-1.5\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=1.5\unit\dx=6.7\unit\advance\dx-2.7\unit \divide\dx by\nd\dy=1.5\unit\advance\dy-1.5\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern2.7\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=6.7\unit\advance\dx-2.7\unit\divide\dx by\psep \dy=2.5\unit\advance\dy-2.5\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=2.5\unit\dx=6.7\unit\advance\dx-2.7\unit \divide\dx by\nd\dy=2.5\unit\advance\dy-2.5\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern2.7\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}} {{{\dx=2.2\unit\advance\dx-2.7\unit\divide\dx by\psep \dy=1.5\unit\advance\dy-1.5\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}\ndx=\nd{\dx=2.2\unit\advance\dx-2.2\unit\divide\dx by\psep \dy=2\unit\advance\dy-1.5\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat} \ifnum\ndx>\nd\nd=\ndx\fi\dx=2.2\unit\advance\dx-2.7\unit\dy=1.5\unit \advance\dy-1.5\unit\dxx=2.2\unit\advance\dxx-2.7\unit\dyy=2\unit\advance \dyy-1.5\unit\advance\dxx-2\dx\advance\dyy-2\dy\divide\dxx by\nd\divide\dyy by\nd\advance\dx.25\dxx\advance\dy.25\dyy\divide\dx by\nd\divide\dy by\nd \multiply\nd by2\dx=100\dx\dy=100\dy\dxx=100\dxx\dyy=100\dyy\divide\dxx by\nd \divide\dyy by\nd\hgt=1.5\unit\raise\yoff\rlap{\kern2.7\unit\kern\xoff \raise\hgt\copy\dotb\loop\ifnum\nd>0\advance\nd-1\advance\hgt0.01\dy \kern0.01\dx\raise\hgt\copy\dotb\advance\dx\dxx\advance\dy\dyy\repeat}}}{{{\dx=2.2\unit\advance\dx-2.7\unit\divide\dx by\psep \dy=2.5\unit\advance\dy-2.5\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}\ndx=\nd{\dx=2.2\unit\advance\dx-2.2\unit\divide\dx by\psep \dy=2\unit\advance\dy-2.5\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat} \ifnum\ndx>\nd\nd=\ndx\fi\dx=2.2\unit\advance\dx-2.7\unit\dy=2.5\unit \advance\dy-2.5\unit\dxx=2.2\unit\advance\dxx-2.7\unit\dyy=2\unit\advance \dyy-2.5\unit\advance\dxx-2\dx\advance\dyy-2\dy\divide\dxx by\nd\divide\dyy by\nd\advance\dx.25\dxx\advance\dy.25\dyy\divide\dx by\nd\divide\dy by\nd \multiply\nd by2\dx=100\dx\dy=100\dy\dxx=100\dxx\dyy=100\dyy\divide\dxx by\nd \divide\dyy by\nd\hgt=2.5\unit\raise\yoff\rlap{\kern2.7\unit\kern\xoff \raise\hgt\copy\dotb\loop\ifnum\nd>0\advance\nd-1\advance\hgt0.01\dy \kern0.01\dx\raise\hgt\copy\dotb\advance\dx\dxx\advance\dy\dyy\repeat}}} {{{\dx=7.2\unit\advance\dx-6.7\unit\divide\dx by\psep \dy=1.5\unit\advance\dy-1.5\unit\divide\dy by\psep 
\multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}\ndx=\nd{\dx=7.2\unit\advance\dx-7.2\unit\divide\dx by\psep \dy=2\unit\advance\dy-1.5\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat} \ifnum\ndx>\nd\nd=\ndx\fi\dx=7.2\unit\advance\dx-6.7\unit\dy=1.5\unit \advance\dy-1.5\unit\dxx=7.2\unit\advance\dxx-6.7\unit\dyy=2\unit\advance \dyy-1.5\unit\advance\dxx-2\dx\advance\dyy-2\dy\divide\dxx by\nd\divide\dyy by\nd\advance\dx.25\dxx\advance\dy.25\dyy\divide\dx by\nd\divide\dy by\nd \multiply\nd by2\dx=100\dx\dy=100\dy\dxx=100\dxx\dyy=100\dyy\divide\dxx by\nd \divide\dyy by\nd\hgt=1.5\unit\raise\yoff\rlap{\kern6.7\unit\kern\xoff \raise\hgt\copy\dotb\loop\ifnum\nd>0\advance\nd-1\advance\hgt0.01\dy \kern0.01\dx\raise\hgt\copy\dotb\advance\dx\dxx\advance\dy\dyy\repeat}}}{{{\dx=7.2\unit\advance\dx-6.7\unit\divide\dx by\psep \dy=2.5\unit\advance\dy-2.5\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}\ndx=\nd{\dx=7.2\unit\advance\dx-7.2\unit\divide\dx by\psep \dy=2\unit\advance\dy-2.5\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat} \ifnum\ndx>\nd\nd=\ndx\fi\dx=7.2\unit\advance\dx-6.7\unit\dy=2.5\unit \advance\dy-2.5\unit\dxx=7.2\unit\advance\dxx-6.7\unit\dyy=2\unit\advance \dyy-2.5\unit\advance\dxx-2\dx\advance\dyy-2\dy\divide\dxx by\nd\divide\dyy by\nd\advance\dx.25\dxx\advance\dy.25\dyy\divide\dx by\nd\divide\dy by\nd \multiply\nd by2\dx=100\dx\dy=100\dy\dxx=100\dxx\dyy=100\dyy\divide\dxx by\nd \divide\dyy by\nd\hgt=2.5\unit\raise\yoff\rlap{\kern6.7\unit\kern\xoff \raise\hgt\copy\dotb\loop\ifnum\nd>0\advance\nd-1\advance\hgt0.01\dy \kern0.01\dx\raise\hgt\copy\dotb\advance\dx\dxx\advance\dy\dyy\repeat}}} {{\rlap{\kern2.7\unit\raise0.4\unit\hbox{{\copy\ptbox}}}}}{{\rlap{\kern3.5\unit\raise0.4\unit\hbox{{\copy\ptbox}}}}}{{\rlap{\kern3.1\unit\raise0.6\unit\hbox{{\copy\ptbox}}}}} {{\rlap{\kern4.3\unit\raise0.4\unit\hbox{{\copy\ptbox}}}}}{{\rlap{\kern5.1\unit\raise0.4\unit\hbox{{\copy\ptbox}}}}}{{\rlap{\kern4.7\unit\raise0.6\unit\hbox{{\copy\ptbox}}}}} {\rlap{\kern3.68\unit\raise0.41\unit\hbox{$\cdots$}}} {{\rlap{\kern3.9\unit\raise2\unit\hbox{{\copy\ptbox}}}}} {{{\dx=3.9\unit\advance\dx-2.7\unit\divide\dx by\psep \dy=2\unit\advance\dy-0.4\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.4\unit\dx=3.9\unit\advance\dx-2.7\unit \divide\dx by\nd\dy=2\unit\advance\dy-0.4\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern2.7\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=3.9\unit\advance\dx-3.5\unit\divide\dx by\psep \dy=2\unit\advance\dy-0.4\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.4\unit\dx=3.9\unit\advance\dx-3.5\unit \divide\dx by\nd\dy=2\unit\advance\dy-0.4\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern3.5\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 
\advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=3.9\unit\advance\dx-3.1\unit\divide\dx by\psep \dy=2\unit\advance\dy-0.6\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.6\unit\dx=3.9\unit\advance\dx-3.1\unit \divide\dx by\nd\dy=2\unit\advance\dy-0.6\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern3.1\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=3.9\unit\advance\dx-4.3\unit\divide\dx by\psep \dy=2\unit\advance\dy-0.4\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.4\unit\dx=3.9\unit\advance\dx-4.3\unit \divide\dx by\nd\dy=2\unit\advance\dy-0.4\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern4.3\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=3.9\unit\advance\dx-5.1\unit\divide\dx by\psep \dy=2\unit\advance\dy-0.4\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.4\unit\dx=3.9\unit\advance\dx-5.1\unit \divide\dx by\nd\dy=2\unit\advance\dy-0.4\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern5.1\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=3.9\unit\advance\dx-4.7\unit\divide\dx by\psep \dy=2\unit\advance\dy-0.6\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.6\unit\dx=3.9\unit\advance\dx-4.7\unit \divide\dx by\nd\dy=2\unit\advance\dy-0.6\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern4.7\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}} {{{\dx=3.5\unit\advance\dx-2.7\unit\divide\dx by\psep \dy=0.4\unit\advance\dy-0.4\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.4\unit\dx=3.5\unit\advance\dx-2.7\unit \divide\dx by\nd\dy=0.4\unit\advance\dy-0.4\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern2.7\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=3.1\unit\advance\dx-3.5\unit\divide\dx by\psep \dy=0.6\unit\advance\dy-0.4\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.4\unit\dx=3.1\unit\advance\dx-3.5\unit \divide\dx by\nd\dy=0.6\unit\advance\dy-0.4\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern3.5\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=2.7\unit\advance\dx-3.1\unit\divide\dx by\psep \dy=0.4\unit\advance\dy-0.6\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.6\unit\dx=2.7\unit\advance\dx-3.1\unit \divide\dx by\nd\dy=0.4\unit\advance\dy-0.6\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern3.1\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}} 
{{{\dx=5.1\unit\advance\dx-4.3\unit\divide\dx by\psep \dy=0.4\unit\advance\dy-0.4\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.4\unit\dx=5.1\unit\advance\dx-4.3\unit \divide\dx by\nd\dy=0.4\unit\advance\dy-0.4\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern4.3\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=4.7\unit\advance\dx-5.1\unit\divide\dx by\psep \dy=0.6\unit\advance\dy-0.4\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.4\unit\dx=4.7\unit\advance\dx-5.1\unit \divide\dx by\nd\dy=0.6\unit\advance\dy-0.4\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern5.1\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=4.3\unit\advance\dx-4.7\unit\divide\dx by\psep \dy=0.4\unit\advance\dy-0.6\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.6\unit\dx=4.3\unit\advance\dx-4.7\unit \divide\dx by\nd\dy=0.4\unit\advance\dy-0.6\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern4.7\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}} {{\rlap{\kern5.5\unit\raise0.5\unit\hbox{{\copy\ptbox}}}}}{{\rlap{\kern5.9\unit\raise0.5\unit\hbox{{\copy\ptbox}}}}}{{\rlap{\kern6.7\unit\raise0.5\unit\hbox{{\copy\ptbox}}}}} {\rlap{\kern6.08\unit\raise0.41\unit\hbox{$\cdots$}}} {{\rlap{\kern5.9\unit\raise2\unit\hbox{{\copy\ptbox}}}}}{{\rlap{\kern6.3\unit\raise2\unit\hbox{{\copy\ptbox}}}}} {{{\dx=5.9\unit\advance\dx-5.5\unit\divide\dx by\psep \dy=2\unit\advance\dy-0.5\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.5\unit\dx=5.9\unit\advance\dx-5.5\unit \divide\dx by\nd\dy=2\unit\advance\dy-0.5\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern5.5\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=5.9\unit\advance\dx-5.9\unit\divide\dx by\psep \dy=2\unit\advance\dy-0.5\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.5\unit\dx=5.9\unit\advance\dx-5.9\unit \divide\dx by\nd\dy=2\unit\advance\dy-0.5\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern5.9\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=5.9\unit\advance\dx-6.7\unit\divide\dx by\psep \dy=2\unit\advance\dy-0.5\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=0.5\unit\dx=5.9\unit\advance\dx-6.7\unit \divide\dx by\nd\dy=2\unit\advance\dy-0.5\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern6.7\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=5.5\unit\advance\dx-6.3\unit\divide\dx by\psep \dy=0.5\unit\advance\dy-2\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd 
sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=2\unit\dx=5.5\unit\advance\dx-6.3\unit \divide\dx by\nd\dy=0.5\unit\advance\dy-2\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern6.3\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=5.9\unit\advance\dx-6.3\unit\divide\dx by\psep \dy=0.5\unit\advance\dy-2\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=2\unit\dx=5.9\unit\advance\dx-6.3\unit \divide\dx by\nd\dy=0.5\unit\advance\dy-2\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern6.3\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}}{{{\dx=6.7\unit\advance\dx-6.3\unit\divide\dx by\psep \dy=0.5\unit\advance\dy-2\unit\divide\dy by\psep \multiply\dx by\dx\multiply\dy by\dy\advance\dx\dy\nd=1\advance\dx-1sp \loop\ifnum\dx>0\advance\dx-\nd sp\advance\nd1\advance\dx-\nd sp\repeat}{{\nd=\nd\hgt=2\unit\dx=6.7\unit\advance\dx-6.3\unit \divide\dx by\nd\dy=0.5\unit\advance\dy-2\unit\divide\dy by\nd \advance\hgt\yoff\rlap{\kern6.3\unit\kern\xoff\loop\ifnum\nd>1\advance\nd-1 \advance\hgt\dy\kern\dx\raise\hgt\copy\dotb\repeat}}}}} {\rlap{\kern1.81\unit\raise0.38\unit\hbox{{\footnotesize$V_2$}}}}{\rlap{\kern1.2\unit\raise2.28\unit\hbox{{\footnotesize (b)}}}} {\rlap{\kern1.81\unit\raise1.88\unit\hbox{{\footnotesize$V_1$}}}} {{\rlap{\kern0\unit\raise-1.4\unit\hbox{\raise.8ex{\hbox to 0pt{\hss{$\scriptstyle{\textup{\normalsize Figure 2. (a) Types 1 to 5 subgraphs; (b) How Type 1 and Type 5 subgraphs intersect}}$}\hss}}}}}}$$\ For two such subgraphs $S,S'$, we see that, in order to avoid creating a $B_{7,s}$ with centre $v$, we would need to have $V(S)\cap V(S')\cap V_1=\emptyset$ in most cases. The only exceptions are when $S$ and $S'$ are of Type 1 and meeting at one vertex in $V_1$, or they are of Type 5 with order $3$ and meeting at exactly two vertices in $V_1$. Indeed, we may have at least two subgraphs meeting in $V_1$ in these two exceptional cases, as shown in Figure 2(b). We next eliminate these two possibilities. Suppose first that we have $x\in V_1$, with exactly $m\ge 2$ Type 1 subgraphs meeting at $x$. Let $y_1,\dots,y_{3m}\in V_2$ be the vertices of the triangles in $V_2$. We delete the $4m$ edges of the Type 1 subgraphs, and add the edges $vy_1,\dots,vy_{3m}$. Then the degree of $v$ is increased by $3m$, the degree of $x$ is decreased by $3m$, and the degrees of the $y_k$ are decreased from $3$ to $1$. Since $d_C(v)\ge d_C(x)>3m$, the increase in the value of $e_p$ is $$\begin{aligned} & (d_C(v)+3m)^p-d_C(v)^p+(d_C(x)-3m)^p-d_C(x)^p+3m(1^p-3^p)\\ >\:\:& \sum_{1\le j\le p\textup{, $j$ odd}}{p\choose j}(d_C(v)^{p-j}-d_C(x)^{p-j})(3m)^j\\ &\quad\quad\quad+{p\choose 2}(d_C(v)^{p-2}+d_C(x)^{p-2})(3m)^2-3m\cdot 3^p\\ >\:\:& 18m^2(3m)^{p-2}-m\cdot 3^{p+1}=2m^p\cdot 3^p-m\cdot 3^{p+1}\ge 4m\cdot 3^p-m\cdot 3^{p+1}>0.\end{aligned}$$ Next, suppose that we have $x_1,x_2\in V_1$, with exactly $m\ge 2$ Type 5 subgraphs of order $3$ meeting at $x_1,x_2$. Let $y_1,\dots,y_m\in V_2$ be the vertices of these subgraphs in $V_2$. Note that the neighbours of $x_1$ (resp. $x_2$) are precisely $v$ and the $y_k$, and possibly $x_2$ (resp. $x_1$), otherwise there is a copy of $B_{7,s}$. Thus $d_C(x_1),d_C(x_2)\in \{m+1, m+2\}$. Suppose first that $m\le s+1$. Note that $d_C(v)\ge 2s+23>2(m+2)\ge d_C(x_1)+d_C(x_2)$. 
We delete the $2m$ edges of the Type 5 subgraphs, and add the edges $vy_1,\dots,vy_m$. Then the degree of $v$ is increased by $m$, the degrees of $x_1,x_2$ are decreased by $m$, and the degrees of the $y_k$ are decreased from $2$ to $1$. The increase in the value of $e_p$ is $$\begin{aligned} & (d_C(v)+m)^p-d_C(v)^p+(d_C(x_1)-m)^p-d_C(x_1)^p+(d_C(x_2)-m)^p\\ &\quad\quad\quad -d_C(x_2)^p+m(1^p-2^p)\\ >\:\:& \sum_{1\le j\le p\textup{, $j$ odd}}{p\choose j}(d_C(v)^{p-j}-d_C(x_1)^{p-j}-d_C(x_2)^{p-j})m^j\\ &\quad\quad\quad +{p\choose 2}(d_C(v)^{p-2}+d_C(x_1)^{p-2}+d_C(x_2)^{p-2})m^2-m\cdot 2^p\\ >\:\:& ((2m)^{p-2}+2m^{p-2})m^2-m\cdot 2^p=(2^{p-2}+2)m^p-m\cdot 2^p> 0.\end{aligned}$$ Secondly, let $m\ge s+2$. Suppose that there is an edge $x'y'$ where either $x'\in V_1\setminus\{x_1,x_2\}$ and $y'\in (V_1\cup V_2)\setminus\{x_1,x_2,y_1,\dots,y_m\}$, or $x'=x_2$ and $y'\in V_1\setminus\{x_1,x_2\}$. Then there is a copy of $B_{7,s}$ with centre $x_1$, where the $P_7$ is $y'x'vx_2y_2x_1y_1$ or $wvy'x_2y_2x_1y_1$ for some $w\in V_1\setminus\{x_1,x_2,y'\}$, and the $s$ leaves are $y_3,\dots,y_{s+2}$. Similarly, we cannot have an edge $x_1y'$ for every $y'\in V_1\setminus\{x_1,x_2\}$. It follows that all the edges of $C$ are those connecting $v$ to $V_1$, and all edges between $\{x_1,x_2\}$ and $\{y_1,\dots,y_m\}$, and possibly $x_1x_2$. Now, let $C'$ be the graph obtained by deleting the edges $x_2y_1,\dots,x_2y_m$ and adding the edges $vy_1,\dots,vy_m$. Since $d_C(v)\ge d_C(x_2)$, the increase in the value of $e_p$ is $$\begin{aligned} & (d_C(v)+m)^p-d_C(v)^p+(d_C(x_2)-m)^p-d_C(x_2)^p\\ \ge\:\:& \sum_{1\le j\le p\textup{, $j$ odd}}{p\choose j}(d_C(v)^{p-j}-d_C(x_2)^{p-j})m^j+{p\choose 2}(d_C(v)^{p-2}+d_C(x_2)^{p-2})m^2> 0.\end{aligned}$$ Moreover, we see that the degree sequence of $C'$ is majorised by the degree sequence of $K_2+E_{c-2}$, by identifying $\{v,x_1\}$ with $K_2$, and the remaining vertices of $C'$ with $E_{c-2}$. It follows that $e_p(C)<e_p(C')\le e_p(K_2+E_{c-2})<e_p(H(c,7))$. Therefore, we may assume that no two of the subgraphs as shown in Figure 2(a) meet in $V_1$. For such a subgraph $S$, let $[S]$ denote the component of $C-v$ containing $S$. We consider the structure of $[S]$, so as to avoid a copy of $B_{7,s}$. Clearly if $S$ is of Type 1, then $[S]=S$. If $S$ is of Type 2, 3 or 4, then either $[S]=S$, or $|V(S)\cap V_1|=2$, and the edge connecting the two vertices of $V(S)\cap V_1$ is in $[S]$. Finally, let $S$ be of Type 5, with $V(S)\cap V_1=\{x_1,\dots,x_t\}$ for some $t\ge 2$. It is easy to check that any additional vertices and edges in $[S]$ are as follows. If $t=2$, then we may possibly have the edge $x_1x_2$, and either $x_1$ and $x_2$ are connected to another vertex of $V_1\setminus\{x_1,x_2\}$, or only one of $x_1,x_2$ is connected to some other vertices of $V_1\setminus\{x_1,x_2\}$. If $t=3$, then we may possibly have any number of the edges $x_1x_2, x_1x_3,x_2x_3$, or none of these three edges and only one of $x_1,x_2,x_3$ is connected to some other vertices of $V_1\setminus\{x_1,x_2,x_3\}$. If $t\ge 4$, then we may either have exactly one edge in $\{x_1,\dots,x_t\}$, or no edge in $\{x_1,\dots,x_t\}$ and only one of $x_1,\dots,x_t$ is connected to some other vertices of $V_1\setminus\{x_1,\dots,x_t\}$. We see that all such components $[S]$ can be classified into exactly one of three types: 1. A subgraph of $K_4$. 2. A $H(c',5)$ for some $5\le c'\le c-1$ (i.e., a star on $c'$ vertices with an edge connecting two leaves). 3. 
A double star with at least five vertices (i.e., two disjoint stars with an edge connecting their centres). A star itself is a special case of a double star. Moreover, by (ii) with $i=1$, we see that if $Y=V_1\setminus\bigcup V([S])$, where the union is taken over all such subgraphs $S$ in Figure 2(a), then $C[Y]$ is $P_5$-free. It is easy to show that the components of $C[Y]$ must also be one of the types (I), (II) or (III). Consequently, if we connect $v$ to all vertices of $V_2$ to obtain the graph $C^\ast$, then the components of $C^\ast-v$ are of the types (I), (II) or (III). Note that by replacing $C^\ast-v$ with the graph $H(c-1,5)$, we obtain the graph $H(c,7)$. We shall show that this operation does not decrease the value of $e_p$. Consider the following operations.\ (A) Suppose that $C^\ast-v$ has a double star component with at least five vertices, which is not a star. Let the centres be $x,y$, and the leaves at $y$ be $y_1,\dots,y_m$, for some $m\ge 1$. We may assume that $d_{C^\ast}(x)\ge d_{C^\ast}(y)=m+2$. We obtain the star with the same order by deleting the edges $yy_1,\dots,yy_m$, and adding the edges $xy_1,\dots,xy_m$. If $m\ge 2$, then the increase in the value of $e_p$ is $$\begin{aligned} (d_{C^\ast}(x)+m)^p-d_{C^\ast}(x)^p+2^p-(m+2)^p &> pd_{C^\ast}(x)^{p-1}m-(m+2)^p\\ &\ge 2(m+2)^{p-1}m-(m+2)^p\ge 0.\end{aligned}$$ If $m=1$, then we have $d_{C^\ast}(x)\ge 4$. In this case, the increase in the value of $e_p$ is $$(d_{C^\ast}(x)+1)^p-d_{C^\ast}(x)^p+2^p-3^p > pd_{C^\ast}(x)^{p-1}+2^p-3^p\ge 2\cdot 4^{p-1}+2^p-3^p>0.$$ (B) Suppose that we have two components $C_1,C_2\subset C^\ast-v$ with $c_1$ and $c_2$ vertices, where $c_1\ge c_2\ge 5$, and $C_1$ (resp. $C_2$) is either a star or the graph $H(c_1,5)$ (resp. $H(c_2,5)$). If $C_1$ is a star, we add an edge to create $H(c_1,5)$, and likewise for $C_2$, so that we have the graphs $H(c_1,5)$ and $H(c_2,5)$. We then delete all edges of the $H(c_2,5)$, and connect all of its vertices to the universal vertex of the $H(c_1,5)$, thus obtaining the subgraph $H(c_1+c_2,5)$. The increase in the value of $e_p$ is at least $$(c_1+c_2)^p-c_1^p+2^p-c_2^p+2(2^p-3^p)>pc_1^{p-1}c_2-2\cdot 3^p> 0.$$ Let $R$ be the subgraph of $C^\ast-v$ consisting of the components which are subgraphs of $K_4$. We have $d_{C^\ast}(y)\le 4$ for all $y\in V(R)$. Let $|V(R)|=r$.\ (C) Suppose $r\ge 16$. We replace $R$ with the star of order $r$, with centre $x\in V(R)$. The increase in the value of $e_p$ is $$r^p-d_{C^\ast}(x)^p+\sum_{y\in V(R-x)}(2^p-d_{C^\ast}(y)^p) > r^p-r\cdot 4^p\ge r(16^{p-1}-4^p)\ge 0.$$ \(D) Suppose that $1\le r\le 15$, and the subgraph $C^\ast-(\{v\}\cup V(R))$ is $H(c_1,5)$. Recall that $|V(C^\ast-v)|=c-1\ge 2s+23\ge 23$, and thus $c_1\ge 8$. We delete all edges of $R$, and connect all vertices of $R$ to the universal vertex of the $H(c_1,5)$, to form a copy of $H(c-1,5)$. Since $c_1+r=c-1$, the increase in the value of $e_p$ is $$\begin{aligned} (c-1)^p-c_1^p+\sum_{y\in V(R)}(2^p-d_{C^\ast}(y)^p) &\ge (c_1+r)^p-c_1^p+r(2^p-4^p)\\ &> pc_1^{p-1}r-r\cdot 4^p\ge r(2\cdot 8^{p-1}-4^p)\ge 0.\end{aligned}$$ Therefore where possible, we apply operation (C), then apply operation (A) to all double stars in $C^\ast[V_1]$, followed by successive applications of operation (B), and finally operation (D). We obtain $e_p(C) \le e_p(C^\ast) \le e_p(H(c,7))$. Equality occurs if and only if $C = C^\ast$ and $C^\ast-v$ is the graph $H(c-1,5)$. That is, if and only if $C = H(c,7)$. The proof of Theorem \[B7sthm\] is now complete. 
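The inequality used in operation (A) is elementary but easy to mis-copy, so a small numerical sanity check may be helpful; the sketch below is purely illustrative and is not part of the original argument. It scans the increase $(d+m)^p-d^p+2^p-(m+2)^p$ over small parameter ranges satisfying the conditions stated above ($d\ge m+2$, with $d\ge 4$ when $m=1$); the function name and the ranges scanned are our own choices.

```python
# Illustrative check (not part of the proof): the change in e_p produced by
# operation (A), moving the m leaves at y onto the other centre x of a double
# star, is (d+m)^p - d^p + 2^p - (m+2)^p, where d = d_{C*}(x) >= m+2
# (and d >= 4 in the case m = 1).  The scan below confirms positivity on a
# small grid of integer parameters.

def increase(d, m, p):
    return (d + m) ** p - d ** p + 2 ** p - (m + 2) ** p

violations = []
for p in range(2, 7):                       # small integer exponents p >= 2
    for m in range(1, 31):                  # number of leaves moved
        d_min = 4 if m == 1 else m + 2      # conditions used in the text
        for d in range(d_min, 61):
            if increase(d, m, p) <= 0:
                violations.append((d, m, p))

print("violations found:", violations)      # expected output: an empty list
```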
Acknowledgements {#acknowledgements .unnumbered} ================ Yongxin Lan, Zhongmei Qin, and Yongtang Shi are partially supported by National Natural Science Foundation of China (Nos. 11371021, 11771221), and Natural Science Foundation of Tianjin (No. 17JCQNJC00300). Henry Liu is partially supported by the Startup Fund of One Hundred Talent Program of SYSU. Henry Liu would also like to thank the Chern Institute of Mathematics, Nankai University, for their generous hospitality. He was able to carry out part of this research during his visit there.
--- abstract: 'Measurements of diffractive phenomena observed at HERA are reviewed. A short introduction to the theoretical background is presented. The review focuses on the current experimental directions and discusses the exclusive production of vector mesons, the deep inelastic structure of diffraction and complementary information from jet structures. Emphasis is placed on the current sources of background and the experimental uncertainties.' author: - Anthony T Doyle title: 'Diffraction at HERA: experimental perspective' --- [GLAS-PPE/96–01]{} *Talk presented at the Workshop on HERA Physics,* *“Proton, Photon and Pomeron Structure”,* *Durham, September 1995.* Introduction: maps of the pomeron ================================= At the last Durham workshop on HERA physics, HERA was heralded as the new frontier for QCD. In the proceedings from that workshop, there was one theoretical contribution on “Partons and QCD effects in the pomeron” [@ingelman]. Experimentally, the large rapidity gap events in deep inelastic scattering were yet to be discovered and the first preliminary results on photoproduced vector mesons were just starting to appear. Two years later, this workshop focuses on “proton, photon and pomeron structure”: the inclusion of the word “pomeron” in the title of the workshop reflects a series of diffractive measurements which have been made in the intervening period at HERA. This talk is therefore an opportunity to discuss “[*Measurements*]{} of partons and QCD effects in the pomeron”, based on results from the H1 and ZEUS collaborations. The diffractive processes studied are of the form: $$e~(k)~+~ p~(P) \rightarrow e'~(k')~+~p'(P')~+~X,$$ where the photon dissociates into the system $X$ and the outgoing proton, $p'$, remains intact, corresponding to single dissociation. The measurements are made as a function of the photon virtuality, $ Q^2 \equiv -q^2=-(k~-~k')^2, $ the centre-of-mass energy of the virtual-photon proton system, $W^2=(q~+~P)^2$, the mass of the dissociated system, $X$, denoted by $M^2$ and the four-momentum transfer at the proton vertex, given by $t = (P~-~P')^2$. The subject of diffraction is far from new: diffractive processes have been measured and studied for more than thirty years [@goul]. Their relation to the corresponding total cross sections at high energies has been successfully interpreted via the introduction of a single pomeron trajectory with a characteristic $W^2$ and $t$ dependence [@dl]. The high-energy behaviour of the total cross sections is described by a power-law dependence on $W^2$: $$\begin{aligned} \sigma \sim (W^2)^\epsilon\end{aligned}$$ where $W$ is measured in GeV, $\epsilon = \alpha(0) - 1$ and $\alpha(0)$ is the pomeron intercept. The slow rise of hadron-hadron total cross sections with increasing energy indicates that the value of $\epsilon \simeq 0.08$ i.e. the total cross sections increase as $W^{0.16}$, although the latest $p\bar{p}$ data from CDF at $\sqrt{s}=1800$ GeV indicate $\epsilon \simeq 0.11$ [@CDF1]. The optical theorem relates the total cross sections to the elastic, and hence diffractive, scattering amplitude at the same $W^2$: $$\begin{aligned} \frac{d\sigma}{dt} \sim (W^2)^{2(\epsilon-\alpha'\cdot |t|)}\end{aligned}$$ where $\epsilon - \alpha' \cdot |t| = \alpha(t) - 1 $ and $\alpha'= 0.25~$GeV$^{-2}$ reflects the shrinkage of the diffractive peak with increasing $W^2$.
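As a rough numerical illustration of these two power laws (a sketch with representative numbers, not fitted values), the snippet below evaluates the growth of the total cross section for $\epsilon = 0.08$ and $0.11$, and the increase of the exponential $t$ slope implied by $\alpha' = 0.25$ GeV$^{-2}$, between a fixed-target energy of $W \simeq 10$ GeV and a HERA energy of $W \simeq 100$ GeV.

```python
import math

# Representative soft-pomeron parameters quoted in the text (illustration only).
alpha_prime = 0.25            # GeV^-2
W_low, W_high = 10.0, 100.0   # GeV: typical fixed-target and HERA energies

# sigma_tot ~ (W^2)^epsilon, so the growth factor between the two energies is:
for epsilon in (0.08, 0.11):
    growth = (W_high ** 2 / W_low ** 2) ** epsilon
    print(f"epsilon = {epsilon:.2f}: sigma_tot grows by a factor of {growth:.2f}")

# dsigma/dt ~ (W^2)^(2(epsilon - alpha'|t|)) = (W^2)^(2 epsilon) exp(-2 alpha' ln(W^2) |t|),
# so the exponential t slope grows by 4 alpha' ln(W_high/W_low) ("shrinkage"):
delta_b = 4.0 * alpha_prime * math.log(W_high / W_low)
print(f"increase of the t slope between the two energies: {delta_b:.1f} GeV^-2")
```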
Whilst these Regge-based models give a unified description of all pre-HERA diffractive data, this approach is not fundamentally linked to the underlying theory of QCD. It has been anticipated that at HERA energies if any of the scales $Q^2$, $M^2$ or $t$ becomes larger than the QCD scale $\Lambda^2$, then it may be possible to apply perturbative QCD (pQCD) techniques, which predict changes to this power law behaviour, corresponding to an increase in the effective value of $\epsilon$ and a decrease of $\alpha'$. This brings us from the regime of dominance of the slowly-rising “soft” pomeron to the newly emergent “hard” behaviour and the question of how a transition may occur between the two. Precisely where the Regge-based approach breaks down or where pQCD may be applicable is open to experimental question. The emphasis is therefore on the internal (in)consistency of a wide range of measurements of diffractive and total cross sections. As an experimentalist navigating around the various theoretical concepts of the pomeron, it is sometimes difficult to see which direction to take and what transitions occur where (Figure 1(a)). However, from an experimental perspective, the directions are clear, even if the map is far from complete (Figure 1(b)). The HERA collider allows us to observe a broad range of diffractive phenomena at the highest values of $W^2$. What is new is that we have the ability to observe the variation of these cross sections at specific points on the $M^2$ scale, from the $\rho^0$ up to the $\Upsilon$ system as discussed in section 2.1. Similarly, the production cross section can be explored as a function of $Q^2$, using a virtual photon probe. The high energy available provides a large rapidity span of $\simeq$ 10 units ($\Delta(\eta) \sim \ln(W^2)$). The observation of a significant fraction of events ($\simeq 10\%$) with a large rapidity gap between the outgoing proton, $p'$, and the rest of the final state, $X$, in deep inelastic scattering (DIS) has led to measurements of the internal structure of the pomeron. These results are discussed in section 2.2. Similar studies of events with high-$p_T$ jets and a large rapidity gap have also been used to provide complementary information on this structure. Also, the observation of rapidity gaps between jets, corresponding to large $t$ diffraction, is presented in section 2.3. Finally, a first analysis of the leading proton spectrometer data where the diffracted proton is directly measured is presented in section 2.4. Signals and Backgrounds ======================= Exclusive Production of Vector Mesons ------------------------------------- The experimental signals are the exclusive production of the vector mesons in the following decay modes: $$\rho^0\rightarrow \pi^+\pi^-~\cite{Hrho,Zrho,Hrho*,Zrho*}~~~~~~ \phi\rightarrow K^+K^-~\cite{Zphi,Zphi*}~~~~~~ J/\psi\rightarrow\mu^+\mu^-,e^+e^-~\cite{Hpsi,Zpsi,Hpsi*}.$$ First results on $\omega\rightarrow \pi^+\pi^-\pi^0$ and higher vector mesons ($\rho'\rightarrow \pi^+\pi^-\pi^0\pi^0$ and $\psi'\rightarrow\mu^+\mu^-,e^+e^-$) are in the early analysis stages and first candidates for $\Upsilon\rightarrow\mu^+\mu^-,e^+e^-$ are also appearing in the data. The clean topology of these events results in typical errors on the measured quantities ($t$, $M^2$, $W^2$ and $Q^2$), reconstructed in the tracking chambers, of order 5%.
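As an illustration of how such quantities are reconstructed from the tracking information, the sketch below forms the $\pi^+\pi^-$ invariant mass of a $\rho^0$ candidate from two charged tracks. The track momenta are invented for the example (they are not real H1 or ZEUS tracks), and the helper functions are our own.

```python
import math

M_PI = 0.13957  # charged pion mass in GeV

def four_vector(px, py, pz, mass):
    """Build (E, px, py, pz) for a track of given momentum and assumed mass."""
    energy = math.sqrt(px * px + py * py + pz * pz + mass * mass)
    return (energy, px, py, pz)

def invariant_mass(p1, p2):
    """Invariant mass of the two-track system."""
    e = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in range(1, 4))
    return math.sqrt(e * e - px * px - py * py - pz * pz)

# Invented track momenta (GeV) for an exclusive rho0 -> pi+ pi- candidate:
pi_plus = four_vector(0.45, 0.10, -1.20, M_PI)
pi_minus = four_vector(-0.30, -0.05, -0.90, M_PI)
print(f"M(pi+ pi-) = {invariant_mass(pi_plus, pi_minus):.2f} GeV")  # ~0.80 GeV, in the rho0 region
```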
Containment within the tracking chambers corresponds to a $W$ interval in the range $40 {\raisebox{-.6ex}{${\textstyle\stackrel{<}{\sim}}$}}W {\raisebox{-.6ex}{${\textstyle\stackrel{<}{\sim}}$}}140$ GeV. However, some analyses are restricted to a reduced range of $W$ where the tracking and trigger systematics are well understood. Conversely, H1 have also used the shifted vertex data to extend the analysis of the $\rho^0$ cross section to higher $<\!W\!> = 187$ GeV. At small $t$ there are problems triggering and, to a lesser extent, reconstructing the decay products of the vector meson. In particular, the photoproduction of $\phi$ mesons is limited to $t {\raisebox{-.6ex}{${\textstyle\stackrel{>}{\sim}}$}}0.1$ GeV$^2$, since the produced kaons are just above threshold and the available energy in the decay is limited. In order to characterise the $t$-dependence, a fit to the diffractive peak is performed. In the most straightforward approach, a single exponential fit to the $t$ distribution, $dN/d|t| \propto e^{-b|t|}$ for $|t|~{\raisebox{-.6ex}{${\textstyle\stackrel{<}{\sim}}$}}~0.5~$GeV$^2$ is adopted. The contributions to the systematic uncertainties are similar in each of the measurements. For example, the uncertainties on acceptance of photoproduced $\rho^0$’s are due to uncertainties on trigger thresholds ($\simeq$ 9%), variations of the input Monte Carlo distributions ($\simeq$ 9%) and track reconstruction uncertainties especially at low $p_T$ ($\simeq$ 6%). In particular for the $\rho^0$ analysis, where the mass distribution is skewed compared to a Breit-Wigner shape, uncertainties arise due to the assumptions of the fit for the interference between the resonant signal and the non-resonant background contributions ($\simeq$ 7%). Other significant contributions to the uncertainty are contamination due to $e$-gas interactions ($\simeq$ 2-5%) and from higher mass dissociated photon states, such as elastic $\omega$ and $\phi$ decays ($\simeq$ 2-7%). The uncertainty due to neglecting radiative corrections can also be estimated to be $\simeq$ 4-5% [@Hrho; @Zrho]. Finally, one of the key problems in obtaining accurate measurements of the exclusive cross sections and the $t$ slopes is the uncertainty on the double dissociation component, where the proton has also dissociated into a low mass nucleon system [@dd]. The forward calorimeters will see the dissociation products of the proton if the invariant mass of the nucleon system, $M_N$, is above approximately 4 GeV. A significant fraction of double dissociation events produce a limited mass system which is therefore not detected. Proton remnant taggers are now being used further down the proton beamline to provide constraints on this fraction and, in the H1 experiment, further constraints are provided by measuring secondary interactions in the forward muon system. Based on $p\bar{p}$ data one finds that the dissociated mass spectrum falls as $ dN/dM_N^2 = 1/M_N^n $ with $n$ = 2.20 $\pm$ 0.03 at $\sqrt{s} = 1800$ GeV from CDF measurements [@CDF2]. However it should be noted that this measurement corresponds to a restricted mass interval. The extrapolation to lower masses is subject to uncertainties and the universality of this dissociation is open to experimental question, given the different behaviour at the upper vertex. Precisely how the proton dissociates and whether the proton can be regarded as dissociating independently of the photon system is not a priori known. 
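To give a feeling for the extrapolation sensitivity described above, the sketch below integrates an assumed spectrum $dN/dM_N^2 \propto 1/M_N^n$ and compares the fraction of dissociated protons falling below a nominal 4 GeV tagging threshold for $n=2$ and $n=3$. The integration limits (1.2 GeV, just above the $p\pi$ threshold, and 20 GeV) are illustrative assumptions, not the values used in the experimental analyses.

```python
import math

def spectrum_integral(a, b, n):
    """Integral of dN/dM^2 = M^(-n), i.e. of 2 M^(1-n) dM, between masses a and b."""
    if n == 2:
        return 2.0 * math.log(b / a)
    return 2.0 * (a ** (2 - n) - b ** (2 - n)) / (n - 2)

M_MIN, M_MAX, M_TAG = 1.2, 20.0, 4.0   # GeV; illustrative limits and threshold

for n in (2, 3):
    untagged = spectrum_integral(M_MIN, M_TAG, n) / spectrum_integral(M_MIN, M_MAX, n)
    print(f"n = {n}: fraction of dissociated protons below {M_TAG} GeV = {untagged:.2f}")
```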
Currently, this uncertainty is reflected in the cross sections by allowing the value of $n$ to vary from around 2 to 3, although this choice is somewhat arbitrary. The magnitude of the total double dissociation contribution is estimated to be typically $\simeq 50\%$ prior to cuts on forward energy deposition, a value which can be cross-checked in the data with an overall uncertainty of $\simeq~10\%$ which is due to the considerations above. Combining the above uncertainties, the overall systematic errors in the various cross sections are typically $\simeq~20\%$. Photoproduction processes have been extensively studied in fixed-target experiments, providing a large range in $W$ over which to study the cross sections. The key features are the weak dependence of the cross section on $W$, an exponential dependence on $t$ with a slope which shrinks with increasing $W$ and the retention of the helicity of the photon by the vector meson. The $t$ dependence of the $\rho^0$ photoproduction data is illustrated in Figure 2 where the H1 and ZEUS data are compared to a compilation of lower energy data [@aston]. The data are consistent with a shrinkage of the $t$ slope with increasing $E_\gamma \simeq W^2/2 $, where $E_\gamma$ is the photon energy in the proton rest frame, as indicated by the curve for soft pomeron exchange [@schuler]. The measured $t$ slopes are $9.4\pm1.4\pm0.7$ GeV$^{-2}$ (H1) [@Hrho] and $10.4\pm0.6\pm1.1$ GeV$^{-2}$ (ZEUS) [@Zrho] for the $\rho^0$ (where similar single-exponential fits have been applied). These values can be compared to $7.3\pm1.0\pm0.8$ GeV$^{-2}$ (ZEUS) [@Zphi] for the $\phi$ and $4.7\pm1.9$ GeV$^{-2}$ (H1) [@Hpsi] for the $J/\psi$. Physically, the slope of the $t$ dependence in diffractive interactions tells us about the effective radius of that interaction, $R$: if $d\sigma/dt \propto e^{-b|t|}$, then $b \simeq R^2/4$. The range of measured $b$ slopes varies from around 4 GeV$^{-2}$ ($R \simeq 0.8$ fm) to 10 GeV$^{-2}$ ($R \simeq 1.3$ fm). Further, the interaction radius can be approximately related to the radii of the interacting proton and vector meson, $R \simeq \sqrt{R_P^2 + R_V^2}$. Given $R_P \simeq 0.7$ fm, this variation in $b$ slopes corresponds to a significant change in the effective radius of the interacting vector meson from $R_V \simeq 0.4$ fm to $R_V \simeq 1.1$ fm. Integrating over the measured $t$ dependence, the $W$ dependence of the results on exclusive vector meson photoproduction cross sections is shown in Figure 3 [@levy]. From the experimental perspective, there is generally good agreement on the measured cross sections. The $\gamma p$ total cross section is also shown in Figure 3, rising with increasing energy as in hadron-hadron collisions and consistent with a value of $\epsilon \simeq 0.08$ i.e. the total cross section increases as $W^{0.16}$. Given the dominance of the pomeron trajectory at high $W$ and an approximately exponential behaviour of the $|t|$ distribution with slope $b \simeq 10$, whose mean $|\bar{t}|$ value is given by $1/b$, the diffractive cross section rise is moderated from $$W^{4\epsilon}= W^{0.32}$$ to $$W^{4(\epsilon - \alpha'|\bar{t}|)} \equiv W^{4\bar{\epsilon}} = W^{0.22}.$$ Here $\bar{\epsilon} = 0.055$ characterises the effective energy dependence after integration over $t$. The observed shrinkage of the diffractive peak therefore corresponds to a relative reduction of the diffractive cross section with increasing energy.
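The numbers quoted in the last two paragraphs follow from two short calculations, reproduced in the sketch below as a back-of-the-envelope check; the conversion uses $\hbar c \simeq 0.197$ GeV fm and the proton radius $R_P \simeq 0.7$ fm assumed in the text.

```python
import math

HBAR_C = 0.1973   # GeV fm, to convert GeV^-1 into fm
R_P = 0.7         # fm, assumed proton radius

# b ~ R^2/4 relates the exponential t slope to the effective interaction radius.
for b in (4.0, 10.0):                              # GeV^-2, range of measured slopes
    R = 2.0 * math.sqrt(b) * HBAR_C                # fm
    R_V = math.sqrt(max(R ** 2 - R_P ** 2, 0.0))   # fm, effective meson radius
    print(f"b = {b:4.1f} GeV^-2 -> R = {R:.2f} fm, R_V = {R_V:.2f} fm")

# Effective energy dependence after integrating over t, with <|t|> = 1/b:
epsilon, alpha_prime, b = 0.08, 0.25, 10.0
eps_bar = epsilon - alpha_prime / b
print(f"W exponent moderated from {4 * epsilon:.2f} to {4 * eps_bar:.2f}")
```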
Such a dependence describes the general increase of the $\rho^0$, $\omega$ and $\phi$ vector meson cross sections with increasing $W$. However, the rise of the $J/\psi$ cross section is clearly not described by such a $W$ dependence, the increase being described by an effective $W^{0.8}$ dependence. Whilst these effective powers are for illustrative purposes only, it is clear that in exclusive $J/\psi$ production a new phenomenon is occurring. Qualitatively, the $W^{0.8}$ dependence, corresponding to $\bar{\epsilon} \simeq 0.2$, could be ascribed to the rise of the gluon density observed in the scaling violations of $F_2$. The $J/\psi$ mass scale, $M^2$, is larger than the QCD scale $\Lambda^2$, and it is therefore possible to apply pQCD techniques. Quantitatively, the theoretical analysis predicts that the rise of the cross section is proportional to the square of the gluon density at small-$x$ and allows discrimination among the latest parametrisations of the proton structure function [@ryskin]. We also know from measurements of the DIS $\gamma^* p$ total cross section that application of formula (1) results in a value of $\epsilon$ which increases with increasing $Q^2$, with $\epsilon \simeq$ 0.2 to 0.25 at $Q^2\simeq10$ GeV$^2$ [@levy]. The fact that the corresponding relative rise of $F_2$ with decreasing $x$ can be described by pQCD evolution [@GRV] points towards a calculable function $\epsilon = \epsilon(Q^2)$ for $Q^2 {\raisebox{-.6ex}{${\textstyle\stackrel{>}{\sim}}$}}Q_o^2 \simeq 0.3~$GeV$^2$. One contribution to the DIS $\gamma^* p$ total cross section is the electroproduction of low mass vector mesons. Experimentally, the statistical errors typically dominate with systematic uncertainties similar to the photoproduction case. The trigger uncertainties are significantly reduced, however, since the scattered electron is easily identified and the radiative corrections, which are more significant ($\simeq 15\%$ [@wulff]), can be corrected for. The $W$ dependence of the DIS $\rho^0$ and $\phi$ cross sections for finite values of $Q^2$ are shown in Figure 4, compared to the corresponding photoproduction cross sections. The $W$ dependence for the $\rho^0$ and $\phi$ electroproduction data are similar to those for the $J/\psi$ photoproduction data, consistent with an approximate $W^{0.8}$ dependence also shown in Figure 4. An important point to emphasise here is that the relative production of $\phi$ to $\rho^0$ mesons approaches the quark model prediction of 2/9 at large $W$ and large $Q^2$, which would indicate the applicability of pQCD to these cross sections. The measurements of the helicity angle of the vector meson decay provide a measurement of $R= \sigma_L/\sigma_T$ for the (virtual) photon, assuming s-channel helicity conservation, i.e. that the vector meson preserves the helicity of the photon. The photoproduction measurements for the $\rho^0$ are consistent with the interaction of dominantly transversely polarised photons ($R=0.06\pm0.03$ (ZEUS) [@Zrho]). However, adopting the same analysis for virtual photons, $R=1.5^{+2.8}_{-0.6}$ (ZEUS) [@Zrho*], inconsistent with the behaviour in photoproduction and consistent with a predominantly longitudinal exchange. This predominance is expected for an underlying interaction of the virtual photon with the constituent quarks of the $\rho^0$. Also, the measured $b$ slope approximately halves from the photoproduction case to a value of $b=5.1^{+1.2}_{-0.9}\pm1.0$ (ZEUS) [@Zrho*], comparable to that in the photoproduced $J/\psi$ case. 
The basic interaction is probing smaller distances, which allows a first comparison of the observed cross section with the predictions of leading-log pQCD (see [@Zrho*]).

Finally, first results based on the observation of 42 $J/\psi$ events at significant $<\!Q^2\!>=17.7$ GeV$^2$ have been reported by H1 [@Hpsi*]. The cross section has been evaluated in two $W$ intervals in order to obtain an indication of the $W$ dependence, as shown in Figure 5, where an estimated 50% contribution due to double dissociation has been subtracted [@berger]. The electroproduction data are shown with statistical errors only, although the systematics are estimated to be smaller than these errors ($\simeq 20\%$). The electroproduction and photoproduction $J/\psi$ data are consistent with the $W^{0.8}$ dependence ($\bar{\epsilon} \simeq 0.2$) noted previously. The $J/\psi$ electroproduction cross section is of the same order as that of the $\rho^0$ data, in marked contrast to the significantly lower photoproduction cross section for the $J/\psi$, even at HERA energies, also shown in Figure 5. Further results in this area would allow tests of the underlying dynamics for transversely and longitudinally polarised photons coupling to light and heavy quarks in the pQCD calculations.

In conclusion, there is an accumulating body of exclusive vector meson production data, measured with a systematic precision of $\simeq 20\%$, which exhibits two classes of $W^2$ behaviour: a slow rise of the low $M^2$ photoproduction data, consistent with that of previously measured diffractive data, but a significant rise of these cross sections when a finite $Q^2$ and/or a significant $M^2$ is measured.

Deep Inelastic Structure of Diffraction
---------------------------------------

One of the major advances in the subject of diffraction has been the observation of large rapidity gap events in DIS and their subsequent analysis in terms of a diffractive structure function [@Hd1; @Zd1]. In these analyses, the signature of diffraction is the rapidity gap, defined by measuring the maximum pseudorapidity of the most forward-going particle with energy above 400 MeV, $\eta_{max}$, and requiring this to be well away from the outgoing proton direction. A typical requirement of $\eta_{max} < 1.5$ corresponds to a low mass state measured in the detectors of $\ln(M^2) \sim 3$ units and a large gap of $\ln(W^2) - \ln(M^2) \sim 7$ units with respect to the outgoing proton (nucleon system). In order to increase the lever arm in $M^2$, the H1 and ZEUS analyses have extended the $\eta_{max}$ cuts to 3.2 and 2.5, respectively. This is achieved directly using the forward muon system/proton remnant taggers, in the case of H1, or via the measurement of a further discriminating variable, $\cos\theta_H=\sum_i p_{z_i}/|\sum_i \vec{p_i}|$, where $\vec{p_i}$ is the momentum vector of a calorimeter cell, for ZEUS. These extensions are, however, at the expense of a significant non-diffractive DIS background (up to $\simeq 50\%$ and $\simeq 20\%$, respectively). In each case, this background is estimated using the colour-dipole model as implemented in the ARIADNE 4.03 program [@ariadne], which reasonably reproduces the observed forward $E_T$ flows in non-diffractive interactions. The uncertainty on this background is estimated by changing the applied cuts or by using other Monte Carlo models and is up to 20% for large masses, $M^2$, of the dissociated photon.
The double dissociation contribution is estimated with similar uncertainties to the vector meson case. Other systematic errors are similar to those for the $F_2$ analyses (${\raisebox{-.6ex}{${\textstyle\stackrel{<}{\sim}}$}}10\%$), with additional acceptance uncertainties due to variations of the input diffractive Monte Carlo distributions. In the presentation of the results, the formalism changes [@ingprytz], reflecting an assumed underlying partonic description, and two orthogonal variables are determined: $${\mbox{$x_{_{{I\hspace{-0.2em}P}}}$}}= \frac{(P-P')\cdot q}{P\cdot q} \simeq \frac{M^2 + Q^2}{W^2 + Q^2}~~~~~~~~~ \beta = \frac{Q^2}{2(P-P')\cdot q} \simeq \frac{Q^2}{M^2 + Q^2},$$ the momentum fraction of the pomeron within the proton and the momentum fraction of the struck quark within the pomeron, respectively. The structure function is then defined by analogy to that of the total $ep$ cross section: $$\frac{d^3\sigma_{diff}}{d\beta dQ^2 d{\mbox{$x_{_{{I\hspace{-0.2em}P}}}$}}} = \frac{2 \pi \alpha^2}{\beta Q^4} \; (1+(1-y)^2) \; F_2^{D(3)}(\beta,Q^2,{\mbox{$x_{_{{I\hspace{-0.2em}P}}}$}}),$$ where the contribution of $F_L$ and radiative corrections are neglected and an integration over the (unmeasured) $t$ variable is performed. The effect of neglecting $F_L$ corresponds to a relative reduction of the cross section at small ${\mbox{$x_{_{{I\hspace{-0.2em}P}}}$}}$ (high $W^2$), which is always $<17\%$ and therefore smaller than the typical measurement uncertainties ($\simeq~20\%$). As discussed above, a major uncertainty comes from the estimation of the non-diffractive background. This problem has been addressed in a different way in a further analysis by ZEUS [@Zd2]. In this analysis the mass spectrum, $M^2$, is measured as a function of $W$ and $Q^2$, as shown in Figure 6 for four representative intervals, where the measured mass is reconstructed in the calorimeter and corrected for energy loss but not for detector acceptance, resulting in the turnover at large $M^2$. The diffractive data are observed as a low mass shoulder at low $W$, which becomes increasingly apparent at higher $W$. Also shown in the figure are the estimates of the non-diffractive background based on (a) the ARIADNE Monte Carlo (dotted histogram) and (b) a direct fit to the data, discussed below.

The probability of producing a gap is exponentially suppressed as a function of the rapidity gap, and hence as a function of $\ln(M^2)$, for non-diffractive interactions. The slope of this exponential is directly related to the height of the plateau distribution of multiplicity in the region of rapidity where the subtraction is made. The data can thus be fitted to functions of the form $dN/d\ln(M^2) = D + C \exp( b \cdot \ln(M^2))$, in the region where the detector acceptance is uniform, where $b$, $C$ and $D$ are determined from the fits. Here, $D$ represents a first-order estimate of the diffractive contribution, which is flat in $\ln(M^2)$. The important parameter is $b$, which is determined to be $b = 1.55\pm0.15$ in fits to each of the measured data intervals, compared to $b=1.9\pm 0.1$ estimated from the ARIADNE Monte Carlo. The systematic uncertainty in the background reflects various changes to the fits, but in each case the measured slope is incompatible with that of the Monte Carlo.
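For illustration only, a minimal sketch of this two-component fit is given below, using made-up numbers for a single $(W,Q^2)$ interval rather than the actual ZEUS mass spectra; the functional form and the roles of $D$, $C$ and $b$ are exactly those described above.

```python
import numpy as np
from scipy.optimize import curve_fit

def dn_dlnM2(lnM2, D, C, b):
    # Flat diffractive term D plus exponentially growing non-diffractive term.
    return D + C * np.exp(b * lnM2)

# Illustrative mass spectrum in one (W, Q^2) interval: bin centres in ln(M^2)
# and event counts (synthetic numbers, for demonstration only).
lnM2 = np.linspace(0.5, 4.5, 9)
counts = np.array([52, 55, 58, 66, 85, 130, 230, 440, 870], dtype=float)

popt, pcov = curve_fit(dn_dlnM2, lnM2, counts, p0=(50.0, 1.0, 1.5), sigma=np.sqrt(counts))
D_fit, C_fit, b_fit = popt
print(f"fitted slope b = {b_fit:.2f} +- {np.sqrt(pcov[2, 2]):.2f}")

# The fitted slope b would then be compared with the slope obtained from the
# ARIADNE Monte Carlo in the same interval (1.9 +- 0.1 in the measurements above).
```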
This result in itself is interesting, since the fact that ARIADNE approximately reproduces the observed forward $E_T$ ($\sim$ multiplicity) flow but does not reproduce the measured $b$ slope suggests that significantly different correlations of the multiplicities are present in non-diffractive DIS compared to the Monte Carlo expectations. Also new in this analysis is that the diffractive Monte Carlo POMPYT 1.0 [@pompyt] has been tuned to the observed data contribution for low mass states, allowing the high $\beta$ region to be measured up to the kinematic limit ($\beta\rightarrow 1$), and radiative corrections have been estimated in each interval (${\raisebox{-.6ex}{${\textstyle\stackrel{<}{\sim}}$}}10\%$ [@wulff]). The virtual-photon proton cross sections at fixed $M^2$ and $W$ measured in this analysis can be converted to $F_2^{D(3)}$ at fixed $\beta$ and ${\mbox{$x_{_{{I\hspace{-0.2em}P}}}$}}$. These results are shown in Figure 7 as the ZEUS(BGD) [@Zd2] analysis, compared to the earlier H1 [@Hd1] and ZEUS(BGMC) [@Zd1] analyses in comparable intervals of $\beta$ and $Q^2$ as a function of ${\mbox{$x_{_{{I\hspace{-0.2em}P}}}$}}$. The overall cross sections in each $\beta$ and $Q^2$ interval are similar; however, the ${\mbox{$x_{_{{I\hspace{-0.2em}P}}}$}}$ dependences are different. As can be seen in Figure 6, the background estimates are significantly different, which results in a systematic shift in the $W$ (${\mbox{$x_{_{{I\hspace{-0.2em}P}}}$}}$) dependence at fixed $M$ ($\beta$) and $Q^2$.

Fits of the form $F_2^{D(3)} = b_i \cdot {\mbox{$x_{_{{I\hspace{-0.2em}P}}}$}}^{n}$ are performed, where the normalisation constants $b_i$ are allowed to differ in each $\beta,Q^2$ interval. The fits are motivated by the factorisable ansatz of $F_2^{D(3)}(\beta, Q^2, {\mbox{$x_{_{{I\hspace{-0.2em}P}}}$}}) = f_{{I\hspace{-0.2em}P}}({\mbox{$x_{_{{I\hspace{-0.2em}P}}}$}}) \cdot F_2^{{I\hspace{-0.2em}P}}(\beta,Q^2), $ where $f_{{I\hspace{-0.2em}P}}({\mbox{$x_{_{{I\hspace{-0.2em}P}}}$}})$ measures the flux of pomerons in the proton and $F_2^{{I\hspace{-0.2em}P}}(\beta,Q^2)$ is the probed structure of the pomeron. The exponent of ${\mbox{$x_{_{{I\hspace{-0.2em}P}}}$}}$ is identified as $n = 1+2\cdot\bar{\epsilon}$, where $\bar{\epsilon}$ measures the effective ${\mbox{$x_{_{{I\hspace{-0.2em}P}}}$}}$ dependence ($\equiv W^2$ dependence at fixed $M^2$ and $Q^2$) of the cross section, integrated over $t$, as discussed in relation to exclusive vector meson production. In each case, the $\chi^2/\mathrm{DOF}$ is $\simeq 1$, indicating that a single power law dependence on energy provides a reasonable description of the data and that effects due to factorisation breaking predicted in QCD-based calculations [@nz] are not yet observable. The results for $\bar{\epsilon}$ are $0.095\pm0.030\pm0.035$ (H1) [@Hd1], $0.15\pm0.04^{+0.04}_{-0.07}$ (ZEUS(BGMC)) [@Zd1] and $0.24\pm0.02^{+0.07}_{-0.05}$ (ZEUS(BGD)) [@Zd2], where the systematic errors are obtained by refitting according to a series of systematic checks outlined above. It should be noted that the (2$\sigma$) systematic shift between the ZEUS(BGD) and ZEUS(BGMC) values can be attributed to the method of background subtraction. Whilst the H1 and ZEUS(BGMC) analyses, based on Monte Carlo background subtraction, agree within errors, the ZEUS(BGD) value is different from the H1 value at the 3$\sigma$ level. The Donnachie-Landshoff prediction [@dl] is $\bar{\epsilon} \simeq 0.05$, after integration over an assumed $t$ dependence and taking into account shrinkage. While comparison with the H1 value indicates that this contribution is significant, the possibility of additional contributions cannot be neglected.
Taking the ZEUS(BGD) value, this measurement is incompatible with the predicted soft pomeron behaviour at the 4$\sigma$ level. Estimates of the effect of $\sigma_L$, made by assuming $\sigma_L = (Q^2/M^2) \sigma_T$ rather than $\sigma_L = 0$, result in $\bar{\epsilon}$ increasing from 0.24 to 0.29. The values can also be compared with $\bar{\epsilon} \simeq$ 0.2 obtained from the exclusive photoproduction of $J/\psi$ mesons and the electroproduction data, or with $\epsilon \simeq$ 0.2 to 0.25 obtained from the energy dependence of the total cross sections in the measured $Q^2$ range [@levy]. In the model of Buchmüller and Hebecker [@buch], the effective exchange is dominated by one of the two gluons. In terms of $\epsilon$, where the optical theorem is no longer relevant, the diffractive cross section would therefore rise with an effective power which is halved to $\epsilon \simeq$ 0.1 to 0.125. The measured values are within the range of these estimates.

The overall cross sections in each $\beta$, $Q^2$ interval are similar and one can integrate over the measured ${\mbox{$x_{_{{I\hspace{-0.2em}P}}}$}}$ dependence in order to determine $\tilde{F}_2^D(\beta, Q^2)$, a quantity which measures the internal structure of the pomeron up to an arbitrary integration constant. Presented in this integrated form, the data agree on the general features of the internal structure. In Figure 8 the H1 data are compared to preliminary QCD fits [@Hd2]. The general conclusions from the $\beta$ dependence are that the pomeron has a predominantly hard structure, typically characterised by a symmetric $\beta(1-\beta)$ dependence, but that it also contains an additional, significant contribution at low $\beta$, which has been fitted in the ZEUS analysis [@Zd1]. The virtual photon only couples directly to quarks, but the overall cross section can give indications only of the relative proportion of quarks and gluons within the pomeron, since the flux normalisation is somewhat arbitrary [@Zd1]. The $Q^2$ behaviour is broadly scaling, consistent with a partonic structure of the pomeron. Probing more deeply, however, a characteristic logarithmic rise of $\tilde{F}_2^D$ is observed in all $\beta$ intervals. Most significantly, at large $\beta$ a predominantly quark-like object would radiate gluons, resulting in negative scaling violations as in the case of the large-$x$ (${\raisebox{-.6ex}{${\textstyle\stackrel{>}{\sim}}$}}0.15$) behaviour of the proton. The question of whether the pomeron is predominantly quarks or gluons, corresponding to a “quarkball” or a “gluemoron” [@cf], has been tested quantitatively by H1 using QCD fits to $\tilde{F}_2^D$ [@Hd2]. A flavour singlet quark density input of the form $zq(z) = A_q \cdot z^{B_q}(1-z)^{C_q}$, where $z$ is the momentum fraction carried by the quark, yields a numerically acceptable $\chi^2$. The characteristic $Q^2$ behaviour, however, is not reproduced. Adding a gluon contribution of similar form gives an excellent description of the data. The fit shown uses $B_q = 0.35$, $C_q = 0.35$, $B_g = 8$, $C_g = 0.3$. In general, the fits tend to favour inputs where the gluon carries a significant fraction, $\sim$ 70 to 90%, of the pomeron’s momentum.

Jet structure
-------------

The question of the constituent content of the pomeron can also be addressed via measurements of diffractively produced jets in the photoproduction data [@Zjet]. Jets are reconstructed at large $W$ ($130< W < 270$ GeV) using the cone algorithm with cone radius 1 and $E_T^{jet} > 8$ GeV.
The diffractive contribution is identified as a tail in the $\eta_{max}$ distribution of these events above the PYTHIA 5.7 [@pythia] Monte Carlo expectation. In Figure 9(a) the measured cross section is compared to various model predictions as a function of the jet rapidity. Comparison with the non-diffractive contribution estimated from PYTHIA indicates a significant excess at lower values of $\eta^{jet}$. Here, standard photon and proton parton distributions are adopted and the overall scale, which agrees with the non-diffractive data normalisation, is set by $E_T^{jet}$. Also shown are the predicted diffractive cross sections from POMPYT using a hard ($z(1-z)$) quark, hard gluon or soft (($1-z)^5$) gluon where a Donnachie-Landshoff flux factor is adopted and the momentum sum rule is assumed to be satisfied in each case. Sampling low-energy (soft) gluons corresponds to a small cross section and can be discounted, whereas high-energy (hard) gluons and/or quarks can account for the cross section by changing the relative weights of each contribution. The $x_\gamma$ distribution for these events, where $x_\gamma$ is the reconstructed momentum fraction of the interacting photon, is peaked around 1, indicating that at these $E_T^{jet}$ values a significant fraction of events is due to direct processes where the whole photon probes the pomeron constituents. We now have two sets of data, the DIS data [@Zd1] probing the pomeron structure at a scale $Q$ and the jet data probing at a scale of $E_T^{jet}$. Each probes the large $z$ structure of the pomeron with the jet and DIS data, predominantly sampling the (hard) gluon and quark distributions, respectively. In Figure 9(b) the preferred momentum fraction carried by the (hard) gluon, $c_g$, is indicated by the overlapping region of the jet (dark band) and DIS (light band) fits to the data. Considering the systematics due to the non-diffractive background, modelled using the Monte Carlo models, a range of values consistent with $c_g \sim 0.55 \pm 0.25$ can be estimated. The result depends on the assumption that the cross sections for both sets of data factorise with a universal flux, characterised by the same value of $\bar{\epsilon}$ in this $W$ range, but does not assume the momentum sum rule.        (a)        (b) \ \ So far we have only considered the case of small-$t$ diffraction with respect to the outgoing proton. Further insight into the diffractive exchange process can be obtained by measurements of the rapidity gap between jets. Here, a class of events is observed with little hadronic activity between the jets [@Zt]. The jets have $E_T^{jet} > 6$ GeV and are separated by a pseudorapidity interval ($\Delta\eta$) of up to 4 units. The scale of the momentum transfer, $t$, is not precisely defined but is of order $(E_T^{jet})^2$. A gap is defined as the absence of particles with transverse energy greater than 300 MeV between the jets. The fraction of events containing a gap is then measured as a function of $\Delta\eta$, as shown in Figure 10. The fit indicates the sum of an exponential behaviour, as expected for non-diffractive processes and discussed in relation to the diffractive DIS data, and a flat distribution expected for diffractive processes. At values of $\Delta\eta {\raisebox{-.6ex}{${\textstyle\stackrel{>}{\sim}}$}}3$, an excess is seen with a constant fraction over the expectation for non-diffractive exchange at $\simeq 0.07\pm 0.03$. This can be interpreted as evidence for large-$t$ diffractive scattering. 
In fact, secondary interactions of the photon and proton remnant jets could fill in the gap, and therefore the underlying process could play a more significant rôle. The size of this fraction is relatively large when compared to a similar analysis by D0 and CDF, where a constant fraction of $\simeq 0.01$ is observed [@D0; @CDF3]. The relative probability may differ due to the higher $W$ values of the Tevatron compared to HERA or, perhaps, due to differences in the underlying $\gamma p$ and $p\bar{p}$ interactions.

Leading proton spectrometer measurements
----------------------------------------

The advent of the leading proton spectrometers (LPS) at HERA is especially important in these diffractive measurements, since internal cross checks of the measurements as a function of $t$, $M^2$, $W^2$ and $Q^2$ can ultimately be performed and underlying assumptions can be questioned experimentally. Only in these measurements can we positively identify the diffracted proton and hence substantially reduce uncertainties on the non-diffractive and double dissociation backgrounds. However, new uncertainties are introduced due to the need for precise understanding of the beam optics and relative alignment of the detectors. Reduced statistical precision also results from the limited geometrical acceptance of the detectors ($\simeq$ 6%). In Figure 11(a), various observed distributions are shown for 240 events selected from the DIS data, with $Q^2 {\raisebox{-.6ex}{${\textstyle\stackrel{>}{\sim}}$}}4$ GeV$^2$. The momentum distribution clearly indicates a significant diffractive peak at $E_p = 820$ GeV above the non-diffractive background, and the observed $M$ and $W$ distributions are well described by the NZ Monte Carlo [@ada]. The distribution of $\beta$ versus ${\mbox{$x_{_{{I\hspace{-0.2em}P}}}$}}$ indicates a significant fraction of events at small $\beta {\raisebox{-.6ex}{${\textstyle\stackrel{<}{\sim}}$}}0.1$, which are difficult to access using the experimental techniques described earlier. The measurements are currently being analysed, but a preliminary result on the $t$-dependence is shown in Figure 11(b), measured in a relatively high observed mass interval, $<\!M\!> = 9$ GeV, at relatively low $Q^2$. The slope can be characterised by a single exponential fit with $b = 7.52 \pm 0.95^{+0.65}_{-0.82}$ GeV$^{-2}$. This is somewhat high compared to the value of $b\simeq 4.5$ GeV$^{-2}$ expected for a predominantly hard pomeron, but lies within the range of expectations of $4 {\raisebox{-.6ex}{${\textstyle\stackrel{<}{\sim}}$}}b {\raisebox{-.6ex}{${\textstyle\stackrel{<}{\sim}}$}}10$ GeV$^{-2}$. However, before drawing conclusions, we should perhaps wait for further results on the general dependences measured in the LPS.

Conclusions
===========

The soft pomeron no longer describes $all$ diffractive data measured at HERA. As the photon virtuality and/or the vector meson mass increases, a new dependence on $W^2$ emerges. As we investigate the pomeron more closely, a new type of pomeron may begin to play a rôle: a dynamical pomeron whose structure is being measured in DIS. These data are consistent with a partonic description of the exchanged object, which may be described by pQCD. The experimental work focuses on extending the lever arms and increasing the precision in $t$, $M^2$, $W^2$ and $Q^2$ in order to explore this new structure.
Before more precise tests can be made, further theoretical and experimental input is required to reduce the uncertainties due to non-diffractive backgrounds and proton dissociation as well as the treatment of $F_L$ and radiative corrections. Acknowledgements {#acknowledgements .unnumbered} ================ The results presented in this talk are a summary of significant developments in the study of diffraction at HERA during the last year. The financial support of the DESY Directorate and PPARC allowed me to participate in this research, whilst based at DESY, for which I am very grateful. It is a pleasure to thank Halina Abramowicz, Ela Barberis, Nick Brook, Allen Caldwell, John Dainton, Robin Devenish, Thomas Doeker, Robert Klanner, Henri Kowalski, Aharon Levy, Julian Phillips, Jeff Rahn, Laurel Sinclair, Ian Skillicorn, Ken Smith, Juan Terron, Jim Whitmore and Günter Wolf for their encouragement, enthusiasm, help and advice. Finally, thanks to Mike Whalley for his organisation at the workshop and for keeping me to time in these written proceedings. [99]{} G. Ingelman, J. Phys. G19 (1993) 1633. K. Goulianos, Phys. Rep. 101 (1983) 169; Nucl. Phys. B (Proc. Suppl.) 12 (1990) 110. A. Donnachie and P.V. Landshoff, Nucl. Phys. B244 (1984) 322; A. Donnachie, these proceedings. CDF Collab., F. Abe et al., Phys. Rev. D50 (1994) 5550. H1 Collab., S. Aid et al., EPS-0473. ZEUS Collab., M. Derrick et al., DESY 95-143. H1 Collab., S. Aid et al., EPS-0490. ZEUS Collab., M. Derrick et al., Phys. Lett. B356 (1995) 601. ZEUS Collab., M. Derrick et al., EPS-0389. ZEUS Collab., M. Derrick et al., EPS-0397. H1 Collab., S. Aid et al., EPS-0468. ZEUS Collab., M. Derrick et al., EPS-0386. H1 Collab., S. Aid et al., EPS-0469. H. Holtmann et al., HEP-PH-9503441. CDF Collab., F. Abe et al., Phys. Rev. D50 (1994) 5535. D. Aston et al., Nucl. Phys. B209 (1982) 56. G.A. Schuler and T. Sjöstrand, Nucl. Phys. B407 (1993) 539. A. Levy, DESY 95-204. M. Ryskin et al., these proceedings. M. Glück, E. Reya and A. Vogt, Phys. Lett. B306 (1993) 391. N. Wulff, University of Hamburg thesis (1994), unpublished. C. Berger, Proceedings of the Beijing Conference, August 1995. H1 Collab., T. Ahmed et al., Phys. Lett. B348 (1995) 681; J. Dainton, DESY 95-228. ZEUS Collab., M. Derrick et al., Z. Phys. C68 (1995) 569. G. Ingelman and K. Jansen-Prytz, Z. Phys. C58 (1993) 285. L. Lönnblad, Comp. Phys. Comm. 71 (1992) 15. ZEUS Collab., M. Derrick et al., Contribution to the Beijing Conference, August 1995. P. Bruni and G. Ingelman, DESY 93-187; Proceedings of the Europhysics Conference on HEP, Marseille 1993, 595. N.N. Nikolaev and B.G. Zakharov, Z. Phys. C53 (1992) 331; M. Genovese, N.N. Nikolaev and B.G. Zakharov, KFA-IKP(Th)-1994-37 and [CERN–TH.13/95]{}. W. Buchmüller and A. Hebecker, DESY 95-077. H1 Collab., S. Aid et al., EPS-0491; J.P. Phillips, Proceedings of the Paris Conference, April 1995. F. Close and J. Forshaw, HEP-PH-9509251. ZEUS Collab., M. Derrick et al., Phys. Lett. B356 (1995) 129. H.-U. Bengtsson and T. Sjöstrand, Comp. Phys. Comm. 46 (1987) 43; T. Sjöstrand, CERN-TH.6488/92. ZEUS Collab., M. Derrick et al., DESY 95-194. D0 Collab., S. Abachi et al., Phys. Rev. Lett. 72 (1994) 2332; D0 Collab., S. Abachi et al., FERMILAB-PUB-95-302-E (1995). CDF Collab., F. Abe, et al., Phys. Rev. Lett. 74 (1995) 855. A. Solano, Ph.D. Thesis, University of Torino 1993 (unpublished); A. Solano, Nucl. Phys. B (Proc. Suppl.) 25 (1992) 274.
--- abstract: 'We tackle the problem of estimating the 3D pose of an individual’s upper limbs (arms+hands) from a chest mounted depth-camera. Importantly, we consider pose estimation during everyday interactions with objects. Past work shows that strong pose+viewpoint priors and depth-based features are crucial for robust performance. In egocentric views, hands and arms are observable within a well defined volume in front of the camera. We call this volume an egocentric workspace. A notable property is that hand appearance correlates with workspace location. To exploit this correlation, we classify arm+hand configurations in a global egocentric coordinate frame, rather than a local scanning window. This greatly simplify the architecture and improves performance. We propose an efficient pipeline which 1) generates synthetic workspace exemplars for training using a virtual chest-mounted camera whose intrinsic parameters match our physical camera, 2) computes perspective-aware depth features on this entire volume and 3) recognizes discrete arm+hand pose classes through a sparse multi-class SVM. Our method provides state-of-the-art hand pose recognition performance from egocentric RGB-D images in real-time.' author: - | Gregory Rogez$^{1,2}$, James S. Supancic III$^{1}$, Deva Ramanan$^{1}$\ $^{1}$Dept of Computer Science, University of California, Irvine, CA, USA\ $^{2}$Universidad de Zaragoza, Zaragoza, Spain\ [grogez@unizar.es @ics.uci.edu]{} bibliography: - 'deva\_bib.bib' title: Egocentric Pose Recognition in Four Lines of Code --- Introduction ============ Understanding hand poses and hand-object manipulations from a wearable camera has potential applications in assisted living [@adl_cvpr12], augmented reality [@FathiHR12] and life logging [@LuG13]. As opposed to hand-pose recognition from third-person views, egocentric views may be more difficult due to additional occlusions (from manipulated objects, or self-occlusions of fingers by the palm) and the fact that hands interact with the environment and often leave the field-of-view. The latter necessitates constant re-initialization, precluding the use of a large body of hand trackers which typically perform well given manual initialization. ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![[**Egocentric workspaces**]{}. We directly model the observable egocentric workspace in front of a human with a 3D volumetric descriptor, extracted from a 2.5D egocentric depth sensor. In this example, this volume is discretized into $4 \times 3 \times 4$ bins. This feature can be used to accurately predict shoulder, arm, hand poses, even when interacting with objects. We describe models learned from synthetic examples of observable egocentric workspaces obtained by place a virtual Intel Creative camera on the chest of an animated character. 
\[fig:splash\]](Poser3DBody.png "fig:"){width="40.00000%"} ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Previous work for egocentric hand analysis tends to rely on local 2D features, such as pixel-level skin classification [@LiICCV13; @LiCVPR13] or gradient-based processing of depth maps with scanning-window templates [@RogezKSMR2014]. Our approach follows in the tradition of  [@RogezKSMR2014], who argue that near-field depth measures obtained from a egocentric-depth sensor considerably simplifies hand analysis. Interestingly, egocentric-depth is not “cheating” in the sense that humans make use of stereoscopic depth cues for near-field manipulations [@fielder1996does]. We extend this observation by building an explicit 3D map of the observable near-field workspace. [**Our contributions:**]{} In this work, we describe a new computational architecture that makes use of [**global**]{} egocentric views, [**volumetric**]{} representations, and [**contextual**]{} models of interacting objects and human-bodies. Rather than detecting hands with a local (translation-invariant) scanning-window classifier, we process the entire global egocentric view (or [*work-space*]{}) in front of the observer (Fig. \[fig:splash\]). Hand appearance is not translation-invariant due to perspective effects and kinematic constraints with the arm. To capture such effects, we build a library of synthetic 3D egocentric workspaces generated using real capture conditions (see examples in Fig. \[fig:pose\_examples\]). We animate a 3D human character model inside virtual scenes with objects, and render such animations with a chest-mounted camera whose intrinsics match our physical camera . We simultaneously recognize arm and hand poses while interacting with objects by classifying the whole 3D volume using a multi-class Support Vector Machine (SVM) classifier. Recognition is simple and fast enough to be implemented in 4 lines of code. Related work {#sec:litr} ------------ [**Hand-object pose estimation:**]{} While there is a large body of work on hand-tracking [@KurataKKJE02; @KolschT05; @Kolsch10; @Athitsos03estimating3d; @Oikonomidis2012; @StengerPAMI06; @wang2009rth], we focus on hand pose estimation during object manipulations. Object interactions both complicate analysis due to additional occlusions, but also provide additional contextual constraints (hands cannot penetrate object geometry, for example). [@hamer_tracking_2009] describe articulated tracker with soft anti-penetration constraints, increasing robustness to occlusion. Hamer *et al.* describe contextual priors for hands in relation to objects [@hamer_object-dependent_2010], and demonstrate their effectiveness for increasing tracking accuracy. Objects are easier to animate than hands because they have fewer joint parameters. With this intuition, object motion can be used as an input signal for estimating hand motions [@hamer_data-driven_2011]. 
[@RomeroKEK13] use a large synthetic dataset of hands manipulating objects, similar to us. We differ in our focus on single-image and egocentric analysis. ![[**Problem statement.**]{} The dimensionality of the problem is ${\color{green} N_{arm} } \times {\color{red} N_{hand}} \times {\color{blue} N_{object}} \times {\color{cyan} N_{background}}$. In this work, we will randomly sample ${\color{green} N_{arm} }$, consider a fixed set of hand-object configurations (${\color{red} N_{hand}} \times {\color{blue} N_{object}} = 100$) and a fixed set of ${\color{cyan} N_{background}}$ background images captured with an Intel Creative depth camera. To deal with the background, we will cluster the dataset and learn discriminative multi-classifiers, robust to background. For each hand-object model, we randomly perturbed shoulder, arm and hand joint angles as physically possible to create a new dense cloud of 3D points for arm+hand+object. We show 2 examples of hand-object models, a bottle (left) and a juice box (right) rendered in front of a flat wall. \[fig:pose\_examples\] ](PoseExamples.png){width="\columnwidth"} [**Egocentric Vision:**]{} Previous egocentric studies have focused on activities of daily living [@adl_cvpr12; @fathiunderstanding]. Long-scale temporal structure was used to handle complex hand object interactions, exploiting the fact that objects look different when they are manipulated (active) versus not manipulated (passive) [@adl_cvpr12]. Much previous work on egocentric hand recognition make exclusive use of RGB cues [@LiCVPR13; @limodel], while we focus on volumetric depth cues. Notable exceptions include [@DamenGMC12], who employ egocentric RGB-D sensors for personal workspace monitoring in industrial environments and  [@MannHJLRCD11], who employ such sensors to assist blind users in navigation. [**Depth features:**]{} Previous work has shown the efficacy of depth cues [@shotton_efficient_2013; @XuChe_iccv13]. We compute volumetric depth features from point clouds. Previous work has examined point-cloud processing of depth-images [@YeWYRP11; @shotton2013real; @XuChe_iccv13]. A common technique estimates local surface orientations and normals [@YeWYRP11; @XuChe_iccv13], but this may be sensitive to noise since it requires derivative computations. We employ simpler volumetric features, similar to [@song2014sliding] except that we use a spherical coordinate frame that does not slide along a scanning window (because we want to measure depth in an egocentric coordinate frame). [**Non-parametric recognition:**]{} Our work is inspired by non-parametric techniques that make use of synthetic training data [@RomeroKEK13; @shakhnarovich2003fast; @hamer_tracking_2009; @transduc:forest; @tzionas_comparison_2013]. [@shakhnarovich2003fast] make use of pose-sensitive hashing techniques for efficient matching of synthetic RGB images rendered with Poser. We generate synthetic depth images, mimicking capture conditions of our actual camera. ----- ----- (a) (b) (c) (d) ----- ----- Training data ============= The dataset employed in this paper is made of realistic synthetic 3D exemplars which are generated simulating real capture conditions: synthetic 3D hand-object data, rendered with a 3D computer graphics program, are combined with real 3D background scenario and rendered using the test camera projection matrix. [**Poser models.**]{} Our synthetic database includes more than 200 different grasping hand postures with and without objects. 
We also varied the objects being interacted with, as well as the clothing of the character, i.e., with and without sleeves. Overall we used 49 objects, including kitchen utensils, personal bathroom items, office/classroom objects, fruits, etc. Additionally, we used 6 models of empty hands: wave, fist, thumbs-up, point, etc. Note that some objects can be handled with different postures. For instance, when we open a bottle we do not use the same posture (to grasp its cap and neck) as we do to idly grasp its body. We added several such variant models to our database, i.e., different hand postures manipulating the same object.

[**Kinematic model.**]{} Let $\theta$ be a vector of arm joint angles, and let $\phi$ be a vector of grasp-specific hand joint angles, obtained from the above set of Poser models. We use a standard forward kinematic chain to convert the location of finger joints ${\bf u}$ (in a local coordinate system) to image coordinates: $$\begin{aligned} & {\bf p} = C \prod_i T(\theta_i) \prod_j T(\phi_j) {\bf u}, \quad \text{where} \quad T,C \in \mathbb{R}^{4 \times 4}, \nonumber \\ &{\bf u} = \begin{bmatrix} u_x & u_y & u_z & 1 \end{bmatrix}^T, \quad (x,y) = (f \frac{p_x}{p_z}, f\frac{p_y}{p_z}), \label{eq:kinematics}\end{aligned}$$ where $T$ specifies rigid-body transformations (rotation and translation) along the kinematic chain and $C$ specifies the extrinsic camera parameters. Here [**p**]{} represents the 3D position of point [**u**]{} in the camera coordinate system. To generate the corresponding image point, we assume camera intrinsics are given by identity scale factors and a focal length $f$ (though it is straightforward to use more complex intrinsic parameterizations). We found it important to use the $f$ corresponding to our physical camera, as it is crucial to correctly model perspective effects for our near-field workspaces.

[**Viewpoint-dependent translations:**]{} We wish to enrich the core set of posed hands with additional translations and viewpoints. The parametrization of visible arm+hand configurations is non-trivial. To do so, we take a simple [*rejection sampling*]{} approach. We fix $\phi$ parameters to respect the hand grasps from Poser, and add small Gaussian perturbations to arm joint angles $$\theta_i' = \theta_i + \epsilon \quad \text{where} \quad \epsilon \sim N(0,\sigma^2).$$ Importantly, this generates hand joints [**p**]{} at different translations and viewpoints, correctly modeling the dependencies between both. For each perturbed pose, we render hand joints using \[eq:kinematics\] and keep poses where 90% of them are visible (i.e., their $(u,v)$ coordinates lie within the image boundaries).

[**Depth maps.**]{} Associated with each rendered set of keypoints, we would also like a depth map. To construct a depth map, we represent each rigid limb with a dense cloud of 3D vertices $\{{\bf u}_i\}$. We produce this cloud by (over) sampling the 3D meshes defining each rigid-body shape. We render this dense cloud using the forward kinematics \[eq:kinematics\], producing a set of points $\{{\bf p}_{i}\}=\{(p_{x,i},p_{y,i},p_{z,i})\}$. We define a 2D depth map $z[u,v]$ by ray-tracing. Specifically, we cast a ray from the origin in the direction of each image (or depth sensor) pixel location $(u,v)$ and find the closest point: $$\begin{aligned} z[u,v] = \min_{k \in \text{Ray}(u,v)}||{\bf p}_k|| \label{eq:depth}\end{aligned}$$ where $\text{Ray}(u,v)$ denotes the set of points on (or near) the ray passing through pixel $(u,v)$.
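The rendering step is described only in prose, so the following Python sketch shows one possible implementation of it: a chain of $4\times4$ rigid transforms maps the dense limb cloud into the camera frame, a pinhole projection with focal length $f$ (principal point assumed at the image centre) gives pixel coordinates, and a per-pixel minimum over ray lengths yields the depth map of \[eq:depth\]. All names are illustrative; this is not the authors' code.

```python
import numpy as np

def rigid_transform(angle, axis, offset):
    """Homogeneous 4x4 transform T: rotation by `angle` about `axis`, then translation `offset`."""
    c, s = np.cos(angle), np.sin(angle)
    R = {'x': [[1, 0, 0], [0, c, -s], [0, s, c]],
         'y': [[c, 0, s], [0, 1, 0], [-s, 0, c]],
         'z': [[c, -s, 0], [s, c, 0], [0, 0, 1]]}[axis]
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = offset
    return T

def render_depth(points_local, chain, f, width, height):
    """Ray-cast a depth map from a dense local point cloud pushed through a kinematic chain.

    points_local: (N, 3) vertices of the rigid limbs in their local frame.
    chain: list of 4x4 matrices, camera extrinsics C first, then the T(theta_i), T(phi_j).
    """
    M = np.eye(4)
    for T in chain:                               # p = C * prod_i T(theta_i) * prod_j T(phi_j) * u
        M = M @ T
    pts_h = np.c_[points_local, np.ones(len(points_local))]
    p = (M @ pts_h.T).T[:, :3]
    p = p[p[:, 2] > 0]                            # keep points in front of the camera

    # Pinhole projection with focal length f; principal point assumed at the image centre.
    u = np.round(f * p[:, 0] / p[:, 2] + width / 2).astype(int)
    v = np.round(f * p[:, 1] / p[:, 2] + height / 2).astype(int)
    r = np.linalg.norm(p, axis=1)                 # distance ||p_k|| along the line of sight

    depth = np.full((height, width), np.inf)
    keep = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, ri in zip(u[keep], v[keep], r[keep]):
        depth[vi, ui] = min(depth[vi, ui], ri)    # closest point wins, as in z[u,v] above
    return depth
```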
We found the above approach simpler to implement than hidden surface removal, so long as we projected a sufficiently dense cloud of 3D points.

[**Multiple hands:**]{} Some object interactions require multiple hands interacting with a single object. Additionally, many views contain the second hand in the “background”. For example, two hands are visible in roughly 25% of the frames in our benchmark videos. We would like our training dataset to have similar statistics. Our existing Poser library contains mostly single-hand grasps. To generate additional multi-arm egocentric views, we randomly pair 25% of the arm poses with a mirrored copy of another randomly-chosen pose. We then add noise to the arm joint angles, as described above. Such a procedure may generate unnatural or self-intersecting poses. To remove such cases, we separately generate depth maps for the left and right arms, and only keep pairings that produce compatible depth maps: $$\begin{aligned} | z_{left}[u,v] - z_{right}[u,v] | > \delta \quad \forall u,v\end{aligned}$$ We find this simple procedure produces surprisingly realistic multi-arm configurations (Fig. \[fig:trainingdata\]). Finally, we add background clutter from depth maps of real egocentric scenes (not from our benchmark data). We used the above approach to produce a dataset of 500,000 multi-hand(+arm+objects) configurations and associated depth maps.

![image](Frustum3D2D_boxes.png){width="\textwidth"}

Formulation
===========

Perspective-aware depth features
--------------------------------

Objects close to the lens appear large relative to more distant objects and cover greater areas of the depth map. Much previous work has proposed to remove the effect of the perspective projection by computing depth features in real-world orthographic space, e.g. by quantizing 3D point clouds, for instance to train translation-invariant detectors. We posit that perspective distortion is useful in egocentric settings and should be exploited: objects of interest (hands, arms, and manipulated things) tend to lie near the body and exhibit perspective effects. To encode such phenomena, we construct a spherical bin histogram by gridding up the egocentric workspace volume by varying azimuth and elevation angles (see Fig. \[fig:Frustum\]). We demonstrate that this feature performs better than orthographic counterparts, and is also faster to compute.

![image](pipeline_depthmap_with_pose2.png){width="\textwidth"}

[**Binarized volumetric features:**]{} Much past work processes depth maps as 2D rasterized sensor data. Though convenient for applying efficient image processing routines such as gradient computations (e.g., [@tang2013histogram]), rasterization may not fully capture the 3D nature of the data. Alternatively, one can convert depth maps to a full 3D point cloud [@LaiBF14], but the result is orderless, making operations such as correspondence-estimation difficult. We propose encoding depth data in a 3D volumetric representation, similar to [@song2014sliding]. To do so, we can back-project the depth map from \[eq:depth\] into a cloud of visible 3D points $\{{\bf p}_{k}\}$, visualized in Fig. \[fig:feature\]-(b). They are a subset of the original cloud of 3D points $\{{\bf p}_{i}\}$ in Fig. \[fig:feature\]-(a).
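A matching sketch of this back-projection step (again illustrative, inverting the pinhole model above and treating the stored depth as the range $||{\bf p}_k||$ along each pixel's viewing ray):

```python
import numpy as np

def backproject(depth, f):
    """Turn a 2.5D depth map (range along each viewing ray) into the visible 3D points p_k."""
    height, width = depth.shape
    v, u = np.mgrid[0:height, 0:width]
    rays = np.stack([(u - width / 2) / f, (v - height / 2) / f, np.ones((height, width))], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)     # unit viewing directions
    visible = np.isfinite(depth) & (depth > 0)               # pixels with a measurement
    return rays[visible] * depth[visible][:, None]           # (K, 3) visible points
```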
We now bin those visible points that fall within the egocentric workspace in front of the camera (observable volume within $z_{max} = 70 \text{cm}$) into a binary voxel grid of $N_u \times N_v \times N_w$ voxels: $$\begin{aligned} b[u,v,w] = \left\{ \begin{array}{c l l} 1 & \text{if} & \exists k \text{ s.t.} \quad {\bf p}_{k} \in F(u,v,w)\\ 0 & & \text{otherwise} \end{array}\right. \end{aligned}$$ where $F(u,v,w)$ denotes the set of points within a voxel centered at coordinate $(u,v,w)$. [**Spherical voxels:**]{} Past work tends to use rectilinear voxels [@song2014sliding; @LaiBF14]. Instead, we use a spherical binning structure, centering the sphere at the camera origin ( Fig. \[fig:Vol\]). At first glance, this might seem strange because voxels now vary in size – those further away from the camera are larger. The main advantage of a “perspective-aware” binning scheme is that all voxels now project to the same image area in pixels (Fig. \[fig:Vol\]-(c) . This in turn makes feature computation extremely efficient, as we will show. [**Efficient quantization:**]{} Let us choose spherical bins $F(u,v,w)$ such that they project to a single pixel $(u,v)$ in the depth map. This allows one to compute the binary voxel grid $b[u,v,w]$ by simply “reading off” the depth value for each $z(u,v)$ coordinates, quantizing it to $z'$, and assigning 1 to the corresponding voxel: $$\begin{aligned} b[u,v,w] = \left\{ \begin{array}{l l l} 1 & \text{if} &\quad w = z'[u,v]\\ 0 & & \text{otherwise} \end{array}\right. \label{eq:voxel}\end{aligned}$$ This results in a sparse volumetric voxel features visualized in Fig. \[fig:feature\]-(c). Once a depth measurement is observed at position $b[u',v',w'] = 1$, all voxels behind it are occluded for $w\geq w'$. This arises from the fact that single camera depth measurements are, in fact, 2.5D. By convention, we define occluded voxels to be “1”. In practice, we consider a coarse discretization of the volume to make the problem more tractable. The depth map $z[x,y]$ is resized to $N_u \times N_v$ (smaller than depth map size) and quantized in $z$-direction. To minimize the effect of noise when counting the points which fall in the different voxels, we quantize the depth measurements by applying a median filter on the pixel values within each image region: $$\begin{aligned} \begin{array}{c} \forall u,v\in [1, N_u]\times[1, N_v], \\ z'[u,v]=\frac{N_w}{z_{max}}\text{median}(z[x,y]:(x,y)\in P(u,v)), \end{array}\end{aligned}$$ where $P(u,v)$ is the set of pixel coordinates in the original depth map corresponding to pixel coordinate $(u,v)$ coordinates in the resized depth map. Global pose classification {#sec:globalpose} -------------------------- We quantize the set of poses from our synthetic database into $K$ coarse classes for each limb, and train a $K$-way pose-classifier for pose-estimation. The classifier is linear and makes use of our sparse volumetric features, making it quite simple and efficient to implement. [**Pose space quantization:** ]{} For each training exemplar, we generate the set of 3D keypoints: 17 joints (elbow + wrist + 15 finger joints) and the 5 finger tips. Since we want to recognize coarse limb (arm+hand) configurations, we cluster the resulting training set by applying K-means to the elbow+wrist+knuckle 3D joints. We usually represent each of the K resulting clusters using the average 3D/2D keypoint locations of both arm+hand (See examples in Fig. \[fig:PoseClusters\]). Note that K can be chosen as a compromise between accuracy and speed. 
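A minimal sketch of this pose-space quantization, assuming scikit-learn's KMeans on the flattened elbow+wrist+knuckle coordinates (the paper does not specify the exact implementation, so the interface below is illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_pose_space(joints_3d, K=750):
    """joints_3d: (N, J, 3) elbow+wrist+knuckle positions for the N training exemplars.

    Returns the discrete pose class of each exemplar and the average keypoints of
    each cluster, which serve as the per-class pose estimate at test time.
    """
    N = joints_3d.shape[0]
    X = joints_3d.reshape(N, -1)                     # flatten each pose to a 3J-dim vector
    km = KMeans(n_clusters=K, n_init=4, random_state=0).fit(X)
    class_poses = km.cluster_centers_.reshape(K, -1, 3)
    return km.labels_, class_poses
```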
![image](PoseClusters.png){width="\textwidth"}

[**Global classification:**]{} We use a linear SVM for a multi-class classification of upper-limb poses. However, instead of classifying local scanning-windows, we classify global depth maps quantized into our binarized depth feature $b[u,v,w]$ from \[eq:voxel\]. Global depth maps allow the classifier to exploit contextual interactions between multiple hands, arms and objects. In particular, we find that modeling arms is especially helpful for detecting hands. For each class $k \in \{1, 2, \ldots, K \}$, we train a one-vs-all SVM classifier, obtaining a weight vector which can be re-arranged into an $N_u \times N_v \times N_w$ tensor $\beta_k[u,v,w]$. The score for class $k$ is then obtained by a simple dot product of this weight and our binarized feature $b[u,v,w]$: $$\begin{aligned} \text{score}[k]= \sum_{u} \sum_{v} \sum_{w} \beta_k[u,v,w] \cdot b[u,v,w]. \label{eq:svm}\end{aligned}$$ In Fig. \[fig:PoseClusters\], we show the weight tensor $\beta_k[u,v,w]$ for 3 different pose clusters.

Joint feature extraction and classification
-------------------------------------------

To increase run-time efficiency, we exploit the sparsity of our binarized volumetric feature and jointly implement feature extraction and SVM scoring. Since our binarized depth features do not require any normalization and the classification score is a simple dot product, we can readily extract the feature and update the score on the fly. Because all voxels behind the first measurement are backfilled, the SVM score for each class $k$ from \[eq:svm\] can be written as: $$\begin{aligned} \text{score}[k]= \sum_{u} \sum_{v} \beta_k'[u,v,z'[u,v]], \label{eq:eff}\end{aligned}$$ where $z'[u,v]$ is the quantized depth map and tensor $\beta_k'[u,v,w]$ is the cumulative sum of the weight tensor along dimension $w$: $$\begin{aligned} \beta'_k[u,v,w] = \sum_{d \geq w} \beta_k[u,v,d]\end{aligned}$$ Note that the above cumulative-sum tensors can be precomputed. This makes test-time classification quite efficient. Feature extraction and SVM classification can be computed jointly following the algorithm presented in Alg. \[alg:classif\]. We invite the reader to view our code in supplementary material.

Experiments
===========

For evaluation, we use the recently released UCI Egocentric dataset [@RogezKSMR2014] and score hand pose detection as a proxy for limb pose recognition (following the benchmark criteria used in [@RogezKSMR2014]). The dataset consists of 4 video sequences (around 1000 frames each) of everyday egocentric scenes with hand annotations every 10 frames. Our unoptimized MATLAB implementation runs at 15 frames per second.
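As a concrete illustration of the joint feature extraction and scoring of \[eq:eff\], here is a short Python sketch: resize the depth map with a per-block median, quantize it into $N_w$ bins, gather the precomputed cumulative-sum weights $\beta'_k[u,v,z'[u,v]]$ and sum. It is a re-implementation sketch, not the authors' MATLAB code, and the array layout is an assumption.

```python
import numpy as np

def quantize_depth(z, Nu=32, Nv=24, Nw=35, z_max=0.70):
    """Resize the depth map to Nv x Nu with a per-block median and quantize it into Nw bins."""
    H, W = z.shape
    zq = np.empty((Nv, Nu), dtype=int)
    for v in range(Nv):
        for u in range(Nu):
            block = z[v * H // Nv:(v + 1) * H // Nv, u * W // Nu:(u + 1) * W // Nu]
            m = np.median(np.minimum(block, z_max))          # clamp missing/far readings to z_max
            zq[v, u] = min(Nw - 1, int(Nw / z_max * m))
    return zq

def precompute_cumsum(betas):
    """betas: (K, Nv, Nu, Nw) SVM weight tensors beta_k. Returns beta'_k, summed over d >= w."""
    return np.flip(np.cumsum(np.flip(betas, axis=-1), axis=-1), axis=-1)

def classify(z, betas_cum):
    """Joint feature extraction and scoring: because occluded voxels are backfilled with 1,
    each per-class score reduces to a single gather along the w axis followed by a sum."""
    zq = quantize_depth(z)
    Nv, Nu = zq.shape
    scores = betas_cum[:, np.arange(Nv)[:, None], np.arange(Nu), zq].sum(axis=(1, 2))
    return int(np.argmax(scores)), scores
```

With $\beta'_k$ precomputed offline, the online work is roughly a quantization, a gather, a sum and an argmax, which is presumably the "four lines of code" referred to in the title.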
Feature comparison    Feature Resolution

![[**Feature evaluation**]{}. We compare different types of features: volumetric features, HOG on RGB-D and HOG on depth, for $K=750$ classes (a). For our perspective binary features and the orthographic binary features, we consider regular grids of dimensions $32\times24\times35$. For HOG on depth and HOG on RGB-D, we respectively use $30\times40$ and $16\times24$ cells with 16 orientation bins. Our perspective binary features clearly outperform the other types of features. We also show results varying the resolution of our proposed feature in (b), again for $K=750$. We can observe that $32\times24\times35$ is a good trade-off between feature dimensionality and performance, which validates our choice. Doubling the resolution in $u,v$ marginally improves accuracy. \[fig:EvalFeatures\]](featurecomp.png "fig:"){width="0.5\columnwidth"} ![](multiSVMs_detection_varyingResolution_BinFeatures_750classes.png "fig:"){width="0.5\columnwidth"}

(a) (b)

[**Feature evaluation:**]{} We first compare hand detection accuracy for different K-way SVM classifiers trained on HOG on depth (as in [@RogezKSMR2014]) and HOG on RGB-D, thus exploiting the stereo-views provided by RGB and depth sensors. To show the benefit of preserving the perspective when encoding depth features, we also experimented with an orthographic version of our binarized volumetric feature (similar to past work [@song2014sliding; @LaiBF14]). In that case, we quantize those points that fall within a $64\times48\times70$ cm$^3$ egocentric workspace in front of the camera into a binary voxel grid: $$\begin{aligned} b_{\perp}[u,v,w] = \left\{ \begin{array}{c l l} 1 & \text{if} & \exists i \text{ s.t.} \quad (x_i,y_i,z_i) \in N(u,v,w)\\ 0 & & \text{otherwise} \end{array}\right. \end{aligned}$$ where $N(u,v,w)$ specifies a $2 \times 2 \times 2$ cm cube centered at voxel $(u,v,w)$. Note that this feature is considerably more involved to calculate, since it requires an explicit backprojection and explicit geometric computations for binning. It is also not clear how to identify occluded voxels, because they are not arranged along line-of-sight rays. The results obtained with $K=750$ pose classes are reported in Fig. \[fig:EvalFeatures\]-(a). Our perspective binary features clearly outperform the other types of features. We reach $72\%$ detection accuracy, while the state-of-the-art algorithm of [@RogezKSMR2014] reports $60\%$ accuracy. Our volumetric feature has empirically strong performance in egocentric settings. One reason is that it is robust to small intra-cluster misalignment and deformations because all voxels behind the first measurement are backfilled.
Second, it is sensitive to variations in apparent size induced by perspective effects (because voxels have consistent perspective projections). In Fig. \[fig:EvalFeatures\]-(b), we also show results varying the resolution of the grid. Our choice of $32\times24\times35$ is a good trade-off between feature dimensionality and performance. ![[**Clustering and size of training set.**]{} We compare hand pose detection against state of the art method [@RogezKSMR2014] in (a). The results are given varying the number of discretized pose classes for a total of 120,000 training exemplars. We see that we reach a local maxima for $K=750$. In (b), we show how increasing the number of positive training exemplars used to train each 1-vs-all SVM classifier slowly increases accuracy. (a) Detection varying $K$; (b) Detection varying size of training. \[fig:Eval\]](multiSVMs_detection_varyingNumClasses_BinFeature_vs_HOG_RGBD.png "fig:"){width="0.5\columnwidth"} ![](Accuracy_varying_Nexemp.png "fig:"){width="0.5\columnwidth"}
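As a rough illustration of the orthographic binarization described above (a minimal sketch, not the implementation used in our experiments), the following snippet voxelizes a point cloud into a $32\times24\times35$ binary grid; the $64\times48\times70$ $cm^3$ workspace and $2$ cm cells come from the text, while the coordinate conventions, array layout and names are assumptions made only for this sketch.

```python
import numpy as np

def orthographic_binary_grid(points_cm, cell=2.0,
                             bounds=((-32, 32), (-24, 24), (0, 70))):
    """Quantize 3D points (in cm, camera coordinates) lying in a 64x48x70 cm^3
    workspace into a binary occupancy grid with 2 cm cells: a voxel is 1 iff
    at least one point falls inside it.  (Illustrative sketch only.)"""
    (x0, x1), (y0, y1), (z0, z1) = bounds
    shape = (int((x1 - x0) / cell), int((y1 - y0) / cell), int((z1 - z0) / cell))
    grid = np.zeros(shape, dtype=np.uint8)

    # keep only the points that fall inside the workspace
    p = points_cm
    inside = ((p[:, 0] >= x0) & (p[:, 0] < x1) &
              (p[:, 1] >= y0) & (p[:, 1] < y1) &
              (p[:, 2] >= z0) & (p[:, 2] < z1))
    p = p[inside]

    # map each surviving point to its voxel index and mark that voxel occupied
    idx = ((p - np.array([x0, y0, z0])) / cell).astype(int)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# toy usage with a synthetic point cloud of 10,000 points
pts = np.random.uniform([-32, -24, 0], [32, 24, 70], size=(10000, 3))
b = orthographic_binary_grid(pts)
print(b.shape, int(b.sum()))   # (32, 24, 35) and the number of occupied voxels
```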
[**Training data and clustering:**]{} We evaluated the performance of our algorithm when varying the discretization of a set of 120,000 training images, i.e. varying the number of pose classes. We can observe in Fig. \[fig:Eval\]-(a) that we reach a local maximum for $K=750$. This suggests that for $K\geq 750$ there is not enough training data to train robust SVM classifiers and our model over-fits. We trained several $K$-way classifiers varying the number of training instances for each class. Increasing the number of positive training exemplars used to train each 1-vs-all SVM classifier slowly increases accuracy, as shown in Fig. \[fig:Eval\]-(b). These results suggest that a massive training data set and a finer quantization of the pose space ($K\geq 750$) should outperform our existing model. [**Qualitative results:**]{} We illustrate successes in difficult scenarios in Fig. \[fig:easycases\] and analyze common failure modes in Fig. \[fig:hardcases\]. Please see the figures for additional discussion. We also invite the reader to view our supplementary videos for additional results. ![image](results-small.pdf){width="\textwidth"} ![image](hardcases){width="\textwidth"} Conclusions =========== We have proposed a new approach to the problem of egocentric 3D hand pose recognition during interactions with objects. Instead of classifying local depth image regions through a typical translation-invariant scanning window, we have shown that classifying the global arm+hand+object configurations within the “whole” egocentric workspace in front of the camera allows for fast and accurate results. We train our model by synthesizing workspace exemplars consisting of hands, arms, objects and backgrounds. Our model explicitly reasons about perspective occlusions while being both conceptually and practically simple to implement (4 lines of code). We produce state-of-the-art real-time results for egocentric pose estimation.
--- abstract: 'We prove diffusive lower bounds on the rate of escape of the random walk on infinite transitive graphs. Similar estimates hold for finite graphs, up to the relaxation time of the walk. Our approach uses non-constant equivariant harmonic mappings taking values in a Hilbert space. For the special case of discrete, amenable groups, we present a more explicit proof of the Mok-Korevaar-Schoen theorem on existence of such harmonic maps by constructing them from the heat flow on a F[ø]{}lner set.' author: - 'James R. Lee[^1]' - 'Yuval Peres[^2]' bibliography: - 'drift.bib' title: | [**Harmonic maps on amenable groups and a\ diffusive lower bound for random walks**]{} --- Introduction {#sec:intro} ============ Let $G$ be a $d$-regular, transitive graph (i.e., with transitive automorphism group), let $\{X_t\}$ denote the symmetric simple random walk on $G$ with $X_0$ arbitrary, and let ${\mathsf{dist}}$ be the path metric on $G$. In the case when $G$ is the Cayley graph of a finitely-generated, amenable group, Èrshler [@Erschler08] showed that $\mathbb E\left[{\mathsf{dist}}(X_0,X_t)^2\right] {\geqslant}C t/d$ for all times $t {\geqslant}1$, where $C {\geqslant}1$ is some absolute constant. Our first theorem concerns a more precise analysis of the random walk behavior, as well as an extension to general transitive, amenable graphs. Recall that a graph $G$ is [**amenable**]{} if there exists a sequence of finite subsets $\{S_j\}$ of the vertices such that $|S_j \triangle N(S_j)|/|S_j| \to 0$, where $N(S_j)$ denotes the neighborhood of $S_j$ in $G$. \[thm:maininfinite\] Suppose $G$ is an infinite, connected, and amenable transitive $d$-regular graph. Then the simple random walk on $G$ satisfies the estimate $${{\mathbb E}}\left[{\mathsf{dist}}(X_0,X_t)^2\right] {\geqslant}t/d.$$ Moreover, for some universal constant $C {\geqslant}1$ and $t {\geqslant}4 d$, we have the estimates $${{\mathbb E}}\left[{\mathsf{dist}}(X_0, X_t)\right] {\geqslant}C \sqrt{t/d},$$ and for every ${\varepsilon}{\geqslant}1/\sqrt{t}$, $$\frac{1}{t} \sum_{s=0}^t \pr\left[{\mathsf{dist}}(X_0, X_s) {\leqslant}{\varepsilon}\sqrt{t/d}\right] {\leqslant}C {\varepsilon}\,.$$ In Section \[sec:applications\], we prove a version of the preceding theorem for the Cayley graph of any group without property (T). We also prove a version for finite graphs which holds up to the relaxation time of the random walk. \[thm:mainfinite\] Suppose $G$ is a finite, connected, transitive $d$-regular graph and $\lambda$ denotes the second-largest eigenvalue of the transition matrix $P$ of the random walk on $G$. Then for every $t {\leqslant}(1-\lambda)^{-1}$, $${{\mathbb E}}\left[{\mathsf{dist}}(X_0,X_t)^2\right] {\geqslant}t/(2d).$$ Moreover, for some universal constant $C {\geqslant}1$ and $(1-\lambda)^{-1} {\geqslant}t {\geqslant}4 d$, we have the estimates $${{\mathbb E}}\left[{\mathsf{dist}}(X_0, X_t)\right] {\geqslant}C \sqrt{t/d},$$ and for every ${\varepsilon}{\geqslant}1/\sqrt{t}$, $$\frac{1}{t} \sum_{s=0}^t \pr\left[{\mathsf{dist}}(X_0, X_s) {\leqslant}{\varepsilon}\sqrt{t/d}\right] {\leqslant}C {\varepsilon}\,.$$ We remark that, in both cases, the dependence on $d$ is necessary (see Remark \[rem:weighted\]). The proof of Theorem \[thm:maininfinite\] is based on the existence of non-constant, equivariant harmonic maps on amenable groups. For the simplicity of presentation, we first restrict ourselves to the setting of groups.
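As a quick numerical illustration of the diffusive lower bound of Theorem \[thm:maininfinite\] (a minimal sketch, not part of the arguments below), one can simulate the simple random walk on the Cayley graph of $\mathbb Z^2$ with the standard generators, where $d=4$ and the word metric is the $\ell^1$ distance; the trial counts and time horizons are arbitrary choices made only for this illustration.

```python
import random

def srw_Z2_dist(t):
    """Graph (l1) distance from the origin after t steps of the simple
    random walk on the Cayley graph of Z^2 with standard generators."""
    x = y = 0
    for _ in range(t):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x += dx
        y += dy
    return abs(x) + abs(y)

d, trials = 4, 5000
for t in [16, 64, 256, 1024]:
    mean_sq = sum(srw_Z2_dist(t) ** 2 for _ in range(trials)) / trials
    # empirical E[dist(X_0, X_t)^2] versus the predicted lower bound t/d
    print(t, round(mean_sq, 1), t / d)
```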
Let $\Gamma$ be a group with finite generating set $S \subseteq \Gamma$, and let $G$ be the corresponding Cayley graph. Suppose that $\mathcal H$ is some Hilbert space on which $\Gamma$ acts by isometries, and we have a non-constant equivariant harmonic map $\Psi : \Gamma \to \mathcal H$, i.e. such that $g \Psi(h) = \Psi(gh)$ and $\Psi(h) = |S|^{-1} \sum_{s \in S} \Psi(hs)$ hold for every $h \in \Gamma$. Èrshler [@Erschler08] observed that this can be used to lower bound ${{\mathbb E}}\left[{\mathsf{dist}}(X_0,X_t)^2\right]$, as follows. We may normalize $\Psi$ so that, if $e \in \Gamma$ is the identity, $$\label{eq:normalize} \frac{1}{|S|} \sum_{s \in S} \|\Psi(e)-\Psi(s)\|^2=1.$$ By equivariance, this implies that $\Psi$ is $\sqrt{|S|}$-Lipschitz, hence $${{\mathbb E}}\left[{\mathsf{dist}}(X_0,X_t)^2\right] {\geqslant}\frac{1}{|S|} {{\mathbb E}}\,\|\Psi(X_0)-\Psi(X_t)\|^2.$$ But since $\Psi$ is harmonic, $\Psi(X_t)$ is a martingale, thus $${{\mathbb E}}\, \|\Psi(X_0)-\Psi(X_t)\|^2 = \sum_{j=0}^{t-1} {{\mathbb E}}\, \|\Psi(X_j)-\Psi(X_{j+1})\|^2 = t,$$ where in the final line we have used equivariance and (\[eq:normalize\]). By results of Mok [@Mok95] and Korevaar-Schoen [@KS97], if $\Gamma$ is amenable, then it always admits such an equivariant harmonic map. On the other hand, if $\Gamma$ is not amenable, then $G$ has spectral radius $\rho < 1$ [@Kesten59], hence ${{\mathbb E}}[{\mathsf{dist}}(X_0,X_t)^2] {\geqslant}C t^2$, for some constant $C=C(\rho) > 0$ (see, e.g. [@Woess00 Prop. 8.2]). Thus the preceding discussion shows that ${{\mathbb E}}[{\mathsf{dist}}(X_0,X_t)^2]$ grows at least linearly in $t$, for any infinite group $\Gamma$. In Section \[sec:escape\], we exhibit a general method for proving escape lower bounds. For any function $\psi \in \ell^2(\Gamma)$, we have $${{\mathbb E}}\left[{\mathsf{dist}}(X_0,X_t)^2\right] {\geqslant}\frac{1}{d} \left(t - t^2 \frac{\|(I-P) \psi\|^2}{2\langle \psi, (I-P) \psi\rangle}\right),$$ where $P$ is the transition kernel of the random walk on $G$. For finite groups, we choose $\psi$ to be the eigenfunction corresponding to the second-largest eigenvalue of $P$. For infinite amenable groups, one can obtain $\psi$ directly from spectral projection. For a more explicit approach in the infinite, amenable case, we show that one can obtain the ${{\mathbb E}}[{\mathsf{dist}}(X_0,X_t)^2] {\geqslant}t/|S|$ bound by taking a sequence of functions $\{\psi_n\}$ to be a truncated heat flow from some sets $A_n \subseteq \Gamma$, i.e. $\psi_n = \sum_{i=0}^{n} P^i {\mathbf{1}}_{A_n}$, where $\{A_n\}$ forms an appropriate F[ø]{}lner sequence in $G$. These lower bounds, and indeed all the results in our paper, are proved for amenable, transitive graphs (and even quasi-transitive graphs), and more general forms of stochastic processes. The existence of non-constant equivariant harmonic maps on groups without property (T) is proved in [@Mok95; @KS97] (see also [@Kleiner07 App. A] for an exposition in the case of discrete groups, based on [@FM05]). In Section \[sec:harmonic\], inspired by the preceding escape lower bounds, we give an explicit construction of these harmonic maps, simple enough to describe here. We focus now on the amenable case; in Theorem \[thm:harmonicNPT\], we show that this approach generalizes to all discrete groups without property (T). Define $\Psi_n : \Gamma \to \ell^2(\Gamma)$ by $$\Psi_n(x) : g \mapsto \frac{\psi_n(gx)}{\sqrt{2\langle \psi_n, (I-P)\psi_n\rangle }}\,,$$ where $\psi_n$ is as before.
We argue that, after applying an appropriate affine isometry to each $\Psi_n$, there is a subsequence of $\{\Psi_n\}$ which converges pointwise to a non-constant, equivariant harmonic map. Our construction works for any infinite, transitive, amenable graph (see Theorem \[thm:harmonic\]). Let $G=(V,E)$ be any infinite, connected, amenable, transitive graph. Then there exists a Hilbert space $\mathcal H$ and an $\mathcal H$-valued, non-constant equivariant harmonic mapping on $G$. In Section \[sec:withoutT\], we show that our approach also proves the preceding theorem for the Cayley graph of any group without property (T). It is known [@Mok95; @KS97] that a group admits such an equivariant harmonic mapping if and only if it does not have property (T) (see also [@Kleiner07 Lem. A.6]). One can use such mappings to obtain more detailed information on the random walk. Virág [@Virag08] showed that, in the setting of Cayley graphs of amenable groups, one has ${{\mathbb E}}\left[{\mathsf{dist}}(X_0,X_t)\right] {\geqslant}C\sqrt{t}/|S|^{3/2}$ for some $C > 0$. This can be proved by analyzing the process $\Psi(X_t)$ via the BDG martingale inequalities (see, e.g. [@KW92 Thm. 5.6.1]).[^3] In Section \[sec:estimates\], we show how a stronger bound can be derived directly from hitting time estimates, which can themselves be easily derived for martingales, then transferred to the group setting via harmonic maps. More generally, we study some finer properties of the escape behavior of the random walk. Related work ------------ We recall some previous results on the rate of escape of random walks on groups. A large amount of work has been devoted to classifying situations where the rate of escape $\mathbb E [{\mathsf{dist}}(X_0,X_t)]$ is linear; we refer to the survey of Vershik [@Vershik00] and to the forthcoming book [@LP]. Èrshler has given examples where the rate can be asymptotic to $t^{1-2^{-k}}$ for any $k \in \mathbb N$ [@Ershler01]. Following seminal work of Varopoulos [@Var], Hebisch and Saloff-Coste [@HSC] obtained precise heat kernel estimates for symmetric bounded-range random walks on groups of polynomial growth. In particular, Theorem 5.1 in [@HSC] implies our Theorem \[thm:maininfinite\] for groups of polynomial growth. However, for groups of super-polynomial growth, it seems that existing heat-kernel bounds (see, e.g., Theorem 4.1 in [@HSC]) are not powerful enough to imply Theorem \[thm:maininfinite\]. Diaconis and Saloff-Coste show that on finite groups satisfying a certain “moderate growth” condition, the random walk mixes in $O(D^2)$ steps, where $D$ is the diameter of the group in the word metric. A sequence of works [@ANP; @NP1; @NP2] has related the rate of escape of random walks to questions in geometric group theory, notably to estimates of Hilbert compression exponents of groups. Our argument for finite groups was motivated by the work of the first author with Y. Makarychev [@LM] on effective, finitary versions of Gromov’s polynomial growth theorem. Another substantial work in this direction is the recent preprint of Shalom and Tao [@ST], written independently of the present paper. Constructions of nearly harmonic functions play a key role there as well. Escape rate of random walks {#sec:escape} =========================== In the present section, we will consider a finite or infinite symmetric, stochastic matrix $\{P(x,y)\}_{x,y \in V}$ for some index set $V$. We write ${\mathsf{Aut}}(P)$ for the set of all bijections $\sigma : V \to V$ whose diagonal action preserves $P$, i.e.
$P(x,y)=P(\sigma x, \sigma y)$ for all $x,y \in V$. For the most part, we will be concerned with matrices $P$ for which ${\mathsf{Aut}}(P)$ acts transitively on $V$. A primary example is given by taking $P$ to be the transition matrix of the simple random walk on a finite or infinite vertex-transitive graph $G$. \[thm:potential\] Let $V$ be an at most countable index set, and consider any symmetric, stochastic matrix $\{P(x,y)\}_{x,y \in V}$. Suppose that $\Gamma {\leqslant}{\mathsf{Aut}}(P)$ is a closed, unimodular subgroup which acts transitively on $V$, and let $G=(V,E)$ be any graph on which $\Gamma$ acts by automorphisms. If ${\mathsf{dist}}$ is the path metric on $G$, and $\psi \in \ell^2(V)$, then $$\label{eq:infdrift} {{\mathbb E}}\left[{\mathsf{dist}}(X_0, X_t)^2\right] {\geqslant}p_* \frac{\langle \psi, (I-P^t) \psi\rangle}{\langle \psi, (I-P) \psi\rangle} {\geqslant}p_* \left(t - t^2 \frac{\|(I-P) \psi\|^2}{2\langle \psi, (I-P) \psi\rangle}\right),$$ where $\{X_t\}$ denotes the random walk with transition kernel $P$ started at any $X_0=x_0 \in V$, and $$p_* = \min \{ P(x,y) : \{x,y\} \in E \}.$$ Since $\Gamma$ is unimodular, we can choose the Haar measure $\mu$ on $\Gamma$ to be normalized so that $\mu(\Gamma_x)=1$ for every $x \in V$, where $\Gamma_x$ is the stabilizer subgroup of $x$. (Note that the stabilizer $\Gamma_x$ is compact since $\Gamma$ acts by automorphisms on $G$, which has all its vertex degrees bounded by $1/p_*$.) Define $\Psi : V \to L^2(\Gamma, \mu)$ by $\Psi(x) : \sigma \mapsto \psi(\sigma x)$. In this case, for every $z \in V$, $$\begin{aligned} \sum_{y \in V} P(z,y) \|\Psi(y)-\Psi(z)\|_{L^2(\Gamma,\mu)}^2 &=& \sum_{y \in V} P(z,y) \int |\psi(\sigma z)-\psi(\sigma y)|^2\, d\mu(\sigma) \nonumber \\ &=& \sum_{y \in V} \int P(\sigma z, \sigma y) |\psi(\sigma z)-\psi(\sigma y)|^2\, d\mu(\sigma) \nonumber \\ &=& \mu(\Gamma_z) \sum_{x,y \in V} P(x,y) |\psi(x)-\psi(y)|^2 \nonumber \\ &=& 2 \langle \psi, (I-P)\psi \rangle. \label{eq:lastbound}\end{aligned}$$ Thus for $\{x,y\} \in E$, we have $\|\Psi(x)-\Psi(y)\|_{L^2(\Gamma,\mu)}^2 {\leqslant}\frac{2 \langle \psi, (I-P)\psi\rangle}{p_*}$, which implies that $$\label{eq:grad} \|\Psi\|_{{\mathrm{Lip}}} {\leqslant}\sqrt{\frac{2 \langle \psi, (I-P)\psi\rangle}{p_*}},$$ where $\Psi$ is considered as a map from $(V,{\mathsf{dist}})$ to $L^2(\Gamma,\mu)$, and we use $\|\Psi\|_{{\mathrm{Lip}}}$ to denote the infimal number $L$ such that $\Psi$ is $L$-Lipschitz. So, for any $x_0 \in V$, we have $$\begin{aligned} \|\Psi\|_{{\mathrm{Lip}}}^2 \,\mathbb E\left[{\mathsf{dist}}(X_0,X_t)^2\,|\,X_0=x_0\right] &{\geqslant}& \mathbb E\left[\|\Psi(X_0)-\Psi(X_t)\|_{L^2(\Gamma,\mu)}^2\,|\,X_0=x_0\right] \nonumber \\ &=& \int \mathbb E\left[|\psi(\sigma X_0)-\psi(\sigma X_t)|^2\,|\,X_0=x_0\right]\,d\mu(\sigma) \nonumber \\ &=& \sum_{x \in V} \mathbb E\left[|\psi(X_0)-\psi(X_t)|^2\,|\,X_0=x\right] \nonumber \\ &=& 2 \langle \psi, (I-P^t) \psi\rangle \nonumber\\ &=& 2 \sum_{i=0}^{t-1} \langle \psi, (I-P) P^i \psi \rangle, \label{eq:variation}\end{aligned}$$ where in the third line, we have used the fact that the action of $\sigma$ preserves $P$. To finish, we use the fact that $I-P$ is self-adjoint to compare adjacent terms via $$\begin{aligned} |\langle \psi, (I-P) P^{i} \psi \rangle - \langle \psi, (I-P) P^{i+1} \psi \rangle| = |\langle (I-P) \psi, P^i (I-P) \psi \rangle| {\leqslant}\|(I-P) \psi\|^2,\end{aligned}$$ where the final inequality follows because $P$ is stochastic, and hence a contraction.
From this, we infer that $\langle\psi, (I-P)P^i \psi \rangle {\geqslant}\langle \psi, (I-P) \psi \rangle-i\|(I-P) \psi\|^2$, whence $$\sum_{i=0}^{t-1} \langle \psi, (I-P)P^i \psi \rangle {\geqslant}t \langle \psi, (I-P) \psi \rangle - \frac{t^2}{2} \|(I-P)\psi\|^2.$$ Combining the preceding line with (\[eq:grad\]) and (\[eq:variation\]) yields $$\frac{1}{p_*} {{\mathbb E}}\left[{\mathsf{dist}}(X_0,X_t)^2\right] {\geqslant}\frac{\langle \psi, (I-P^t) \psi\rangle}{\langle \psi, (I-P)\psi \rangle} {\geqslant}t - t^2 \frac{\|(I-P) \psi\|^2}{2\langle \psi, (I-P) \psi\rangle}.$$ We now demonstrate circumstances in which an appropriate $\psi \in \ell^2(V)$ exists. Corollaries \[cor:finite\], \[cor:infinite-drift\], and Conjecture \[conj:finite\] all assume the notation of Theorem \[thm:potential\]. \[cor:finite\] Let $V$ be a finite index set and suppose that ${\mathsf{Aut}}(P)$ acts transitively on $V$. If $\lambda < 1$ is the second-largest eigenvalue of $P$, then $$\label{eq:sumbound} \mathbb E\left[{\mathsf{dist}}(X_0, X_t)^2\right] {\geqslant}p_* (1+\lambda+\lambda^2+\cdots+\lambda^{t-1}).$$ In particular, $$\mathbb E\left[{\mathsf{dist}}(X_0, X_t)^2\right] {\geqslant}p_* t/2$$ for $t {\leqslant}(1-\lambda)^{-1}$. Let $\psi : V \to \mathbb R$ satisfy $P \psi = \lambda \psi$. By Theorem \[thm:potential\], $$\mathbb E\left[{\mathsf{dist}}(X_0,X_t)^2\right] {\geqslant}p_* \frac{\langle \psi, (I-P^t) \psi \rangle}{\langle \psi, (I-P) \psi\rangle} = p_* \frac{1-\lambda^t}{1-\lambda} = p_* (1+\lambda+\lambda^2+\cdots+\lambda^{t-1}).$$ To complete the proof, use the inequality $\lambda^j {\geqslant}(1-t^{-1})^j {\geqslant}1-j/t$. \[rem:weighted\] In particular, if $P$ is irreducible and $p_* = \min \{ P(x,y) : P(x,y) > 0 \}$, then the conclusion is that ${{\mathbb E}}\left[{\mathsf{dist}}(X_0,X_t)^2\right] {\geqslant}p_* t/2$ for $t {\leqslant}(1-\lambda)^{-1}$. Thus if $P$ is the simple random walk on a $d$-regular graph, the conclusion is ${{\mathbb E}}\left[{\mathsf{dist}}(X_0,X_t)^2\right] {\geqslant}t/(2d)$. To see that the asymptotic dependence on $d$ is tight, one can consider a cycle of length $n$, together with $d-2$ self loops at each vertex, for $d {\geqslant}2$. In this case, ${{\mathbb E}}\left[{\mathsf{dist}}(X_0,X_t)^2\right] {\leqslant}2t/d$ for all $t {\geqslant}0$. The quantity $(1-\lambda)^{-1}$ is called the [*relaxation time*]{} of the random walk specified by $P$, and the bound degrades after this time. It is interesting to consider what happens between the relaxation time and the mixing time which is always at most $O(\log |V|) (1-\lambda)^{-1}$. One might conjecture that ${{\mathbb E}}\left[{\mathsf{dist}}(X_0,X_t)^2\right]$ continues to have a linear lower bound until the mixing time. Towards this end, we pose the following conjecture. \[conj:finite\] There exists a constant ${\varepsilon}_0 > 0$ such that the following holds. For every finite, connected, $d$-regular transitive graph $G=(V,E)$ with diameter $D$, $${{\mathbb E}}\left[{\mathsf{dist}}(X_0,X_t)^2\right] {\geqslant}{\varepsilon}_0 t/d$$ for $t {\leqslant}{\varepsilon}_0 D^2$, where $\{X_t\}$ is the simple random walk on $G$.
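As a numerical sanity check of Corollary \[cor:finite\] (a minimal sketch, not part of the proofs), one can compare the spectral bound $p_*(1+\lambda+\cdots+\lambda^{t-1})$ with a Monte Carlo estimate of ${{\mathbb E}}\left[{\mathsf{dist}}(X_0,X_t)^2\right]$ on the cycle $C_n$, for which $\lambda=\cos(2\pi/n)$ and $p_*=1/2$; the cycle length and time horizons below are arbitrary illustrative choices.

```python
import numpy as np

n = 50                                   # cycle length; relaxation time ~ 127
P = np.zeros((n, n))
for i in range(n):
    P[i, (i - 1) % n] = P[i, (i + 1) % n] = 0.5

lam = np.sort(np.linalg.eigvalsh(P))[-2]            # second-largest eigenvalue
print(round(lam, 6), round(np.cos(2 * np.pi / n), 6))   # these agree

def mean_sq_dist(t, trials=20000):
    """Monte Carlo estimate of E[dist(X_0, X_t)^2] for the simple walk on C_n."""
    pos = np.zeros(trials, dtype=int)
    for _ in range(t):
        pos = (pos + np.random.choice([-1, 1], size=trials)) % n
    circ = np.minimum(pos, n - pos)                 # graph distance on the cycle
    return float(np.mean(circ.astype(float) ** 2))

p_star = 0.5
for t in [5, 20, 80]:                               # all below the relaxation time
    bound = p_star * (1 - lam ** t) / (1 - lam)
    print(t, round(mean_sq_dist(t), 1), round(bound, 1))   # observed vs. spectral bound
```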
\[cor:amen\] If $G=(V,E)$ is an infinite, transitive, connected, amenable graph with degree $d$ and $\{X_t\}$ is the simple random walk, then $${{\mathbb E}}\left[{\mathsf{dist}}(X_0, X_t)^2\right] {\geqslant}t/d\,.$$ If $P$ is the transition kernel of the simple random walk, it is a standard fact [@Kesten59] that when $G$ is infinite, connected, and amenable, the spectrum of $P$ has an accumulation point at $1$, but does not contain $1$. Therefore, for every $\delta > 0$ and ${\varepsilon}> 0$, there exists a $\delta' \in (0,\delta]$ so that, by considering the spectral projection of $P$ onto the interval $[1-\delta'-{\varepsilon},1-\delta')$, we obtain a unit vector $\psi \in \ell^2(V)$ for which $\langle \psi, P^t \psi \rangle {\leqslant}(1-\delta')^t$, while $\langle \psi, P \psi \rangle {\geqslant}1-\delta'-{\varepsilon}$. Plugging this into Theorem \[thm:potential\], we conclude that $${{\mathbb E}}\left[{\mathsf{dist}}(X_0, X_t)^2\right] {\geqslant}\frac{1}{d} \cdot \frac{1-(1-\delta)^t}{\delta+{\varepsilon}}\,.$$ Sending ${\varepsilon}\to 0$ and then $\delta \to 0$ yields the desired claim. The preceding corollary yields a uniform lower bound of the form ${{\mathbb E}}[{\mathsf{dist}}(X_0,X_t)^2] {\geqslant}Ct/d$ for all $d$-regular infinite, connected, amenable graphs. In fact, one can take $C=1$. Certainly for every $d$-regular infinite, connected graph $G$ there exists a constant $C_G$ such that ${{\mathbb E}}[{\mathsf{dist}}(X_0,X_t)^2] {\geqslant}C_G t/d$, since in the non-amenable case ${\mathsf{dist}}(X_0, X_t)$ grows linearly with $t$, but with some constant depending on $G$. It is natural to ask whether one can take $C_G {\geqslant}\Omega(1)$, i.e. whether a uniform lower bound holds without the amenability assumption. This seems closely related to Conjecture \[conj:finite\]. Infinite amenable graphs {#sec:amenable} ------------------------ While Corollary \[cor:amen\] gives satisfactory results for infinite, amenable graphs, we take some time in this section to further analyze the amenable case; in particular, the explicit construction of Lemma \[lem:genQ\] serves as a connection between random walks and our construction of harmonic functions in Section \[sec:harmonic\]. The following theorem will play a role in a number of arguments. The transitive version is due to Soardi and Woess [@SW90], and the extension to quasi-transitive actions is from [@Salvatori92]. See also a different proof in [@BLPS99 Thm. 3.4]. We recall that for a graph $G=(V,E)$, we say that a group $\Gamma {\leqslant}{\mathsf{Aut}}(G)$ is [*quasi-transitive*]{} if $|\Gamma{\!\setminus\!}V| < \infty$, where $\Gamma{\!\setminus\!}V$ denotes the set of $\Gamma$-orbits of $V$. \[thm:amenable\] Let $G$ be a graph and $\Gamma {\leqslant}{\mathsf{Aut}}(G)$ a closed, quasi-transitive subgroup. Then $G$ is amenable if and only if $\Gamma$ is amenable and unimodular. We begin with the following general construction. Gromov [@Gromov03 §3.6–3.7] uses a similar analysis in the setting of the continuous heat flow on manifolds (see, in particular, Remark 3.7B in [@Gromov03]). We remark that, in this setting, the result itself follows rather directly from spectral projection as in the proof of Corollary \[cor:amen\]. \[lem:genQ\] Let $\mathcal{H}$ be a Hilbert space, and let $Q : \mathcal{H} \to \mathcal{H}$ be a self-adjoint linear operator which is contractive, i.e. with $\|Q\|_{\mathcal{H} \to \mathcal{H}}{\leqslant}1$. 
Suppose that for some $\theta \in (0,\frac12)$, there exists an $f \in \mathcal{H}$ which satisfies $\|f\|_{\mathcal{H}}=1$, $\|Qf-f\|_{\mathcal{H}} {\leqslant}\theta$, and $$\label{eq:limk} \lim_{k \to \infty} \frac{1}{k} \sum_{i=0}^{k-1} \langle Q^i f,f \rangle = 0.$$ Then there exists an element $\varphi \in \mathcal{H}$ with $$\frac{\|(I-Q)\varphi\|^2_{\mathcal{H}}}{\langle \varphi, (I-Q)\varphi\rangle_{\mathcal{H}}} {\leqslant}32 \,\theta.$$ Given $f \in \mathcal{H}$ and $k \in \mathbb N$, we define $\varphi_{k} \in \mathcal{H}$ by $$\varphi_{k} = \sum_{i=0}^{k-1} Q^i f.$$ First, using $(I-Q)\varphi_{k} = (I-Q^k) f$ and the fact that $Q$ is a contraction, we have $$\label{eq:lapest} \|(I-Q)\varphi_{k}\|_{\mathcal H}^2 {\leqslant}4 \|f\|_{\mathcal H}^2.$$ On the other hand, $$\begin{aligned} \langle \varphi_{k}, (I-Q)\varphi_{k} \rangle_{\mathcal H} &=& \langle \varphi_{k}, (I-Q^k) f\rangle_{\mathcal H} \\ &=& \left\langle (I-Q^k) \sum_{i=0}^{k-1} Q^i f, f \right\rangle_{\mathcal H} \\ &=& \langle 2 \varphi_{k}-\varphi_{2k}, f \rangle_{\mathcal H},\end{aligned}$$ where in the second line we have used the fact that $I-Q^k$ is self-adjoint. Combining this with (\[eq:lapest\]) yields $$\label{eq:ratio1} \frac{\|(I-Q)\varphi_{k}\|_{\mathcal H}^2}{\langle \varphi_{k}, (I-Q)\varphi_{k}\rangle_{\mathcal H}} {\leqslant}\frac{4 \|f\|_{\mathcal H}^2}{\langle 2 \varphi_{k}-\varphi_{2k},f\rangle_{\mathcal H}}.$$ The following claim will conclude the proof. [**Claim**]{}: There exists a $k \in \mathbb N$ such that $$\label{eq:wantit} \langle 2 \varphi_{k} - \varphi_{2k}, f \rangle_{\mathcal H} {\geqslant}\frac{1}{8\theta}.$$ It remains to prove the claim. By assumption, $f$ satisfies $\|f\|_{\mathcal H}=1$, and $\|Qf-f\|_{\mathcal H} {\leqslant}\theta$. Since $Q$ is a contraction, we have $\|Q^j f - Q^{j-1} f\|_{\mathcal H} {\leqslant}\theta$ for every $j {\geqslant}1$, and thus by the triangle inequality, $\|Q^j f - f\|_{\mathcal H} {\leqslant}j \theta$ for every $j {\geqslant}1$. It follows by Cauchy-Schwarz that $\langle f, (I-Q^j) f \rangle_{\mathcal H} {\leqslant}j\theta$, therefore $$\label{eq:innerprod} \langle f, Q^j f \rangle_{\mathcal H} {\geqslant}1-j \theta.$$ Thus for every $j {\geqslant}1$, $$\langle \varphi_{2^j}, f \rangle_{\mathcal H} {\geqslant}2^j (1-2^j \theta).$$ Fix $\ell \in \mathbb N$ so that $2^{\ell} \theta {\leqslant}\frac12 {\leqslant}2^{\ell+1} \theta$, yielding $$\label{eq:M} \langle \varphi_{2^{\ell}}, f \rangle_{\mathcal H} {\geqslant}\frac{1}{8\theta}.$$ Now, let $a_m = \langle \varphi_{2^m}, f \rangle_{\mathcal H}$, and write, for some $N {\geqslant}1$, $$a_{\ell} - \frac{a_{N}}{2^{N-\ell}} = \sum_{m=\ell}^{N-1} \frac{2 a_m - a_{m+1}}{2^{m-\ell+1}}.$$ By (\[eq:limk\]), we have $$\label{eq:limzero} \lim_{N \to \infty} \frac{a_N}{2^N} = 0.$$ Using (\[eq:limzero\]) and (\[eq:M\]), and taking $N \to \infty$ on both sides above, yields $$\frac{1}{8\theta} {\leqslant}a_{\ell} = \sum_{m=\ell}^{\infty} \frac{2 a_m - a_{m+1}}{2^{m-\ell+1}}.$$ Since $\sum_{m=\ell}^{\infty} \frac{1}{2^{m-\ell+1}}=1$, there must exist some $m {\geqslant}\ell$ with $2 a_m - a_{m+1} {\geqslant}\frac{1}{8\theta}$. This establishes the claim (\[eq:wantit\]) for $k=2^m$ and, in view of (\[eq:ratio1\]), completes the proof of the lemma. We now arrive at the following corollaries. Recall that if $P$ is transient or null-recurrent, then we have the pointwise limit, $$\label{eq:pwlimit} P^i f \to 0 \quad \textrm{for every } f \in \ell^2(V).$$ (This is usually proved for finitely supported $f$, see e.g. [@GS92 Thm. 6.4.17] or [@LPW09 Thm. 21.17].
The general case follows by approximation using the contraction property of $P$). \[cor:infpsi\] If $V$ is infinite, $P$ satisfies (\[eq:pwlimit\]), and $\Gamma {\leqslant}{\mathsf{Aut}}(P)$ is a closed, amenable, unimodular subgroup, which acts transitively on $V$, then $$\label{eq:infpsi} \inf_{\varphi \in \ell^2(V)} \frac{\|(I-P) \varphi\|^2}{\langle \varphi, (I-P) \varphi\rangle} = 0.$$ This follows from Lemma \[lem:genQ\] using the fact that, under the stated assumptions, for every $\theta > 0$, there exists an $f \in \ell^2(V)$ with $\|f\|=1$ and $\|Pf-f\| {\leqslant}\theta$. This is a standard fact that can be proved as in [@Woess00 Thm. 12.10]. In general, for every $\theta > 0$, one considers, for some ${\varepsilon}= {\varepsilon}(\theta) > 0$, the graph $G_{{\varepsilon}}$ with vertices $V$ and an edge $\{x,y\}$ whenever $P(x,y) {\geqslant}{\varepsilon}$. Since $\Gamma {\leqslant}{\mathsf{Aut}}(G_{\varepsilon})$, Theorem \[thm:amenable\] implies that $G_{{\varepsilon}}$ is amenable, and then one can take $f$ to be the (normalized) indicator of a suitable F[ø]{}lner set in $G_{\varepsilon}$. The following is an immediate consequence of Theorem \[thm:potential\] combined with the preceding result. \[cor:infinite-drift\] Under the assumptions of Theorem \[thm:potential\], the following holds. If $V$ is a countably infinite index set, $P$ satisfies (\[eq:pwlimit\]), and $\Gamma {\leqslant}{\mathsf{Aut}}(P)$ is a closed, amenable, unimodular subgroup which acts transitively on $V$, then $$\label{eq:infdrift2} {{\mathbb E}}\left[{\mathsf{dist}}(X_0,X_t)^2\right] {\geqslant}p_* t.$$ Under the assumptions of Theorem \[thm:potential\], the following holds. If $\rho=\rho(P)$ is the spectral radius of $P$, then for all times $t {\leqslant}(32 (1-\rho))^{-1}$, $$\mathbb E\left[{\mathsf{dist}}(X_0,X_t)^2 \right] {\geqslant}\frac{p_* t}{2}.$$ Since $P$ is self-adjoint and positive, we have $\rho=\|P\|_{2 \rightarrow 2} = \sup_{\|f\|=1} \langle Pf,f \rangle. $ It follows that $$\inf_{\|f\|=1} \|f-Pf\|^2 {\leqslant}\inf_{\|f\|=1} 1 + \rho^2 - 2 \langle f, Pf \rangle = (1-\rho)^2.$$ Combining this with Lemma \[lem:genQ\] yields the claimed result. Compare the preceding bound with the finite case (Corollary \[cor:finite\]). The constant $p_*$ in (\[eq:infdrift2\]) is not tight. To do slightly better, one can argue as follows. Let $\Psi : V \to L^2(\Gamma,\mu)$ be as in the proof of Theorem \[thm:potential\]. Fix $x,y \in V$ with $L = {\mathsf{dist}}(x,y)$, and let $x = v_0, v_1, \ldots, v_L = y$ be a shortest path from $x$ to $y$ in $G$. In this case, the triangle inequality yields $$2 \|\Psi(x)-\Psi(y)\| {\leqslant}\|\Psi(v_0)-\Psi(v_1)\| + \sum_{i=1}^{L-1} \left(\|\Psi(v_{i-1})-\Psi(v_{i})\|+\|\Psi(v_i)-\Psi(v_{i+1})\|\right) + \|\Psi(v_{L-1})-\Psi(v_L)\|.$$ But for every $i \in \{1,2,\ldots,L-1\}$, there are two terms involving $v_i$, and for such $i$, we can bound $$\|\Psi(v_i)-\Psi(v_{i-1})\|^2 + \|\Psi(v_i)-\Psi(v_{i+1})\|^2 {\leqslant}\frac{2 \langle \psi, (I-P)\psi\rangle}{p_*}$$ as in (\[eq:lastbound\]). In this way, we gain a factor of 2 for such terms. Letting $\alpha$ denote the right-hand side of the preceding inequality, we have $$\|\Psi(x)-\Psi(y)\| {\leqslant}\sqrt{\alpha} \left(1+ \frac{L-1}{\sqrt{2}}\right) {\leqslant}\sqrt{\frac{\alpha}{2}} (L+1).$$ Thus for all $x,y \in V$, we have $\|\Psi(x)-\Psi(y)\|^2 {\leqslant}[{\mathsf{dist}}(x,y)+1]^2 \frac{\langle \psi, (I-P)\psi\rangle}{p_*}$.
Plugging this improvement into the proof of Theorem \[thm:potential\] yields $$\label{eq:optimal} \mathbb E\left[({\mathsf{dist}}(X_0,X_t)+1)^2\right] {\geqslant}2 p_* \frac{\langle \psi, (I-P^t) \psi\rangle}{\langle \psi, (I-P) \psi\rangle},$$ which is asymptotically tight since, on the one hand, the simple random walk on $\mathbb Z$ satisfies ${{\mathbb E}}\left[{\mathsf{dist}}(X_0,X_t)^2\right] =t$, while plugging (\[eq:optimal\]) into Corollary \[cor:infinite-drift\] yields ${{\mathbb E}}\left[({\mathsf{dist}}(X_0,X_t)+1)^2\right] {\geqslant}t$. The dependence on $p_*$ is easily seen to be tight for the simple random walk on $\mathbb Z$ with a $1-2p_*$ holding probability added to every vertex, as in Remark \[rem:weighted\]. Equivariant harmonic maps {#sec:harmonic} ========================= Let $V$ be a countably infinite index set, and let $\{P(x,y)\}_{x,y \in V}$ be a stochastic, symmetric matrix. If $\mathcal H$ is a Hilbert space, a mapping $\Psi : V \to \mathcal H$ is called [*$P$-harmonic*]{} if, for all $x \in V$, $$\Psi(x) = \sum_{y \in V} P(x,y) \Psi(y).$$ Suppose furthermore that we have a group $\Gamma$ acting on $V$. We say that $\Psi$ is [*$\Gamma$-equivariant*]{} if there exists an affine isometric action $\pi$ of $\Gamma$ on $\mathcal H$, such that for every $g \in \Gamma$, $\pi(g) \Psi(x) = \Psi(g x)$ for all $x \in V$. If we wish to emphasize the particular action $\pi$, we will say that $\Psi$ is [*$\Gamma$-equivariant with respect to $\pi$*]{}. \[thm:harmonic\] For $P$ as above, let $\Gamma {\leqslant}{\mathsf{Aut}}(P)$ be a closed, amenable, unimodular subgroup which acts transitively on $V$. Suppose there exists a connected graph $G=(V,E)$ on which $\Gamma$ acts by automorphisms, and that for $x \in V$, $$\label{eq:bddstep} \sum_{y \in V} P(x,y)\,{\mathsf{dist}}(x,y)^2 < \infty,$$ where ${\mathsf{dist}}$ is the path metric on $G$. Suppose also that $$p_* = \min \{ P(x,y) : \{x,y\} \in E \} > 0.$$ Then there exists a Hilbert space $\mathcal H$, and a non-constant $\Gamma$-equivariant $P$-harmonic mapping from $V$ into $\mathcal H$. It is a standard result that since $G$ is connected, $P$ satisfies (\[eq:pwlimit\]). Let $\{\psi_j\} \subseteq \ell^2(V)$ be a sequence of functions satisfying $$\label{eq:ratio} \frac{\left\langle (I-P) \psi_j, (I-P) \psi_j\right\rangle}{\left\langle \psi_j, (I-P) \psi_j \right\rangle} \to 0.$$ The existence of such a sequence is the content of Corollary \[cor:infpsi\]. Define $\Psi_j : V \to L^2(\Gamma,\mu)$ by $$\Psi_j(x) : \sigma \mapsto \frac{\psi_j(\sigma^{-1} x)}{\sqrt{2 \langle \psi_j, (I-P)\psi_j\rangle}}.$$ Since $\Gamma$ is unimodular, we can choose the Haar measure $\mu$ on $\Gamma$ to be normalized so that $\mu(\Gamma_x)=1$ for all $x \in V$, where $\Gamma_x$ is the stabilizer subgroup of $x$. Now, observe that for every $x \in V$, $$\begin{aligned} \sum_{y \in V} P(x,y) \|\Psi_j(x)-\Psi_j(y)\|_{L^2(\Gamma,\mu)}^2 &=& \frac{\displaystyle \sum_{y \in V} P(x,y) \int |\psi_j(\sigma^{-1} x)-\psi_j(\sigma^{-1} y)|^2\,d\mu(\sigma)}{\displaystyle2 \langle \psi_j, (I-P) \psi_j\rangle} \nonumber \\ &=& \mu(\Gamma_x) \frac{\displaystyle\sum_{u,y \in V} P(u,y) |\psi_j(u)-\psi_j(y)|^2}{2 \langle \psi_j, (I-P) \psi_j \rangle} \nonumber \\ &=& 1.
\label{eq:lipest}\end{aligned}$$ Next, for every $x \in V$, we have $$\begin{aligned} \left\|\Psi_j(x)-\sum_{y \in V} P(x,y) \Psi_j(y)\right\|_{L^2(\Gamma,\mu)}^2 &=& \frac{\displaystyle \int \Big|\psi_j(\sigma^{-1} x)-\sum_{y \in V} P(x,y) \psi_j(\sigma^{-1} y)\Big|^2 d\mu(\sigma)}{2 \langle \psi_j, (I-P)\psi_j\rangle} \\ &=& \mu(\Gamma_x) \frac{\displaystyle \sum_{u \in V} \left|\psi_j(u)-\sum_{y \in V} P(u,y) \psi_j(y)\right|^2}{2 \langle \psi_j, (I-P) \psi_j\rangle} \\ &= & \vphantom{\frac{\sum_{u \in V} \left|\psi_j(u)-\sum_{y \in V} P(u,y) \psi_j(y)\right|^2}{2 \langle \psi_j, (I-P) \psi_j\rangle}} \frac{\left\langle (I-P) \psi_j, (I-P) \psi_j\right\rangle}{2\left\langle \psi_j, (I-P) \psi_j \right\rangle}\ . $$ In particular, from (\[eq:ratio\]), $$\label{eq:harmlim} \lim_{j \to \infty} \left\|\Psi_j(x)-\sum_{y \in V} P(x,y) \Psi_j(y)\right\|_{L^2(\Gamma,\mu)}^2 = 0,$$ where the limit is uniform in $x \in V$. Define a unitary action $\pi_0$ of $\Gamma$ on $L^2(\Gamma,\mu)$ as follows: For $\gamma \in \Gamma, h \in L^2(\Gamma,\mu)$, $[\pi_0(\gamma) h](\sigma) = h(\gamma^{-1} \sigma)$ for all $\sigma \in \Gamma$. Notice that each $\Psi_j$ is $\Gamma$-equivariant since for $\gamma \in \Gamma, x \in V$, we have $$(\pi_0(\gamma) [\Psi_j(x)])(\sigma) = [\Psi_j(x)] (\gamma^{-1} \sigma) = \frac{\psi_j(\sigma^{-1} \gamma x)}{\sqrt{2 \langle \psi_j, (I-P)\psi_j\rangle}} = [\Psi_j(\gamma x)](\sigma).$$ We state the next lemma in slightly more generality than we need presently, since we will use it also in Section \[sec:withoutT\]. \[lem:gencon\] Suppose that $\mathfrak{H}$ is a Hilbert space, $\Gamma$ is a group, and $\pi_0$ is an affine isometric action of $\Gamma$ on $\mathfrak{H}$. Let $(V,\mathsf{dist})$ be a countable metric space, and consider a sequence of functions $\{\Psi_j : V \to \mathfrak{H}\}_{j=1}^{\infty}$, where each $\Psi_j$ is Lipschitz and $\Gamma$-equivariant with respect to $\pi_0$. Then there is a sequence of affine isometries $T_j : \mathfrak{H} \to \mathfrak{H}$ and a subsequence $\{\alpha_j\}$ such that $T_{\alpha_j} \Psi_{\alpha_j}$ converges pointwise to a map $\widehat{\Psi} : V \to \mathfrak{H}$ which is $\Gamma$-equivariant with respect to an affine isometric action $\pi$. Before proving the lemma, let us see that it can be used to finish the proof of Theorem \[thm:harmonic\]. Using (\[eq:lipest\]), one observes that for all $j \in \mathbb N$, the map $\Psi_j$ is $\sqrt{1/p_*}$-Lipschitz on $(V,\mathsf{dist})$. Thus we are in position to apply the preceding lemma and arrive at a map $\widehat{\Psi}: V \to L^2(\Gamma,\mu)$ which is $\Gamma$-equivariant with respect to an affine isometric action. From (\[eq:harmlim\]), we see that $\widehat{\Psi}$ is $P$-harmonic. Furthermore, since the ${\Psi}_j$’s are uniformly Lipschitz, and we have the estimate (\[eq:bddstep\]), we see that (\[eq:lipest\]) holds for $\widehat{\Psi}$ as well, showing that $\widehat{\Psi}$ is non-constant, and completing the proof. Arbitrarily order the points of $V = \{x_1, x_2, \ldots \}$ and fix a sequence of subspaces $\{W_j\}_{j=1}^{\infty}$ of $\mathfrak{H}$ with $W_j \subseteq W_{j+1}$ for each $j=1,2,\ldots$, and ${\mathsf{dim}}(W_j)=j$. For each such $j$, define an affine isometry $T_j : \mathfrak{H} \to \mathfrak{H}$ which satisfies $T_j \Psi_j (x_1) = 0$ and, for every $r=1,2,\ldots, j$, $\{T_j \Psi_j(x_k)\}_{k=1}^r \subseteq W_r$. Put $\widehat{\Psi}_j = T_j \Psi_j$, and define an affine isometric action $\pi_j$ of $\Gamma$ on $\mathfrak{H}$ by $\pi_j = T_j \pi_0 T_j^{-1}$.
It is straightforward to check that each $\widehat{\Psi}_j$ is $\Gamma$-equivariant with respect to $\pi_j$. Since each $\Psi_j$ is Lipschitz, this holds also for $\widehat{\Psi}_j$. We now pass to a subsequence $\{\alpha_j\}$ along which $\widehat{\Psi}_{\alpha_j}(x)$ converges pointwise for every $x \in V$. To see that this is possible, notice that by construction, for every fixed $x \in V$, there is a finite-dimensional subspace $W \subseteq \mathfrak{H}$ such that $\widehat{\Psi}_j(x) \in W$ for every $j \in \mathbb N$. Hence by the Lipschitz property of $\widehat{\Psi}_j$, the sequence $\{\widehat{\Psi}_j(x)\}_{j=1}^{\infty}$ lies in a compact set. We are thus left to show that $\widehat{\Psi}$ is $\Gamma$-equivariant. Toward this end, we define an action $\pi$ of $\Gamma$ on $\mathfrak{H}$ as follows: On the image of $\widehat{\Psi}$, set $\pi(\gamma) \widehat{\Psi}(x) = \widehat{\Psi}(\gamma x)$. For $g \in \mathfrak{H}$ lying in the orthogonal complement of the span of $\{\widehat{\Psi}(x)\}_{x \in V}$, we put $\pi(\gamma) g = \pi(\gamma) 0$, and then extend $\pi(\gamma)$ affine linearly to the whole space. To see that such an affine linear extension exists, observe that $$\pi(\gamma) \widehat{\Psi}(x) = \widehat{\Psi}(\gamma x) = \lim_{j\to \infty} \pi_{\alpha_j}(\gamma) \widehat{\Psi}_{\alpha_j}(x).$$ From this expression, it also follows immediately that $\pi$ acts by affine isometries, since each $\pi_{\alpha_j}$ does. Thus $\widehat{\Psi}$ is $\Gamma$-equivariant with respect to $\pi$, completing our construction. Note that, in the case where $P$ is simply the kernel of the simple random walk on the Cayley graph of a finitely-generated amenable group, one can take the Hilbert space $\mathcal H$ in Theorem \[thm:harmonic\] to be simply $\ell^2(V)$. Quasi-transitive graphs ----------------------- Only for the present section, we allow $P$ to be a non-symmetric kernel on the state space $V$. We recall that $\Gamma$ is said to [*act quasi-transitively on a set $V$*]{} if $|\Gamma{\!\setminus\!}V| < \infty$, where $\Gamma {\!\setminus\!}V$ denotes the set of $\Gamma$-orbits of $V$. We prove an analog of Theorem \[thm:harmonic\] in the quasi-transitive setting, under the assumption that the kernel $P$ is reversible. \[cor:quasi\] Let $\Gamma {\leqslant}{\mathsf{Aut}}(P)$ be a closed, amenable, unimodular subgroup which acts [*quasi-transitively*]{} on $V$. Suppose also that $P$ is the kernel of a reversible Markov chain, and there exists a connected graph $G=(V,E)$ on which $\Gamma$ acts by automorphisms, and that for $x \in V$, $$\label{eq:bddstep3} \sum_{y \in V} P(x,y)\,{\mathsf{dist}}(x,y)^2 < \infty,$$ where ${\mathsf{dist}}$ is the path metric on $G$. Suppose also that $$p_* = \min \{ P(x,y) : \{x,y\} \in E \} > 0.$$ Then there exists a Hilbert space $\mathcal H$, and a non-constant $\Gamma$-equivariant $P$-harmonic mapping from $V$ into $\mathcal H$. Let $x_0, x_1, \ldots, x_L \in V$ be a complete set of representatives of the orbits of $\Gamma$. Let $V_0 = \Gamma x_0$, and let $P_0$ be the induced transition kernel of the $P$-random walk watched on $V_0$, i.e. $P_0(x,y)~=~\pr[X_{\tau} = y \,\mid\, X_0 = x]$ for $x,y \in V_0$, where $\tau = \min \{ t {\geqslant}1 : X_t \in V_0 \}$. Since $P$ is reversible and $\Gamma$ acts transitively on $V_0$, we see that $P_0$ is symmetric. Letting $D = \max_{i \neq j} {\mathsf{dist}}(x_i, x_j)$, we define a new graph $G_0=(V_0,E_0)$ by having an edge $\{x,y\} \in E_0$ whenever 1. $\{x,y\} \in E$ and $x,y \in V_0$, or 2.
there exists a path $x = v_0, v_1, \ldots, v_k = y$ in $G$ with $v_1, \ldots, v_{k-1} \notin V_0$ and $k {\leqslant}2D$. Let ${\mathsf{dist}}_0$ denote the path metric on $G_0$. It is clear that $\Gamma$ acts on $G_0$ by automorphisms, and also that $p_*(G_0) = \min \{ P_0(x,y) : \{x,y\} \in E_0 \} {\geqslant}(p_*)^{2D} > 0$. Now, since every point $x \notin V_0$ has ${\mathsf{dist}}(x,V_0) {\leqslant}D$, we see that actually ${\mathsf{dist}}(x,y) \eqsim {\mathsf{dist}}_0(x,y)$ for all $x,y \in V_0$ (up to a multiplicative constant depending on $D$). Furthermore, this implies that for any $x \in V$ there exists $y \in V_0$ with $\sum_{i=0}^{D} P^i(x,y) {\geqslant}(p_*)^D$, hence (\[eq:bddstep3\]) implies that for every $x \in V_0$, $$\sum_{y \in V_0} P_0(x,y)\, {\mathsf{dist}}(x,y)^2 < \infty$$ (the number of $P$-steps taken before returning to $V_0$ is dominated by a geometric random variable), which implies the same for ${\mathsf{dist}}_0$. Thus we can apply Theorem \[thm:harmonic\] to obtain a Hilbert space $\mathcal H$ and a non-constant $\Gamma$-equivariant $P_0$-harmonic map $\Psi_0 : V_0 \to \mathcal H$. We extend this to a mapping $\Psi : V \to \mathcal H$ by defining $\Psi(x) = {{\mathbb E}}\left[ \Psi_0(W_0(x))\right]$ where $W_0(x)$ is the first element of $V_0$ encountered in the $P$-random walk started at $x$. Note that $\Psi|_{V_0} = \Psi_0$, and $\Psi$ is again $\Gamma$-equivariant. To finish the proof, it suffices to check that $\Psi$ is $P$-harmonic. From the definition of $\Psi$, this is immediately clear for $x \notin V_0$. Since $\Psi_0$ is $P_0$-harmonic, it suffices to check that for $x \in V_0$, $$\sum_{y \in V} P(x,y) \Psi(y) = \sum_{y \in V_0} P_0(x,y) \Psi_0(y),$$ but both sides are precisely ${{\mathbb E}}\left[\Psi_0(W_0(Z))\right]$, where $Z$ is the random vertex arising from one step of the $P$-walk started at $x$. \[cor:qtharm\] If $G=(V,E)$ is an infinite, connected, amenable graph and $\Gamma {\leqslant}{\mathsf{Aut}}(G)$ is a quasi-transitive subgroup, then $G$ admits a non-constant $\Gamma$-equivariant harmonic mapping into some Hilbert space. Now let $G=(V,E)$ be an infinite, connected, quasi-transitive, amenable graph. The preceding construction of harmonic functions also gives escape lower bounds for the random walk on $G$. By Theorem \[thm:amenable\], when $G$ is amenable, $\Gamma={\mathsf{Aut}}(G)$ is amenable and unimodular. Let $R \subseteq V$ be a complete set of representatives from $\Gamma {\!\setminus\!}V$. Let $\mu$ be the Haar measure on $\Gamma$. For $r \in R$, let $\mu_r = \mu(\Gamma_r)$, and normalize $\mu$ so that $\sum_{r \in R} \deg(r)/\mu_r = 1$. Let ${\mathsf{dist}}$ be the path metric on $G$, and let $X_0$ have the distribution $\pr[X_0 = r] = \deg(r)/\mu_r$ for $r \in R$. Then, $${{\mathbb E}}\left[{\mathsf{dist}}(X_0, X_t)^2\right] {\geqslant}\frac{t}{\max \left\{ \mu_r : r \in R \right\}},$$ where $\{X_t\}$ denotes the simple random walk on $G$. Let $\Psi : V \to \mathcal H$ be the harmonic map guaranteed by Corollary \[cor:qtharm\] normalized so that $$\label{eq:qtgradient} \sum_{r \in R} \frac{1}{\mu_r} \sum_{x : \{x,r\} \in E} \|\Psi(x)-\Psi(r)\|^2 = 1.$$ We have $$\|\Psi\|_{{\mathrm{Lip}}} {\leqslant}\max_{r \in R} \sqrt{\mu_r}.$$ For every $r, \hat r \in R$, [@BLPS99 Cor.
3.5] (with $f(x,y)=1$ for $\{x,y\} \in E$ such that $x \in \Gamma r$ and $y \in \Gamma \hat r$ and $f(x,y)=0$ otherwise) implies that $$\frac{1}{\mu_r} \# \left\{ x \in \Gamma \hat r : \{r,x\} \in E \right\} = \frac{1}{\mu_{\hat r}} \# \left\{ x \in \Gamma r : \{\hat r,x\} \in E \right\}.$$ Thus if we use the notation $[x]$ to denote the unique $r \in R$ such that $x \in \Gamma r$, then $[X_i]$ and $[X_0]$ are identically distributed for every $i {\geqslant}0$. (This is also a special case of [@LS99], Theorem 3.1). It follows that, $$\begin{aligned} \|\Psi\|_{{\mathrm{Lip}}}^2 \,{{\mathbb E}}\left[{\mathsf{dist}}(X_0, X_t)^2\right] &{\geqslant}& {{\mathbb E}}\,\|\Psi(X_0)-\Psi(X_t)\|^2 \\ &=& \sum_{i=0}^{t-1} {{\mathbb E}}\,\|\Psi(X_{i+1})-\Psi(X_{i})\|^2 \\ &=& \sum_{i=0}^{t-1} {{\mathbb E}}\frac{1}{\deg(X_{i})} \sum_{x : \{x,X_i\} \in E} \|\Psi(x)-\Psi(X_i)\|^2 \\ &=& t \cdot \sum_{r \in R} \frac{\deg(r)}{\mu_r} \frac{1}{\deg(r)} \sum_{x : \{x,r\} \in E} \|\Psi(x)-\Psi(r)\|^2 \\ &=& t,\end{aligned}$$ where in the second line we have used the fact that $\{\Psi(X_t)\}$ is a martingale, in the fourth line we have used equivariance of $\Psi$, and in the final line we have used (\[eq:qtgradient\]). Groups without property (T) {#sec:withoutT} --------------------------- We now state a version of Theorem \[thm:harmonic\] that applies to the simple random walk on Cayley graphs of groups without property (T). (We refer to [@BHV08] for a thorough discussion of Kazhdan’s property (T).) To this end, let $\Gamma$ be a finitely-generated group, with finite, symmetric generating set $S \subseteq \Gamma$. Let $P$ be the transition kernel of the simple random walk on $\Gamma$ (with steps from $S$). \[thm:propTphi\] Under the preceding assumptions, if $\Gamma$ does not have property (T), there exists a Hilbert space $\mathcal{H}$ and a unitary action $\pi$ of $\Gamma$ on $\mathcal{H}$ such that $$\inf_{\varphi \in \mathcal{H}} \frac{\|(I-P_{\dag})\varphi\|^2_{\mathcal{H}}}{\langle \varphi, (I-P_{\dag})\varphi\rangle_{\mathcal{H}}} = 0,$$ where $P_{\dag} : \mathcal H \to \mathcal H$ is defined by $$\label{eq:Paction} P_{\dag} f = \frac{1}{|S|} \sum_{\gamma \in S} \pi(\gamma) f.$$ Since $\Gamma$ does not have property (T), it admits a unitary action $\pi$ on some Hilbert space $\mathcal{H}$ without fixed points such that we can find, for every $\theta > 0$, an $f \in \mathcal{H}$ with $\|f\|_{\mathcal H} = 1$ and $\|P_{\dag} f - f \|_{\mathcal H} {\leqslant}\theta$. Now, $P_{\dag}$ is self-adjoint and contractive by construction, thus to apply Lemma \[lem:genQ\] (with $Q = P_{\dag}$) and reach our desired conclusion, we are left to show that $\lim \frac{1}{k} \sum_{i=0}^{k-1} \langle P_{\dag}^i f, f \rangle = 0$. Fix some non-zero $f \in \mathcal{H}$ and let $\varphi_k = \frac{1}{k} \sum_{i=0}^{k-1} P_{\dag}^i f$. If $$\label{eq:nonzerolim} \lim_{k \to \infty} \frac{1}{k} \sum_{i=0}^{k-1} \langle P_{\dag}^i f, f \rangle \neq 0,$$ then there exists a subsequence $\{k_{\alpha}\}$ and a non-zero $\varphi \in \mathcal{H}$ such that $\varphi$ is a weak limit of $\{\varphi_{k_\alpha}\}$.
But in this case, we claim that $$\label{eq:fp} P_{\dag} \varphi = \varphi,$$ since for any $g \in \mathcal{H}$, we have $$\begin{aligned} \langle P_{\dag} \varphi, g \rangle_{\mathcal H} &=& \lim_{\alpha \to \infty} \left\langle \frac{1}{k_{\alpha}} \sum_{i=0}^{k_{\alpha}-1} P_{\dag}^{i+1} f, g \right\rangle_{\mathcal H} \\ &=& \lim_{\alpha \to \infty} \left\langle \frac{1}{k_{\alpha}} \sum_{i=0}^{k_{\alpha}-1} P_{\dag}^i f, g \right\rangle_{\mathcal H} \\ &=& \langle \varphi, g \rangle_{\mathcal H},\end{aligned}$$ where we have used the fact that $\lim_{\alpha \to \infty} \frac{1}{k_{\alpha}} (P_{\dag}^{k_{\alpha}} f - f) = 0,$ since $P_{\dag}$ is contractive. On the other hand, if (\[eq:fp\]) holds, then we must have $\Gamma \varphi = \{\varphi\}$. This follows by convexity since $P_{\dag} \varphi$ is an average of elements of $\mathcal H$, all with norm $\|\varphi\|_{\mathcal H}$. Consequently, (\[eq:nonzerolim\]) cannot hold, completing the proof. \[thm:harmonicNPT\] Let $\Gamma$ be a group with finite, symmetric generating set $S \subseteq \Gamma$, and let $P$ be the transition kernel of the simple random walk on the Cayley graph $\mathsf{Cay}(\Gamma;S)$. If $\Gamma$ does not have property (T), then there exists a Hilbert space $\mathcal{H}$ and a non-constant $\Gamma$-equivariant $P$-harmonic mapping from $\Gamma$ into $\mathcal H$. We write $\langle \cdot,\cdot \rangle$ and $\|\cdot\|$ for the inner product and norm on $\mathcal H$. Let $\{\psi_j\}_{j=0}^{\infty}$ be a sequence in $\mathcal H$ with $$\frac{\|(I-P_{\dag})\psi_j\|^2}{\langle \psi_j, (I-P_{\dag})\psi_j\rangle} \to 0.$$ The existence of such a sequence is the content of Theorem \[thm:propTphi\]. Define $\Psi_j : \Gamma \to \mathcal{H}$ by $$\Psi_j(g) = \frac{\pi(g) \psi_j}{\sqrt{2 \langle \psi_j, (I-P_{\dag})\psi_j \rangle}},$$ where we recall the definition of $P_{\dag}$ from (\[eq:Paction\]). Then, for every $j = 0,1,\ldots$, and for any $g \in \Gamma$, $$\frac{1}{|S|} \sum_{s \in S} \|\Psi_j(g)-\Psi_j(gs)\|^2 = \frac{1}{|S|} \sum_{s \in S} \frac{\|\pi(g) \psi_j - \pi(gs) \psi_j\|^2}{2 \langle \psi_j, (I-P_{\dag})\psi_j \rangle} = \frac{1}{|S|} \sum_{s \in S} \frac{\|\psi_j - \pi(s) \psi_j\|^2}{2 \langle \psi_j, (I-P_{\dag})\psi_j \rangle} = 1,$$ where we have used the fact that $\pi$ acts by isometries. By the same token, $$\left\|\Psi_j(g)-\frac{1}{|S|} \sum_{s \in S} \Psi_j(gs)\right\|^2 = \frac{\|(I-P_{\dag}) \psi_j\|^2}{2 \langle \psi_j, (I-P_{\dag}) \psi_j\rangle} \to 0.$$ Equipping $\Gamma$ with the word metric on $\mathsf{Cay}(\Gamma;S)$, an application of Lemma \[lem:gencon\] finishes the proof, just as in Theorem \[thm:harmonic\]. The rate of escape {#sec:estimates} ================== We now show how simple estimates derived from harmonic functions lead to more detailed information about the random walk. In fact, we will show that in general situations, a hitting time bound alone leads to some finer estimates. Graph estimates --------------- Consider again a symmetric, stochastic matrix $\{P(x,y)\}_{x,y \in V}$ for some at most countable index set $V$. Let $\Gamma {\leqslant}{\mathsf{Aut}}(P)$ be a closed, unimodular subgroup that acts transitively on $V$, and let $G=(V,E)$ be any graph on which $\Gamma$ acts by automorphisms. Let ${\mathsf{dist}}$ denote the path metric on $G$, and let $\{X_t\}$ denote the random walk with transition kernel $P$ started at some fixed vertex $x_0 \in V$. For any $k \in \mathbb N$, let $H_k$ denote the first time $t$ at which ${\mathsf{dist}}(X_0, X_{t})=k$, and define the function $h : \mathbb N \to \mathbb R$ by $h(k)=\mathbb E[H_k]$.
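To make the function $h$ concrete, the following small simulation (a minimal sketch for illustration only) estimates $h(k)$ for the simple random walk on $\mathbb Z$, where the classical exit-time computation gives $h(k)={{\mathbb E}}[H_k]=k^2$; this is consistent with Lemma \[lem:mghit\] below with $B=1$, since the identity map on $\mathbb Z$ is an equivariant harmonic map.

```python
import random

def hitting_time(k):
    """First time the simple random walk on Z, started at 0, reaches distance k."""
    x, t = 0, 0
    while abs(x) < k:
        x += random.choice((-1, 1))
        t += 1
    return t

trials = 4000
for k in [4, 8, 16]:
    h_k = sum(hitting_time(k) for _ in range(trials)) / trials
    print(k, round(h_k, 1), k * k)   # empirical h(k) versus the exact value k^2
```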
We start with the following simple lemma which employs reversibility, transitivity, and the triangle inequality. It is based on an observation due to Braverman; see also the closely related inequalities of Babai [@Babai91]. \[lem:mark\] For any $T {\geqslant}0$, we have $${{\mathbb E}}\,{\mathsf{dist}}(X_0, X_{T}) {\geqslant}\frac12 \max_{0 {\leqslant}t {\leqslant}T} {{\mathbb E}}\left[{\mathsf{dist}}(X_0, X_t)-{\mathsf{dist}}(X_0, X_1)\right]\,.$$ Let $s' {\leqslant}T$ be such that $${{\mathbb E}}\,{\mathsf{dist}}(X_0,X_{s'}) = \max_{0 {\leqslant}t {\leqslant}T} {{\mathbb E}}\,{\mathsf{dist}}(X_0, X_t)\,.$$ Then there exists an even time $s \in \{s',s'-1\}$ such that ${{\mathbb E}}\,{\mathsf{dist}}(X_0,X_s) {\geqslant}{{\mathbb E}}\left[{\mathsf{dist}}(X_0,X_{s'})-{\mathsf{dist}}(X_0,X_1)\right]$. Consider $\{X_t\}$ and an identically distributed walk $\{\tilde{X_t}\}$ such that $\tilde{X_t}=X_t$ for $t {\leqslant}s/2$ and $\tilde{X_t}$ evolves independently after time $s/2$. By the triangle inequality, we have $${\mathsf{dist}}(X_0, \tilde{X}_{T}) + {\mathsf{dist}}(\tilde{X}_{T}, X_{s}) {\geqslant}{\mathsf{dist}}(X_0, X_{s})\,.$$ But by reversibility and transitivity, each of ${\mathsf{dist}}(X_0, \tilde{X}_T)$ and ${\mathsf{dist}}(\tilde{X}_T, X_s)$ are distributed as ${\mathsf{dist}}(X_0, X_T)$. Taking expectations, the claimed result follows. \[lem:lindrift\] If $h(k) {\leqslant}T$, then $${{\mathbb E}}\, {\mathsf{dist}}(X_0, X_{2T}) {\geqslant}\frac{k}{12} - \frac12 {{\mathbb E}}\,{\mathsf{dist}}(X_0,X_1)\,.$$ Assume, for the moment, that $\pr({\mathsf{dist}}(X_0, X_j) {\geqslant}k/2) < 1/3$ for all $j < 2T$. Define the random time $\tau$ to be the smallest $t {\geqslant}0$ for which ${\mathsf{dist}}(X_0,X_t) {\geqslant}k$. Since $h(k) {\leqslant}T$, we have $\Pr[\tau {\leqslant}2T] {\geqslant}\frac12$. Now, by the triangle inequality, $$\pr({\mathsf{dist}}(X_0, X_{2T}) {\geqslant}k/2) {\geqslant}\pr(\tau {\leqslant}2T) \cdot \pr({\mathsf{dist}}(X_{\tau}, X_{2T}) < k/2\mid \tau {\leqslant}2T)\,.$$ But by transitivity, we have $\pr({\mathsf{dist}}(X_{\tau}, X_{2T}) < k/2\mid \tau {\leqslant}2T) = \pr({\mathsf{dist}}(X_0, X_{2T-\tau}) < k/2) {\geqslant}2/3$, by our initial assumption. We conclude that $\pr({\mathsf{dist}}(X_0, X_{2T}) {\geqslant}k/2) {\geqslant}1/3$. Thus whether this assumption is correct or not, we obtain a $j {\leqslant}2T$ for which $\pr({\mathsf{dist}}(X_0, X_j) {\geqslant}k/2) {\geqslant}1/3$. In particular, ${{\mathbb E}}\, {\mathsf{dist}}(X_0, X_j) {\geqslant}\frac{k}{6}$. Combining this with Lemma \[lem:mark\] yields the desired result. Using transitivity, one can also prove a small-ball occupation estimate, directly from information on the hitting times. \[thm:smallball\] Assume that ${\mathsf{dist}}(X_0, X_1) {\leqslant}B$ almost surely. If $h(k) {\leqslant}T$, then for any ${\varepsilon}{\geqslant}\frac{1}{k}$, we have $$\frac{1}{T} \sum_{t=0}^{T} \pr\left[{\mathsf{dist}}(X_0, X_t) {\leqslant}\frac{{\varepsilon}}{B} k\right] {\leqslant}O({\varepsilon})\,.$$ Let $\alpha = {\varepsilon}k$. We define a sequence of random times $\{t_i\}_{i=0}^{\infty}$ as follows. First, $t_0=0$. We then define $t_{i+1}$ as the smallest time $t > t_i$ such that ${\mathsf{dist}}(X_t, X_{t_j}) {\geqslant}\alpha$ for all $j {\leqslant}i$. We put $t_{i+1}=\infty$ if no such $t$ exists. Observe that the set $\{ X_{t_i} : t_i < \infty\}$ is $\alpha$-separated in the metric ${\mathsf{dist}}$.
We then define, for each $i {\geqslant}0$, the quantity $$\tau_i = \begin{cases} 0 & \textrm{if } t_i > 2T \\ \# \{ t \in [t_i, t_i + 2T] : {\mathsf{dist}}(X_t, X_{t_i}) {\leqslant}\alpha/3 \} & \textrm{otherwise.} \end{cases}$$ Since the set $\{ X_{t_i} : t_i < \infty \}$ is $\alpha$-separated, the $(\alpha/3)$-balls about the centers $X_{t_i}$ are disjoint, thus we have the inequality $$\label{eq:ineq1} 4T {\geqslant}\sum_{i=0}^{\infty} \tau_i\,,$$ where the latter sum is over only finitely many terms. We can also calculate for any $i {\geqslant}0$, $${{\mathbb E}}[\tau_i] {\geqslant}\pr(t_i {\leqslant}2T) \cdot {{\mathbb E}}[\tau_0],$$ using transitivity. Now, we have $t_i {\leqslant}2T$ if ${\mathsf{dist}}(X_0, X_T) {\geqslant}B \alpha i$, thus for $i {\leqslant}k/(B \alpha) = 1/({\varepsilon}B)$, we have $$\pr(t_i > 2T) {\leqslant}\pr(H_k > 2T) {\leqslant}\frac{{{\mathbb E}}[H_k]}{2T} {\leqslant}\frac12\,.$$ We conclude that for $i {\leqslant}\lfloor 1/({\varepsilon}B)\rfloor $, we have ${{\mathbb E}}[\tau_i] {\geqslant}\frac12\,{{\mathbb E}}[\tau_0]$. Combining this with (\[eq:ineq1\]) yields $${{\mathbb E}}[\tau_0] {\leqslant}O({\varepsilon}B T)\,,$$ proving the desired claim. In the next section, we prove analogues of the preceding statements for Hilbert space-valued martingales. One can then use harmonic functions to obtain such results in the graph setting. The results in this section are somewhat more general though, since they give general connections between the function $h(k)$ and other properties of the chain. For instance, for every $j \in \mathbb N$, there are groups where $h(k) \asymp k^{1/(1-2^{-j})}$ as $k \to \infty$ [@Ershler01]. Martingale estimates -------------------- We now prove analogs of Lemma \[lem:lindrift\] and Theorem \[thm:smallball\] in the setting of martingales. Let $\{M_t\}$ be a martingale taking values in some Hilbert space $\mathcal H$, with respect to the filtration $\{\mathcal F_t\}$. Assume that ${{\mathbb E}}\left[\|M_{t+1}-M_t\|^2\mid \mathcal F_t\right]=1$ for every $t {\geqslant}0$, and there exists a $B {\geqslant}1$ such that for every $t {\geqslant}0$, we have $\|M_{t+1}-M_t\| {\leqslant}B$ almost surely. \[lem:mghit\] For $R {\geqslant}0$, let $\tau$ be the first time $t$ such that $\|M_t-M_0\| {\geqslant}R$. Then, $R^2 {\leqslant}{{\mathbb E}}(\tau) {\leqslant}(R+B)^2$. Applying the Optional Stopping Theorem (see, e.g., [@LPW09 Cor 17.7]) to the martingale $\|M_t-M_0\|^2-t$, with the stopping time $\tau$, we see that ${{\mathbb E}}(\tau) = {{\mathbb E}}(\|M_{\tau}-M_0\|^2)$, and $R^2 {\leqslant}{{\mathbb E}}(\|M_{\tau}-M_0\|^2) {\leqslant}(R+B)^2.$ The following simple estimate gives a lower bound on the $L^1$ rate of escape for a martingale. \[lem:L1mg\] For every $T {\geqslant}0$, we have ${{\mathbb E}}\,\left\|M_0-M_T\right\| {\geqslant}\sqrt{(T-B)/8}$. Let $\tau {\geqslant}0$ be the first time such that $\|M_0-M_{\tau}\| {\geqslant}\sqrt{(T-B)/2}$, and let $\tau' = \min(\tau,T)$. First, by Lemma \[lem:mghit\] and Markov’s inequality, we have $${{\mathbb E}}\, \|M_0-M_{\tau'}\| {\geqslant}\pr(\tau {\leqslant}T) \cdot \sqrt{(T-B)/2} {\geqslant}\frac{\sqrt{T-B}}{2 \sqrt{2}}\,.$$ Then, since $\|M_0-M_t\|$ is a submartingale and $\tau'$ and $T$ are stopping times with $\tau' {\leqslant}T$, we have $${{\mathbb E}}\, \|M_0-M_T\| {\geqslant}{{\mathbb E}}\,\|M_0-M_{\tau'}\|\,.$$ Now we prove an analog of Theorem \[thm:smallball\] in the martingale setting, beginning with a preliminary lemma.
\[lem:yuval\] For $R {\geqslant}R' {\geqslant}0$, let $p_R$ denote the probability that $\|M_t\| {\geqslant}\|M_0\| + R$ occurs before $\|M_t\| {\leqslant}\|M_0\|-R'$. Then $p_R {\geqslant}\frac{R'}{2R+B}$. Let $\tau {\geqslant}0$ be the first time at which $\|M_{\tau}\| {\geqslant}\|M_0\| + R$ or $\|M_{\tau}\| {\leqslant}\|M_0\|-R'$. Since $\|M_t\|-\|M_0\|$ is a submartingale, the Optional Stopping Theorem implies $$0 {\leqslant}{{\mathbb E}}\, \left(\|M_{\tau}\|-\|M_0\|\right) {\leqslant}p_R (R+B) - (1-p_R) R' {\leqslant}p_R(2R+B) - R'\,.$$ Rearranging yields the desired result. From this, we can prove a general occupation time estimate. \[lem:mgocc\] Suppose that $M_0=0$. Then for every ${\varepsilon}{\geqslant}B/\sqrt{T}$ and $T {\geqslant}1$, we have $$\frac{1}{T} \sum_{t=0}^T \pr\left[\|M_t\| {\leqslant}{\varepsilon}\sqrt{T} \right] {\leqslant}O({\varepsilon})\,.$$ Let $h=\lceil 2 (3 {\varepsilon}\sqrt{T} + B)^2\rceil$. Let $\mathcal B$ denote the ball of radius ${\varepsilon}\sqrt{T}$ about $0$ in $\mathcal H$. For $t {\leqslant}T-h$, let $p_t$ denote the probability that $M_t \in \mathcal B$, but $M_{t+h}, M_{t+h+1}, \ldots, M_{T} \notin \mathcal B$. We first show that for every such $t$, $$\label{eq:wantshow} p_t {\geqslant}\frac{{\varepsilon}}{40} \cdot \pr\left(\|M_t\| {\leqslant}{\varepsilon}\sqrt{T}\right)\,.$$ To this end, we prove three bounds. First, $$\begin{aligned} \pr\Big(\exists i {\leqslant}h & \textrm{ such that } \|M_{t+i}\| {\geqslant}2 {\varepsilon}\sqrt{T} \mid M_t \in \mathcal B\Big) \nonumber \\ {\geqslant}& \,\,\,\pr\left(\exists i {\leqslant}h \textrm{ such that } \|M_{t+i}-M_t\| {\geqslant}3 {\varepsilon}\sqrt{T} \mid M_t \in \mathcal B\right) {\geqslant}\frac12\,, \label{eq:fone}\end{aligned}$$ where the latter bound follows from Markov’s inequality and Lemma \[lem:mghit\]. Next, observe that for $R {\geqslant}{\varepsilon}\sqrt{T}$, Lemma \[lem:yuval\] gives, $$\label{eq:secondstep} \pr\left(\|M_s\| {\geqslant}R \textrm{ occurs before } \|M_s\| {\leqslant}{\varepsilon}\sqrt{T} \textrm{ for } s {\geqslant}t+i \,\Big|\, \|M_{t+i}\| {\geqslant}2 {\varepsilon}\sqrt{T}\right) {\geqslant}\frac{{\varepsilon}\sqrt{T}}{2R+B}\,.$$ Finally, the Doob-Kolmogorov maximal inequality implies that, $$\label{eq:doob} \pr\left(\max_{0 {\leqslant}r {\leqslant}T} \|M_s-M_{s+r}\| > R \mid \mathcal F_s\right) {\leqslant}\frac{{{\mathbb E}}\left[\|M_s-M_{s+T}\|^2\mid \mathcal F_s\right]}{R^2} = \frac{T}{R^2}\,.$$ Setting $R=2\sqrt{T}$, the bounds (\[eq:secondstep\]) and (\[eq:doob\]) imply that for any time $t {\geqslant}0$, we have $$\pr\left(M_{t+i}, M_{t+i+1}, \ldots, M_T \notin \mathcal B \mid \|M_{t+i}\| {\geqslant}2 {\varepsilon}\sqrt{T}\right) {\geqslant}\frac{{\varepsilon}}{20}\,.$$ Combining this with (\[eq:fone\]) yields (\[eq:wantshow\]). But we must have, $$\sum_{t=0}^T p_t {\leqslant}h = O({\varepsilon}^2 T),$$ by construction. Thus (\[eq:wantshow\]) yields $$\sum_{t=0}^T \pr\left[\|M_t\| {\leqslant}{\varepsilon}\sqrt{T} \right] {\leqslant}O({\varepsilon}T),$$ concluding the proof. Applications {#sec:applications} ------------ Combining the observations of the preceding section, together with the existence of harmonic functions, yields our claimed results on transitive graphs. In particular, the following result, combined with Theorem \[thm:harmonic\], proves Theorem \[thm:maininfinite\]. \[thm:infdiffuse\] Let $V$ be a countably infinite index set, and let $\{P(x,y)\}_{x,y \in V}$ be a stochastic, symmetric matrix.
Suppose that $\Gamma {\leqslant}{\mathsf{Aut}}(P)$ is a closed, amenable, unimodular subgroup that acts transitively on $V$, and there exists a connected graph $G=(V,E)$ on which $\Gamma$ acts by automorphisms. Suppose further that for some $B > 0$, for all $x,y \in V$, we have $$\label{eq:bddstep2} P(x,y) > 0 \implies {\mathsf{dist}}(x,y) {\leqslant}B\,,$$ where ${\mathsf{dist}}$ is the path metric on $G$. Suppose also that $$p_* = \min \{ P(x,y) : \{x,y\} \in E \} > 0.$$ If there exists a Hilbert space $\mathcal H$ and a non-constant $\Gamma$-equivariant $\mathcal H$-valued $P$-harmonic mapping, then the following holds. Let $\{X_t\}$ denote the random walk with transition kernel $P$. For every $t {\geqslant}0$, we have the estimates, $$\begin{aligned} {{\mathbb E}}\,[{\mathsf{dist}}(X_0, X_t)^2] &{\geqslant}& p_* t \label{eq:esc2} \\ {{\mathbb E}}\,[{\mathsf{dist}}(X_0, X_t)] &{\geqslant}& \frac{\sqrt{p_* t}}{24} - \frac32 B\,, \label{eq:esc1}\end{aligned}$$ and, for every ${\varepsilon}{\geqslant}1/\sqrt{T}$ and $T {\geqslant}4/p_*$, $$\label{eq:occ3} \frac{1}{T} \sum_{t=0}^{T} \pr\left[{\mathsf{dist}}(X_0, X_t) {\leqslant}{\varepsilon}\sqrt{p_* T/B}\right] {\leqslant}O({\varepsilon})\,.$$ Let $\mathcal H$ and $\Psi : V \to \mathcal H$ be the Hilbert space and non-constant $\Gamma$-equivariant $P$-harmonic mapping. Let $\|\cdot\| = \|\cdot\|_{\mathcal H}$, and normalize $\Psi$ so that for every $x \in V$, $$\label{eq:psinorm} \sum_{y \in V} P(x,y) \|\Psi(x)-\Psi(y)\|^2 = 1.$$ Then $M_t = \Psi(X_t)$ is an $\mathcal H$-valued martingale with ${{\mathbb E}}\left[\|M_{t+1}-M_t\|^2\,|\,\mathcal F_t\right] = 1$ for every $t {\geqslant}0$. Furthermore, from (\[eq:psinorm\]), we see that $\Psi$ is $\sqrt{1/p_*}$-Lipschitz as a mapping from $(V,{\mathsf{dist}})$ to $\mathcal H$. Thus one has immediately the estimate, $${{\mathbb E}}\left[{\mathsf{dist}}(X_0,X_t)^2\right] {\geqslant}p_*\, {{\mathbb E}}\left[\|M_t-M_0\|^2\right] = p_* t\,.$$ Now, for any $k \in \mathbb N$, let $H_k$ denote the first time $t$ at which ${\mathsf{dist}}(X_0, X_{t})=k$, and define the function $h : \mathbb N \to \mathbb R$ by $h(k)=\mathbb E[H_k]$. Since $\Psi$ is $\sqrt{1/p_*}$-Lipschitz, Lemma \[lem:mghit\] applied to $\{M_t\}$ shows that for every $k \in \mathbb N$, $$h(k) {\leqslant}\frac{(k+B)^2}{p_*}\,.$$ Combining this with Lemma \[lem:lindrift\] yields (\[eq:esc1\]). Combining it with Theorem \[thm:smallball\] yields (\[eq:occ3\]). Although we prove a result about occupation times, we conjecture that a stronger bound holds. Suppose that $G$ is an infinite, transitive, connected, amenable graph with degree $d$ and $\{X_t\}$ is the simple random walk on $G$. Theorem \[thm:infdiffuse\] shows that for every ${\varepsilon}> 1/\sqrt{T}$ and $T {\geqslant}4 d^2$, we have $$\frac{1}{T} \sum_{t=0}^{T} \pr({\mathsf{dist}}(X_0, X_t) {\leqslant}{\varepsilon}\sqrt{T}) {\leqslant}O({\varepsilon})\,.$$ We conjecture that this holds pointwise, i.e. for every large enough time $t$, we have $$\pr({\mathsf{dist}}(X_0, X_t) {\leqslant}{\varepsilon}\sqrt{t}) {\leqslant}O({\varepsilon})\,.$$ Finally, we conclude with a theorem about finite graphs which, in particular, yields Theorem \[thm:mainfinite\]. Let $V$ be a finite index set and suppose that ${\mathsf{Aut}}(P)$ acts transitively on $V$, and on the graph $G=(V,E)$ by automorphisms.
If $$p_* = \min \{ P(x,y) : \{x,y\} \in E \} > 0,$$ and $\lambda < 1$ is the second-largest eigenvalue of $P$, then for every $t {\leqslant}(1-\lambda)^{-1}$, we have $$\begin{aligned} {{\mathbb E}}\,[{\mathsf{dist}}(X_0, X_t)^2] &{\geqslant}& p_* t/2\,, \\ {{\mathbb E}}\,[{\mathsf{dist}}(X_0, X_t)] &{\geqslant}& \Omega({\sqrt{p_* t}}) - B\,,\end{aligned}$$ and, for every ${\varepsilon}> 0$ and $(1-\lambda)^{-1} {\geqslant}T {\geqslant}4/p_*$, $$\label{eq:occ3} \frac{1}{T} \sum_{t=0}^{T} \pr\left[{\mathsf{dist}}(X_0, X_t) {\leqslant}{\varepsilon}\sqrt{p_* T/B}\right] {\leqslant}O({\varepsilon})\,.$$ Let $\psi : V \to \mathbb R$ be such that $P \psi = \lambda \psi$, and define $\Psi : V \to \ell^2({\mathsf{Aut}}(P))$ by $$\Psi(x) = \frac{\left(\psi(\sigma x)\right)_{\sigma \in {\mathsf{Aut}}(P)}}{\sqrt{2 \langle \psi, (I-P)\psi\rangle}}.$$ An argument as in the proof of Theorem \[thm:infdiffuse\] shows that $\|\Psi\|_{{\mathrm{Lip}}} {\leqslant}\sqrt{1/p_*}$. Now, observe that $\{\lambda^{-t} \Psi(X_t)\}$ is a martingale. This follows from the fact that $\lambda^{-t} \psi(X_t)$ is a martingale, which one easily checks: $${{\mathbb E}}\left[\lambda^{-t-1} \psi(X_{t+1}) \,|\, X_t \right] = \lambda^{-t-1} (P \psi) (X_t) = \lambda^{-t} \psi(X_t).$$ Thus for $t {\leqslant}(1-\lambda)^{-1}$, the mapping $x \mapsto \lambda^{-t} \Psi(x)$ is $O(\sqrt{1/p_*})$-Lipschitz. Hence the same argument as in Theorem \[thm:infdiffuse\] applies. Acknowledgements {#acknowledgements .unnumbered} ---------------- We thank Tonci Antunovic for detailed comments on earlier drafts of this manuscript, and Tim Austin for his suggestions toward obtaining equivariance in Theorem \[thm:harmonic\]. We also thank Anna Èrshler, David Fisher, Subhroshekhar Gosh, Gady Kozma, Gábor Pete, and Bálint Virág for useful discussions and comments. [^1]: Computer Science & Engineering, University of Washington. Research partially supported by NSF grant CCF-0644037 and a Sloan Research Fellowship. E-mail: [jrl@cs.washington.edu]{}. [^2]: Microsoft Research. E-mail: [peres@microsoft.com]{} [^3]: In fact, Virag proceeds by explicitly bounding ${{\mathbb E}}\left[\|M_0-M_t\|^4\right] {\leqslant}O(|S|^2 t^2)$ when $\{M_t\}$ is any Hilbert space-valued martingale with ${{\mathbb E}}\left[\|M_{t+1}-M_t\|^2 \,|\, \mathcal F_t\right]{\leqslant}1$ and ${{\mathbb E}}\left[\|M_{t+1}-M_t\|^4\,|\, \mathcal F_t\right] {\leqslant}|S|^2$, for all $t {\geqslant}0$.
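To make the eigenfunction construction in the preceding theorem concrete, the following small sketch (our illustration; the cycle graph $C_n$ stands in for a generic finite vertex-transitive graph) verifies the relation $P\psi=\lambda\psi$ that underlies the martingale property of $\lambda^{-t}\psi(X_t)$ used in the proof.

```python
import numpy as np

n = 12
# Transition matrix of the simple random walk on the cycle C_n, a vertex-transitive graph.
P = np.zeros((n, n))
for x in range(n):
    P[x, (x - 1) % n] = P[x, (x + 1) % n] = 0.5

evals, evecs = np.linalg.eigh(P)                     # P is symmetric, so eigh applies
lam = np.sort(evals)[-2]                             # second-largest eigenvalue
psi = evecs[:, np.argsort(evals)[-2]]                # a corresponding eigenfunction

# The martingale step: E[ lam^{-(t+1)} psi(X_{t+1}) | X_t = x ] = lam^{-(t+1)} (P psi)(x)
#                                                               = lam^{-t} psi(x).
print(np.allclose(P @ psi, lam * psi))               # True
print(f"lambda = {lam:.4f}   (for C_n this equals cos(2*pi/n) = {np.cos(2 * np.pi / n):.4f})")
```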
--- abstract: 'We present evidence, that if a large enough set of high resolution stock market data is analyzed, certain analogies with physics – such as scaling and universality – fail to capture the full complexity of such data. Despite earlier expectations, the mean value per trade, the mean number of trades per minute and the mean trading activity do not show scaling with company capitalization, there is only a non-trivial monotonous dependence. The strength of correlations present in the time series of traded value is found to be non-universal: The Hurst exponent increases logarithmically with capitalization. A similar trend is displayed by intertrade time intervals. This is a clear indication that stylized facts need not be fully universal, but can instead have a well-defined dependence on company size.' author: - János Kertész - Zoltán Eisler bibliography: - 'sizematproc3.bib' title: Limitations of scaling and universality in stock market data --- In the last decade, an increasing number of physicists is becoming devoted to the study of economic and financial phenomena [@evolving; @evolving2; @kertesz.econophysics]. One of the reasons for this tendency is that societies or stock markets can be seen as strongly interacting systems. Since the early $70$’s, physics has developed a wide range of concepts and models to efficiently treat such topics, these include (fractal and multifractal) scaling, frustrated disordered systems, and far from equilibrium phenomena. To understand how similarly complex patterns arise from human activity, albeit truly challenging, seems a natural continuation of such efforts. While a remarkable success has been achieved [@bouchaud.book; @stanley.book; @mandelbrot.econophysics], studies in econophysics are often rooted in possible analogies, even though there are important differences between physical and financial systems. Despite the obvious similarities to interacting systems here we would like to emphasize the discrepancy in the levels of description. For example, in the case of a physical system undergoing a second order phase transition, it is natural to assume scaling on profound theoretical grounds and the (experimental or theoretical) determination of, e.g., the critical exponents is a fully justified undertaking. There is no similar theoretical basis for the financial market whatsoever, therefore in this case the assumption of power laws should be considered only as one possible way of fitting fat tailed distributions [@gopi.inversecube; @lux.paretian]. Also, the reference to universality should not be plausible as the robustness of *qualitative* features – like the fat tail of the distributions – is a much weaker property. While we fully acknowledge the process of understanding based on analogies as an important method of scientific progress, we emphasize that special care has to be taken in cases where the theoretical support is sparse. The aim of this paper is to summarize some recent advances that help to understand these fundamental differences. We present evidence, that the size of companies strongly affects the characteristics of trading activity of their stocks, in a way which is incompatible with the popular assumption of universality in trading dynamics. Instead, certain stylized facts have a well-defined dependence on company capitalization. Therefore, e.g., averaging distributions over companies with very different capitalization is questionable. The paper is organized as follows. 
Section \[sec:intro\] introduces the notations and data that were used. Section \[sec:cap\] shows that various measures of trading activity depend on capitalization in a non-trivial way. In Sec. \[sec:correl\], we analyze the correlations present in traded value time series, and find that the Hurst exponent increases with the mean traded value per minute logarithmically. Section \[sec:itt\] deals with a similar size-dependence of correlations present in the time intervals between trades. Finally, Section \[sec:conc\] concludes. Notations and data {#sec:intro} ================== For time windows of size $\Delta t$, let us write the total traded value (activity, flow) of the $i$th stock at time $t$ as $$f_i^{\Delta t}(t) = \sum_{n, t_i(n)\in [t, t+\Delta t]} V_i(n), \label{eq:flow}$$ where $t_i(n)$ is the time of the $n$-th transaction of the $i$-th stock. This corresponds to the coarse-graining of the individual events, or the so-called tick-by-tick data. $V_i(n)$ is the value traded in transaction $n$, and it can be calculated as the product of the price $p$ and the traded volume of stocks $\tilde V$, $$V_i(n) = p_i(n) \tilde V_i(n). \label{eq:v}$$ Price does not change very much from trade to trade, so the dominant factor in the fluctuations and the statistical properties of $f$ is given by the variation of the number of stocks exchanged in the transactions, $\tilde V$. Price serves as a conversion factor to a common unit (US dollars), and it makes the comparison of stocks possible, while also automatically corrects the data for stock splits. The statistical properties (normalized distribution, correlations, etc.) are otherwise practically indistinguishable between traded volume and traded value. We used empirical data from the TAQ database [@taq1993-2003] which records all transactions of the New York Stock Exchange and NASDAQ for the years $1993-2003$. Finally, note that throughout the paper we use $10$-base logarithms. Capitalization affects basic measures of trading activity {#sec:cap} ========================================================= Most previous studies are restricted to an analysis of the stocks of large companies. These are traded frequently, and so price and returns are well defined even on the time scale of a few seconds. Nevertheless, other quantities regarding the activity of trading, such as traded value and volume or the number of trades can be defined, even for those stocks where they are zero for most of the time. In this section we extend the study of Zumbach [@zumbach] which concerned the $100$ large companies included in London Stock Exchange’s FTSE-100 market index. This set spans about two orders of magnitude in capitalization. Instead, we analyze the $3347$ stocks[^1] that were traded continuously at NYSE for the year $2000$. This gives us a substantially larger range of capitalization, approximately $10^6\dots 6\cdot 10^{11}$ USD. Following Ref. [@zumbach], in order to quantify how the value of the capitalization $C_i$ of a company is reflected in the trading activity of its stock, we plotted the mean value per trade $\ev{V_i}$, mean number of trades per minute $\ev{N_i}$ and mean activity (traded value per minute) $\ev{f_i}$ versus capitalization in Fig. \[fig:capdep\]. Ref. [@zumbach] found that all three quantities have power law dependence on $C_i$, however, this simple ansatz does not seem to work for our extended range of stocks. 
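(A brief aside on the data handling: the coarse-graining of Eqs. (\[eq:flow\]) and (\[eq:v\]) simply sums the per-trade values $p\tilde V$ over windows of length $\Delta t$. The sketch below shows one way this can be done; the tick records are synthetic and the column names are purely illustrative, since the actual TAQ processing pipeline is not described here.)

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n_trades = 5000

# Synthetic tick-by-tick records for one stock over a single 6.5 h trading day.
ticks = pd.DataFrame({
    "t": np.sort(rng.uniform(0, 6.5 * 3600, n_trades)),           # trade times t_i(n), seconds
    "p": 50 + np.cumsum(rng.normal(0, 0.01, n_trades)),           # price p_i(n), USD
    "vol": rng.lognormal(mean=5, sigma=1, size=n_trades).round()  # volume, shares
})
ticks["V"] = ticks["p"] * ticks["vol"]        # traded value of each transaction, Eq. (eq:v)

dt = 60.0                                     # window size Delta t, seconds
ticks["window"] = (ticks["t"] // dt).astype(int)
f = ticks.groupby("window")["V"].sum()        # f^{Delta t}(t): total traded value per window
print(f.describe())
```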
While mean trading activity can be approximated, to a reasonable quality, as $\ev{f_i} \propto C_i^{0.98\pm0.06}$, neither $\ev{V}$ nor $\ev{N}$ can be fitted by a single power law in the whole range of capitalization. Nevertheless, there is an unsurprising monotonous dependence: higher capitalized stocks are traded more intensively. One can gain further insight from Fig. \[fig:capdep\](d), which eliminates the capitalization variable, and shows $\ev{V}$ versus $\ev{N}$. For the largest $1600$ stocks we find the scaling relation $$\ev{V_i} \propto \ev{N_i}^\beta , \label{eq:vvsn}$$ with $\beta = 0.57 \pm 0.09$. The estimate based on the results of Zumbach [@zumbach] for the stocks in London’s FTSE-100 is $\beta \approx 1$, while Ref. [@eisler.unified] finds $\beta = 0.22 \pm 0.04$ for NASDAQ. The regime of smaller stocks shows no clear tendency. One possible interpretation of the effect is the following. Smaller stocks are exchanged rarely, but there must exist a smallest exchanged value that is still profitable to use due to transaction costs, so $\ev{V}$ cannot decrease indefinitely. On the other hand, once a stock is exchanged more often (the change happens at about $\ev{N} = 0.05$ trades/min), it is no longer traded in this minimal profitable unit. With more intensive trading, trades “stick together”: liquidity allows the exchange of larger packages. This increase is clear, but not very large, up to one order of magnitude. Although increasing package sizes reduce transaction costs, price impact [@gabaix.powerlaw; @plerou.powerlaw; @farmer.powerlaw; @farmer.whatreally] increases, and profits will decrease again. The balance between these two effects can determine package sizes and may play a role in the formation of the scaling relation (\[eq:vvsn\]). Non-universal correlations of traded value {#sec:correl} ========================================== Scaling methods [@vicsek.book; @dfa.intro; @dfa] have long been used to characterize stock market time series, including prices and trading volumes [@bouchaud.book; @stanley.book]. In particular, the Hurst exponent $H(i)$ is often calculated. For the traded value time series $f_i^{\Delta t}(t)$ of stock $i$, it can be defined as $$\label{eq:hurst} \sigma_i^2(\Delta t) = \ev{\left (f_i^{\Delta t}(t)-\ev{f_i^{\Delta t}(t)} \right )^2}\propto\Delta t^{2H(i)},$$ where $\ev{\cdot}$ denotes time averaging with respect to $t$. The signal is said to be correlated (persistent) when $H>0.5$, uncorrelated when $H=0.5$, and anticorrelated (antipersistent) for $H<0.5$. It is not a trivial fact, but several recent papers [@eisler.sizematters; @queiros.volume] point out that the variance on the left-hand side exists for any stock’s traded value and any time scale $\Delta t$. Therefore, we carried out measurements of $H$ on all $2647$ stocks that were continuously traded on NYSE in the period $2000-2002$. We investigated separately the $4039$ stocks that were traded at NASDAQ for the same period. We find that stock market activity has a much richer behavior than simply all stocks having Hurst exponents statistically distributed around an average value, as assumed in Ref. [@gopi.volume]. Instead, there is a crossover [@eisler.sizematters; @ivanov.itt; @ivanov.unpublished] between two types of behavior around the time scale of a few hours to $1$ trading day.
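Equation (\[eq:hurst\]) suggests a simple estimator for $H(i)$: coarse-grain the traded value into windows of increasing size and regress $\log\sigma$ on $\log\Delta t$. The sketch below is our own minimal version of this variance-scaling estimator, not the code used for the measurements reported here (which may involve DFA-type refinements [@dfa]).

```python
import numpy as np

def hurst_variance_scaling(f, windows=(1, 2, 4, 8, 16, 32, 64)):
    """Estimate H from sigma^2(dt) ~ dt^{2H}; f is the traded value per elementary step."""
    log_dt, log_sigma = [], []
    for w in windows:
        n = (len(f) // w) * w
        coarse = f[:n].reshape(-1, w).sum(axis=1)   # f^{w}(t): sums over windows of size w
        log_dt.append(np.log10(w))
        log_sigma.append(np.log10(coarse.std()))
    H, _ = np.polyfit(log_dt, log_sigma, 1)         # slope of log(sigma) vs log(dt) gives H
    return H

# Sanity check: an uncorrelated surrogate series should give H close to 0.5.
rng = np.random.default_rng(4)
print(f"H (i.i.d. surrogate) ~ {hurst_variance_scaling(rng.lognormal(size=2**16)):.2f}")
```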
An essentially uncorrelated regime was found when $\Delta t < 20$ min for NYSE and $\Delta t < 2$ min for NASDAQ, while the time series of larger companies become strongly correlated when $\Delta t > 300$ min for NYSE and $\Delta t > 60$ min for NASDAQ. As a reference, we also calculated the Hurst exponents $H_{shuff}(i)$ of the shuffled time series. The results are plotted in Fig. \[fig:hurst\]. ![Behavior of the Hurst exponents $H(i)$ for the period $2000-2002$, and two markets ([**(a)**]{} NYSE, [**(b)**]{} NASDAQ). For short time windows ($\Circle$), all signals are nearly uncorrelated, $H(i)\approx 0.51 - 0.52$, regardless of stock market. The fitted slopes are $\gamma_{NYSE}(\Delta t < \mathrm{20\space min})=0.001\pm 0.002$, and $\gamma_{NASDAQ}(\Delta t < \mathrm{2\space min})=0.003\pm 0.002$. For larger time windows ($\blacksquare$), the strength of correlations depends logarithmically on the mean trading activity of the stock, $\gamma_{NYSE}(\Delta t > \mathrm{300\space min})=0.06\pm 0.01$ and $\gamma_{NASDAQ}(\Delta t > \mathrm{60\space min})=0.05\pm 0.01$. Shuffled data ($\bigtriangledown$) display no correlations, thus $H_{shuff}(i) = 0.5$. [*Insets:*]{} The $\log \sigma$-$\log \Delta t$ scaling plots ($\blacksquare$) for two example stocks, GE (NYSE) and DELL (NASDAQ). The darker shaded intervals have well-defined Hurst exponents, the crossover is indicated with a lighter background. Results for shuffled time series ($\Circle$) were shifted vertically for better visibility.[]{data-label="fig:hurst"}](LmfvsHf_NYSE){height="195pt"} ![Panel **(b)** of Fig. \[fig:hurst\] (NASDAQ); quantities and symbols as in the caption above.](LmfvsHf_NASDAQ){height="195pt"} One can see that for shorter time windows, correlations are absent in both markets, $H(i)\approx0.51-0.53$. For windows longer than a trading day, however, while small $\ev{f}$ stocks again display only very weak correlations, larger ones show up to $H\approx 0.9$. Furthermore, there is a distinct logarithmic trend in the data: $$H(i) = H^* + \gamma\log\ev{f_i}, \label{eq:hurst_scaling}$$ with $\gamma(\Delta t > 300~\mathrm{min}) = 0.06\pm0.01$ for NYSE and $\gamma(\Delta t > 60~\mathrm{min}) = 0.05\pm0.01$ for NASDAQ. This result can be predicted by a general framework based on a new type of scaling law [@eisler.non-universality; @eisler.unified]. Shorter time scales correspond to the special case $\gamma = 0$; there is no systematic trend in $H$.
After shuffling the time series, as expected, they become uncorrelated and show $H_{shuff}(i)\approx 0.5$ at all time scales and without significant dependence on $\ev{f_i}$. It is to be emphasized that the crossover is not simply between uncorrelated and correlated regimes. It is instead between homogeneous (all stocks show $H(i)\approx H_1$, $\gamma = 0$) and inhomogeneous ($\gamma > 0$) behavior. One finds $H_1 \approx 0.5$, but very small $\ev{f}$ stocks do not depart much from this value even for large time windows. This is a clear relation to company size, as $\ev{f}$ is a monotonously growing function of company capitalization (see Sec. \[sec:cap\] and Ref. [@eisler.sizematters]). Dependence of the effect on $\ev{f}$ is in fact a dependence on company size. This is direct evidence of non-universality. The trading mechanism that governs the marketplace depends strongly on the stock that is traded. In a physical sense, there are no universality classes [@reichl] comprising a given group of stocks and characterized by a set of stylized facts, such as Hurst exponents. Instead, there is a continuous spectrum of company sizes and the stylized facts may depend *continuously* on company size/capitalization. Systematic dependence of the exponent of the power spectrum of the number of trades on capitalization was previously reported in Ref. [@bonanno.dynsec], based on the study of $88$ stocks. That quantity is closely related to the Hurst exponent of the respective time series (see Ref. [@ivanov.itt]). Direct analysis finds a strong, monotonous increase of the Hurst exponent of $N$ with growing $\ev{N}$, but no such clear logarithmic trend as Eq. (\[eq:hurst\_scaling\]). Non-universal correlations of intertrade times {#sec:itt} ============================================== To strengthen the arguments of Sec. \[sec:correl\], we carried out a similar analysis of the intertrade interval series $T_i(n=1\dots N_i-1)$, defined as the time spacings between the $n$-th and $(n+1)$-th trade. $N_i$ is the total number of trades for stock $i$ during the period under study. Previously, Ref. [@ivanov.itt] used $30$ stocks from the TAQ database for the period $1993-1996$ and proposed that $H_T$ has the universal value $0.94 \pm 0.05$. We analyzed the same database, but included a large number of stocks with very different capitalizations. First it has to be noted that the mean intertrade interval has decreased drastically over the years. In this sense the stock market cannot be considered stationary for periods much longer than one year. We analyzed the two-year period $1994-1995$ (part of that used in Ref. [@ivanov.itt]) and separately the single year $2000$. We used all stocks in the TAQ database with $\ev{T} < 10^5$ sec, a total of $3924$ and $4044$ stocks, respectively. The Hurst exponents for the time series $T_i$ can be written, analogously to Eq. (\[eq:hurst\]), as $$\label{eq:hurstittdef} \sigma_i^2(N) = \ev{\left (\sum_{n=1}^N T_i(n)-\ev{\sum_{n=1}^N T_i(n)} \right )^2}\propto N^{2H_T(i)},$$ where the series is not defined in time, but instead on a tick-by-tick basis, indexed by the number of transactions. The data show a crossover, similar to that for the traded value $f$, from a lower to a higher value of $H_T(i)$ when the window size is approximately the daily mean number of trades (for an example, see the inset of Fig. \[fig:ITT\]). For the restricted set studied in Ref. [@ivanov.itt], the value $H_T\approx 0.94\pm0.05$ was suggested for window sizes above the crossover.
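The estimator implied by Eq. (\[eq:hurstittdef\]) is the same variance-scaling construction as before, except that blocks contain a fixed number of trades $N$ rather than a fixed clock-time window. A minimal sketch (ours, run on a synthetic surrogate series) reads:

```python
import numpy as np

def hurst_intertrade(T_series, blocks=(8, 16, 32, 64, 128, 256)):
    """Estimate H_T from Var[ sum_{n=1}^N T(n) ] ~ N^{2 H_T}, with blocks of N trades."""
    log_N, log_sigma = [], []
    for N in blocks:
        m = (len(T_series) // N) * N
        block_sums = T_series[:m].reshape(-1, N).sum(axis=1)
        log_N.append(np.log10(N))
        log_sigma.append(np.log10(block_sums.std()))
    H_T, _ = np.polyfit(log_N, log_sigma, 1)
    return H_T

rng = np.random.default_rng(6)
waits = rng.exponential(scale=30.0, size=2**15)     # surrogate intertrade times, seconds
print(f"H_T (uncorrelated surrogate) ~ {hurst_intertrade(waits):.2f}")   # ~0.5 expected
```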
Similarly to the case of traded value Hurst exponents analyzed in Section \[sec:correl\], the inclusion of more stocks[^2] reveals the underlying systematic non-universality. Again, less frequently traded stocks appear to have weaker autocorrelations as $H_T$ decreases monotonously with growing $\ev{T}$. One can fit an approximate logarithmic law[^3][^4] to characterize the trend: $$H_T = H_T^*+\gamma_T\log\ev{T}, \label{eq:hurstitt}$$ where $\gamma_T = -0.10\pm 0.02$ for the period $1994-1995$ (see Fig. \[fig:ITT\]) and $\gamma_T = -0.08 \pm 0.02$ for the year $2000$ [@uponrequest]. ![Hurst exponents of $T_i$ for windows greater than $1$ day, plotted versus the mean intertrade time $\ev{T_i}$. Stocks that are traded less frequently show markedly weaker persistence of $T$ for time scales longer than $1$ day. The dotted horizontal line serves as a reference. We used stocks with $\ev{T} < 10^5$ sec; the sample period was $1994-1995$. The inset shows the two regimes of correlation strength for the single stock General Electric (GE) on a log-log plot of $\sigma(N)$ versus $N$. The slopes corresponding to Hurst exponents are $0.6$ and $0.89$.[]{data-label="fig:ITT"}](LmITTvsHITTy1994-1995){height="190pt"} In their recent preprint, Yuen and Ivanov [@ivanov.unpublished] independently show a tendency similar to Eq. (\[eq:hurstitt\]) for intertrade times of NYSE and NASDAQ in a different set of stocks. Conclusions {#sec:conc} =========== In this paper we have summarized a few recent advances in understanding the role of company size in trading dynamics. We revisited a number of previous studies of stock market data and found that the extension of the range of capitalization of the studied firms reveals a new aspect of stylized facts: The characteristics of trading display a fundamental dependence on capitalization. We have shown that trading activity $\ev{f}$, the number of trades per minute $\ev{N}$ and the mean size of transactions $\ev{V}$ display non-trivial, monotonous dependence on company capitalization, which cannot be described by a simple power law. On the other hand, for moderate to large companies, a power law gives an acceptable fit for the dependence of the mean transaction size on the trading frequency. The Hurst exponents for the variance of traded value/intertrade times can be defined and they depend logarithmically on the mean trading activity $\ev{f}$/mean intertrade time $\ev{T}$. These findings imply that special care must be taken when the concepts of scaling and universality are applied to financial processes. For the modeling of stock market processes, one should always consider that many characteristic quantities depend strongly on the capitalization. The introduction of such models seems a real challenge at present. Acknowledgement =============== The authors thank György Andor for his support with the data. JK is a member of the Center for Applied Mathematics and Computational Physics, BME; furthermore, he is grateful for the hospitality of Dietrich Wolf (Duisburg) and of the Humboldt Foundation. Support by OTKA T049238 is acknowledged. [^1]: Note that many minor stocks do not represent actual companies, only different class stocks of a larger firm. [^2]: For a reliable calculation of Hurst exponents, we had to discard those stocks with $\ev{N} < 10^{-3}$ trades/min for $1994-1995$ and $\ev{N} < 2\cdot 10^{-3}$ trades/min for $2000$. This filtering leaves $3519$ and $3775$ stocks, respectively.
[^3]: As intertrade intervals are closely related to the number of trades per minute $N(t)$, it is not surprising to find the similar tendency for that quantity [@bonanno.dynsec]. [^4]: Note that for window sizes smaller than the daily mean number of trades, intertrade times are only weakly correlated and the Hurst exponent is nearly independent of $\ev{T}$. This is analogous to what was seen for traded value records in Sec. \[sec:correl\].
--- author: - Oleg Zaitsev date: '[*Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg, Germany*]{}' title: | Semiclassical spin coherent state method\ in the weak spin-orbit coupling limit --- Introduction ============ A new solution to the problem of how to include spin-orbit interaction in the semiclassical theory was recently proposed by Pletyukhov [*et al.*]{} [@Ple]. They use the spin coherent states to describe the spin degrees of freedom of the system. Then a path integral that combines the spin and orbital variables can be constructed, leading to the semiclassical propagator (or its trace) when evaluated within the stationary phase approximation. In such an approach the spin and orbital degrees of freedom are treated on equal footings. In particular, one can think of a classical trajectory of the system in the extended phase space, i.e., the phase space with two extra dimensions due to spin. (The spin part of the extended phase space can be mapped onto the Bloch sphere.) Like in the pure orbital systems, it is possible to construct a classical Hamiltonian that will be a function of the phase space coordinates. The trajectories of the system satisfy the equations of motion generated by this Hamiltonian. In this letter we apply the general theory [@Ple] to the limiting case of weak spin-orbit coupling. This limit is naturally incorporated in the theory proposed by Bolte and Keppeler [@Bol] that based on the $\hbar \rightarrow 0$ expansion in the Dirac (or Schrödinger) equation. Bolte and Keppeler have shown that the semiclassical trace formula without spin-orbit interaction acquires an additional modulation factor due to spin, but otherwise remains unchanged. We obtain the same modulation factor using the spin coherent state method. Classical dynamics and periodic\ orbits {#ClasDyn} ================================ We begin with the classical phase space symbol of the Hamiltonian [@Ple] $$H(p,q,z) = H_0 (p,q) + \kappa \hbar S {\boldsymbol}{\sigma}(z) \cdot \mathbf{C}(p,q) \equiv H_0 + \hbar H_{\mathrm{so}}. \label{Ham}$$ It includes the spin-orbit interaction term $\hbar H_{\mathrm{so}}$ which is linear in spin, but otherwise is an arbitrary function of (possibly multidimensional) momenta and coordinates $p$ and $q$. The spin direction is described by a unit vector ${\boldsymbol}\sigma (z) \overset {\mathrm {def}} = \left< z \right| {\boldsymbol}{\hat \sigma} \left| z \right>$, where ${\boldsymbol}{\hat \sigma}$ are the Pauli matrices and the complex variable $z \equiv u - iv$ labels the spin coherent states of total spin $S$ [@Koc]. At the end of our calculations we will set $S= \frac 1 2$. The Planck constant appears explicitly in this classical Hamiltonian and is treated as the perturbation parameter in the weak-coupling limit. The spin-orbit coupling strength $\kappa$ is kept finite. Thus the condition $\hbar \rightarrow 0$ provides both the semiclassical (high energy) and the weak-coupling limits. 
The Hamiltonian (\[Ham\]) determines the classical equations of motion for the orbital and spin degrees of freedom [@Ple] $$\begin{aligned} \dot p &=& - \frac {{\partial}H} {{\partial}q} = - \frac {{\partial}H_0} {{\partial}q} - \kappa \hbar S {\boldsymbol}{\sigma} \cdot \frac {{\partial}\mathbf{C}} {{\partial}q}, \label{EOM1}\\ \dot q &=& \frac {{\partial}H} {{\partial}p} = \frac {{\partial}H_0} {{\partial}p} + \kappa \hbar S {\boldsymbol}{\sigma} \cdot \frac {{\partial}\mathbf{C}} {{\partial}p}, \label{EOM2}\\ \dot {{\boldsymbol}{\sigma}} &=& - \kappa {\boldsymbol}{\sigma} \times \mathbf{C}. \label{EOM3}\end{aligned}$$ Since $${\boldsymbol}{\sigma} (z) = \frac 1 {1 + |z|^2} \left(2u, 2v, |z|^2 -1\right)^{\mathrm T} \label{sigma}$$ in the “south” gauge,[^1] Eq. (\[EOM3\]) is equivalent to two Hamilton-like equations $$\begin{aligned} &\dot u = - \frac {\left(1 + |z|^2\right)^2} {4 \hbar S} \frac {{\partial}H} {{\partial}v} = - \frac {\kappa} 4 \left(1 + |z|^2\right)^2 \frac {{\partial}{\boldsymbol}{\sigma}} {{\partial}v} \cdot \mathbf{C}, \label{EOMu} \\ &\dot v = \frac {\left(1 + |z|^2\right)^2} {4 \hbar S} \frac {{\partial}H} {{\partial}u} = \frac {\kappa} 4 \left(1 + |z|^2\right)^2 \frac {{\partial}{\boldsymbol}{\sigma}} {{\partial}u} \cdot \mathbf{C}. \label{EOMv}\end{aligned}$$ In the leading order in $\hbar$ we keep only the unperturbed terms in Eqs.  (\[EOM1\]) and (\[EOM2\]). It follows then that the orbital motion, in this approximation, is unaffected by spin. The spin motion is determined by the unperturbed orbital motion via Eq. (\[EOM3\]), which does not contain $\hbar$. It describes the spin precession in the time-dependent effective magnetic field $\mathbf{C} \left(p_0(t), q_0 (t) \right)$, where $\left(p_0(t), q_0 (t) \right)$ is an orbit of the unperturbed Hamiltonian $H_0$. In order to apply a trace formula for the density of states, we need to know the periodic orbits of the system, both in orbital and spin phase space coordinates. The orbital part of a periodic trajectory is necessarily a periodic orbit of $H_0$. Assume that such an orbit with period $T_0$ is given. Then Eq. (\[EOM3\]) generates a map on the Bloch sphere ${\boldsymbol}{\sigma} (0) \longmapsto {\boldsymbol}{\sigma} (T_0)$ between the initial and final points of a spin trajectory ${\boldsymbol}{\sigma} (t)$. The fixed points of this map correspond to periodic orbits with the period $T_0$. Since Eq. (\[EOM3\]) is linear in ${\boldsymbol}{\sigma}$, for any two trajectories ${\boldsymbol}{\sigma_1} (t)$ and ${\boldsymbol}{\sigma_2} (t)$, their difference also satisfies this equation. But this means that $|{\boldsymbol}{\sigma_1} (t) - {\boldsymbol}{\sigma_2} (t)| = \mathrm{const}$, i.e., the angles between the vectors do not change during the motion. Hence the map is a rotation by an angle $\alpha$ about some axis through the center of the Bloch sphere. The points of intersection of this axis and the sphere are the fixed points of the map (Fig. \[Fig1\]). Thus for a given periodic orbit of $H_0$, there are two periodic orbits of $H$ with opposite spin orientations (unless $\alpha$ is a multiple of $2\pi$, by accident). The angle $\alpha$ was mentioned in Ref. [@Kep]. [ ]{} Modulation factor ================= Correction to the action ------------------------ In order to derive a modulation factor in the trace formula, we need to determine the correction to the action and the stability determinant due to the spin-orbit interaction. 
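Before computing these corrections, it is easy to check numerically the geometric statement of the previous section, namely that Eq. (\[EOM3\]) generates a rotation of the Bloch sphere over one period whose axis fixes the two periodic spin orientations. The sketch below is purely illustrative: the effective field $\mathbf{C}(t)$ is an arbitrary smooth $T_0$-periodic choice, not one derived from a particular $H_0$.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.spatial.transform import Rotation

kappa, T0 = 1.0, 2 * np.pi

def C(t):
    # Illustrative T0-periodic effective field along the unperturbed orbit (assumed form).
    return np.array([np.cos(t), np.sin(t), 0.5])

def rhs(t, sigma):
    return -kappa * np.cross(sigma, C(t))            # spin precession, Eq. (EOM3)

# The equation is linear in sigma, so propagating an orthonormal frame over one period
# yields the matrix of the map sigma(0) -> sigma(T0), which is (numerically) a rotation.
M = np.column_stack([
    solve_ivp(rhs, (0, T0), e, rtol=1e-10, atol=1e-12).y[:, -1] for e in np.eye(3)
])
rotvec = Rotation.from_matrix(M).as_rotvec()
alpha = np.linalg.norm(rotvec)                       # rotation angle alpha (mod 2*pi)
axis = rotvec / alpha                                # +-axis give the two periodic spin orientations
print(f"alpha = {alpha:.4f} rad,  axis = {np.round(axis, 3)}")
```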
The action along a periodic orbit is [@Ple] $${\cal S} = \oint p dq + 2S \hbar \oint \frac {u dv - v du} {1 + |z|^2} \equiv {\cal S}_{pq} + \hbar {\cal S}_{\mathrm{spin}}.$$ While the spin part contains $\hbar$ explicitly, we need to extract the leading order correction to the orbital action. This is the only place where we implicitly take into account the influence of spin on the orbital motion. It is convenient for the following calculation to parameterize both the perturbed and unperturbed orbits by a variable $s \in [0,1]$, i.e., $${\cal S}_{pq} = \int_0^1 p \frac {dq} {ds} ds.$$ The time parameterization would be problematic since the periods of the perturbed and the unperturbed orbits differ by order of $\hbar$ (see Appendix \[time\]). The correction to the orbital part due to the perturbation is $$\begin{aligned} \delta {\cal S}_{pq} &=& \int_0^1 \left[ \delta p \frac {dq_0} {ds} + p_0 \frac d {ds} (\delta q) \right] ds \nonumber \\ &=& \int_0^1 \left( \delta p \frac {dq_0} {ds} - \delta q \frac {dp_0} {ds} \right) ds + p_0 \delta q \bigg|_0^1. \label{deltaSpq}\end{aligned}$$ The boundary term vanishes for the periodic orbit, and the integration can be done over the period of the unperturbed orbit now: $$\begin{aligned} \delta {\cal S}_{pq} &=& \int_0^{T_0} \left( \delta p \dot q_0 - \delta q \dot p_0 \right) dt \nonumber \\ &=& \int_0^{T_0} \left( \delta p \frac {{\partial}H_0} {{\partial}p} + \delta q \frac {{\partial}H_0} {{\partial}q} \right) dt = \int_0^{T_0} \delta H_0 dt.\end{aligned}$$ Since the perturbed and unperturbed orbits have the same energy, the variation of the Hamiltonian is $\delta H_0 = - \hbar H_{\mathrm{so}}$. Taking into account Eq. (\[sigma\]), we can express the change in the orbital action as $$\delta {\cal S}_{pq} = - \hbar \kappa S \int_0^{T_0} \mathbf{C} \cdot {\boldsymbol}\sigma dt = - \hbar \kappa S \int_0^{T_0} \mathbf{C} \cdot \left( \begin{array}{c} 2u \\ 2v \\ |z|^2 -1 \end{array} \right) \frac {dt} {1+|z|^2}. \label{orb}$$ We now turn to the spin action. Parameterizing the trajectory with time and then using the equations of motion (\[EOMu\]), (\[EOMv\]) and Eq.(\[sigma\]), we find $$\begin{aligned} \hbar {\cal S}_{\mathrm{spin}}&= \frac {\hbar \kappa S} 2 \int_0^{T_0} \mathbf{C} \cdot \left( u \frac {{\partial}{\boldsymbol}\sigma} {{\partial}u} + v \frac {{\partial}{\boldsymbol}\sigma} {{\partial}v} \right) \left(1 + |z|^2 \right) dt \nonumber \\ &= \hbar \kappa S \int_0^{T_0} \mathbf{C} \cdot \left( \begin{array}{c} u \left(1 - |z|^2 \right) \\ v \left(1 - |z|^2 \right) \\ 2 |z|^2 \end{array} \right) \frac {dt} {1+|z|^2}. \label{spin}\end{aligned}$$ Summing up the orbital and spin contributions Eqs. (\[orb\]) and (\[spin\]), we obtain the entire change in action as $$\delta {\cal S} = \delta {\cal S}_{pq} + \hbar {\cal S}_{\mathrm{spin}} = \hbar S \int_0^{T_0} F(t) dt, \label{deltaS}$$ where $$F(t) = \kappa \mathbf{C} \cdot \left( \begin{array}{c} -u \\ -v \\ 1 \end{array} \right).$$ Stability determinant {#stdet} --------------------- The stability determinant is derived from the second variation of the Hamiltonian $H^{(2)}$ about the periodic orbit [@Ple]. In the leading order in $\hbar$, the orbital and spin degrees of freedom in $H^{(2)}$ are separated. This means that the spin phase space provides an additional block to the unperturbed monodromy matrix of the orbital phase space, which results in a separate stability determinant due to spin. 
The linearized momentum and coordinate for spin $$\left( \begin{array}{c} \xi \\ \nu \end{array} \right) = \frac {2 \sqrt{\hbar S}} {1 + |z|^2} \left( \begin{array}{c} \delta u \\ \delta v \end{array} \right)$$ satisfy the equations of motion $$\left( \begin{array}{c} \dot \xi \\ \dot \nu \end{array} \right) = \left( \begin{array}{c} - \frac {{\partial}H^{(2)}} {{\partial}\nu} \vspace{.1cm}\\ \frac {{\partial}H^{(2)}} {{\partial}\xi} \end{array} \right) = F(t) \left( \begin{array}{c} - \nu \\ \xi \end{array} \right). \label{linEOM}$$ Solving these equations we find the spin block of the monodromy matrix to be (Appendix \[mon\]) $$M = \left( \begin{array}{cc} \cos \varphi & - \sin \varphi \\ \sin \varphi & \cos \varphi \end{array} \right), \label{monmat}$$ where the stability angle is $$\varphi = \int_0^{T_0} F(t) dt.$$ The proportionality between $\varphi$ and $\delta {\cal S}$ \[Eq.(\[deltaS\])\] will be exploited in a moment but, first, we find the stability determinant $$\left| \det (M - I) \right|^{1/2} = 2 \left| \sin \frac \varphi 2 \right|,$$ where $I$ is the $2 \times 2$ unit matrix. Trace formula ------------- As was explained at the end of Sec. \[ClasDyn\], for each unperturbed periodic orbit there are two new periodic orbits with opposite spin orientations ${\boldsymbol}\sigma (t)$. It is easy to deduce, then, that for these two orbits both $\delta {\cal S}$ and $\varphi$ have the same magnitude but opposite signs. Now we are ready to write the trace formula for the oscillatory part of the density of states $$\delta g (E) = \sum_{po} \sum_{\pm} \frac {{\cal A}_0} {2 \left| \sin \frac \varphi 2 \right|} \cos \left[ \frac 1 \hbar ({\cal S}_0 \pm \delta {\cal S}) - \frac \pi 2 (\mu_0 + \mu_{\pm}) \right], \label{trf}$$ where the first sum is over the unperturbed periodic orbits and the second sum takes care of the contribution of the two spin orientations; ${\cal A}_0$ is the prefactor for the unperturbed orbit, which depends on the stability determinant and the primitive period; ${\cal S}_0$ and $\mu_0$ are the unperturbed action and the Maslov index, respectively; $\mu_{\pm}$ are the additional Maslov indices due to spin. The nature of spin requires an additional Kochetov-Solari phase correction [@Koc] that results in the shift $S \longmapsto S + \frac 1 2$ of the total spin parameter in $\delta {\cal S}$ (Appendix \[Koch\]). Setting $S = \frac 1 2$, we end up with $$\delta {\cal S} \longmapsto \delta \tilde {\cal S} = \hbar \varphi.$$ With this relation and the additional Maslov index (Appendix \[Mas\]) $$\mu_{\pm} = 1 + 2 \left[ \pm \frac \varphi {2\pi} \right] \label{Mind}$$ ($[x]$ is the largest integer $\leq x$) the sum over the spin orientations in Eq. (\[trf\]) becomes $$\begin{aligned} &\sum_{\pm} \frac {{\cal A}_0} {2 \sin \frac \varphi 2} \cos \left[ \left( \frac {{\cal S}_0} \hbar - \frac \pi 2 \mu_0 \right) \pm \left( \frac {\delta \tilde {\cal S}} \hbar - \frac \pi 2 \right) \right] \nonumber \\ &= 2 \cos \left( \frac \varphi 2 \right) {{\cal A}_0} \cos \left[ \frac {{\cal S}_0} \hbar - \frac \pi 2 \mu_0 \right]. \label{sum}\end{aligned}$$ This is our main result: each term in the periodic orbit sum is the contribution of an unperturbed orbit ${{\cal A}_0} \cos \left[ \frac {{\cal S}_0} \hbar - \frac \pi 2 \mu_0 \right]$ times the modulation factor $${\cal M} = 2 \cos \left( \frac \varphi 2 \right). \label{mf}$$ Note that no assumption was made on whether the unperturbed periodic orbits are isolated or not. 
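The trigonometric step leading to Eq. (\[mf\]) can be verified directly: writing $x = {\cal S}_0/\hbar - \frac{\pi}{2}\mu_0$, the two-orientation sum in Eq. (\[sum\]) collapses to $2\cos(\varphi/2)\,{\cal A}_0\cos x$. A short numerical check (ours) is:

```python
import numpy as np

# x stands for S_0/hbar - (pi/2) mu_0; A0 and phi are arbitrary test values, 0 < phi < 2*pi.
A0, x, phi = 1.7, 0.4, 2.1
lhs = sum(A0 / (2 * np.sin(phi / 2)) * np.cos(x + s * (phi - np.pi / 2)) for s in (+1, -1))
rhs = 2 * np.cos(phi / 2) * A0 * np.cos(x)
print(np.isclose(lhs, rhs))          # True for any x
```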
Comparison with another method {#BK} ============================== Bolte and Keppeler [@Bol] derived the modulation factor in the weak-coupling limit by a different method. Their results[^2] are expressed in terms of a spin trajectory with the initial condition $${\boldsymbol}\sigma (0) = (0,0,-1)^{\mathrm{T}} \label{incond}$$ that obeys Eq. (\[EOM3\]). This trajectory, in general, is not periodic. As in our approach, the influence of spin on the orbital motion is neglected. The spin motion can be described by the polar angles $\left(\theta(t), \phi(t)\right)$ with $\theta(0) = \pi$. The modulation factor is then $${\cal M}_{BK} = 2 \cos \left( \frac {\Delta \theta} 2 \right) \cos \chi,$$ where $\Delta \theta = \pi - \theta(T_0)$ and[^3] $$\chi = - \frac \kappa 2 \int_0^{T_0} \mathbf{C} \cdot {\boldsymbol}\sigma dt + \frac 1 2 \int_0^{T_0} \left[ 1 + \cos \theta (t) \right] \dot \phi (t) dt. \label{chi}$$ In order to show that our modulation factor Eq. (\[mf\]) is equal to ${\cal M}_{BK}$, let us express $\varphi$ in terms of the polar angles. From Eq.  (\[sigma\]) follows the coordinate transformation $$\begin{aligned} &u = \cot \frac \theta 2 \cos \phi, \nonumber \\ &v = \cot \frac \theta 2 \sin \phi.\end{aligned}$$ Since $\varphi \propto \delta {\cal S}$, we can represent it as a sum of two terms \[cf. Eqs. (\[orb\])-(\[deltaS\])\] $$\frac \varphi 2 = - \frac \kappa 2 \int_0^{T_0} \mathbf{C} \cdot {\boldsymbol}\sigma dt + \frac 1 2 \int_0^{T_0} \left[ 1 + \cos \theta (t) \right] \dot \phi (t) dt. \label{phi}$$ There is a striking similarity between the expressions for $\chi$ and $\frac \varphi 2$. The only difference is that in Eq. (\[chi\]) the integration is, in general, along a non-periodic orbit with the initial condition Eq.  (\[incond\]), while in Eq. (\[phi\]) the integration is along the periodic orbit. Since the modulation factor should not depend on the choice of the $z$ direction, we can choose the $z$ axis to coincide with the spin vector ${\boldsymbol}\sigma (0)$ for the periodic orbit at $t=0$, i.e., the $z$ axis will be the rotation axis in Fig. \[Fig1\]. Then one of the periodic orbits will satisfy the initial condition Eq.  (\[incond\]), and thus both $\chi$ and $\frac \varphi 2$ can be calculated along this orbit and are equal. Moreover, $\Delta \theta = 0$ in this case. Therefore the modulation factors derived within the two approaches coincide, $${\cal M}_{BK} = {\cal M}.$$ It was mentioned in Ref. [@Kep] that ${\cal M}_{BK} = 2 \cos \frac \alpha 2$, where $\alpha$ is the rotation angle defined in Sec. \[ClasDyn\]. Then, of course, we conclude that $$\cos \frac \alpha 2 = \cos \frac \varphi 2. \label{alphi}$$ To see that this is indeed the case, we can go back to Sec. \[stdet\] where we calculated the stability determinant. It follows from that calculation that the neighborhood of the periodic orbit is rotated by an angle $\varphi$ during the period (Appendix \[mon\]). Therefore the entire Bloch sphere is rotated by this angle. Clearly, the angle of rotation must be defined $\mathrm{mod}\; 4\pi$, i.e., it depends on the parity of the number of full revolutions of the Bloch sphere around the periodic orbit during the period. It would be desirable to prove Eq.  (\[alphi\]) without referring to the small neighborhood of the periodic orbit. The same property can be also shown if one treats the spin quantum mechanically. 
The spin propagator for the choice of the $z$ axis along the rotation axis (so that $\chi = \frac \varphi 2$) is [@Bol] $$d (T_0) = \left( \begin{array}{cc} e^{- i \frac \varphi 2} & 0\\ 0 & e^{i \frac \varphi 2} \end{array} \right).$$ Applying this operator to a spinor $(\psi_+, \psi_-)^{\mathrm T}$ at $t = 0$, we get the spinor $\left(\psi_+ e^{- i \frac \varphi 2}, \psi_- e^{i \frac \varphi 2 }\right)^{\mathrm T}$ at $t = T_0$, which corresponds to the initial spin vector rotated by the angle $\varphi$ about the $z$ axis, i.e., $\varphi = \alpha$. Summary and conclusions ======================= We have studied the case of weak spin-orbit coupling in the semiclassical approximation using the spin coherent state method. The limit is achieved formally by setting $\hbar \rightarrow 0$. The trajectories in the orbital subspace of the extended phase space then remain unchanged by the spin-orbit interaction. For each periodic orbit in the orbital subspace there are two periodic orbits in the full phase space with opposite spin orientations. The semiclassical trace formula can be expressed as a sum over unperturbed periodic orbits with a modulation factor. This agrees with the results of Bolte and Keppeler. The form of the modulation factor does not depend on whether the unperturbed system has isolated orbits or whether it contains families of degenerate orbits due to continuous symmetries. We remark that in the semiclassical treatment of pure spin systems, a renormalization procedure is necessary in order to correct the stationary phase approximation in the path integral for finite spin $S$. Such a renormalization is equivalent to the Kochetov-Solari phase correction that we employed here without justification for a system with spin-orbit interaction. Although this correction worked well in our case, it may be necessary to develop a general renormalization scheme when the interaction is not weak. Acknowledgments {#acknowledgments .unnumbered} =============== The author thanks M. Pletyukhov and M. Brack for numerous constructive discussions leading to this letter. This work has been supported by the Deutsche Forschungsgemeinschaft. Time parameterization {#time} ===================== For pedagogical reasons we do the calculation in Eq. (\[deltaSpq\]) with the time parameterization. In this case ${\cal S}_{pq} = \int_0^T p \dot q dt$, where $T$ is the exact period. Then the correction is $$\begin{aligned} \delta {\cal S}_{pq}&=& \int_0^{T_0} \left[ \delta p \dot q_0 + p_0 \dot{(\delta q)} \right] dt + p_0 (T_0) \dot q_0 (T_0) \delta T \nonumber \\ &=& \int_0^{T_0} \left( \delta p \dot q_0 - \delta q \dot p_0 \right) dt + p_0 \delta q \bigg|_0^{T_0} + p_0 (T_0) \dot q_0 (T_0) \delta T.\end{aligned}$$ Transforming the boundary term $$\begin{aligned} p_0 \delta q \bigg|_0^{T_0}&= p_0 (T_0) \left[ q (T_0) - q_0 (T_0) - q(0) + q_0 (0) \right] = p_0 (T_0) \left[q (T_0) - q(0) \right] \nonumber \\ &= p_0 (T_0) \left[q (T_0) - q(T) \right] \simeq - p_0 (T_0) \dot q_0 (T_0) \delta T,\end{aligned}$$ we see that it cancels the period correction term. Monodromy matrix {#mon} ================ We derive the monodromy matrix Eq. (\[monmat\]). In order to solve the equations of motion (\[linEOM\]) we define $\eta = \xi + i \nu$. 
Then $\dot \eta = i \eta F(t)$, which solves to $$\eta (t) = \eta (0) \exp \left[ i \int_0^t F(t') dt' \right].$$ It follows then that $$\begin{aligned} &\xi (T_0) = \xi (0) \cos \varphi - \nu (0) \sin \varphi, \nonumber \\ &\nu (T_0) = \xi (0) \sin \varphi + \nu (0) \cos \varphi, \label{xinu}\end{aligned}$$ resulting in Eq. (\[monmat\]). Note that according to Eq. (\[sigma\]), $$\begin{aligned} &\xi = \sqrt{\hbar S} \left( \delta \sigma_x + \frac {\sigma_x \delta \sigma_z} {1 - \sigma_z} \right), \nonumber \\ &\nu = \sqrt{\hbar S} \left( \delta \sigma_y + \frac {\sigma_y \delta \sigma_z} {1 - \sigma_z} \right).\end{aligned}$$ If we choose the $z$ axis in such a way that the periodic orbit starts and ends in the south pole, i.e., ${\boldsymbol}\sigma (0) = {\boldsymbol}\sigma (T_0) = (0,0,-1)^{\mathrm T}$, then at $t = 0$ and $t = T_0$ we have $$\begin{aligned} &\xi = \sqrt{\hbar S} \delta \sigma_x, \nonumber \\ &\nu = \sqrt{\hbar S} \delta \sigma_y.\end{aligned}$$ Comparing with Eqs. (\[xinu\]) we conclude that the neighborhood of the periodic orbit is rotated by the angle $\varphi$ after the period. Kochetov-Solari phase shift {#Koch} =========================== The Kochetov-Solari phase shift [@Koc] is given by $$\varphi_{KS} = \frac 1 2 \int_0^{T_0} A(t) dt,$$ where $$A(t) = \frac 1 {2 \hbar} \left[ \frac {\partial}{{\partial}\bar z} \frac {\left(1 + |z|^2\right)^2} {2S} \frac {{\partial}H} {{\partial}z} + \mathrm{c.c.} \right].$$ The spin-dependent part of the Hamiltonian is \[cf. Eq. (\[Ham\])\] $$\hbar H_{\mathrm{so}} (z, \bar z) = \frac {\hbar \kappa S} {1 + |z|^2} \mathbf{C} \cdot \left( \begin{array}{c} z + \bar z \\ i (z - \bar z) \\ |z|^2 - 1 \end{array} \right).$$ We find $$\frac {\partial}{{\partial}\bar z} \frac {\left(1 + |z|^2\right)^2} {2S} \frac {{\partial}H_{\mathrm{so}}} {{\partial}z} = \kappa \mathbf{C} \cdot \left( \begin{array}{c} - \bar z \\ i \bar z \\ 1 \end{array} \right),$$ therefore the phase shift becomes $$\varphi_{KS} = \frac 1 2 \varphi. \label{phiKS}$$ Comparing to Eq. (\[deltaS\]) we see that it effectively shifts the spin $S$ by $\frac 1 2$. One should keep in mind that this phase correction was originally derived for a pure spin system. It has not been proven to have the same form for a system with spin-orbit interaction. In the special case of the weak-coupling limit we have a reason to believe that the result Eq.  (\[phiKS\]) is correct, since we were able to reproduce the modulation factor found with another method [@Bol] (see Sec. \[BK\]). Maslov indices {#Mas} ============== The additional Maslov indices $\mu_{\pm}$ are determined by the linearized spin motion about the periodic orbit. The second variation of the Hamiltonian reads \[cf. Eq. (\[linEOM\])\] $$H^{(2)} (\xi,\nu) = \frac {F(t)} 2 \left( \xi^2 + \nu^2 \right).$$ Following Sugita [@Sug] we define its normal form $$H_{\mathrm{norm}} = \frac {\varphi} {2 T_0} \left( \xi^2 + \nu^2 \right)$$ that has a constant frequency and generates the same phase change $\varphi$ as $H^{(2)}$ after the period $T_0$. Then the spin block of the monodromy matrix can be classified as elliptic and its Maslov index is given by Eq.(\[Mind\]). $\varphi$ is the stability angle of one of the two orbits with opposite spin orientations. Therefore, without loss of generality, we can assume that $\varphi > 0$. 
Then, explicitly, $$\mu_{\pm} = \left\{ \begin{array}{ll} \pm 1, \;\; \mathrm{if} \;\; \varphi \in (0, 2\pi) &\mathrm{mod} \; 4\pi \\ \pm 3, \;\; \mathrm{if} \;\; \varphi \in (2\pi, 4\pi) &\mathrm{mod} \; 4\pi \end{array} \right. .$$ On the other hand, $$\mathrm{sign} \left( \sin \frac \varphi 2 \right) = \left\{ \begin{array}{rll} 1,& \;\; \mathrm{if} \;\; \varphi \in (0, 2\pi) &\mathrm{mod} \; 4\pi \\ -1,& \;\; \mathrm{if} \;\; \varphi \in (2\pi, 4\pi) &\mathrm{mod} \; 4\pi \end{array} \right. .$$ Clearly, one can take $\mu_{\pm} = \pm 1$ and at the same time remove the absolute value sign from $\sin \frac \varphi 2$, as was done in Eq.(\[sum\]). [9]{} M. Pletyukhov, Ch. Amann, M. Mehta, and M. Brack, Phys. Rev. Lett. [**89,**]{} 116601 (2002). J. Bolte and S. Keppeler, Phys. Rev. Lett. [**81,**]{} 1987 (1998); Ann. Phys. (N.Y.) [**274,**]{} 125 (1999). S. Keppeler and R. Winkler, Phys. Rev. Lett. [**88,**]{} 046401 (2002). E. Kochetov, J. Math. Phys. [**36,**]{} 4667 (1995); H. G. Solari, J. Math. Phys. [**28,**]{} 1097 (1987). A. Sugita, Ann. Phys. (N.Y.) [**288,**]{} 277 (2001). [^1]: By the south gauge we mean the choice of parameterization of the spin coherent states by $z$ such that $\sigma_z (|z| \rightarrow \infty) = 1 $. [^2]: We reformulate the results of Ref.  [@Bol] for the south gauge. [^3]: Ref. [@Bol] defines the phase $\eta = - \chi$.
--- abstract: 'Many complex electronic systems exhibit so-called pseudogaps, which are poorly understood suppressions of low-energy spectral intensity in the absence of an obvious gap-inducing symmetry. Here we investigate the superconductor $\text{Ba}_{1-x}\text{K}_{x}\text{BiO}_{3}$ near optimal doping, where unconventional transport behavior and evidence of pseudogap(s) have been observed above the superconducting transition temperature $T_c$, and near an insulating phase with long-range lattice distortions. Angle-resolved photoemission spectroscopy (ARPES) reveals a dispersive band with vanishing quasiparticle weight and “tails” of deep-energy intensity that strongly decay approaching the Fermi level. Upon cooling below a transition temperature $T_p > T_c$, which correlates with a change in the slope of the resistivity vs. temperature, a partial transfer of spectral weight near $E_F$ into the deep-binding energy tails is found to result from metal-insulator phase separation. Combined with simulations and Raman scattering, our results signal that insulating islands of ordered bipolarons precipitate out of a disordered polaronic liquid and provide evidence that this process is regulated by a crossover in the electronic mean free path.' author: - 'M. Naamneh$^{1}$, M. Yao$^{1}$, J. Jandke$^{1}$, J. Ma$^{1}$, Z. Ristić$^{1}$, J. Teyssier$^{2}$, A. Stucky$^{2}$, D. van der Marel$^{2}$, D. J. Gawryluk$^{3}$[^1], T. Shang$^{3}$, M. Medarde$^{3}$, E. Pomjakushina$^{3}$, S. Li$^{4}$, T. Berlijn$^{5,6}$, S. Johnston$^{4,7}$, M. Müller$^{8}$, J. Mesot$^{9}$, M. Shi$^{1}$, M. Radović$^{1}$ & N. C. Plumb$^{1}$' bibliography: - 'citations.bib' title: 'Cooling a polaronic liquid: Phase mixture and pseudogap-like spectra in superconducting $\text{Ba}_{1-x}\text{K}_{x}\text{BiO}_{3}$' --- Photon Science Division, Paul Scherrer Institut, CH-5232 Villigen PSI, Switzerland Department of Quantum Matter Physics (DQMP), University of Geneva, 24 quai Ernest-Ansermet, 1211 Geneva 4, Switzerland Laboratory for Multiscale Materials Experiments, Paul Scherrer Institut, CH-5232 Villigen-PSI, Switzerland Department of Physics and Astronomy, University of Tennessee, Knoxville, Tennessee 37996-1200, USA Center for Nanophase Materials Sciences, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA Computational Science and Engineering Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA Joint Institute for Advanced Materials at The University of Tennessee, Knoxville, Tennessee 37996, USA Condensed Matter Theory Group, Paul Scherrer Institut, CH-5232 Villigen PSI, Switzerland Paul Scherrer Institut, CH-5232 Villigen PSI, Switzerland Pseudogaps represent a departure from the expectations of standard band theory and the Fermi liquid theory of electronic excitations, which together serve as a successful starting point for understanding many condensed matter systems. They could potentially originate from any of the ways in which the conventional theories might break down, e.g., due to disorder, fluctuations, strong interactions, and/or strong correlations.
But it is also conceivable that some observed pseudogaps might be less mysterious than they first seem, in the sense that they are rooted in a “hidden” order that, once revealed, could straightforwardly explain the opening of a gap. Pseudogaps are often observed in strongly correlated transition metal systems where the charge, spin, orbital, and lattice degrees of freedom can intertwine, leading to the emergence of exotic and poorly understood phases. These include ordered insulator or “bad metal” phases near half-filling (e.g., Mott, charge density wave, and spin density wave orders), metal-insulator transitions, superconductivity, anomalous transport behaviors (e.g., “strange metal” behavior and colossal magnetoresistance), intrinsic disorder, and phase separation/fluctuation. Examples of such phenomena are found in diverse materials, including the cuprates[@Lee2006], manganites[@Dagotto2005], transition metal dichalcogenides[@Manzeli2017], iron-based superconductors[@Si2016], and rare earth nickelates[@Medarde1997]. In perovskite bismuth oxides such as $\text{Ba}_{1-x}\text{K}_{x}\text{BiO}_{3}$ (BKBO), a complex phenomenology similar to that of transition metal oxides emerges out of a very different setting. Here, short-range Coulomb and spin interactions can largely be neglected[@Plumb2016], but short-range electron-lattice interactions (polarons) play a crucial role, offering an opportunity to examine the signatures and origins of many-body phenomena from a new perspective. The phase diagram of BKBO is sketched in Fig. \[fig:intro\]**a**. The insulating phase at zero doping ($x=0$, half filling) is tied to a static long-range structural distortion. Here, alternating BiO$_6$ octahedra collapse in a three-dimensional breathing distortion, as illustrated schematically in the overlay of the phase diagram. Several studies have shown that the parent ground state is bond disproportionated, meaning that hole pairs are trapped in combinations of the O $2p$ orbitals in the collapsed octahedra, opening a gap in the predominantly oxygen-derived conduction band[@Shen1990; @Ignatov1996; @Foyevtsova2015; @Plumb2016; @Khazraie2018]. The localized charge pairs due to coupling to the lattice distortion may be regarded as a frozen lattice of bipolarons. This polaronic view of the parent compound’s ground state has been widely adopted in theoretical approaches to understanding the perovskite bismuth oxides[@Rice1981; @Micnas1990; @Menushenkov2001; @Franchini2009]. BKBO transitions to a metal as a function of doping and/or temperature, and even becomes superconducting, with bulk transition temperatures as high as 34 K[@Jones1989]. Electrical transport evolves rapidly and dramatically as a function of doping across the superconducting region of the phase diagram. Measurements on single crystals by Nagata *et al*.[@Nagata1999] showed that at $x = 0.34$, resistivity $\rho$ rises with decreasing temperature down to about 75 K, where it saturates before $T_c$ is reached. Although the high-$T$ behavior of the resistivity at this doping resembles an insulator, our ARPES results will show that the slope change or “elbow” in $\rho(T)$ above $T_c$ should be viewed as a bad-to-better metal transition. At $x = 0.39$ — close to what is generally cited as optimal doping[@Sleight2015] — $\rho(T)$ is nearly linear above $T_c$, with a weak elbow centered roughly around 140 K. Higher doped samples exhibit a more conventional, upward-curving $\rho(T)$.
The unusual transport above $T_c$ in under-/optimally-doped superconducting samples appears to coincide with pseudogap-like spectral features. Optical conductivity studies of BKBO noted a suppression of the low-energy spectral weight at room temperature[@Karlow1993; @Blanton1993]. Similar spectra were found in an analogous region of the phase diagram of the closely related superconducting compound $\text{BaPb}_{1-x}\text{Bi}_{x}\text{O}_{3}$ (BPBO)[@Tajima1985]. Separately, angle-integrated photoemission measurements performed on superconducting BKBO observed an energetically sharper, temperature-dependent pseudogap-like suppression of near-$E_F$ spectral intensity[@Chainani2001]. Some studies have interpreted the unusual transport and spectral properties of these materials as evidence of coexisting metallic and insulating phases[@Tajima1985; @Nagata1999; @Nicoletti2017]. Meanwhile, a mixture of structural phases has been directly observed in superconducting BPBO compositions[@Climent-Pascual2011]. Based on transmission electron microscopy experiments, it appears that nanoscale regions of ordered distortions in BPBO tend to percolate into stripe-like formations[@Giraldo-Gallo2015]. This calls to mind predictions of 2D stacking “slices” of concentrated holes in BKBO[@Bischofs2002] and strongly suggests that phase mixing is intrinsic to these materials in certain doping ranges. Overall, however, the ubiquity of structural phase mixing in superconducting bismuth oxides and its relationship to electronic structure and transport properties have remained unclear. Here we use ARPES and Raman measurements to investigate the issues of possible pseudogaps and phase separation underlying BKBO’s unusual transport characteristics. The ARPES experiments were performed *in situ* on freshly-grown thin films of BKBO, allowing us to overcome longstanding sample and surface quality issues associated with single crystals that had prevented successful ARPES measurements up to now. Figure \[fig:intro\]**b** introduces typical resistance-vs.-temperature curves, $R(T)$, of our films grown by pulsed laser deposition (PLD) from ablation targets with $x=0.34$ and $x=0.38$. The curves are normalized to the resistance at 300 K. Aside from the lower superconducting transition temperatures in the films ($T_c = 22$ K) and slightly shifted temperatures of the elbow features relative to bulk BKBO, the $R(T)$ characteristics agree well with measurements of single crystals found in the literature[@Nagata1999]. We will later spectroscopically correlate the resistivity elbow with a transition temperature, $T_p$, to metal-insulator phase separation, thus defining the “M+I” region of the phase diagram in Fig. \[fig:intro\]**a**. In what follows, we focus on $x=0.34$ samples.

Overview of ARPES spectra {#overview-of-arpes-spectra .unnumbered}
=========================

Figure \[fig:ARPES\_EDC\]**a** shows the Fermi surface of metallic, cubic BKBO with filling corresponding to $x=0.34$. The Fermi surface is drawn from a tight-binding model[@Sahrakorpi2000], which is found to reasonably match the shape and volume of the Fermi surface obtained from ARPES measurements. ARPES constant energy maps at various binding energies are presented in Fig. \[fig:ARPES\_EDC\]**b**. The measurements were performed using a photon energy of $h\nu=70$ eV, which corresponds to a sheet in momentum space that passes close to the $\Gamma-M-X$ plane, shown in Fig. \[fig:ARPES\_EDC\]**a** (see Methods).
The temperature was 17 K. The anomalous spectral lineshapes and first indications of pseudogap-like behavior in BKBO are evident from energy distribution curves (EDCs) at fixed momenta. EDCs at the Fermi momenta $k_F$ along the $\Gamma-M$ and $\Gamma-X$ directions are shown in Figs. \[fig:ARPES\_EDC\]**c** and \[fig:ARPES\_EDC\]**d**, respectively. The spectra taken at 290 K and 200 K exhibit Fermi steps at $E_F$ but lack distinct quasiparticle peaks expected for a Fermi liquid. Instead, tails of spectral intensity rise monotonically to deeper binding energy. These peculiar EDC lineshapes give a spectroscopic view of the physics underlying the bad metallic transport behavior above $T_c$ in BKBO near $x=0.34$. The electronic states lie far from the Fermi liquid regime in which sharp peaks would signify quasiparticles at poles of the single-particle-removal Green’s function[@Koralek2006]. The strongly decaying spectral intensity approaching $E_F$ also defies conventional behavior in a metal and can be viewed as a pseudogap-like feature persisting to room temperature and distributed over an energy scale of at least several hundreds of meV. Indeed it appears to be associated with the broad-energy, doping-dependent suppression in the density of states at $E_F$ seen in room temperature optical reflectivity experiments on BKBO and BPBO[@Karlow1993; @Tajima1985]. Additionally, a second, distinct type of pseudogap-like feature arises at lower temperature. Below $T_p$, in spectra acquired at 50 K and 17 K, intensity within 100–200 meV of $E_F$ is suppressed relative to the spectra above $T_p$, while additional weight appears in the tails. Here the energy scale and temperature dependence align with previous angle-*integrated* photoemission measurements[@Chainani2001]. The momentum-resolving capability of the present experiments is a key to identifying the origin of this behavior. Though originally dubbed a pseudogap, analysis in the next section will demonstrate that the low-temperature redistribution of spectral intensity is due to the opening of a true bandgap with an associated change in symmetry, albeit one whose origin had been obscured by phase separation.

Signatures of electronic phase separation {#signatures-of-electronic-phase-separation .unnumbered}
=========================================

From a cursory view of Fig. \[fig:ARPES\_EDC\], it would be tempting to ascribe the low-temperature spectral buildup in the tails to a simple increase in incoherent states due to, e.g., inelastic scattering from defects in such a highly doped material. But two important observations suggest that the changes are actually associated with the formation of an electronic phase that is distinct from the (bad) metal. First, it is clear from comparing Figs. \[fig:ARPES\_EDC\]**c** and \[fig:ARPES\_EDC\]**d** that the spectral weight redistribution into the tails at low temperature is highly anisotropic in $k$-space. Namely, the relative increase in intensity in the tail along $\Gamma-M$ upon cooling is much less than along $\Gamma-X$. Second, comparing Fig. \[fig:ARPES\_EDC\]**c** to the inset of Fig. \[fig:ARPES\_EDC\]**d**, which zooms in near $E_F$ along $\Gamma-X$, we can see that the *fraction* of spectral weight lost at $E_F$ at each momentum point upon cooling is nearly the same. Further analysis presented in Fig.
\[fig:subtraction\] demonstrates that these observations can be explained by a transition to a superposition of two distinct electronic structures appearing in the ARPES measurements below $T_p$. In particular, the $k$-isotropic reduction in intensity at $E_F$ results from the fraction of metallic states lost to the breathing-distorted insulator phase. As a result of its symmetry, this phase has dispersive features and spectral weight patterns that differ from those of the metallic band structure, thereby accounting for the $k$-anisotropy of the temperature-dependent spectral tails. To verify the occurrence of phase separation, we show that the low-temperature spectra can be decomposed in order to isolate the insulating component, $I^{LT}_{ins}(\bm{k},\omega)$, which can then be compared to insulating low- or undoped BKBO. We obtain this component via the subtraction $$\label{eq:subtraction} I^{LT}_{ins}(\bm{k},\omega)=I^{LT}(\bm{k},\omega)-\alpha I^{HT}(\bm{k},\omega),$$ where $I^{LT}(\bm{k},\omega)$ is a spectrum acquired at low temperature, $I^{HT}(\bm{k},\omega)$ is a high-temperature spectrum obtained well above $T_p$, and $\alpha$ is a constant equal to the intensity ratio $I^{LT}(k_F,E_F)/I^{HT}(k_F,E_F)$ between the low- and high-temperature scans. By construction, intensity at $E_F$ in the difference spectrum of Eq. (\[eq:subtraction\]) will vanish at all points in $k$-space only if $\alpha$, which we extract empirically at an arbitrary $k_F$, is single-valued on the whole Fermi surface — a stringent condition that is satisfied in the case of metal-insulator phase separation, as $\alpha$ would then straightforwardly represent the fraction of remaining metallic states at low temperature. The outcome of the subtraction is summarized in Fig. \[fig:subtraction\]. Starting from the left, the first two columns of Fig. \[fig:subtraction\]**a** present energy-vs.-$k$ data obtained along $\Gamma-M$ from a superconducting sample with doping $x=0.34$, at temperatures well below and above $T_p$ (17 K and 200 K, respectively). The third column is the result of the subtraction operation (Eq. \[eq:subtraction\]) between the high and low temperature data using $\alpha=0.6$, while the fourth column (Fig. \[fig:subtraction\]**b**) presents comparison data from a low-doped insulating sample ($x \approx 0.15$, $T = 17$ K). There is a clear similarity between the subtraction spectrum and the insulator in terms of a gap in the spectrum, and the turnaround point of the band at a momentum corresponding to the folded Brillouin zone boundary of the insulator (vertical dashed lines). Figure \[fig:subtraction\]**c** shows momentum distribution curves (MDCs) at a fixed binding energy of -0.15 eV from each panel in Fig. \[fig:subtraction\]**a** (horizontal dashed lines). These further illustrate how spectral weight is redistributed to higher momentum, such that the gap edge seen in the subtraction spectrum forms around the insulator Brillouin zone boundary. We have extended this subtraction method over a quadrant of the $\Gamma-M-X$ plane of the Fermi surface obtained by azimuthally scanning the sample. The results are presented in Fig. \[fig:subtraction\]**d** in the form of $k$-space intensity maps at different binding energies. We draw attention to two important aspects of the subtracted data: First, the use of a single scaling factor, $\alpha$, indeed leads to vanishing intensity at $E_F$ over the full Fermi surface.
Second, the intensity maps at binding energies of -220 and -300 meV indicate that at low temperature, spectral weight redistributes in $k$-space into the region around the $M$ point (indicated by arrows). As seen from the low-doped $x \approx 0.15$ sample, this is consistent with the folded band structure along $(\pi,\pi,\pi)/a$ and spectral weight distribution of the insulating phase (see further discussion in the Supplementary Information).

Comparison with a 2D Su-Schrieffer-Heeger model {#comparison-with-a-2d-su-schrieffer-heeger-model .unnumbered}
===============================================

We have compared the findings from ARPES with a 2D cluster calculation based on a three-orbital Su-Schrieffer-Heeger (SSH) model[@Su1980] defined on a BiO$_2$ lattice (see Methods). A key aspect of this model is that the electron-lattice coupling modulates the hopping parameters with oxygen displacements. We then minimize the total energy with respect to the static displacement of the oxygen atoms. Although this model neglects factors such as Coulomb interactions and the phonon momenta, it nevertheless serves to illustrate how short-range electron-phonon interactions can drive the formation of insulating bipolaron order at low doping and intrinsic phase separation at intermediate doping. As shown in Fig. \[fig:calculation\]**a**, at a density $n$ of one hole per Bi, the model yields a breathing-distorted ground state analogous to three-dimensional $\text{BaBiO}_{3}$, which opens a gap in the conduction band (Fig. \[fig:calculation\]**b**). By contrast, $n=1.4$ is metallic (Fig. \[fig:calculation\]**c**). At intermediate doping — in this case $n=1.2$ — nanoscale regions of ordered breathing distortions cluster together, separated by areas of irregular Bi-O distortions (Fig. \[fig:calculation\]**d**). Although the model used here is a toy model for the real system, the resulting spectrum shows evidence of superimposed metallic and insulating band structures (Figs. \[fig:calculation\]**e**, **f**), supporting the conclusions of our analysis of the ARPES data. We hope that these observations will motivate further theoretical studies of related models. In particular, the current model as constructed may undergo spinodal decomposition (as strongly suggested by the numerical results), in which case the true ground state should be expected to have very large regions of metal or insulator. A more realistic model would take long-range elastic couplings and/or Coulomb constraints into account to obtain finite-size bubbles of phase-separated regions consistent with experiments.

Temperature dependence of Raman and ARPES data {#temperature-dependence-of-raman-and-arpes-data .unnumbered}
==============================================

Since insulating behavior in the BKBO phase diagram is fundamentally connected with ordered BiO$_6$ breathing distortions, we have compared the temperature dependence of the electronic and atomic structure to gain insights into the nature of the transition to electronic phase separation occurring across $T_p$. Figure \[fig:Raman\]**a** plots ARPES difference spectra along $\Gamma-M$ obtained by Eq. (\[eq:subtraction\]). Here the high-temperature reference spectrum $I^{HT}(\bm{k},\omega)$ was measured at 255 K, while the various low-temperature spectra $I^{LT}(\bm{k},\omega)$ were measured at the temperatures indicated in the figure.
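In practice, the decomposition of Eq. (\[eq:subtraction\]) reduces to a simple array operation once $\alpha$ has been fixed at a Fermi crossing. The short Python sketch below illustrates the procedure on synthetic ARPES-like intensity arrays; the function and array names, the single $(k_F, E_F)$ pixel, the small averaging window, and the synthetic line shapes are illustrative assumptions rather than the analysis code actually used for Fig. \[fig:subtraction\] or Fig. \[fig:Raman\]**a**.

```python
import numpy as np

def insulating_component(I_lt, I_ht, ik_F, iE_F, win=2):
    """Isolate the insulating part of a low-T ARPES image, as in Eq. (subtraction):
    I_ins = I_LT - alpha * I_HT, with alpha = I_LT(kF, EF) / I_HT(kF, EF).

    I_lt, I_ht : 2D arrays of intensity vs (momentum index, energy index),
                 measured below and well above T_p, respectively.
    ik_F, iE_F : indices of the Fermi crossing used to fix alpha.
    win        : half-width (in pixels) of the window averaged around (kF, EF),
                 an illustrative choice to reduce noise.
    """
    sl_k = slice(ik_F - win, ik_F + win + 1)
    sl_E = slice(iE_F - win, iE_F + win + 1)
    alpha = I_lt[sl_k, sl_E].mean() / I_ht[sl_k, sl_E].mean()
    return I_lt - alpha * I_ht, alpha

# Synthetic example: a metallic band plus an extra deep-energy feature at low T.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    k = np.linspace(-1.0, 1.0, 201)[:, None]     # momentum axis (arb. units)
    E = np.linspace(-0.6, 0.1, 141)[None, :]     # binding-energy axis (eV)
    metal = np.exp(-((E - 0.8 * (k - 0.5)) / 0.05) ** 2) / (1.0 + np.exp(E / 0.01))
    insul = np.exp(-((E + 0.25) / 0.1) ** 2) * np.exp(-((k / 0.5) ** 2))
    I_ht = metal + 0.02 * rng.random(metal.shape)
    I_lt = 0.6 * metal + 0.4 * insul + 0.02 * rng.random(metal.shape)
    I_ins, alpha = insulating_component(I_lt, I_ht, ik_F=150, iE_F=120)
    print(f"extracted alpha = {alpha:.2f} (fraction of metallic weight remaining at low T)")
```

In this synthetic example the extracted $\alpha$ is close to the metallic fraction built into the low-temperature image, mirroring its interpretation as the fraction of remaining metallic states.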
A clear change occurs below $\sim160$ K, at which point band weight can be seen dispersing up to about -200 meV binding energy, reaching the folded Brillouin zone boundary of the insulating phase — the signature of insulating band structure and phase separation established in Fig. \[fig:subtraction\]. It is interesting to compare the relatively well-defined temperature range of this electronic phase separation with the temperature dependence of the local atomic structure. Although diffraction experiments report that BKBO in this doping range has — globally, at least — simple cubic[@Pei1990] (or tetragonal[@Braden2000]) structure, Raman scattering, which is more sensitive to local structure, has found evidence of persistent BiO$_6$ breathing distortions[@Tajima1992; @Menushenkov2003]. The discrepancy between diffraction and Raman measurements signals that the breathing distortions are highly disordered and/or fluctuating. Raman spectra presented in Fig. \[fig:Raman\]**b** confirm the presence of breathing distortions in our own thin film samples, as indicated by the peak located at an energy shift of 564 cm$^{-1}$. The peak, which shows an asymmetric Fano-like broadening that is likely due to coupling with the electrons, is visible at all temperatures. It shows little qualitative change as a function of temperature, save for a slight sharpening below $T_p$ of the subtle contours of the broadened structure (blue curves) with respect to temperatures in the phase separation transition region (green curves) and above it (red curves). The stability of the Raman features as a function of temperature suggests that the mere existence of breathing distortions is not the sole factor at play in the metal-insulator phase separation at $T_p$; presumably some lengthscale of both structural and electronic coherence needs to be established. Indeed, we find evidence from ARPES suggesting that electronic scattering is a key determinant in restricting the formation of the insulating regions. The left axis of Fig. \[fig:Raman\]**c** shows the Lorentzian half-width at half-maximum, $\gamma$, of the metallic band at $E_F$ as a function of temperature. The values are obtained from fits to MDCs along the $\Gamma-X$ direction. On the right axis we plot the insulator-associated signal intensity obtained from the difference spectra of Fig. \[fig:Raman\]**a**. This “insulator weight” comes from integrating counts in a region focused on the insulating band, indicated by the white lines in the top left image of Fig. \[fig:Raman\]**a**, and is adopted as a metric to characterize the extent of the electronic transition. In ARPES, $\gamma$ is equal to the inverse of the mean free path of the electrons, $\xi^{-1}$. The presented measurements of the low-energy electrons near the $X$ point are of particular interest, as these states are the closest on the Fermi surface to a van Hove singularity[@Sahrakorpi2000]. At the highest measured temperature of 255 K, the MDC linewidth corresponds to an extremely high scattering rate: On average, electrons at $E_F$ scatter after traveling only about $1.7a$, where $a$ is the cubic lattice constant (4.29 Å). This is close to the Mott-Ioffe-Regel limit, below which localization would be expected. Tracking the insulator weight while decreasing temperature, we see an onset of the transition at about 160 K and completion near 110 K. At the same time, $\gamma$ decreases monotonically.
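The conversion from an MDC width to a mean free path used here amounts to a Lorentzian fit followed by $\xi = 1/\gamma$. A minimal sketch of that step is given below; the synthetic MDC, the starting guesses, and the constant background are assumptions for illustration, and the function names are not those of the fitting scripts used for Fig. \[fig:Raman\]**c**.

```python
import numpy as np
from scipy.optimize import curve_fit

A_LAT = 4.29  # cubic lattice constant of BKBO in Angstrom (quoted in the text)

def lorentzian(k, k0, gamma, amp, bg):
    """MDC model: Lorentzian of HWHM gamma centred at k0 on a constant background."""
    return amp * gamma**2 / ((k - k0)**2 + gamma**2) + bg

def mean_free_path(k, mdc, p0=None):
    """Fit an MDC (intensity vs k in 1/Angstrom) and return (gamma, xi, xi/a)."""
    if p0 is None:
        p0 = (k[np.argmax(mdc)], 0.1, mdc.max() - mdc.min(), mdc.min())
    popt, _ = curve_fit(lorentzian, k, mdc, p0=p0)
    gamma = abs(popt[1])   # HWHM in 1/Angstrom
    xi = 1.0 / gamma       # mean free path in Angstrom
    return gamma, xi, xi / A_LAT

# Synthetic example: a broad MDC with xi ~ 1.7 a, comparable to the 255 K value quoted above.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    k = np.linspace(0.2, 1.2, 200)                 # 1/Angstrom
    true_gamma = 1.0 / (1.7 * A_LAT)
    mdc = lorentzian(k, 0.7, true_gamma, 1.0, 0.05) + 0.01 * rng.normal(size=k.size)
    gamma, xi, xi_over_a = mean_free_path(k, mdc)
    print(f"gamma = {gamma:.3f} A^-1,  xi = {xi:.1f} A,  xi/a = {xi_over_a:.2f}")
```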
We observe that the transition centered at $T_p$ occurs in a temperature range where $\xi$ begins to exceed $2a$, the characteristic lengthscale regime of the ordered breathing distortions in insulating BKBO, where the cubic unit cell doubles along each crystal axis. Given that extrinsic effects can broaden the measured linewidths, these values of $\xi$ deserve to be thoroughly scrutinized. Ultimately, we find compelling evidence that the linewidths and corresponding values of $\xi$ are close to intrinsic (see Supplementary Information). For example, we have repeated the experiments at different photon energies to test for the influence of perpendicular momentum broadening and surface effects in the photoemission process. As shown in Fig. \[fig:Raman\]**c**, we obtain the same $\xi(T)$ results using photon energies of 70 eV and 120 eV. Moreover, the notion that the phase separation is regulated by a temperature-controlled crossover in $\xi$ qualitatively accounts for the otherwise difficult-to-explain doping dependence of $T_p$ seen in transport measurements. Having now found a correlation between the resistivity elbow and the phase separation transition at $T_p$, it appears that $T_p$ *increases* with higher doping until its disappearance at $x \approx 0.4$ (ref. ). The same qualitative shift in $T_p$ can be seen in our own thin film samples (Fig. \[fig:intro\]**b**). At first glance, this trend is deeply counterintuitive; it would be natural to assume that $T_p$ should decrease as doping is increased in the range of about $x=0.35$ to 0.4, going away from the insulator phase below $x \approx 0.3$. The observed behavior can be explained, though, in the scattering-regulated scenario. Scattering appears to decrease with increasing $x$ as the samples move toward a more Fermi-liquid-like transport regime, which is not surprising, as both charge correlations and electron-phonon coupling would be reduced at higher doping. Higher $x$ therefore implies longer electronic mean free paths, which in our picture leads to higher $T_p$.

Discussion {#discussion .unnumbered}
==========

Depending on one’s definition, our data can be interpreted in terms of two types of pseudogaps in BKBO. The first exists up to at least room temperature and takes the form of a broad suppression of spectral weight and absence of clear quasiparticles approaching $E_F$. The second is the energetically sharper near-$E_F$ intensity loss below $T_p$. We have shown that, in reality, this second pseudogap could alternatively be regarded as a true bandgap, albeit one whose origins and underlying symmetry had been obscured by phase separation. Our results indicate that these two pseudogap-like features and the transition between them stem from (bi)polaron interactions. First and most obviously, the band structure of the insulating fraction below $T_p$ reflects folding present in low-/undoped BKBO attributed to ordered bipolarons. Secondly, this insulating band structure is formed out of a spectral weight transfer from near-$E_F$ states into deep-energy tails that already existed at temperatures above $T_p$.
Combined with Raman results showing the presence of disordered and/or fluctuating breathing distortions at high temperatures, this spectral weight transfer and the anomalous quasiparticle-less lineshapes that precede it give the impression that incoherent states of the bad metal phase (or, rather, subregions of it) are itinerant precursors to the localized, ordered bipolarons that precipitate below $T_p$. In this view, energetics favorable to bipolarons would already be established above $T_p$, but scattering precludes their condensation into an ordered phase — a scenario that fits well with our observation that $T_p$ is correlated with a crossover in the electronic mean free path. The dispersive band trailed by a large incoherent tail seen here in BKBO bears similarities to ARPES measurements of other materials thought to show hallmarks of polarons, such as low-doped TiO$_2$[@Moser2013], SrTiO$_3$[@Wang2016], and manganites[@Sun2006; @Massee2011]. BKBO shows a particularly striking resemblance to $R$NiO$_3$ rare earth nickelates. Those compounds also undergo metal-insulator phase transitions, and there are claims that they have a bond-disproportionated ground state analogous to the BKBO parent compound[@Park2012; @Johnston2014; @Bisogni2016]. ARPES data from NdNiO$_3$ show spectral features (weak, though observable, quasiparticle-like peaks and strong deep-energy tails) that are qualitatively similar to BKBO[@Schwier2012; @Dhaka2015]. Moreover, as in BKBO, decreasing temperature across the metal-insulator transition in NdNiO$_3$ leads to a transfer of spectral weight from near the Fermi level into the deep-energy tails[@Schwier2012; @Dhaka2015]. In contrast to BKBO, however, NdNiO$_3$ is undoped, and photoemission intensity at $E_F$ vanishes below the transition temperature; the sample becomes purely insulating, rather than phase separated. Similar to the results from Raman spectroscopy on BKBO, measurements of the pair distribution function (PDF) in nickelates by neutron scattering have revealed local distortions related to the insulating phase that are “hidden” within the global symmetry and persist above the metal-insulator transition temperature[@Li2016; @Shamblin2018]. Based on both PDF and dielectric spectroscopy measurements, Shamblin *et al*. have proposed a scenario similar to the one here, in which the metal-insulator transition in nickelates represents the freezing of a polaronic liquid[@Shamblin2018]. In all the examples just mentioned, however, EDC peaks of the metallic bands signal that at least some fraction of coherent spectral weight survives the polaron interactions, whereas electrons in BKBO are scattered to an even greater extent, such that we see no peaks. The observation of a dispersive band persisting in the absence of coherent spectral weight runs counter to Fermi liquid theory, which predicts that the effective mass should become infinite as the quasiparticle residue, $Z_{\bm{k}}$, goes to zero. The behavior may be in line, though, with the SSH model, in which electron-phonon coupling modulates the hopping parameters, rather than the local potential via the charge density. Recent theoretical work on such a model found that light bipolarons can exist, even in the strong coupling regime[@Sous2018]. Metal-insulator phase separation is potentially an important factor influencing superconductivity in BKBO.
An obvious point is that phase separation sidesteps a global metal-insulator transition, as occurs in the nickelates, allowing for the survival of a metallic Fermi surface amid strong interactions and setting the stage for superconductivity. At a deeper level, recent studies have highlighted how disorder[@Harris2018] and the lengthscale of structural phase separation[@Giraldo-Gallo2015] in the bismuth oxides appear to be relevant to $T_c$. Additionally, the electronic phase mixture seen above $T_c$ implies that a direct superconductor-insulator transition (SIT) indeed occurs along the $x$ axis of the phase diagram at $T=0$, as has long been suspected. This positions BKBO as a platform for the study of SITs, and it may have important consequences for the nature of its superconducting pair state[@Trivedi2012b].

Sample preparation
------------------

Samples were prepared at the Paul Scherrer Institut. Thin films were grown on SrTiO$_3$(001) substrates by pulsed laser deposition. Ablation targets were prepared by a two-step solid state synthesis. Stoichiometric mixtures of KO$_2$, Bi$_2$O$_3$, and pre-synthesized BaBiO$_3$[@Plumb2016] were well mixed into a paste inside a helium glove box, then annealed at 725 $^\circ$C in dynamic vacuum for 1 h, and finally furnace cooled. Subsequently, the material was heated up to 425 $^\circ$C, kept at this temperature for 1 h, and slowly cooled down at a rate in the range of 5-15 $^\circ$C/h. The material was then further annealed for 1 h at 400 $^\circ$C, 375 $^\circ$C, 350 $^\circ$C, and 325 $^\circ$C with slow cooling, all in flowing oxygen atmosphere. For each batch of the material, the whole procedure was repeated three times. The resulting powder was pressed into pellets with a load of 15 tons. Pellets were sintered in oxygen at temperatures of 425–325 $^\circ$C, as previously applied for the powder synthesis. The film growth was performed using an Nd:YAG laser with a repetition rate of 2 Hz and pulse fluence of 1.6 J/cm$^{2}$. Films were grown on SrTiO$_3$(001) substrates in 0.5 mbar of oxygen with substrates held at a temperature of 480 $^\circ$C. Two-dimensional epitaxial growth was verified by reflection high-energy electron diffraction (RHEED). After deposition, the films were annealed at 370 $^\circ$C in 1 bar of pure oxygen for 0.5 hours. Films were roughly 11 nm thick. For the ARPES measurements, films were transferred directly from the deposition chamber to the experimental station via an ultrahigh vacuum connection with pressure better than $5 \times 10^{-9}$ mbar. In addition to being studied by ARPES and Raman scattering, samples were characterized by x-ray diffraction, x-ray fluorescence, x-ray photoelectron spectroscopy, and resistivity measurements. See the Supplementary Information for more details.

Angle-resolved photoemission spectroscopy
-----------------------------------------

ARPES experiments were performed at the Surface/Interface Spectroscopy (SIS) beamline X09LA of the Swiss Light Source. The endstation is equipped with a hemispherical electron analyzer and a 6-axis cryogenic manipulator. The presented data were collected using $p$-polarized light. The total energy resolution was 10 meV. The hemispherical analyzer has an angular resolution of 0.1$^\circ$. Momentum-space intensity maps were acquired by scanning the azimuthal rotation axis at fixed tilt and polar angles. The pressure during the measurements was better than $5 \times 10^{-11}$ mbar.
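For orientation, converting the measured emission angles into in-plane momentum can be done with the standard free-electron final-state relation $k_{\parallel} = 0.5123\,\mathrm{\AA}^{-1}\sqrt{E_{\mathrm{kin}}/\mathrm{eV}}\,\sin\theta$. The sketch below applies this to an azimuthal scan at fixed polar angle; the simple polar/azimuth convention and the assumed work function are illustrative only and do not reproduce the actual SIS manipulator geometry.

```python
import numpy as np

HBAR_FACTOR = 0.5123  # sqrt(2 m_e)/hbar in units of A^-1 per sqrt(eV)

def angles_to_k(hv_eV, binding_eV, theta_deg, phi_deg, work_function_eV=4.3):
    """Free-electron-final-state conversion of emission angles to in-plane momentum.

    theta_deg : polar emission angle(s) measured from the surface normal
    phi_deg   : azimuthal rotation angle(s) about the surface normal
    Returns (kx, ky) in inverse Angstrom. The 4.3 eV work function and this
    simple polar/azimuth convention are assumptions for illustration.
    """
    e_kin = hv_eV - work_function_eV - binding_eV
    k_tot = HBAR_FACTOR * np.sqrt(e_kin)
    theta = np.deg2rad(theta_deg)
    phi = np.deg2rad(phi_deg)
    kx = k_tot * np.sin(theta) * np.cos(phi)
    ky = k_tot * np.sin(theta) * np.sin(phi)
    return kx, ky

# Example: azimuthal scan at fixed polar angle, as used for the Fermi-surface maps.
if __name__ == "__main__":
    theta, phi = np.meshgrid(np.array([12.0]), np.linspace(0.0, 90.0, 7))
    kx, ky = angles_to_k(hv_eV=70.0, binding_eV=0.0, theta_deg=theta, phi_deg=phi)
    for x, y in zip(kx.ravel(), ky.ravel()):
        print(f"kx = {x:+.3f} A^-1, ky = {y:+.3f} A^-1")
```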
Raman scattering
----------------

Raman scattering experiments were performed at the University of Geneva using a homemade micro-Raman setup based on a half-meter spectrometer coupled to a nitrogen-cooled Princeton Instruments CCD detector. A gas laser at a wavelength of 514.5 nm was used for excitation. Narrow edge filters allowed measuring Stokes lines down to 50 cm$^{-1}$. A long working distance $\times63$ objective (N.A. 0.7) was used. In order to avoid overheating, we used a very low laser power (less than 100 $\mu$W) for a spot size of about 2 $\mu$m in diameter. For the temperature dependent measurements, the samples were mounted on a Cryovac helium flow cryostat. Full spectra were acquired at identical temperatures for both the BKBO films and a bare SrTiO$_3$ substrate. The Raman response of the BKBO thin film presented in Fig. \[fig:Raman\]**b** was obtained after subtraction of the substrate contribution.

Theoretical model
-----------------

We define a three-orbital Su-Schrieffer-Heeger model on a 2D Lieb lattice. The orbital basis consists of a Bi $6s$ atom and two O $2p$ orbitals situated halfway between each of the Bi atoms. The positions of the heavier Bi atoms are fixed, and O atoms are restricted to move along the bond direction. The Hamiltonian is $$\label{eq:Hamiltonian} \begin{split} H = & -t_{sp} \sum_{\mathbf{r},\sigma} (1-\alpha x_{\mathbf{r}}) (s_{\mathbf{r},\sigma}^\dagger p_{\mathbf{r},x,\sigma}+h.c.) -t_{sp} \sum_{\mathbf{r},\sigma} (1-\alpha y_{\mathbf{r}}) (s_{\mathbf{r},\sigma}^\dagger p_{\mathbf{r},y,\sigma}+h.c.) \\ & +t_{sp} \sum_{\mathbf{r},\sigma} (1+\alpha x_{\mathbf{r}}) (s_{\mathbf{r+a},\sigma}^\dagger p_{\mathbf{r},x,\sigma}+h.c.) +t_{sp} \sum_{\mathbf{r},\sigma} (1+\alpha y_{\mathbf{r}}) (s_{\mathbf{r+b},\sigma}^\dagger p_{\mathbf{r},y,\sigma}+h.c.) \\ & +t_{pp} \sum_{\mathbf{r},\sigma} (p_{\mathbf{r},x,\sigma}^\dagger p_{\mathbf{r},y,\sigma} - p_{\mathbf{r},y,\sigma}^\dagger p_{\mathbf{r-a},x,\sigma} +p_{\mathbf{r-a},x,\sigma}^\dagger p_{\mathbf{r-b},y,\sigma} - p_{\mathbf{r-b},y,\sigma}^\dagger p_{\mathbf{r},x,\sigma}) \\ & +\sum_{\mathbf{r},\sigma} (\epsilon_{s} \hat{n}_{\mathbf{r},\sigma}^{s} + \epsilon_{p} \hat{n}_{\mathbf{r},\sigma}^{p_{x}} + \epsilon_{p} \hat{n}_{\mathbf{r},\sigma}^{p_{y}}) + \sum_{\mathbf{r}} (K x_\mathbf{r}^2+K y_\mathbf{r}^2). \end{split}$$ Here, the operators $s_{\mathbf{r},\sigma}^\dagger$ ($s_{\mathbf{r},\sigma}$) and $p_{\mathbf{r},\sigma}^\dagger$ ($p_{\mathbf{r},\sigma}$) are the creation (annihilation) operators for the spin $\sigma$ holes on the Bi $6s$ and O $2p$ orbitals, respectively. The unit cells are indexed by $\mathbf{r} = n_x \mathbf{a} + n_y \mathbf{b}$, where $(n_x, n_y) \in \mathbb{Z}$, $\mathbf{a} = (a, 0)$ and $\mathbf{b} = (0, a)$ are the primitive lattice vectors along the $x$ and $y$ directions, respectively, and $a$ is the Bi-Bi bond length of the undistorted lattice. The operators ${n}^{s}_{\mathbf{r},\sigma} = {s}^{\dagger}_{\mathbf{r},\sigma}s_{\mathbf{r},\sigma}$ and ${n}^{p_{\delta}}_{\mathbf{r},\sigma} = {p}^{\dagger}_{\delta,\mathbf{r},\sigma}p_{\delta,\mathbf{r},\sigma}$ are the number operators for the $s$ and $p_{\delta}$ $(\delta = x, y)$ orbitals, respectively. The displacement of the oxygen atoms is described by $x_{\mathbf{r}}$ and $y_{\mathbf{r}}$, and the electron-phonon coupling modulates the hopping integral $t_{sp}$ by $\alpha x_{\mathbf{r}}$ and $\alpha y_{\mathbf{r}}$. $K$ is the elastic constant for each Bi-O spring, and each O atom is linked by two springs to the neighboring Bi atoms.
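To make the role of the hopping-modulating coupling concrete, the sketch below minimizes a drastically simplified one-dimensional analogue of Eq. (\[eq:Hamiltonian\]): a periodic Bi-O chain with one $s$ and one $p$ orbital per cell, static oxygen displacements, and a half-filled antibonding band. The chain length, parameter values, and the reduction to one dimension are illustrative assumptions; the two-dimensional, three-orbital calculations in the main text use the DFT-motivated parameters quoted in the following paragraph.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal 1D analogue of a hopping-modulated (SSH-like) Bi-O chain: one "s" site
# per cell and one "p" site on the bond, with the oxygen displacement u_i
# modulating the two adjacent hoppings in the same spirit as Eq. (Hamiltonian).
N_CELLS = 8                          # Bi sites on a periodic ring
T_SP, ALPHA, K_EL = 1.0, 2.0, 1.0    # illustrative values, not the 2D DFT-motivated ones
EPS_S, EPS_P = 2.0, 0.0
N_OCC = 3 * N_CELLS // 2             # filled bonding band + half-filled antibonding band (per spin)

def hamiltonian(u):
    """Single-particle Hamiltonian on the ring; basis ordered s0, p0, s1, p1, ..."""
    n = 2 * N_CELLS
    h = np.zeros((n, n))
    for i in range(N_CELLS):
        s_i, p_i, s_next = 2 * i, 2 * i + 1, 2 * ((i + 1) % N_CELLS)
        h[s_i, s_i] = EPS_S
        h[p_i, p_i] = EPS_P
        h[s_i, p_i] = h[p_i, s_i] = -T_SP * (1.0 - ALPHA * u[i])        # left Bi-O bond
        h[s_next, p_i] = h[p_i, s_next] = +T_SP * (1.0 + ALPHA * u[i])  # right Bi-O bond
    return h

def total_energy(u):
    """Spin-degenerate electronic energy of the lowest N_OCC levels plus elastic energy."""
    evals = np.linalg.eigvalsh(hamiltonian(u))
    return 2.0 * evals[:N_OCC].sum() + K_EL * np.sum(u**2)

if __name__ == "__main__":
    u0 = 0.05 * (-1.0) ** np.arange(N_CELLS)   # small alternating seed away from the undistorted saddle
    res = minimize(total_energy, u0, method="BFGS")
    evals = np.linalg.eigvalsh(hamiltonian(res.x))
    gap = evals[N_OCC] - evals[N_OCC - 1]
    # An alternating displacement pattern is the 1D counterpart of the breathing order.
    print("optimised O displacements:", np.round(res.x, 3))
    print(f"gap at the Fermi level: {gap:.3f}")
```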
We have neglected the kinetic energy term for oxygen atoms. The solution of the Hamiltonian is found by searching for $\frac{\partial E}{\partial x_{\mathbf{r}}}=0$ and $\frac{\partial E}{\partial y_{\mathbf{r}}}=0$, where $E$ is the energy of the system given by $$E = \sum_{m,\sigma}f(E_{m,\sigma})E_{m,\sigma}=\sum_{m,\sigma}f(E_{m,\sigma})\expval{H}{\Psi_{m,\sigma}},$$ where $E_{m,\sigma}$ is the eigenenergy for spin $\sigma$ corresponding to the $\Psi_{m,\sigma}$ eigenstate. $f(E_{m,\sigma})$ is the Fermi-Dirac distribution with respect to $E_{m,\sigma}$. We adopt hopping parameters motivated by density functional theory calculations[@Foyevtsova2015] with $t_{sp} = 2.08$ eV, $t_{pp} = 0.056$ eV, $\epsilon_s = 6.42$ eV, and $\epsilon_p = 2.42$ eV. The *e-ph* coupling constant is $\alpha = 4a^{-1}$ and $K=0.104$ $\mathrm{eV}/a^2$, which leads to a distortion of $0.075a$ at half-filling. Calculations were performed on a large cluster with $N = 20 \times 20$ cell size. The spectral weight function was calculated via $$\label{eq:AKW} A(\mathbf{k},\omega) = \frac{1}{2\pi} \sum_{m,\gamma,\sigma} \frac{\mid \langle \mathbf{k}, \gamma \mid \Psi_{m,\sigma} \rangle \mid ^2}{\omega -E_{m,\sigma}+{\mathrm i} \delta},$$ where $\delta=0.3$ eV, $\gamma$ is the orbital index and $\mid \mathbf{k},\gamma\rangle = \frac{1}{N} \sum_{\mathbf{r}} e^{{\mathrm i}\mathbf{k\cdot r}}\mid\gamma_{\mathbf{r}}\rangle$. The authors are grateful for thought-provoking discussions with T.M. Rice, and J. Chang. A. Pfister and L. Nue lent technical assistance at SIS beamline. C.W. Schneider assisted with instrumentation in the laboratory of the Thin Films and Interfaces group at PSI. M.N. is supported by the Swiss National Science Foundation under project 200021\_159678. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 701647. D.J.G. received financial support from SCIEX NMS^CH^ (Project No. 13.236) granted by the Rectors Conference of the Swiss Universities. A portion of the work was conducted at the Center for Nanophase Materials Sciences, which is a DOE Office of Science User Facility. T.B. and S.J. acknowledge support from the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences, Division of Materials Sciences and Engineering. M.N. grew the BKBO films and performed ARPES with help from M.Y., J.J., J.M., M.R. and N.C.P.. Z.R. helped with studies of optimal film growth conditions. D.G. produced the ablation targets for the film growth under the supervision of M.Medarde and E.P.. J.T. and A.S. performed temperature-dependent Raman measurements under the supervision of D.v.d.M.. M.N. characterized the films by various techniques (resistivity, x-ray diffraction, and x-ray fluorescence) with help from T.S., D.G., and E.P.. S.L., T.B., and S.J. performed the cluster calculations. M.N. analyzed the data with valuable feedback from N.C.P., M.Müller, S.J., T.B., M.S., M.R., and J.M.. M.N. and N.C.P. wrote the manuscript with input from all the coauthors. N.C.P. supervised the project and conceived it together with M.S. and M.R. All authors discussed the results. The authors declare that they have no competing financial interests. Correspondence and requests for materials should be addressed to M.N. (email: muntaser.naamneh@psi.ch) or N.C.P. (email: nicholas.plumb@psi.ch). 
![**Electronic properties of $\text{Ba}_{1-x}\text{K}_{x}\text{BiO}_{3}$.** **a**, Sketch of the doping- and temperature-dependent electronic phase diagram. The overlays are 2D schematics of the lattice illustrating “breathing” distortions essential to the insulating phase (left), as well as the idealized undistorted lattice in the metallic phase (right). Dots represent bismuth sites, while vertices are oxygen atoms. **b**, The temperature dependence of the resistance of films with $x=0.34$ and $x=0.38$, normalized to its room-temperature value. The arrows mark the temperature $T_p$ where the slope of $R(T)$ changes. []{data-label="fig:intro"}](Figure1M3.pdf){width="\columnwidth"}

![**Electronic structure of BKBO.** **a**, Brillouin zone of cubic BKBO with the Fermi surface drawn from a tight-binding model with filling corresponding to $x=0.34$ (0.66 e$^{-}$ per unit cell). ARPES data were acquired with an incident photon energy of 70 eV, corresponding to data points in momentum space indicated by the curved sheet, which is close to the $\Gamma-M-X$ plane. **b**, ARPES intensity maps plotted as a function of binding energy. The sample temperature was 17 K. **c**, **d** EDCs at different temperatures evaluated at $k_F$ along the $\Gamma-M$ and $\Gamma-X$ directions, respectively. The inset of **d** highlights the energy region near $E_F$. The fraction of spectral intensity lost at $E_F$ with decreasing temperature, denoted by $\alpha$, is equal in the two spectra. The EDC data are shown as measured, without normalization. []{data-label="fig:ARPES_EDC"}](Figure2M7.pdf){width="60.00000%"}

![image](Figure3_subtraction_M13.pdf){width="95.00000%"}

![**Phase separation in a 2D Su-Schrieffer-Heeger model.** **a**, Ground state at half filling ($n=1$). The model yields ordered breathing distortions analogous to 3D BBO. The collapsed BiO$_2$ plaquettes are indicated by black diamonds. Red (blue) arrows represent positive (negative) displacements of the oxygen atoms away from the ideal square lattice. **b**, Insulating band structure of the $n=1$ breathing-distorted phase. **c**, Calculated band structure for $n=1.4$, which is found to be metallic. **d**,**e** Real-space view and corresponding band structure of the lattice after minimizing the total energy at $n=1.2$. **f**, Mixed spectral function consisting of equal contributions of the $n=1$ insulating and $n=1.4$ metallic spectra from **b** and **c**, respectively, for comparison with **e**. []{data-label="fig:calculation"}](Fig5v9.pdf){width="100.00000%"}

![**Temperature dependence of the electronic phase separation.** **a**, ARPES difference spectra along the $\Gamma-M$ direction for various labeled temperatures. The spectra are calculated by the method applied in Fig. \[fig:subtraction\] and described in the supporting text. All difference spectra are taken with respect to 255 K. The high intensity features seen when $T<T_p$ (top row) are signatures of the insulating band structure. **b**, Temperature-dependent Raman spectra. The breathing mode feature peaked at 564 cm$^{-1}$ (dashed line) is present at all temperatures and shows a strong asymmetric Fano-like broadening. **c**, Left axis: Fitted Lorentzian half-width at half-maximum, $\gamma$, of momentum distribution curves (MDCs) of the metallic band at $E_F$. The data were acquired from different samples using two different incoming photon energies, 70 eV and 120 eV.
For 70 eV, the MDC is nearly along the $\Gamma-X$ line. The 120 eV data was collected in the same geometry in order to test the robustness of the results against $k_z$ broadening and surface effects. The horizontal dashed line indicates where $\gamma$ corresponds to an electron mean free path, $\xi$, equal to $2a$ — the characteristic lengthscale of the ordered insulating phase. Right axis: Intensity of the insulating band. The values are obtained by summing the difference spectra in the window region indicated by vertical white lines in the top left image of **a**. The curves are hand-drawn guides to the eye.[]{data-label="fig:Raman"}](Figure4M9.pdf){width="95.00000%"} [^1]: On leave from Institute of Physics, Polish Academy of Sciences, Aleja Lotnikow 32/46, PL-02-668 Warsaw, Poland
---
abstract: 'In 1975 Doi and Edwards predicted that entangled polymer melts and solutions can have a constitutive instability, signified by a decreasing stress for shear rates greater than the inverse of the reptation time. Experiments did not support this, and more sophisticated theories incorporated Marrucci’s idea (1996) of removing constraints by advection; this produced a monotonically increasing stress and thus stable constitutive behavior. Recent experiments have suggested that entangled polymer solutions may possess a constitutive instability after all, and have led some workers to question the validity of existing constitutive models. In this Letter we use a simple modern constitutive model for entangled polymers, the Rolie-Poly model with an added solvent viscosity, and show that (1) instability and shear banding are captured within this simple class of models; (2) shear banding phenomena are observable for *weakly stable* fluids in flow geometries that impose a sufficiently inhomogeneous total shear stress; (3) transient phenomena can possess inhomogeneities that resemble shear banding, even for weakly stable fluids. Many of these results are model-independent.'
author:
- 'J. M. Adams'
- 'P. D. Olmsted'
bibliography:
- '../../banding\_bib/SQW.bib'
- '../../banding\_bib/malkus.bib'
- '../../banding\_bib/LCtheory.bib'
- '../../banding\_bib/sriram.bib'
- '../../banding\_bib/worms3.bib'
- '../../banding\_bib/rheofolks.bib'
- '../../banding\_bib/master.bib'
- '../../banding\_bib/worms.bib'
- '../../banding\_bib/berrportdecruppe.bib'
- '../../banding\_bib/callaghan.bib'
- '../../banding\_bib/rheochaosPRL.bib'
- '../../banding\_bib/bord04.bib'
- '../../banding\_bib/shear05.bib'
- '../../banding\_bib/articles.bib'
- '../../banding\_bib/books.bib'
- '../LAOS.bib'
title: 'A non-monotonic constitutive model is not necessary to obtain shear banding phenomena in entangled polymer solutions'
---

Much of the rheology of entangled polymer solutions and melts is captured by the molecular theory of Doi and Edwards (DE) [@doiedwards], who argued that polymers relax by curvilinear diffusion (reptation) within a tube of the surrounding polymers. The DE model has a local maximum in the constitutive relation (the total shear stress as a function of shear rate for homogeneous flows). The resulting non-monotonic relation (*e.g.* the dashed curves in Fig. \[fig:flowcurves\]) leads to an instability that for many years was not observed in experiments [@stratton1966dnn; @*menezes1982nrb; @*hieber1989sci; @*pattamaprom2001cre], but nonetheless attracted attention [@doi1979dcp; @mcleish86; @*mcleish87]. This stress maximum is predicted to be less pronounced or absent if the convected constraint release (CCR) of entanglements due to flow [@Marr96b; @*mead98; @*IannirubertoM00; @likhtmangraham03; @MilnerML01] is incorporated. A sudden release of a constraint can relax both the orientation and conformation of a stretched polymer, which increases the stress and, for sufficiently frequent events, eliminates the instability. The CCR mechanism also leads to neutron scattering predictions that agree with experiment [@bent2003nmp]. Similar physics applies to solutions of breakable wormlike micelles, in which the instability is well documented experimentally and leads to *shear banding*, where a fast-flowing oriented state coexists with a more disordered and viscous state along a stress plateau [@SCM93; @*catesfielding06].
There, CCR is less pronounced because of breakage and fails to ameliorate a constitutive instability [@MilnerML01]. Recently, Wang, Hu and co-workers studied entangled solutions of a high molecular weight (HMW) polymer in its own oligomer[@tapadiawang03; @tapadiawang06; @boukany07; @wang06], or DNA solutions [@WangMM2008], finding a number of results that may be consistent with instability and shear banding after all. In controlled shear rate mode a weakly increasing stress plateau of three decades in shear rate was found, whereas in controlled shear stress mode the sheared solution experienced a jump in the shear rate, together with spatially inhomogeneous birefringence [@tapadiawang03]. Local velocimetry revealed spatially inhomogeneous velocity profiles in both the transient and the steady state [@tapadiawang06] regimes, while large amplitude oscillatory shear flow (LAOS) experiments showed an inhomogeneous banding-like shear rate profile at finite frequencies [@tapadiaravin06]. Similar behavior was observed in a sliding plate shear cell in monodisperse solutions [@boukany07]. Relaxation after a step strain induced a highly inhomogeneous velocity field with *negative* local shear rates [@wang06]. Hu *et al.* found similar inhomogeneous flow behavior and possible signatures of shear banding in polymer solutions, and wormlike micelle solutions at concentrations where severe shear thinning, but not banding, might be expected [@hu07; @*hu08]. Wang *et al.* could not reconcile their results with existing theory, and proposed that the instability is a yield like effect due to an unbalanced “entropic retraction force” [@wang07; @*wangboukany07]. Here we show that much of the phenomenology of these experiments is consistent with the predictions of tube models with CCR, perhaps as anticipated in the original theory [@doi1979dcp], without introducing new physics. *Model—*We approximate the total stress ${{\mathbf{T}}}$ as separating into fast Newtonian (or solvent) degrees of freedom and a slow viscoelastic component ${{\mathbf{\Sigma}}}$ (HMW polymer): $${{\mathbf{T}}} = -p {{{\mathbf{I}}}}+ 2 \eta {{\mathbf{D}}} + G {{\mathbf{\Sigma}}}, \label{eqn:totalstress}$$ where ${{{\mathbf{I}}}}$ is the identity tensor, ${{\mathbf{D}}} = {\textstyle{\frac{1}{2}}}\left[\nabla {{\mathbf v}} + (\nabla {{\mathbf v}})^T\right]$, $p$ is the isotropic pressure determined by incompressibility ($\nabla \cdot {{\mathbf v}} = 0$), and $\eta$ is the solvent viscosity, for which we use the dimensionless quantity $\epsilon = \eta/(G\tau_d)$. Here, $\tau_d$ is the reptation time. We are interested in the creeping flow (low Reynolds number) regime, in which $\nabla\cdot {{\mathbf{T}}}=0$. For the dynamics of ${{\mathbf{\Sigma}}}$ we use the Rolie-Poly (RP) model [@likhtmangraham03], a simplified tube model that incorporates CCR [@MilnerML01], where ${{\mathbf{\Sigma}}}({{\mathbf r}},t)$ obeys $$\begin{gathered} (\partial_t + {{\mathbf v}}\cdot \nabla) {{\mathbf{\Sigma}}}+\left({{\mathbf{\nabla v}}}\right) \cdot {{\mathbf{\Sigma}}} + {{\mathbf{\Sigma}}}\cdot ({{\mathbf{\nabla v}}})^T + \frac{1}{\tau_d}{{\mathbf{\Sigma}}} = \label{eqn:RPmodel}\\ 2 {{\mathbf{D}}} - \frac{2}{\tau_R}(1-A) \left[{{{\mathbf{I}}}}+ {{\mathbf{\Sigma}}}(1+\beta A)\right]+\mathcal{D} \nabla^2 {{\mathbf{\Sigma}}},\end{gathered}$$ $A = ( 1+\textrm{tr} {{\mathbf{\Sigma}}}/3)^{-1/2}$ and the Rouse time $\tau_R$ governs chain stretch. 
The stress “diffusion” term ${\mathcal D} \nabla^2 {{\mathbf{\Sigma}}}$ describes the response to an inhomogeneous viscoelastic stress; while not in the original RP model, it can arise due to diffusion or finite persistence length [@olmsted99a; @elkareh89; @adams08]. We specify Neumann boundary conditions ($\nabla {{\mathbf{\Sigma}}}=0$) [@bhavearmstrong91; @adams08]. From experimental values of the plateau modulus $G\sim 6 \times 10^2 \textrm{Pa}$, reptation time $\tau_d \sim 20\,\textrm{s}$, and solvent viscosity $\eta \sim 1\,\textrm{Pa s}$ [@tapadiawang03], we use $\epsilon= 10^{-5}$. Here we use $\tau_d/\tau_R \sim 10^3$, which is consistent with the length of the stress-shear rate plateau reported in [@tapadiawang03]. The parameter $\beta$ controls the efficiency of CCR, and its precise value is difficult to determine experimentally. Ref. [@likhtmangraham03] chose $\beta=1$ to fit steady state data in polymer melts, and used multiple modes with $\beta=0.5$ to fit experimental transient data. Here we tune between two qualitatively different types of constitutive curve: either a non-monotonic ($\beta=0.65$) or a monotonic ($\beta=0.728$) constitutive curve with a broad plateau (Figs. \[fig:model\],\[fig:flowcurves\]). Eq. (\[eqn:RPmodel\]) was solved in one spatial dimension using the Crank-Nicolson algorithm [@olmsted99a], for unidirectional Couette flow $v(r,t)\widehat{\boldsymbol{\theta}}$ between cylinders of radii $R_1$ and $R_2$ parameterized by $q\equiv \ln R_2/R_1$. In this geometry the total shear stress $T_{r\theta}\sim1/r^2$, so that the stress difference across the flow cell is $\Delta\ln T_{r\theta}=2 q$. Cone and plate flow with cone angles of $\theta = (4 ^\circ , 1^{\circ})$ has been reported [@tapadiawang06; @hu07], so we use consistent values of stress difference corresponding to $q \simeq\Delta R/R= (2 \times 10^{-3}, 2 \times 10^{-4})$ [@adams08]. Stresses are measured in units of $G$, shear rates in units of $\tau_d^{-1}$, and velocities in units of $q R_1/\tau_d \approx \Delta R/\tau_d$ for small $q$. To plot numerical data we use $\Gamma$, the dimensionless specific torque (per height per radian) on the inner cylinder. The diffusion constant used was ${\mathcal D} \tau_d/(R_1 q)^2 = 4\times 10^{-4}$. *Flow Curves—*To calculate the steady state flow curves a step shear rate was applied from rest and evolved for $500 \tau_d$ with time step $10^{-5} \tau_d$, after which subsequent shear rate steps and time evolutions were applied to scan up and down in shear rate (Fig. \[fig:flowcurves\]). For non-monotonic constitutive curves ($\beta=0.65$) shear banding always occurs, with hysteresis and a stress “plateau”. For the monotonic case ($\beta=0.728$) shear banding could be inferred in the more highly curved geometry with the larger stress difference (larger $q$), since the flow curve no longer follows the constitutive curve; but *without* hysteresis. Crudely, a monotonic flow curve exhibits banding-like flows when most of the shear rate in the gap occurs over a small range of stresses, *i.e.* the slope of the plateau must be much smaller than the apparent slope specified by the flow geometry: $$\left.\frac{d\Gamma}{d\dot{\gamma}}\right|_{\textrm{C.C.}}\ll \left.\frac{\Gamma(R_1)-\Gamma(R_2)}{\Delta \dot{\gamma}}\right|_{\textrm{g}} \sim e^q-1, \label{eq:condition}$$ where “C.C.” denotes the flat portion of the constitutive curve (dashed in Fig. \[fig:flowcurves\]) and “g” refers to the range of torques and shear rates specified by the flow geometry.
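For reference, in the homogeneous limit (spatially uniform flow, so the diffusive term plays no role) Eqs. (\[eqn:totalstress\]) and (\[eqn:RPmodel\]) reduce to four coupled ODEs in simple shear. The sketch below integrates this homogeneous Rolie-Poly system for start-up of steady shear, written in terms of the full conformation tensor ${{\mathbf{\Sigma}}}+{{{\mathbf{I}}}}$ in its standard upper-convected form, using the $\tau_d/\tau_R$, $\epsilon$ and $\beta$ values quoted above; it is a zero-dimensional illustration, not the Crank-Nicolson spatial solver used for the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Homogeneous Rolie-Poly equations in simple shear, for the conformation tensor
# sigma = Sigma + I. Units: time in tau_d, stress in G, shear rate in 1/tau_d.
TAU_R = 1.0e-3   # tau_R / tau_d, consistent with tau_d/tau_R ~ 10^3 used in the text
EPS = 1.0e-5     # dimensionless solvent viscosity eta/(G tau_d)

def rolie_poly_rhs(t, y, gammadot, beta):
    sxx, syy, szz, sxy = y
    A = np.sqrt(3.0 / (sxx + syy + szz))
    f = (2.0 / TAU_R) * (1.0 - A)          # chain-stretch relaxation / CCR prefactor
    relax = lambda s, eq: -(s - eq) - f * (s + beta * A * (s - eq))
    return [2.0 * gammadot * sxy + relax(sxx, 1.0),
            relax(syy, 1.0),
            relax(szz, 1.0),
            gammadot * syy + relax(sxy, 0.0)]

def startup_stress(gammadot, beta, t_end=20.0, n=2000):
    """Total dimensionless shear stress (polymer + solvent) during start-up of steady shear."""
    sol = solve_ivp(rolie_poly_rhs, (0.0, t_end), [1.0, 1.0, 1.0, 0.0],
                    args=(gammadot, beta), t_eval=np.linspace(0.0, t_end, n),
                    method="LSODA", rtol=1e-8, atol=1e-10)
    return sol.t, sol.y[3] + EPS * gammadot

if __name__ == "__main__":
    for beta in (0.65, 0.728):              # non-monotonic vs monotonic constitutive curve
        for wi in (1.0, 10.0, 100.0):       # imposed Weissenberg numbers gammadot * tau_d
            t, stress = startup_stress(wi, beta)
            print(f"beta={beta:5.3f}  Wi={wi:6.1f}  "
                  f"max(sampled)={stress.max():7.3f}  steady={stress[-1]:7.3f}")
```

Scanning the imposed rate for the two values of $\beta$ reproduces the qualitative distinction between the non-monotonic and monotonic homogeneous constitutive curves discussed above; the maximum of the sampled stress gives a rough measure of the start-up overshoot.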
The steady state velocity profiles are shown in Fig. \[fig:vprofs\] as solid (red) lines. The non-monotonic flow curves $(\beta=0.65)$ lead to a pronounced kink in the velocity profile, a signature of shear banding. The monotonic case does not shear band in the flatter geometry (small $q$), but for a more curved geometry (larger $q$) more shear rates are accessible and the resulting smooth velocity profile could easily be interpreted as banding [@tapadiawang06]; certainly the constitutive curve is not followed (Fig. \[fig:flowcurves\]). Similar smooth profiles were reported in [@tapadiawang06; @hu07], in a flow geometry with $q\simeq0.004-0.02$. A slightly increasing stress plateau over several decades in shear rates (as in [@tapadiawang03]) would thus lead to apparently banding (inhomogeneous) flow in geometries with very low stress gradients; but a linear steady state profile is found if $q$ is sufficiently small (Eq. \[eq:condition\]). *Startup Transients—*Transients were studied by evolving from rest using a time step of $10^{-5}\tau_d$ (Fig. \[fig:vprofs\]). In all cases shown here strongly inhomogeneous flow develops after the stress overshoot, leading to a sharply banded transient state, with a *negative* velocity and shear rate in the less viscous band. In the monotonic case the velocity profile eventually smooths out. For a narrower stress plateau (*e.g.*, $\beta=0.3$, not shown) the overshoot has a less pronounced kink and typically a positive shear rate. We have found inhomogeneous transients with negative velocities with stress differences corresponding to a cone angle $\theta=0.003^\circ$ in startup runs, but for $\theta = 0.001^\circ$ the amplitude of the inhomogeneous flow is reduced, and the velocity is no longer negative, whilst for $\theta = 0^{\circ}$ the flow remains homogeneous. With perturbed initial conditions, the inhomogeneous transient behaviour returns. The transient for a monotonic model $(\beta=0.728)$ in which spatial gradients are artificially prohibited exhibits a slower decrease after the stress overshoot than in the spatially resolved model (Fig. \[fig:vprofs\]); hence, inhomogeneities are important when using transient data to help differentiate candidate constitutive models [@doiedwards; @WangMM2007]. *Large Amplitude Oscillatory Shear (LAOS)—* A sinusoidal spatially-averaged shear rate was applied with frequency $\Omega$ and maximum shear rate $\dot{\gamma}_m$, and evolved from rest (zero stress) until any initial transients had decayed. We characterize the dynamics by the Weissenberg number $\mathit{Wi} = \dot{\gamma}_m \tau_d$ and the Deborah number $\mathit{De} = \Omega \tau_d$. For low $\mathit{De}$ (frequency) we expect to recover some features of the steady state behavior, such as transient banding for $\mathit{Wi}$ roughly within the non-monotonic part of the flow curve; while higher frequencies (high $\mathit{De}$) should produce sharper profiles similar to the transient behavior in Fig. \[fig:vprofs\], since the system cannot relax before flow reversal. At the highest frequencies we expect the reversing dynamics to be too fast to allow an inhomogeneous state. Fig. \[fig:LAOS\]A shows this behavior on a “Pipkin diagram” of $\mathit{Wi}$ vs. $\mathit{De}$, for a monotonic flow curve ($\beta=0.728$) in a slightly curved geometry. The inhomogeneous profiles in the banding regime (Fig. \[fig:LAOS\]D) can be represented parametrically in terms of shear rate and torque, $( \dot{\gamma}(y),\Gamma(y))$ (Fig. \[fig:LAOS\]B).
At the high stress regions of the cycle a portion of the sample enters the high shear rate band as reported experimentally [@tapadiaravin06]. In these calculations the position $y_{\ast}$ of the interface at a given strain $\dot{\gamma}_0/\Omega = 3$ varied with $\mathit{De}$ as $y_{\ast}\sim (\mathit{De})^{\alpha}$, where $\alpha\sim0.4-0.6$, unlike the fixed position reported in Ref. [@tapadiaravin06]. We suspect that these experiments did not attain steady state. The torque overshoot is typical of polymer solutions, and resembles that found in [@hu07] (Fig. \[fig:LAOS\]). At low frequencies the system has time to find a selected stress, which remains constant for part of the cycle while the shear band grows into the cell. At high frequencies the fluid cannot relax or shear band, which leads to a sinusoidal response and a nearly affine spatial profile. *Step Strain—*Some experiments on the relaxation after a step strain found a strong inhomogeneous recoil that developed a negative velocity gradient [@wang06]. We illustrate this with a monotonic constitutive curve ($\beta = 0.728$), Fig. \[fig:stepstrain\]. As in Fig. 5 of [@wang06], an inhomogeneous velocity profile develops and the velocity becomes negative as the system recoils from the applied shear. Experimentally, the total displacement after recoil is of the order of a tenth of the gap size which is comparable with that observed here. Thus, inhomogeneities are important when using step strain data to help discriminate among candidate constitutive models, as when using the DE damping function [@doiedwards; @WangMM2007]. *Summary—*We have shown that behavior reminiscent of shear banding, as reported recently, can be reproduced using the Rolie-Poly model supplemented by a term to accommodate spatial gradients. The RP model contains an unknown parameter $\beta$, which controls the efficacy of convected constraint release. Even for $\beta$ large enough to yield a stable (monotonic) constitutive curve, shear banding signatures can appear if the “stress plateau” is flat enough: (1) a geometry with a high stress gradient can induce a flow profile that could be mistaken for banding; (2) sharp banding-like profiles can appear in start-up transients even though the steady state is non-banded; (3) LAOS can trap these sharp transient profiles; and (4) relaxation after a large step strain can be very inhomogeneous, sometimes with a negative shear rate recoil. Several recent experiments, particularly on polydisperse polymer solutions, may fall into this category [@hu07]. A wide plateau is believed to accompany very highly entangled systems [@MilnerML01], and the larger number of relaxation times are likely to render polydisperse systems intrinsically more stable than monodisperse systems [@doi1979dcp], as was noted in recent experiments [@hu07]. Our results are not specific to the RP model; Zhou *et al.* recently studied a different two-fluid model of shear banding (with a non-monotonic constitutive relation), and found qualitative results similar to some of ours [@cook08]. We thank S.-Q. Wang, R. Graham, T. McLeish, O. Radulescu, and S. Fielding for lively discussions. This work was supported by the Royal Commission of 1851. 
--- abstract: 'The magnetic ground state in highly ordered double perovskites LaSr$_{1-x}$Ca$_x$NiReO$_6$ ($x$ = 0.0, 0.5, 1.0) was studied in view of the Goodenough-Kanamori rules of superexchange interactions. In LaSrNiReO$_6$, the Ni and Re sublattices are found to exhibit curious magnetic states, but do not show any long range magnetic ordering. The magnetic transition at $\sim$ 255 K is identified with the Re sublattice magnetic ordering. The sublattice interactions are tuned by modifying the Ni-O-Re bond angles via changing the lattice structure through Ca doping. Upon Ca doping, the Ni and Re sublattices start to display a ferrimagnetically ordered state at low temperature. The neutron powder diffraction reveals a canted alignment between the Ni and the Re sublattices, while each individual sublattice is ferromagnetic. The transition temperature of the ferrimagnetic phase increases monotonically with increasing Ca concentration.' author: - Somnath Jana - Payel Aich - 'P. Anil Kumar' - 'O. K. Forslund' - 'E. Nocerino' - 'V. Pomjakushin' - 'M. Månsson' - 'Y. Sassa' - Peter Svedlindh - Olof Karis - Sugata Ray title: 'Revisiting Goodenough-Kanamori rules in a new series of double perovskites LaSr$_{1-x}$Ca$_x$NiReO$_6$' --- Introduction ============ Double perovskites (DP; $A$$_2$$B$$B$$^{'}$O$_6$) [@1; @DP1; @DP2] belong to a class of materials that exhibits many interesting properties and rich physics. Understandably, the choice of the transition metal ions at the $B$ and $B$$^{'}$ sites with different electron occupancies decides the material properties of the DPs. When both $B$ and $B$$^{'}$ are magnetic ions, the magnetic and electronic properties of the system are governed by the $B$-O-$B$$^{'}$ interaction within a rock-salt-type structural arrangement, as shown in Fig. 1 (a). For example, the high temperature ferromagnetic (FM) order ($T$$_C$ $>$ 400 K) of the DP compounds Sr$_2$FeMoO$_6$ and Sr$_2$FeReO$_6$ is explained by a generalized double exchange mechanism through electronic band filling of the (Mo/Re) $t$$_{2g}$$\downarrow$-O-Fe $t$$_{2g}$$\downarrow$ conduction band [@sfmo5]. However, if the $B$-site ion is non-magnetic, the magnetic ground state would be defined by the edge-shared network of tetrahedra comprising the $B$$^{'}$ magnetic ions (Fig. 1 (b)). Such systems often exhibit geometric frustration in the presence of antiferromagnetic nearest-neighbor correlations. Recently, detailed theoretical investigations have been carried out on similar DPs with magnetic $B$$^{'}$ ions having $n$$d$$^1$ and $n$$d$$^2$ ($n$ = 4, 5) electronic configurations and significant spin orbit coupling (SOC) [@d1; @d2]. Here, the nearest neighbor distance between the tetrahedrally arranged 4$d$/5$d$ magnetic ions naturally becomes much larger than in the case where both the $B$ and $B$$^{'}$ sites are occupied by magnetic ions. This reduces the interatomic exchange between the magnetic ions, which helps to protect the SOC-driven states. This situation opens up many options, and consequently many double perovskites with $d$$^1$ (e.g., Ba$_2$YMoO$_6$, Sr$_2$CaReO$_6$, Sr$_2$MgReO$_6$, Ba$_2$NaOsO$_6$ etc.) as well as $d$$^2$ electronic configurations (e.g., Ba$_2$CaOsO$_6$, Ba$_2$YReO$_6$, La$_2$LiReO$_6$ etc.) have been studied, revealing numerous unusual magnetic ground states [@21; @8; @9; @10re; @11re; @12os; @13os; @14os; @os-re].
Another interesting possibility appears in DPs when both the $B$ and $B$$^{'}$ ions are magnetic, but the valence electrons of the $B$ ion lack the orbital symmetry, with respect to those of the $B$$^{'}$ ion, required for an effective $B$-O-$B$$^{'}$ superexchange interaction. Such a situation will give rise to two noninteracting or weakly interacting magnetic sublattices. It will be interesting to gradually manipulate the extent of the $B$-O-$B$$^{'}$ interaction by carefully changing the $B$-O-$B$$^{'}$ angle, and consequently to follow the evolution of the two sublattices merging into a single magnetic lattice (Fig. 1 (b) $\longrightarrow$ Fig. 1 (a)), following the famous Goodenough-Kanamori rules. Accordingly, we have designed a series of DPs, LaSr$_{1-x}$Ca$_x$NiReO$_6$, having a combination of the 3$d$ and 5$d$ transition metals Ni$^{2+}$ (3$d$$^8$, $t$$_{2g}$$^6$$e$$_g$$^2$) and Re$^{5+}$ (5$d$$^2$, $t$$_{2g}$$^2$) at the $B$ and $B$$^{'}$ sites, respectively. Due to the large crystal field splitting, the empty $e$$_g$ orbitals of Re have much higher energy than the $t$$_{2g}$ manifold and do not contribute to any interatomic exchange interaction. The filled $t$$_{2g}$ orbitals of Ni are likewise only weakly involved in the exchange interaction. The only possible exchange interaction that can be active, between the $e$$_g$ of Ni$^{2+}$ and the $t$$_{2g}$ of Re$^{5+}$, will be very weak if the $B$-O-$B$$^{'}$ angle is strictly 180$^{\circ}$, as the overlap integral between the $e$$_g$ and the $t$$_{2g}$ orbitals then becomes zero. However, a finite overlap between these orbitals can be introduced by tuning the bond angles and lattice parameters as a consequence of the doping of Sr$^{2+}$ ions by the smaller Ca$^{2+}$ ions. ![\[fig:1\] (a) The crystal structure of the $B$-site ordered double perovskite, $A$$_2$$B$$B$$^{'}$O$_6$. (b) The geometrically frustrated face-centered cubic lattice of edge-shared tetrahedra formed by the $B$$^{'}$ sites.[]{data-label="Fig1"}](Fig1.pdf){width="0.5\columnwidth"} In our design of the ordered LaSr$_{1-x}$Ca$_x$NiReO$_6$ series, the $B$ and $B$$^{'}$ cations are also selected in such a way that there is a sufficiently large difference in their charge and ionic radii, in order to achieve complete $B$, $B$$^{'}$ rock-salt ordering. Here the ionic radii of Ni$^{2+}$ (= 0.69 [Å]{}) and Re$^{5+}$ (= 0.58 [Å]{}) [@shanon_5] do fulfill the above criteria. Detailed magnetic measurements on LaSr$_{1-x}$Ca$_x$NiReO$_6$ indicate a curious evolution of the magnetic states as a function of doping. For LaSrNiReO$_6$, the system undergoes multiple magnetic transitions, indicated by a divergence between ZFC and FC at $\sim$ 255 K, typical of Re$^{5+}$ $t$$_{2g}$$^2$ ions confined in an fcc sublattice, and by a downturn in the magnetization at $\sim$ 27 K, observed in both the ZFC and FC curves. Transport measurements confirm purely insulating behavior of the samples. However, with Ca doping the structure develops a larger monoclinic distortion, which results in a larger deviation of the Ni-O-Re ($\angle$ NOR) bond angles from 180$^{\circ}$. This deviation enables a substantial superexchange interaction between the Ni $e$$_g$ and Re $t$$_{2g}$ orbitals, resulting in an overall canted antiferromagnetic order between two individually ferromagnetic sublattices. \ Results and Discussions ======================= All the samples appear to be single phase, as no impurity peak is detected over the whole 2${\theta}$ range in the powder XRD data collected at room temperature for the LaSr$_{1-x}$Ca$_x$NiReO$_6$ (LSCNRO) series (see Fig. 2).
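As an aside, the structural trend quantified in the following paragraph (tolerance factors decreasing from roughly 0.98 to 0.96 with Ca doping) can be estimated directly from tabulated ionic radii. The minimal Python sketch below is our illustration, not part of the original analysis: it combines the Ni$^{2+}$/Re$^{5+}$ radii quoted above with standard Shannon radii for La$^{3+}$, Sr$^{2+}$, Ca$^{2+}$ (XII coordination) and O$^{2-}$ (VI coordination), so the exact numbers depend on the radius set and coordination assumed.

```python
import math

# Goldschmidt tolerance factor t = (r_A + r_O) / [sqrt(2) (r_B + r_O)] for
# LaSr_{1-x}Ca_xNiReO_6, with averaged A- and B-site radii. The A-site and
# oxygen radii below are standard Shannon values supplied here for illustration.
r_O = 1.40                                        # O2- (VI)
r_A_ion = {"La": 1.36, "Sr": 1.44, "Ca": 1.34}    # XII-coordinated A cations
r_Ni, r_Re = 0.69, 0.58                           # Ni2+ and Re5+ (VI), as quoted in the text

def tolerance_factor(x):
    """Average-radius tolerance factor of LaSr_{1-x}Ca_xNiReO_6."""
    r_A = 0.5 * (r_A_ion["La"] + (1 - x) * r_A_ion["Sr"] + x * r_A_ion["Ca"])
    r_B = 0.5 * (r_Ni + r_Re)
    return (r_A + r_O) / (math.sqrt(2.0) * (r_B + r_O))

for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.2f}: t = {tolerance_factor(x):.3f}")   # decreases from ~0.97 to ~0.96
```

With these radii the estimate reproduces the reported downward trend of the tolerance factor upon Ca substitution, consistent with the growing monoclinic distortion discussed next.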
The observed (circle), calculated (line through the data) and difference (blue dashed line) diffraction data are shown in Fig. 2 for the $x$ = 0.0, 0.5 and 1.0 compositions, respectively, all of which could be fitted with the $P$2$_1$/$n$ space group. The system shows an increase in monoclinic distortion (see the insets to Fig. 2 (a), Fig. 2 (b) and Fig. 2 (c)) with increasing Ca concentration. The observation of the monoclinic distortion in this series of compounds is in agreement with the calculated tolerance factors, which decrease from $f$ = 0.98 ($x$ = 0.0) to $f$ = 0.96 ($x$ = 1.0). An overall shift of the Bragg peaks towards higher 2${\theta}$ indicates a decrease of the unit cell parameters with increasing $x$. The refined parameters and the goodness factors are listed in Table 1. The presence of the intense (1 1 0) peak at around 10$^\circ$ indicates a high degree of Ni/Re ordering within the double perovskite structure of LaSr$_{1-x}$Ca$_x$NiReO$_6$ for the $x$ = 0.0, 0.5 and 1.0 compositions. The ordering between Ni and Re is quantified to be 93%, 95% and 97% for $x$ = 0.0, $x$ = 0.5 and $x$ = 1.0 from the refined occupation numbers of Ni and Re at the (0.5, 0.0, 0.5) and (0.5, 0.0, 0.0) crystallographic sites, respectively. The stoichiometry of Ni and Re in the compounds is probed and confirmed through ICP-OES experiments. ![\[fig:3\] Ni $L$-edge X-ray absorption spectra measured for the $x$ = 0.0, 0.5 and 1.0 compositions of the LaSr$_{1-x}$Ca$_x$NiReO$_6$ series. Data have been vertically shifted for clarity.[]{data-label="Fig3"}](Fig3.pdf){width="0.5\columnwidth"} ![\[fig:4\] Resistivity vs temperature plots for the (a) $x$ = 0.0 and (b) $x$ = 1.0 compositions. The insets show that the insulating electrical resistivity of both compositions can be modelled by a variable range hopping mechanism in three dimensions.[]{data-label="Fig4"}](Fig4.pdf){width="0.5\columnwidth"} Next, we have performed X-ray absorption spectroscopy (XAS) at the $L$-edge of Ni to verify the charge state. The XAS spectra collected for the $x$ = 0.0, 0.5, 1.0 compositions are shown in Fig. 3. Note that the La $M$$_{3/2}$ absorption edge is superimposed on the Ni $L$$_{3/2}$-edge. However, the spectral features of both the Ni $L$$_{3/2}$ and $L$$_{1/2}$ edges are quite different for Ni$^{2+}$ and Ni$^{3+}$ [@Ni_XAS]. Both $L$$_{3/2}$ and $L$$_{1/2}$ consist of two peaks, indicated by the arrows in the figure. In the 2+ charge state of Ni, as in NiO, the lower energy peak of $L$$_{3/2}$ is much more intense than the higher energy one, while both peaks have similar intensities in the case of NdNiO$_3$, PrNiO$_3$ and LaNiO$_3$, which contain Ni$^{3+}$ ions [@Ni_XAS]. Also, the spectral features of the $L$$_{1/2}$ edge of all three compositions resemble that of NiO. Hence it can be inferred that Ni is in the 2+ charge state for all the LSCNRO compositions. In order to maintain charge neutrality, the charge state of Re should then be 5+, as Sr$^{2+}$, La$^{3+}$ and O$^{2-}$ are expected to be very stable in their respective ionic states. ![Magnetic measurements. (a)-(d) ZFC-FC ${M(T)}$ data of LaSr$_{1-x}$Ca$_x$NiReO$_6$ with $x$ = 0.0, 0.5, 0.75 and 1.0 measured with $H$ = 200 Oe. Inset of (a) shows the temperature dependence of the magnetic susceptibility of the $x$ = 0.0 composition in an applied field of 5000 Oe. (e) ${M(H)}$ curves for the $x$ = 0.0, 0.5, 0.75 and 1.0 compositions.
Inset shows the variation of $H$$_C$ with doping.[]{data-label="Fig5"}](Fig5.pdf){width="0.5\columnwidth"} The electrical resistivity ($\rho$) of the two end member compounds as a function of temperature is shown in Fig. 4. The resistivity data were fitted using an activated transport model as well as a variable range hopping model. In both cases the data were nonlinear on a $T$$^{-1}$ scale and were found to be linear on a $T$$^{-1/4}$ scale (see the insets to Figs. 4 (a) and (b)), in accordance with a three-dimensional variable range hopping transport model [@vth]. The insulating nature of the compounds can be explained by the fact that Ni$^{2+}$ (3$d$$^8$) effectively provides electrons of $e$$_g$ symmetry at the valence band, as the $t$$_{2g}$ bands are completely filled, while Re$^{5+}$ (5$d$$^2$) has partially filled $t$$_{2g}$ bands; thus, from symmetry considerations, electron hopping is prohibited (see the inset of Fig. 4 (a)). However, a nonzero hopping probability is realized if the bond angles deviate sufficiently from 180$^{\circ}$, thereby enabling Ni $e$$_g$-Re $t$$_{2g}$ hybridization via oxygen (compare the inset to Fig. 4 (b)). Replacing Sr$^{2+}$ by the smaller Ca$^{2+}$ ion, we anticipate a larger octahedral distortion that can provide a route for hybridization between the Ni $e$$_g$ and Re $t$$_{2g}$. Indeed, a larger distortion is achieved, as understood from the crystal structure of LaCaNiReO$_6$, and consequently a clear decrease in the resistivity is also observed, although the temperature dependence still suggests that the material is best described as an insulator/semiconductor. Next, we have looked into the magnetic properties in detail. The zero field cooled (ZFC) and field cooled (FC) data recorded with an applied field of 200 Oe for the four compositions of LSCNRO are shown in Figs. 5 (a)-(d). The ${M(T)}$ of the $x$ = 0.0 sample shows a transition at around 255 K, where the ZFC and FC curves start to bifurcate. Another transition occurs at around 27 K, where the susceptibility seems to saturate. For $x$ = 0.5 (Fig. 5 (b)) a ferro/ferrimagnetic (FM) like transition is observed at 45 K, along with the high temperature FC-ZFC bifurcation. This FM like transition gradually shifts to higher temperatures with increasing $x$ ($\sim$ 81 K for $x$ = 0.75 and $\sim$ 110 K for $x$ = 1.0; Fig. 5), while the higher temperature FC-ZFC splitting continuously diminishes and in fact vanishes for $x$ = 1.0. It appears that the parent compound with strontium ($x$ = 0.0), hosting two separate magnetic instabilities, gets converted to a single magnetic lattice with a unique magnetic order when all the Sr ions are replaced with Ca. The magnetization versus field ${(M (H))}$ data, collected at 5 K for the four compositions, are plotted in Fig. 5 (e). The coercivity of all the samples is in general very high, which arises as a result of the large spin-orbit-coupling-driven anisotropy commonly observed for Re-based DPs [@sfmo5; @Re]. However, the overall nature of the $M$($H$) curves varies drastically with $x$. The remanent magnetization increases continuously, with clear signatures of magnetic saturation, as a function of Ca doping. This observation clearly suggests significant changes in the magnetic interactions with increasing monoclinic distortion and decreasing $B$-O-$B$$^{'}$ angle. [![(a) Neutron diffraction patterns for $x$ = 0.0 recorded at 2 K and 300 K. No magnetic Bragg peaks are observed. (b) Rietveld refinement of NPD data at 2 K.
The solid line through the experimental points is the Rietveld refinement profile calculated for the space group $P$2$_1$/$n$ structural model. The vertical bars indicate the Bragg positions. The lowermost curve represents the difference between the experimental data and the calculated results.[]{data-label="Fig6"}](Fig6.jpg "fig:")]{}\ \ In order to gain more insight into the observed magnetic transitions, neutron powder diffraction (NPD) measurements were carried out for the end compositions above and below the magnetic transition temperatures. Fig. 6 (a) displays the NPD patterns for $x$ = 0.0 measured at 2 K and 300 K. A comparison of the data below and above the transition temperature (see Fig. 6 (a)) reveals no magnetic Bragg peak. This suggests that the observed magnetic transitions are of short-range type. A structural Rietveld refinement of the neutron diffraction pattern at $T$ = 300 K, shown in Fig. 6 (b), reveals the single-phase nature of the sample, with crystallographic parameters very close to those extracted from the x-ray diffraction pattern. At 300 K the lattice parameters are $a$ = 5.599 Å, $b$ = 5.575 Å and $c$ = 7.893 Å. The neutron diffraction patterns of $x$ = 1.0 recorded at 2 K, 125 K and 300 K are shown in Fig. 7 (a). In contrast to $x$ = 0.0, several magnetic Bragg peaks are clearly observed and are easily identified in the difference plot shown in the inset of Fig. 7 (a). There are three clear peaks at low $q$, which are indexed with the propagation vector $k$ = \[0,0,0\]. These peaks are not forbidden to have a structural origin in the paramagnetic space group $P2_1/n$, but a structural origin would also imply changes in many other peaks at higher angles. The experimental intensity of the first magnetic peak (101) at 2$\theta$ $\sim$ 24$^\circ$ is not observed above $T$$_N$ at 125 K, although it is not required to be zero by symmetry. We have tried to completely release the structural parameters in the fit of the low temperature pattern, but the (101) peak is still not fitted, implying that it must be of magnetic origin. We have a very wide $q$-range, and a structural origin of the (101) peak is not supported by the whole diffraction pattern, as there are many other Bragg peaks that would be affected by such a structural change. In addition, we have performed low temperature XRD (not shown in this manuscript) and no structural change was found. A Shubnikov space group solution is found to correspond to an irreducible representation mGM2+mk7t3 P1 (a) 14.79 P21’/n’ with the lattice parameters $a$ = 5.482 Å, $b$ = 5.593 Å and $c$ = 7.782 Å and a magnetization per unit cell of 1.067 $\mu{_B}$. The fitted parameters of the magnetization vector for both the Ni and Re sites are given in Table 2. The magnetic moments at the Ni and Re sites are plotted in Fig. 8. Both the Ni and Re moments are ferromagnetically aligned within their individual sublattices, while the refined structure suggests a canted alignment between the two sublattices. ![Magnetic structure for $x$ = 1.0 at $T$ = 2 K. Arrows indicate the direction of the Ni and Re moments. The La/Ca and O atoms are not shown for clarity. The average value of the ordered magnetic moment is estimated as 1.067 $\mu{_B}$ per unit cell.[]{data-label="Fig8"}](Fig8.pdf){width="0.4\columnwidth"} In a highly ordered $B$-site double perovskite with magnetic ions at the $B$ and $B$$^{'}$ sites, the long range magnetic order is always determined by the type of active exchange interaction mediated through the $B$-O-$B$$^{'}$ connectivity. For localized electrons (i.e.
the $d$ electrons), the magnetic interactions between two such magnetic $B$ and $B$$^{'}$ cations are often described by the Goodenough-Kanamori rules of superexchange interaction [@good; @kanamori]. According to these rules, when the orbitals of two magnetic ions have a significant overlap integral, the superexchange is antiferromagnetic ($\angle$ $e$$_g$($B$)-O-$e$$_g$($B$$^{'}$) = 180$^{\circ}$, $\angle$ $t$$_{2g}$($B$)-O-$t$$_{2g}$($B$$^{'}$) = 180$^{\circ}$, $\angle$ $t$$_{2g}$($B$)-O-$e$$_g$($B$$^{'}$) = 90$^{\circ}$). However, when the orbitals are arranged in such a way that they are in contact but have no overlap integral (most notably $t$$_{2g}$ and $e$$_g$ in the 180$^{\circ}$ position, where the overlap is zero by symmetry), the rules predict a ferromagnetic interaction, which is usually very weak in strength. In the case of the highly ordered LaSr$_{1-x}$Ca$_x$NiReO$_6$ compounds, the Ni 3$d$ and Re 5$d$ orbitals are connected by oxygen 2$p$ orbitals. From the refinement of the XRD data, the average Ni-O-Re bond angle ($\angle$ NOR) comes out to be about 170$^{\circ}$ for the $x$ = 0.0 sample and $\sim$150$^{\circ}$ for the $x$ = 1.0 sample. Therefore, in the case of the $x$ = 0.0 sample, where $\angle$ NOR is close to 180$^{\circ}$, the only superexchange interaction one can expect between the half-filled $e$$_g$ orbitals of Ni$^{2+}$ and the partially filled $t$$_{2g}$ orbitals of Re$^{5+}$ is a weak ferromagnetic interaction. Of course, there could also be magnetic signals appearing from the independent Ni and Re sublattices. However, as $\angle$ NOR starts deviating noticeably from 180$^{\circ}$ (e.g., $x$ = 1.0), the nonzero overlap between the $e$$_g$ and $t$$_{2g}$ orbitals starts to favor an antiferromagnetic (AFM) interaction between the partially filled Ni $e$$_g$ and Re $t$$_{2g}$ orbitals, following the Goodenough-Kanamori rules. We conclude that the low temperature magnetic feature observed in the $x$ = 0.0 compound could very well be reminiscent of the weak ferromagnetism predicted by the Goodenough-Kanamori rules, but it could also be an independent Ni sublattice feature, as is seen in other Ni analog samples, such as Sr$_2$NiWO$_6$ and Sr$_2$NiTeO$_6$ with nonmagnetic W$^{6+}$ (${[Xe]}$4$f$$^{14}$) and Te$^{6+}$ (${[Kr]}$4$d$$^{10}$) at the $B$$^{'}$ site, where the antiferromagnetic transitions occur at 35 K and 54 K, respectively [@ni]. However, the observed bifurcation between the ZFC and FC curves at higher temperatures in the $x$ = 0.0 compound is clearly very similar to what is commonly observed when the Re ion sits on the geometrically frustrated fcc lattice (the $B$$^{'}$ site) within the double perovskite structure with nonmagnetic $B$-site ions, e.g., Sr$_2$CaReO$_6$ [@10re] and Sr$_2$InReO$_6$ [@siro]. For the $x$ = 1.0 sample, a sizeable deviation of $\angle$NOR from 180$^{\circ}$ enables the antiferromagnetic interaction between the Ni$^{2+}$ and Re$^{5+}$ ions. In an ordered structure, this causes the moments within each individual sublattice to order along a common direction. However, the alignment between the two sublattices will depend on the competing interaction strengths. The spin-orbit interaction results in different crystalline anisotropy directions for the Re and Ni spins, owing to their $t$$_{2g}$- and $e$$_g$-type orbitals, respectively. Therefore, the final magnetic structure is a canted arrangement between the two sublattices. In LSCNRO, both the Ni$^{2+}$ and Re$^{5+}$ ions have two unpaired electrons.
However, the strong spin-orbit coupling in the 5$d$ orbitals (relative to 3$d$ orbitals) usually results in a reduced total moment on the Re ions compared to the spin-only value [@Ref; @Ref2; @Ref3], which is clearly observed in the NPD analysis. Also, the net magnetic moment obtained from NPD is in very good agreement with the moment observed in the ${M(H)}$ data at the highest applied field. Conclusion ========== The double perovskite series LaSr$_{1-x}$Ca$_x$NiReO$_6$ ($x$ = 0.0, 0.5, 1.0) is realized with partially filled orbitals of (local) $e$$_g$ and $t$$_{2g}$ symmetry at the highly ordered $B$ and $B$$^{'}$ sites, respectively. All the compositions form in a monoclinic structure. In LaSrNiReO$_6$ ($x$ = 0.0), an unusual divergence between the ZFC and FC curves is identified with the magnetic state that arises for Re $t$$_{2g}$$^{2}$ ions sitting on a geometrically frustrated fcc sublattice in the DP host. At low temperature ($\sim$ 27 K), the system undergoes another magnetic transition, where the weak ferromagnetism predicted by the Goodenough-Kanamori rules could be identified. As the lattice parameters decrease with Ca doping, the reduced Ni-O-Re bond angle introduces a nonzero overlap integral between the Ni $e$$_g$ and Re $t$$_{2g}$ orbitals, which favors a highly canted AFM alignment between the Ni and Re sublattices. The neutron powder diffraction measurements conducted at room temperature and low temperature (2 K) reveal that the $x$ = 0.0 sample possesses a disordered/short-range magnetic state at low temperature, while for the Ca sample the Ni and Re sublattices align in a canted antiferromagnetic state to give long range magnetic order, evidenced by the magnetic Bragg peak corresponding to the double perovskite superlattice peak. Methods ======= Four samples of LaSr$_{1-x}$Ca$_x$NiReO$_6$ (LSCNRO) ($x$ = 0.0, 0.5, 0.75, 1.0) were synthesized by a solid state synthesis route. Highly pure La$_2$O$_3$, SrCO$_3$, CaCO$_3$, NiO, Re$_2$O$_7$ and Re metal were used as the starting materials. The synthesis was done in two steps. In the first step, La$_2$NiO$_4$ was made by heating a thorough mixture of stoichiometric La$_2$O$_3$ and NiO at 1250$^{\circ}$ C in an inert atmosphere for 48 hours with several intermediate grindings. SrO and CaO were used after heating them at 1250$^{\circ}$ C and 1000$^{\circ}$ C for 12 hours in an inert atmosphere. Next, stoichiometric amounts of La$_2$NiO$_4$, SrO, CaO, NiO, Re$_2$O$_7$ and Re metal were mixed inside a glovebox, and the resulting mixture was sealed inside a quartz tube, which was then annealed at 1200$^{\circ}$ C to obtain the final product. The phase purity of the three samples ($x$ = 0.0, 0.5, 1.0) was checked by x-ray diffraction (XRD) at the MCX beamline of the Elettra Synchrotron Centre, Italy, using a wavelength of 0.751 [Å]{}. The XRD data were analyzed via Rietveld refinement using the FullProf [@fullprof_5] program. Quantitative analysis of La, Ca, Sr, Ni and Re was performed with an Inductively Coupled Plasma Optical Emission Spectroscopy (ICP-OES) instrument (Perkin-Elmer USA, Optima 2100 DV) following a standard sample analysis protocol. $d.c.$ magnetic measurements were carried out in a Quantum Design SQUID magnetometer. Resistivity measurements were performed in a home-made four-probe setup. Soft x-ray absorption spectroscopy (XAS) was performed at the I1011 beamline of the Swedish synchrotron facility MAX-lab, Lund. All the XAS spectra were measured by recording the total electron yield.
Neutron powder diffraction (NPD) measurements were performed using the HRPT [@Fischer] diffractometer at the Paul Scherrer Institut, SINQ (Switzerland). The neutron wavelength was set to $\lambda$ = 1.89 Å and about 1 g of $x$ = 0.0 and $x$ = 1.0 samples were used. Magnetic structure refinements were performed using the FULLPROF suite [@fullprof_5]. [99]{} Anderson, M. T., Greenwood, K. B., Taylor, G. A. & Poeppelmeier K. R. B-cation arrangements in double perovskites. [*Prog. Solid State Chem.*]{} [**22**]{}, 197-233 (1993). Knapp, M. C. & Woodward, P. M. A-Site Cation Ordering in AA$^{'}$BB$^{'}$O$_6$ Perovskite. [*J. Solid State Chem.*]{} [**179**]{}, 1076-1085 (2006). Vasala, S. & Karppinen, M. A$_2$B$^{'}$B$^{''}$O6 perovskites: A review. [*Prog. Solid State Chem.*]{} [**43**]{}, 1-36 (2015). Serrate, D., Teresa, J. M. De. & Ibarra, M. R. Double perovskites with ferromagnetism above room temperature. [*J. Phys.: Condens. Matter*]{} [**19**]{}, 023201 (2007). Chen, G., Pereira, R. & Balents, L. Exotic phases induced by strong spin-orbit coupling in ordered double perovskites. [*Phys. Rev. B*]{} [**82**]{}, 174440 (2010). Chen, G. & Balents, L. Spin-orbit coupling in $d$$^2$ ordered double perovskites. [*Phys. Rev. B*]{} [**84**]{}, 094420 (2011). Aharen, T., Greedan, J. E., Bridges, C. A., Aczel, A. A., Rodriguez, J., MacDougall, G., Luke, G. M., Michaelis, V. K., Kroeker, S., Wiebe, C. R., Zhou, H. & Cranswick, L. M. D. Structure and magnetic properties of the $S$ = 1 geometrically frustrated double perovskites La$_2$LiReO$_6$ and Ba$_2$YReO$_6$. [*Phys. Rev. B*]{} [**81**]{}, 064436 (2010). Vries, M. A. de., Mclaughlin, A. C. & Bos, J. W. G. Valence Bond Glass on an fcc Lattice in the Double Perovskite Ba$_2$YMoO$_6$. [*Phys. Rev. Lett.*]{} [**104**]{}, 177202 (2010). Aharen, T., Greedan, J. E., Bridges, C. A., Aczel, A. A., Rodriguez, J., MacDougall, G., Luke, G. M., Imai, T., Michaelis, V. K., Kroeker, S., Zhou, H., Wiebe, C. R. & Cranswick, L. M. D. Magnetic properties of the geometrically frustrated $S$ =1/2 antiferromagnets, La$_2$LiMoO$_6$ and Ba$_2$YMoO$_6$, with the B-site ordered double perovskite structure: Evidence for a collective spin-singlet ground state. [*Phys. Rev. B*]{} [**81**]{}, 224409 (2010). Wiebe, C. R., Greedan, J. E., Luke, G. M. & Gardner, J. S. Spin-glass behavior in the $S$ = 1/2 fcc ordered perovskite Sr$_2$CaReO$_6$. [*Phys. Rev. B*]{} [**65**]{}, 144413 (2002). Wiebe, C. R., Greedan, J. E., Kyriakou, P. P., Luke, G. M., Gardner, J. S., Fukaya, A., Gat-Malureanu, I. M., Russo, P. L., Savici, A. T. & Uemura, Y. J. Frustration-driven spin freezing in the $S$ = 1/2 fcc perovskite Sr$_2$MgReO$_6$. [*Phys. Rev. B*]{} [**68**]{}, 134410 (2003). Erickson, A. S., Misra, S., Miller, G. J., Gupta, R. R., Schlesinger, Z., Harrison, W. A., Kim, J. M. & Fisher, I. R. Ferromagnetism in the Mott Insulator Ba$_2$NaOsO$_6$. [*Phys. Rev. Lett.*]{} [**99**]{}, 016404 (2007). Xiang, H. J. & Whangbo, M. H. Cooperative effect of electron correlation and spin-orbit coupling on the electronic and magnetic properties of Ba$_2$NaOsO$_6$. [*Phys. Rev. B*]{} [**75**]{}, 052407 (2007). Yamamura, K., Wakeshima, M. & Hinatsu, Y. Structural phase transition and magnetic properties of double perovskites Ba$_2$CaMO$_6$ (M = W, Re, Os). [*J. Solid State Chem.*]{} [**179**]{}, 605-612 (2006). Thompson, C. M., Carlo, J. P., Flacau, R., Aharen, T., Leahy, I. A., Pollichemi, J. R., Munsie, T. J. S., Medina, T., Luke, G. M., Munevar, J., Cheung, S., Goko, T., Uemura, Y. J. & Greedan, J. E. 
Long-range magnetic order in the 5$d$$^2$ double perovskite Ba$_2$CaOsO$_6$: comparison with spin-disordered Ba$_2$YReO$_6$. [*J. Phys.: Condens. Matter*]{} [**26**]{}, 306003 (2014). Shannon, R. D. Revised effective ionic radii and systematic studies of interatomic distances in halides and chalcogenides. [*Acta Cryst.*]{} [**A32**]{}, 751-767 (1976). Rodríguez-Carvajal, J. Recent advances in magnetic structure determination by neutron powder diffraction. [*Physica B*]{} **192**, 55-69 (1993). Fischer, P., Frey, G., Koch, M., Koennecke, M., Pomjakushin, V., Schefer, J., Thut, R., Schlumpf, N., Buerge, R., Greuter, U., Bondt, S. & Berruyer, E. High-resolution powder diffractometer HRPT for thermal neutrons at SINQ. [*Physica B*]{} **276-278**, 146-147 (2000). Medarde, W., Fontaine, A., García-Muñoz, J. L., Rodríguez-Carvajal, J., Santis, M. de., Sacchi, M., Rossi, G. & Lacorre, P. RNiO$_3$ perovskites (R = Pr,Nd): Nickel valence and the metal-insulator transition investigated by x-ray-absorption spectroscopy. [*Phys. Rev. B*]{} [**46**]{}, 14975 (1992). Mott, N. F. Conduction in non-crystalline materials. [*Philos. Mag.*]{} [**19**]{}, 835 (1969). Alamelu, T., Varadaraju, U. V., Venkatesan, M., Douvalis, A. P. & Coey, J. M. D. Structural and magnetic properties of (Sr$_{2-x}$Ca$_x$)FeReO$_6$. [*J. Appl. Phys.*]{} [**91**]{}, 8909 (2002). Goodenough, J. B. Theory of the Role of Covalence in the Perovskite-Type Manganites \[La,M(II)MnO$_3$\]. [*Phys. Rev.*]{} [**100**]{}, 564 (1955). Kanamori, J. Superexchange interaction and symmetry properties of electron orbitals. [*J. Phys. Chem. Solids*]{} [**10**]{}, 87-98 (1959). Iwanaga, D., Inaguma, Y. & Itoh, M. Structure and magnetic properties of Sr$_2$NiAO$_6$ (A = W, Te) [*Mater. Res. Bull.*]{} [**35**]{}, 449-457 (2000). Gao, H., Llobet, A., Barth, J., Winterlik, J., Felser, C., Panthöfer, M. & Tremel, W. Structure-property relations in the distorted ordered double perovskite Sr$_2$InReO$_6$. [*Phys. Rev. B*]{} [**83**]{}, 134406 (2011). Sikora, M., Kapusta, C., Borowiec, M., Oates, C. J., Prochazka, V., Rybicki, D., Zajac, D., Teresa, J. M. De., Marquina, C. & Ibarra, M. R. Evidence of unquenched Re orbital magnetic moment in AA$^{'}$FeReO$_6$ double perovskites. [**Appl. Phys. Lett.**]{} [**89**]{}, 062509 (2006). Michalik, J. M., Teresa, J. M. De., Ritter, C., Blasco, J., Serrate, D., Ibarra, M. R., Kapusta, C., Freudenberger, J. & Kozlova, N. High-field magnetization measurements in Sr$_2$CrReO$_6$ double perovskite: Evidence for orbital contribution to the magnetization. [*Eur. Phys. Lett.*]{} [**78**]{}, 17006 (2007). Jeon, B. C., Kim, C. H., Moon, S. J., Choi, W. S., Jeong, H., Lee, Y. S., Yu, J., Won, C. J., Jung, J. H., Hur, N. & Noh, T. W. Electronic structure of double perovskite A$_2$FeReO$_6$ (A = Ba and Ca): interplay between spin-orbit interaction, electron correlation, and lattice distortion. [*J. Phys.: Condens. Matter*]{} [**22**]{}, 345602 (2010). Acknowledgement =============== S.J. and P.A. thanks Council of Scientific and Industrial Research (CSIR), India for fellowship. P.A.K. thanks the Swedish Foundation for International Cooperation in Research and Higher Education (STINT) for supporting his stay at Uppsala University. S.R. thanks Indo-Italian POC for support to carry out experiments (20145381) in Elettra , Italy. S.R. also thanks Department of Science and Technology (DST) \[Project No. WTI/2K15/74\] and TRC, Department of Science and Technology (DST), Government of India for support. 
MM and OKF are supported by a Marie Sk[ł]{}odowska Curie Action, International Career Grant through the European Union and Swedish Research Council (VR), Grant No. INCA-2014-6426. EN is fully funded by the Swedish Foundation for Strategic Research (SSF) within the Swedish national graduate school in neutron scattering (SwedNess). Finally, YS is fully supported by a VR neutron project grant (BIFROST, Dnr. 2016-06955) as well as a VR starting grant (Dnr. 2017-05078).The experimental neutron diffraction work was performed at the HRPT beamline of the Laboratory for Neutron Scattering & Imaging, Paul Scherrer Institute, CH-5232 Villigen PSI, Switzerland. We thank the beamline staff for their support. Author contribution statement ============================= S.J. designed the project and the method of synthesis. S.J. and P.A. have synthesized and characterized the compounds. S.J. and P.A. have performed the XAS and R vs T measurements and analysis. The magnetic measurements and analysis are performed by S.J., P.A., P.A.K. and P.S., while O.K.F., E.N., V.P., M.M. and Y.S. have performed the NPD measurements and analysis. S.J., P.A. and S.R. wrote the main body of the manuscript, while all the authors contributed in writing and reviewing the manuscript. Additional Information ====================== [**Competing interests:**]{} The authors declare no competing financial and non-financial interests.
--- abstract: 'Long-range order in quantum many-body systems is usually associated with equilibrium situations. Here, we experimentally investigate the quasicondensation of strongly-interacting bosons at finite momenta in a far-from-equilibrium case. We prepare an inhomogeneous initial state consisting of one-dimensional Mott insulators in the center of otherwise empty one-dimensional chains in an optical lattice with a lattice constant $d$. After suddenly quenching the trapping potential to zero, we observe the onset of coherence in spontaneously forming quasicondensates in the lattice. Remarkably, the emerging phase order differs from the ground-state order and is characterized by peaks at finite momenta $\pm (\pi/2) (\hbar / d)$ in the momentum distribution function.' author: - 'L. Vidmar' - 'J. P. Ronzheimer' - 'M. Schreiber' - 'S. Braun' - 'S. S. Hodgman' - 'S. Langer' - 'F. Heidrich-Meisner' - 'I. Bloch' - 'U. Schneider' bibliography: - 'references\_arXiv.bib' title: ' Dynamical Quasicondensation of Hard-Core Bosons at Finite Momenta' --- The nonequilibrium dynamics of quantum many-body systems constitutes one of the most challenging and intriguing topics in modern physics. Generically, interacting many-body systems are expected to relax towards equilibrium and eventually thermalize [@rigol08; @polkovnikov11]. This standard picture, however, does not always apply. In open or driven systems, one fascinating counterexample is the emergence of novel steady states with far-from-equilibrium long-range order, i.e., order that is absent in the equilibrium phase diagram. This includes lasers [@chiocchetta15], where strong incoherent pumping gives rise to a coherent emission, and driven ultracold atom systems [@vorberg13]. The emergence of order far from equilibrium is also studied in condensed matter systems [@stojchevska14] and optomechanical systems [@ludwig13]. In recent years, closed quantum systems without any coupling to an environment have come into the focus of experimental and theoretical research. Experimental examples range from ultracold atoms [@greiner02; @kinoshita06; @hofferberth07; @trotzky12; @cheneau12; @schneider12; @ronzheimer13; @langen13; @Xia2015] to quark-gluon plasmas in heavy ion collisions [@berges15]. In closed, non-driven systems, two famous examples for the absence of thermalization [@kinoshita06; @hofferberth07; @langen13] are many-body-localized [@nandkishore14; @altman14; @schreiber15] and integrable systems [@rigol07]. These peculiar systems allow for nonergodic dynamics and novel quantum phenomena. Spontaneously emerging order is in general associated with equilibrium states at low temperatures. The canonical example is the emergence of (quasi-) long-range phase coherence when cooling an ideal Bose gas into a Bose-Einstein (quasi-) condensate [@anderson95; @davis95]. In this case, thermodynamics ensures that, for positive temperatures [@braun13], the single-particle ground state becomes macroscopically occupied and thereby dictates the emerging order. Even in studies of the nonequilibrium dynamics at quantum phase transitions [@sachdevbook], the emergence of coherence is typically associated with gently crossing the transition from an unordered into an ordered state, and the strongest correlations and largest coherence lengths appear in the adiabatic limit [@chenD11; @braun15]. ![ [*Quasicondensation of bosons.*]{} (a) Density of states $\rho(\epsilon)$ of a homogeneous 1D lattice. 
(b) Dispersion $\varepsilon(q)$ (solid line) and group velocity $v_g(q)$ (dotted line) versus quasimomentum $q$. (c) Sketch of quasimomentum distribution $n(q)$: In equilibrium, 1D bosons quasicondense at the minimum of the band at $q=0$, while in a sudden expansion the quasicondensation of hard-core bosons occurs at $\hbar q=\pm (\pi/2)(\hbar/d)$. This quasimomentum lies in the middle of the spectrum and is consistent with the vanishing energy per particle of this closed many-body system. []{data-label="fig1"}](figpdf_paper/Figure_1.pdf){width="0.95\columnwidth"} Here, in contrast, we study a condensation phenomenon of strongly interacting lattice bosons far from equilibrium. After a sudden quantum quench, we experimentally observe the spontaneous emergence of a long-lived phase order that is markedly different from the equilibrium order (cf. Fig \[fig1\]). To this end, we prepare a density-one Mott insulator of strongly interacting bosons in the center of a three-dimensional (3D) optical lattice. Next, we transform the system into an array of independent one-dimensional (1D) chains, entering the regime of integrable hard-core bosons (HCBs). By suddenly quenching the confining potential along the chains to zero, we induce a sudden expansion of the cloud in a homogeneous lattice [@schneider12; @ronzheimer13; @reinhard13; @Xia2015] with a lattice constant $d$ and detect the formation of a non-ground-state phase profile as a dynamical emergence of peaks at momenta $\hbar k=\pm (\pi/2) (\hbar/d)$, half way between the middle and the edge of the Brillouin zone, in time-of-flight (TOF) distributions. This finite-momentum quasicondensation was first discussed by Rigol and Muramatsu [@rigol04] (see also Refs. [@daley05; @rigol05a; @rodriguez06; @hm08; @vidmar13]), but has not been studied experimentally so far. [*Ideal case.*]{} The idealized setup to study finite-momentum quasicondensates is shown in Fig. \[fig2\]. We consider the Hamiltonian $H=-J\sum_j (\hat a^{\dagger}_{j+1} \hat a_j + {\rm h.c.})$, where $\hat a^{\dagger}_j$ creates a HCB on site $j$ of a 1D lattice. The infinitely large on-site repulsion is accounted for by the hard-core constraint $(\hat a_j^{\dagger})^2=0$. The initial state is a product state $|\psi_0\rangle = \prod_{j \in L_0} \hat a^{\dagger}_j |\emptyset\rangle $, completely filling the central $L_0$ sites of an otherwise empty and infinitely large 1D lattice. This initial state consists of $N=L_0$ localized particles with a flat quasimomentum distribution and contains no off-diagonal correlations, i.e., $\langle \hat a^\dagger_j \hat a_{j+r}\rangle = 0$ for $r \neq 0$. Surprisingly, the quasimomentum distribution $n(q)=\frac{1}{L} \sum_{j,l} e^{-iq(j-l)d} \langle \hat a_j^\dagger \hat a_l \rangle$ develops singularities at [*finite*]{} quasimomenta $\hbar q=\pm(\pi/2) (\hbar/d)$ during the expansion ($t_{\rm E}>0$). As shown in Fig. \[fig2\](c), these singularities correspond to the emergence of [*power-law*]{} correlations $$\label{phaseorder} \langle \hat a^\dagger_j \hat a_{j+r}\rangle = {\cal{A}}(r) e^{i \Phi(r)};\enspace {\cal{A}}(r)\sim r^{-\frac{1}{2}}; \enspace \Phi(r) = \pm\frac{\pi}{2} r \;$$ in each half of the expanding cloud, shown in Fig. \[fig2\](d). These power-law correlations justify the name quasicondensate [@rigol04]. Curiously, the exponent $1/2$ equals the [*ground-state*]{} exponent [@rigol04; @rigol05a], even though the system is far away from equilibrium, with the energy per particle being much higher than in the ground state. 
In contrast to the ground state, the correlations show a running phase pattern $\Phi(r)$ with a phase difference of $\pm\pi/2$ between neighboring lattice sites, giving rise to peaks at finite quasimomenta. Coherence and quasicondensation emerge independently in the left- and right-moving halves of the cloud, corresponding to two macroscopically occupied degenerate eigenstates of the one-particle density matrix $\langle \hat a_j^\dagger \hat a_l \rangle$ that have spatial support in the left- or right-moving cloud, respectively [@rigol04]. This quasicondensation at finite quasimomenta can equivalently be seen as quasicondensation at $q=0$ in the respective co-moving frames [@hm08]. In one dimension, HCBs can be exactly mapped to noninteracting spinless fermions via the Jordan-Wigner transformation [@cazalilla11]. By virtue of this mapping, the density $n_j = \langle \hat a_j^\dagger \hat a_j \rangle$ of HCBs is identical to that of free fermions for all times, whereas the same is not true for the quasimomentum distribution [@paredes04; @kinoshita04]. While the occupations of fermionic quasimomenta are constants of motion, the non-local phase factors in the Jordan-Wigner transformation give rise to the intricate momentum dynamics studied here. In the ideal case, the dynamical quasicondensates form over a time scale $t_{\rm E}^* \sim 0.3 N \tau$ [@rigol04; @vidmar13], where $N$ is the number of particles in the initial state and $\tau = \hbar/J$ denotes the tunneling time. For very long times, $n(q)$ slowly decays back to its original flat form as a consequence of the dynamical fermionization mechanism [@rigol05; @minguzzi05; @vidmar13]. ![ [*Finite-momentum quasicondensation under ideal conditions*]{} [@rigol04]. We consider $N=50$ initially localized HCBs. (a) Density $n_j$ as a function of time. (b) Density as a function of the rescaled coordinate $\tilde{x}=(j-25.5)/(2t_{\rm E}/\tau)$ in the region bounded by the dashed lines in (a). For $\tilde{x} \leq 1$, the data collapse to the scaling solution [@antal99] $n(\tilde{x}) = \arccos{(\tilde{x})}/\pi$. (c) Quasimomentum distribution $n(q)$ as a function of time. (d) One-particle correlations at $t_{\rm E}=0.24 N \tau$. Main panel: $|\langle \hat a_j^\dagger \hat a_{j+r} \rangle|$ at $j=26$ (circles) and ${\cal A}(r) = \alpha/\sqrt{r}$ (line) with $\alpha=0.29$ [@ovchinikov07]. Inset: Phase pattern $\Phi(r)$. []{data-label="fig2"}](figpdf_paper/Figure_2.pdf){width="0.99\columnwidth"} ![image](figpdf_paper/Figure_3.pdf){width="1.99\columnwidth"} The dynamical quasicondensation at finite momenta is an example of a more general emergence of coherence in a sudden expansion. For instance, interacting fermions described by the Fermi-Hubbard model exhibit ground-state correlations in the transient dynamics as well [@hm08]. Furthermore, there is a close connection to quantum magnetism, as can be seen by mapping HCBs to a spin-1/2 XX chain: The transient dynamics in each half of the expanding cloud of HCBs is equivalent to the melting of a domain-wall state [@antal99; @gobert05; @platini07; @antal08; @lancaster10; @santos11; @eisler13; @sabetta13; @halimeh14; @alba14] of the form $|\psi_0\rangle = $ $|$$\uparrow$$\dots$$\uparrow \uparrow \downarrow \downarrow$$\dots$$\downarrow \rangle$. For this problem, a scaling solution exists [@antal99], which also applies to the sudden expansion at times $t<t_{\rm E}^*$. As a consequence, the densities $n_j$ measured at different times collapse onto a single curve, as shown in Fig. \[fig2\](b). 
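The mapping to free fermions makes this transient dynamics straightforward to reproduce numerically. The short NumPy sketch below is our own illustration (not the code behind the figures): it propagates the one-particle correlation matrix of the Jordan-Wigner fermions for a domain-wall initial state, whose density coincides with the HCB density, and compares the result with the scaling form $n(\tilde{x})=\arccos{(\tilde{x})}/\pi$; the remaining deviations are lattice-scale oscillations that diminish with time.

```python
import numpy as np

# Hard-core bosons mapped to free fermions (Jordan-Wigner): the density equals
# that of noninteracting fermions, obtained by evolving the correlation matrix
# in the single-particle eigenbasis of the hopping matrix. Units: J = hbar = d = 1.
L = 400
h = np.zeros((L, L))
for j in range(L - 1):                 # nearest-neighbour hopping, amplitude -J
    h[j, j + 1] = h[j + 1, j] = -1.0
eps, V = np.linalg.eigh(h)

occ = np.zeros(L)
occ[: L // 2] = 1.0                    # domain wall: left half filled, right half empty
C0 = np.diag(occ)                      # C0_{jl} = <c_j^dagger c_l> at t = 0

def density(t):
    """Fermionic (= hard-core boson) density profile at time t (units of tau)."""
    U = V @ np.diag(np.exp(-1j * eps * t)) @ V.conj().T
    return (U @ C0 @ U.conj().T).diagonal().real

t = 60.0                               # keep 2*t < L/2 so the fronts never reach the boundaries
x_tilde = (np.arange(L) - (L / 2 - 0.5)) / (2 * t)
inside = np.abs(x_tilde) < 0.95
exact = np.arccos(x_tilde[inside]) / np.pi
print("max deviation from arccos scaling:", np.abs(density(t)[inside] - exact).max())
```

Only the density follows from this quadratic calculation; the quasimomentum distribution of the bosons requires the full HCB one-particle density matrix, which involves the nonlocal Jordan-Wigner string and is not reproduced by this sketch.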
Furthermore, for spin-1/2 XX chains the emergence of power-law decaying transverse spin correlations modulated with a phase of $\pi/2$ has been derived analytically [@lancaster10]. An interesting perspective onto the emergence of coherence results from noticing that both our expanding bosons and the melting domain-walls realize current-carrying nonequilibrium steady states (see [@suppmat] for a discussion). [*Experimental set-up and results.*]{} The experimental set-up is identical to that employed in our previous experiment on [*in-situ*]{} density dynamics [@ronzheimer13]. We load a Bose-Einstein condensate of approximately 10$^5$ $^{39}\mathrm{K}$ atoms from a crossed optical dipole trap into a blue-detuned 3D optical lattice with a lattice depth of $V_0\approx 20E_r$, where $E_r=h^2/(2m\lambda^2)$ denotes the recoil energy with atomic mass $m$ and lattice laser wavelength $\lambda\approx 737\,$nm. During the loading of the lattice, we use a magnetic Feshbach resonance to induce strong repulsive interactions between the atoms, suppressing the formation of doubly occupied sites [@ronzheimer13]. This results in a large density-one Mott insulator in the center of the cloud. We hold the atoms in the deep lattice for a $20\,$ms dephasing period, during which residual correlations between lattice sites are mostly lost such that the atoms essentially become localized to individual lattice sites [@Will2010]. The expansion is initiated at $t_{\rm E}=0$ by simultaneously lowering the lattice depth along the expansion axis in $150\,\mu\mathrm{s}$ to $V_0^{x}\approx 8\, E_r$ (setting $J\approx h \times 300\,$Hz, $\tau \approx0.5\,$ms), and reducing the strength of the optical dipole trap to exactly compensate the anti-confinement of the blue-detuned lattice beams. This creates a flat potential along the expansion direction. Figures \[fig3\](b)-\[fig3\](e) show the ballistic expansion of the [*in-situ*]{} density [@ronzheimer13], monitored using absorption imaging. During the deep lattice period, the magnetic field is changed to tune the on-site interaction strength during the expansion to $U=20J$. We have numerically verified that the essential features of dynamical quasicondensation are still present for this value of $U/J$, with a shift of the peak position by $\approx10$% towards smaller values [@rodriguez06; @suppmat]. In order to measure the momentum distribution as a function of expansion time $t_{\rm E}$, we employ an adapted TOF imaging technique. Directly before shutting off all lattice and trapping potentials, we rapidly increase the lattice intensity along the expansion axis for $5\,\mu\mathrm{s}$ to a depth of $33E_r$. This time is too short to affect correlations between different sites and the momentum distribution. Nonetheless, it results in a narrowing of the Wannier functions, which leads to a broadening of the Wannier envelope in the TOF density distribution and thereby facilitates the observation of higher-order peaks [@suppmat]. Figures \[fig3\](f)-\[fig3\](i) contain the main result of our experiment, namely the TOF density distributions, which correspond approximately to the momentum distribution, taken at different expansion times $t_{\rm E}$. In Fig. \[fig3\](f) the TOF sequence was initiated at $t_{\rm E}=0$, i.e., directly after initiating the expansion. We observe a central peak at $k=0$ and two higher order peaks at $\hbar k=\pm (2\pi)(\hbar/d)$, indicating a weak residual $k=0$ coherence probably resulting from an imperfect state preparation. 
During the expansion, however, the momentum distribution changes fundamentally and the remnants of the initial coherence quickly vanish. Instead, new peaks at finite momenta are formed. These peaks directly signal the spontaneous formation of a different phase order. This is best seen in Fig. \[fig3\](i) at $t_{\rm E}=36 \tau$, where the finite-momenta peaks are clearly established. The observed peak positions correspond to the expected momenta close to $\hbar k=\pm (\pi/2)(\hbar/d)$ [@suppmat]. In addition, Figs. \[fig3\](g) and \[fig3\](h), taken at $t_{\rm E}=7 \tau$ and $14 \tau$, respectively, hint at a fine structure of the emerging peaks. This structure is a consequence of the finite TOF time $t_{\rm TOF} = 6$ms, which results in the TOF distributions being a convolution of real-space and momentum-space densities. We sketch this situation in Fig. \[fig3\](a). As discussed before, the peaks in $n(k)$ close to $\hbar k=-(\pi/2)(\hbar/d)$ and $+(\pi/2)(\hbar/d)$ originate from the left- and right-moving portions of the cloud, respectively. Due to the finite $t_{\rm TOF}$, the higher-order peak of the left-moving cloud with momentum $(-\pi/2 + 2\pi)(\hbar/d)$ and the main peak of the right-moving cloud with momentum $(\pi/2)(\hbar/d)$ (and vice versa) may overlap in the TOF data. A perfect overlap gives rise to single sharp peaks such as the ones present in the data shown in Fig. \[fig3\](i), while shorter expansion times, as shown in Figs. \[fig3\](g)-\[fig3\](h), result in a partial overlap and additional structure (see [@suppmat] for details). [*Comparison with exact time evolution.*]{} We numerically model the dynamics of 1D HCBs for realistic conditions: [*(i)*]{} The experimental set-up consists of many isolated 1D chains, which are not equivalent due to the 3D harmonic confinement. Experimentally we can only measure an ensemble average over all tubes. [*(ii)*]{} Both the finite temperature of the original 3D Bose-Einstein condensate as well as nonadiabaticities during the lattice loading result in a finite entropy, and thereby holes, in the initial state. We therefore average the results over different initial product states drawn from a thermal ensemble of a harmonically trapped 3D gas of HCBs in the atomic limit [@suppmat]. Chemical potential and temperature were calibrated to reproduce the experimental atom number and an average entropy per particle of $1.2\,k_B$ [@Trotzky2009], thereby leaving no free parameters for the simulations. To test the consistency of the approach, we compare the average density $n_j$ during the expansion with the [*in-situ*]{} images in Figs. \[fig3\](b)-\[fig3\](e) and find a good agreement. In addition, the time evolution of the half width at half maximum of the density distribution, shown in Fig. S4 of [@suppmat], is consistent with the ballistic dynamics as previously measured in the same experimental set-up [@ronzheimer13]. Since the momentum distribution is experimentally measured at a finite $t_{\rm TOF}$, we explicitly calculate the TOF density distributions without employing the far-field approximation [@gerbier08; @suppmat] and compare the results to the experimental data in Figs. \[fig3\](f)-\[fig3\](i). 
Remarkably, the positions and the structure of the peaks agree very well between experiment and theory, thereby supporting our two main results: The central peaks indeed correspond to a large occupation of quasimomenta close to $\hbar q=\pm (\pi/2)(\hbar/d)$, i.e., to a bunching of particles around the fastest group velocities in the middle of the single-particle spectrum. In addition, the fine structure visible for intermediate expansion times \[cf. Figs. \[fig3\](g)-\[fig3\](h)\], which becomes more apparent in the experiment when comparing different $t_{\rm TOF}$ (cf. Fig. S2 in [@suppmat]), directly confirms the independent emergence of coherence in the left- and right-moving portions of the cloud. Compared to the ideal case, the presence of holes in the initial state causes a reduced visibility of the TOF density distributions [@suppmat]. Moreover, the finite initial entropy gives rise to a crossover of one-particle correlations from a power-law decay at short distances to a more rapid decay at long distances [@suppmat], similar to the effect of a finite temperature [@rigol05b] in equilibrium. We attribute the discrepancies between experimental and numerical results at short times, see Fig. \[fig3\](f), to the weak residual $k=0$ coherence in the initial state. Additional discrepancies may arise because of a small admixture ($\lesssim 5\%$) of doublons in the initial state [@ronzheimer13; @Xia2015] as well as small residual potentials, yet we conclude that these play a minor role [@suppmat]. Compared to the previously studied time dependence of density distributions and expansion velocities [@ronzheimer13], the momentum distribution is more sensitive to such imperfections [@suppmat]. Performing a similar experiment with a single 1D system would allow the predicted scaling of $t_{\rm E}^*$ and the maximum peak height with atom number [@rigol04] to be experimentally tested. [*Conclusions and outlook.*]{} We have reported experimental evidence for a far-from-equilibrium quasicondensation at finite momenta of expanding 1D HCBs in an optical lattice. The expanding particles bunch at quasimomenta close to $\pm (\pi/2)(\hbar/d)$ and the analysis of TOF distributions demonstrates the existence of two independent sources of coherence. Whether such dynamical condensation persists in higher dimensional systems constitutes an open problem, given that the existing theoretical results are based on exact diagonalization of small systems [@hen10] or time-dependent Gutzwiller simulations [@jreissaty11]. Both future experiments or advanced numerical methods (see, e.g., [@carleo12; @zaletel15]) could help clarify this question. More generally, our results raise the question of whether this type of spontaneously emerging coherence is limited to integrable systems and whether genuinely far-from-equilibrium order can also occur in generic closed many-body systems. [*Acknowledgments.*]{} We thank F. Essler, A. Mitra and M. Rigol for helpful discussions. We acknowledge support from the Deutsche Forschungsgemeinschaft (DFG) through FOR 801, and the EU (AQuS, UQUAM). This work was supported in part by National Science Foundation Grant No. PHYS-1066293 and the hospitality of the Aspen Center for Physics. L.V. was supported by the Alexander-von-Humboldt foundation. 
\[pagesupp\] Experimental details ==================== Increasing the visibility of higher order peaks {#sec:flash} ----------------------------------------------- By switching off the lattice instantaneously at the beginning of the time-of-flight (TOF) sequence, each quasimomentum $\hbar q$ is projected onto a superposition of states with free-space momentum $\hbar k$ (with $k=q$) and higher-order (Bragg) contributions with momenta $\hbar (k \pm n 2\pi / d)$, $n \in \mathbb{N}$. In order to increase the visibility of the higher-order peaks, we increase the intensity of the $x$-lattice for $5\,\mu s$ to approximately $33E_r$ immediately before releasing the atoms from the lattice. This time is long enough to lead to a narrowing of the on-site (Wannier) wavefunctions, resulting in a broadening of their momentum distributions, while being short enough to have no significant influence on the quasimomentum distribution. This leads to a broadening of the envelope of the momentum distribution observed after time of flight and thereby enhances the visibility of the higher order (Bragg) peaks, as shown in Fig. \[fig\_LatticeFlashing\]. ![ [*Experimental TOF density distributions.*]{} Comparison between a measurement (a) without additional lattice pulse and (b) with additional pulse before the TOF sequence (in both cases $t_{\rm E}=22\,\tau$ and $t_{\mathrm{TOF}}=12\,\mathrm{ms}$). Vertical lines are guides to the eye. Density distributions are averaged over 10 experimental realizations. []{data-label="fig_LatticeFlashing"}](figpdf_supmat/FigExpSuppMat_1.pdf){width="\columnwidth"} Analysis of observed peak positions during time of flight --------------------------------------------------------- The expected signatures of the quasicondensation at finite quasimomenta close to $\pm (\pi/2)(\hbar/d)$ are distinct peaks in the density distribution after the TOF sequence. Since the quasicondensates form independently in the left- and right-moving portions of the cloud during the sudden expansion, the peaks observed in the TOF distribution can be interpreted as the sum of two independent contributions originating from different points in space denoted by $d_{\rm L}^0$ and $d_{\rm R}^0$, respectively. For specific TOF times, different momenta stemming from different positions in the cloud can become superimposed and appear as one larger peak in the measured TOF density distribution. To demonstrate this effect, we extract the peak positions for various different TOF times $t_{\mathrm{TOF}}$ with a simple fit function consisting of six Gaussian peaks on top of a broad background. The six small Gaussian peaks are grouped into two groups corresponding to the left- and right-moving portions of the cloud. Each group contains one peak at position $d_{\rm L/R}$ that corresponds to $d_{\rm L/R}^0+t_{\mathrm{TOF}} \cdot \hbar k_{\rm L/R}/m$, respectively, and the additional higher order peaks at positions $d_{\rm L/R}\pm d_{2\pi}$, where $d_{2\pi} = t_{\mathrm{TOF}} \cdot 2\pi \hbar/(d m) $. In order to further reduce the number of free fitting parameters, we assume all six Gaussian peaks to have the same width $w_G$, but allow for individual amplitudes $A_{\rm L/R}^i$. 
The complete fit function $F(x,\dots)$ is given by $$\begin{split} F(x,\dots)= \; & P_{b}(x,A_{b},d_{b},w_{b})\\ +\,\, &P_{\rm L}(x,A_{\rm L}^{-1},A_{\rm L}^{0},A_{\rm L}^{1},d_{\rm L},w_G)\\ +\,\, &P_{\rm R}(x,A_{\rm R}^{-1},A_{\rm R}^{0},A_{\rm R}^{1},d_{\rm R},w_G) \end{split}$$ with the broad background peak $$P_{b}(x,\dots) = A_{b} e^{\left(-\frac{(x-d_{b})^2}{2 w_{b}^2}\right)}$$ and the groups of small peaks $$\begin{split} P_{\rm L/R}(x,\dots) = \, &A_{\rm L/R}^{-1} e^{\left(-\frac{(x-d_{\rm L/R}-d_{2\pi})^2}{2 w_G^2}\right)}\\ +\, &A_{\rm L/R}^{0} e^{\left(-\frac{(x-d_{\rm L/R})^2}{2 w_G^2}\right)}\\ +\, &A_{\rm L/R}^{1} e^{\left(-\frac{(x-d_{\rm L/R}+d_{2\pi})^2}{2 w_G^2}\right)}. \end{split}\,$$ Due to the additional sudden switching off of the magnetic field used to address the Feshbach resonance and the resulting eddy currents, there are additional position-dependent forces acting on the atoms during the initial stage of the TOF sequence. In order to correctly map the observed distances after time of flight onto the initial momenta, we apply a correction that is determined by monitoring the evolution of the TOF peaks of an equilibrium $q=0$ Bose-Einstein condensate released from the lattice, where the momentum composition is known to consist of peaks at $\pm n 2\pi (\hbar /d)$ with $n\in \mathbb{N}$. Figures \[fig\_PeakPos\](a)-\[fig\_PeakPos\](d) show the density distributions after an expansion time $t_{\rm E} = 36\tau$ in the lattice and varying TOF times $t_{\rm TOF}$ together with the resulting fits. Even though the fits result in strongly varying peak amplitudes, they nonetheless serve to faithfully identify the peak positions. ![ [*Peak positions as a function of time-of-flight time*]{}. (a)-(d) Density distributions obtained for a fixed expansion time of $t_{\rm E} = 36\tau$ (blue solid lines). Black dashed lines are fits with a sum of six Gaussian peaks and a broad background, black solid lines show only the contributions of the six peaks. All measured density distributions are averaged over 9 to 10 experimental realizations. (e) Extracted peak positions for $t_{\rm E} = 36\tau$ (dots). Dashed lines are linear fits, solid black lines show the expected slope for peaks traveling with momentum $\pm (\pi/2)(\hbar/d) $. Error bars in the position are smaller than the data points. []{data-label="fig_PeakPos"}](figpdf_supmat/FigExpSuppMat_2.pdf){width="\columnwidth"} In Fig. \[fig\_PeakPos\](e), we show the extracted positions of the six peaks (red and blue circles) for a fixed expansion time $t_{\rm E} = 36\tau$. The red circles are associated with the component at negative momentum $\hbar k_{\rm L}$ and its corresponding higher order peaks at $\hbar (k_{\rm L}\pm 2\pi /d)$, the blue circles are associated with the positive momentum $\hbar k_{\rm R}$ and higher order peaks. The dashed lines are linear fits, their slopes represent the extracted velocities. For comparison, the solid black lines show the theoretically expected slope for momenta $\pm (\pi/2)(\hbar /d)$. The peaks do indeed propagate with momenta close to the ideal expectation; the small shift towards smaller momenta can be attributed to the finite interaction strength $U/J<\infty$ used in the experiment (see Sec. \[sec:dev\]). The extrapolation to $t_{\mathrm{TOF}} = 0$ shows that the two momentum groups indeed originate from two different points in space in the initial cloud, with the left-moving part starting further to the left and vice versa. At $t_{\mathrm{TOF}} = 6\,\mathrm{ms}$ in Fig. 
\[fig\_PeakPos\](e), we can also observe how multiple peaks can come to overlap during the TOF evolution, resulting in the observation of fewer but stronger peaks, explaining the number of peaks observed in, e.g., Fig. \[fig\_PeakPos\](a). Time-of-flight distribution without the far-field approximation =============================================================== After the lattice flashing, each site of the lattice can be described by the Wannier function $w_0(r) $, which we approximate by a Gaussian of width $\sigma$: $$w_0(r) = \frac{1}{\sqrt{\sigma \sqrt{\pi}}} e^{-r^2/(2\sigma^2)}.$$ As a consequence of the lattice flashing method described in Sec. \[sec:flash\], $\sigma \ll d$. After switching off the lattice, each Wannier function freely evolves in time. We calculate the free time evolution in momentum space, ${\rm FT}[w_0(r,t_{\rm TOF})] = {\rm FT}[w_0(r)] e^{-i E_k t_{\rm TOF}/\hbar}$, where FT denotes the Fourier transform and $E_k = (\hbar k)^2 /(2m)$ is the free-space single-particle dispersion. For the propagation in real space, we get $$w_0(r,t_{\rm TOF}) = \frac{1}{\sqrt{ ( \sigma+ i \frac{\hbar t_{\rm TOF}}{m \sigma } ) \sqrt{\pi} }} e^{-\frac{r^2}{2\sigma^2 + i \frac{2\hbar t_{\rm TOF}}{m}}}.$$ The total density profile $n_{\rm TOF}(r) \equiv n_{\rm TOF}(r,t_{\rm TOF}) = \langle \hat \psi^\dagger(r) \hat \psi(r) \rangle$ at a given $t_{\rm TOF}$ is a sum over contributions from all (occupied) lattice sites, $$n_{\rm TOF}(r) = \sum_{r_j, r_l} w_0^*(r-r_j,t_{\rm TOF}) w_0(r-r_l,t_{\rm TOF}) \langle \hat a_{r_j}^\dagger \hat a_{r_l} \rangle,$$ where the field operator $\hat \psi(r)$ was expanded in time-evolved Wannier functions. In the case of one-dimensional (1D) lattice dynamics studied here, off-diagonal correlators $\langle \hat a_{r_j}^\dagger \hat a_{r_l} \rangle$ are nonzero along a single direction only, hence we define $x \equiv r/d$ and $\mu \equiv r_\mu$. The density can be written in a compact form by introducing two dimensionless parameters $\alpha = 1/[(\sigma/d)^2 + (\hbar t_{\rm TOF} / (m\sigma d))^{2}]$ and $\beta = \alpha \hbar t_{\rm TOF} /(m \sigma^2)$, resulting in $$\begin{aligned} n_{\rm TOF}(x) &=& \frac{1}{d} \sqrt{\frac{\alpha}{\pi}} e^{-x^2 \alpha} \sum_{j,l} e^{x(l+j) \alpha} e^{-\frac{1}{2} (l^2+j^2) \alpha} \times \nonumber \\ & & \times \; \; e^{-i x (l-j) \beta} e^{i(l^2-j^2) \frac{\beta}{2}} \langle \hat a_j^\dagger \hat a_l \rangle.\end{aligned}$$ This generalizes the result of Gerbier [*et al.*]{} [@gerbier08], where only the phase factors have been taken into account. In our experiment $\sigma = 50$ nm, i.e., $\sigma/d \approx 0.13$. The central task is therefore to calculate the correlators $\langle \hat a_j^\dagger \hat a_l \rangle$, for which we provide details in the following Secs. \[secLnum\] and \[sec:corr\]. Details of the numerical simulations ==================================== Realistic modeling of the initial state {#secLnum} --------------------------------------- We model the experimental set-up by calculating the exact time evolution of many 1D systems of hard-core bosons (HCBs) [@rigol04; @vidmar13] that differ by particle numbers $N$ and their distribution in the initial state. The generic initial state within a single chain is a product state $$\label{i_state} |\varphi_0\rangle = \prod_{j \in {\cal J}} \hat a^{\dagger}_j |\emptyset\rangle,$$ where $\cal J$ does not necessarily represent a sequence of neighboring sites. 
The initial states are drawn from a thermal distribution of HCBs in the atomic limit ($J=0$) of a 3D cloud and the final result of the simulations is obtained by summing up the data for initial states with different $\cal J$. The probability to find a particle at a given lattice site is given by the following partition function, $$\label{partition} Z = \sum_{n=0,1} e^{-\beta (n V(x,y,z) - n\mu_0)},$$ where $\mu_0$ represents the chemical potential, $\beta=1/(k_{\rm B}T)$ is the inverse temperature and the harmonic confinement is given by $$V(x,y,z)= \frac{m\omega_{x,y}^2}{2} \left[x^2+y^2+(\Lambda z)^2\right],$$ with $\omega_{x,y}\equiv \omega = 2 \pi \times 60$ Hz and the aspect ratio $\Lambda=\omega_z/\omega_{x,y}=2.63$. The 1D systems are aligned along the $x$-direction and the results are integrated over chains at different positions $(y,z)$. Due to the harmonic confinement, chains at different transverse positions can have vastly different atom numbers in the range $N\in [0,80]$. Clearly, for $T \to 0$, the initial state in the 3D cloud consists of an ellipsoid of singly-occupied sites without any holes. A finite-entropy initial state, on the other hand, contains a finite density of holes that increases when moving outwards. Equation (\[partition\]) leads to the global density $$n(x,y,z) = \frac{1}{e^{\beta(V(x,y,z)-\mu_0)}+1}.$$ ![ [*Distribution of probabilities $P(N)$ to find particles in chains with particle number $N$.*]{} We compare the zero-entropy ($S=0$) and the finite-entropy case ($S/N_{\rm 3D} =1.2 \; k_B$). We use a binning with $\Delta N = 10$. []{data-label="fig_ParticleDistribution"}](figpdf_supmat/Distribution_Particles.pdf){width="0.90\columnwidth"} We determine $\beta$ and $\mu_0$ by matching the total particle number $N_{\rm 3D}$ of the 3D cloud and the average entropy per particle $S/N_{\rm 3D}$ to experimental conditions. The average entropy per particle is defined as $$S/N_{\rm 3D} = \sum_{i=(x,y,z)} S_i/N_{\rm 3D},$$ where $$S_i = -k_B \left( n_i \log{n_i} + (1-n_i) \log{(1-n_i)} \right)$$ and $n_i$ represents the probability to find a particle at site $i$. In our experiment, we have $N_{\rm 3D} = \sum_{x,y,z} n(x,y,z) \approx 0.9 \times 10^5$ and we estimate $S/N_{\rm 3D}\approx 1.2 \; k_B$ [@Trotzky2009]. A typical configuration $\cal J$ for $N=50$ is shown in Fig. \[fig\_corr\](a2). Note that we also symmetrize each configuration by taking its mirror image (with respect to the central site of the originally occupied region). In Fig. \[fig\_ParticleDistribution\] we plot the distribution of probabilities $P(N)$ to find particles in chains with a given number of particles $N$. This information reveals the relative contribution of chains with a given particle number to the dynamics. The distribution for $S=0$ is shown with circles. In this case, the 3D state corresponds to a perfectly filled ellipsoid $R_0^2 = x^2 + y^2 + (\lambda z)^2$ with $R_0/d=38$ and, as a consequence, $P(N) \propto N^2$ for $N \lesssim 70$. As a general trend, the distribution shifts towards smaller $N$ with increasing entropy (squares in Fig. \[fig\_ParticleDistribution\]). In our calculations, we simulate the dynamics of representative chains with particle numbers $N \in \{ 1,10,20,30,40,50,60,70\}$ and the relative weights shown in Fig. \[fig\_ParticleDistribution\] for $S/N_{\rm 3D} = 1.2 \; k_B$. All together we average the data over 100 different initial configurations and we have verified that the sampling over this subset of all possible $N$ is sufficient. 
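For orientation, a minimal Python sketch of this sampling step is given below; it is purely illustrative, the trap energy is measured in units of $m\omega^2d^2/2$, and the values of $\beta$ and $\mu_0$ are placeholders rather than the values actually fitted to $N_{\rm 3D}\approx 0.9\times 10^5$ and $S/N_{\rm 3D}\approx 1.2\;k_B$.

```python
import numpy as np

LAMBDA = 2.63                      # trap aspect ratio omega_z / omega_{x,y}
L = 81                             # linear number of lattice sites kept in the sums
c = np.arange(L) - L // 2          # site coordinates in units of d
x, y, z = np.meshgrid(c, c, c, indexing="ij")
V = x**2 + y**2 + (LAMBDA * z)**2  # trap potential in units of m*omega^2*d^2/2

def density(beta, mu):
    """Atomic-limit occupation probability n_i = 1/(exp(beta*(V_i - mu)) + 1)."""
    return 1.0 / (np.exp(beta * (V - mu)) + 1.0)

def entropy_per_particle(beta, mu):
    n = np.clip(density(beta, mu), 1e-12, 1 - 1e-12)
    s = -(n * np.log(n) + (1 - n) * np.log(1 - n)).sum()   # total S in units of k_B
    return s / n.sum()

beta, mu = 0.005, 900.0            # placeholders; in practice tuned so that
                                   # N_3D ~ 0.9e5 and S/N_3D ~ 1.2 k_B
n = density(beta, mu)
print("N_3D =", n.sum(), "  S/N_3D =", entropy_per_particle(beta, mu))

# One random occupation configuration; the chain through the trap centre gives a
# set J of occupied sites, i.e. one realization of the product state above.
rng = np.random.default_rng(0)
occ = rng.random(n.shape) < n
chain_J = np.flatnonzero(occ[:, L // 2, L // 2])
```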
In all data for [*in-situ*]{} and TOF density distributions shown in the figures, we convolve the numerical data with a Gaussian filter of width $7d$ to account for the finite experimental imaging resolution. ![ [*Time dependence of the core radius $R_{\rm c}(t_{\rm E})$ at $S/N_{\rm 3D} = 1.2 \; k_B$.*]{} The solid line represents the fit function $R_c(t_E)= v_c t_{\rm E} +\gamma$ with $v_c= 2.06(d/\tau)$ and $\gamma=14.8$, obtained by fitting to the numerical data in the time interval $t_{\rm E}/\tau \in [30,50]$. []{data-label="fig_Rcore"}](figpdf_supmat/Rcore.pdf){width="0.90\columnwidth"} ![image](figpdf_supmat/FigCorr.pdf){width="1.99\columnwidth"} Figure \[fig\_Rcore\] shows that the core radius (i.e., the half width at half maximum) increases linearly in time according to $R_{\rm c}(t_{\rm E}) = v_{\rm c}t_{\rm E}+ {\rm \it const}$ with the core velocity very close to $v_{\rm c}=2 (d/\tau)$, in agreement with [@ronzheimer13]. This suggests that the presence of holes in the initial state does not affect the ballistic expansion of 1D HCBs. Correlation functions from the exact time evolution of a 3D cloud of HCBs {#sec:corr} ------------------------------------------------------------------------- We proceed with the analysis of the one-particle correlations $\langle \hat a^\dagger_j \hat a_{l}\rangle$ during the time evolution. We compare a representative chain at $S/N_{\rm 3D}=1.2 \; k_B$ to the ideal initial state, both with $N=50$ particles. The ideal initial state is (see the main text) $$\label{idealstate} |\psi_0\rangle = \prod_{j \in L_0} \hat a^{\dagger}_j |\emptyset\rangle,$$ where the length of the region $L_0$ equals $N$. The distribution of particles in $|\psi_0\rangle$ is displayed in Fig. \[fig\_corr\](a1), while the particle configuration of a typical chain at $S/N_{\rm 3D}=1.2 \; k_B$ is shown in Fig. \[fig\_corr\](a2). The modulus of the one-particle density matrix $|\langle \hat a^\dagger_j \hat a_{l}\rangle|$ resulting from $|\psi_0\rangle$ is shown in Fig. \[fig\_corr\](b1) for $t_{\rm E}=10 \tau$. The two squares visualize the two sources of coherence emerging in the left- and right-moving cloud [@rigol04], corresponding to the peaks in $n(q)$ at $\hbar q=-(\pi/2)(\hbar/d)$ and $\hbar q= (\pi/2)(\hbar/d)$ in Fig. \[fig2\](c), respectively. Within the squares, the correlations decay according to a power law (cf. Fig. \[fig2\](d) in the main text). The modulus of the one-particle density matrix $|\langle \hat a^\dagger_j \hat a_{l}\rangle|$ for the chain displayed in Fig. \[fig\_corr\](a2) is shown in Fig. \[fig\_corr\](b2) at $t_{\rm E}=14\tau$. As this example illustrates, the correlations in chains with holes show a less regular pattern, compared to the ideal situation. However, the calculated phase differences between neighboring sites exactly equal $\pm \pi/2$, identical to the ideal case shown in the inset of Fig. \[fig2\](d) of the Letter. In Fig. \[fig\_corr\](b3) we plot $|\langle \hat a^\dagger_j \hat a_{j+r}\rangle|$ as a function of $r$ at $j=68$ and the comparison to the function $f(r)=\alpha/r^{1/2}$ clearly shows that, taking holes in the initial state into account, the correlations decay much faster than in the ideal case for large distances. Therefore, even though signatures of the finite-momentum quasicondensation can still be detected in a system with a finite initial entropy, the correlations have become short ranged. 
This is equivalent to the behavior of equilibrium 1D Bose gases at any finite temperature [@giamarchibook], where temperature cuts off the ground-state power-law decay. The temperature, or the amount of entropy, thereby sets the crossover scale from power-law to exponentially decaying correlations. Other sources for a loss of coherence {#sec:dev} ------------------------------------- While the overall agreement between the experimental data and our numerical simulations is quite good (see Figs. \[fig3\](f)-\[fig3\](i) of the Letter), there are nevertheless small discrepancies. For instance, the experimental data exhibits broader peaks and a lower visibility even at the longest expansion times \[see Fig. 3(g) in the main text\]), which arise from additional sources of decoherence present in the experiment. We have numerically investigated the effect of the following deviations from ideal conditions: (i) the finite interaction strength $U/J=20<\infty$ used in the experiment, (ii) the presence of small residual potentials, (iii) the presence of a small admixture of doublons in the initial state, and (iv) the influence of coherence in the initial state, i.e., the presence of a maximum in $n(q)$ at $q=0$. Away from the HCB limit \[i.e., investigating the effects (i) and (iii)\], one needs to resort to time-dependent density matrix renormalization group (tDMRG) simulations [@vidal04; @daley04; @white04]. For the tDMRG method, a realistic modeling with particle numbers as large as in the experiment and sufficiently long times is currently unfeasible. ![ [*Quasimomentum distribution of HCBs and interacting bosons at $U/J=20$.*]{} We compare the dynamics at $t_{\rm E}=(N/4)\tau$ and $N=16$. Results for HCBs are shown with dashed lines in both panels. In (a), we show $n(q)$ for the expansion from the ground state (g.s.) at $U/J=20$ and the initial product state (p.s.) $|\psi_0\rangle$ from Eq. (\[idealstate\]). The solid and dashed vertical lines denote the estimate $q'=1.42/d$ and $1.47/d$, respectively, for the quasimomenta at the shifted maxima. In (b) we show $n(q)$ for the expansion from an initial product state with a single doublon and hole (dash-dotted line; averaged over 184 configurations) and with two doublons and two holes (solid line; averaged over 552 configurations). []{data-label="fig_finiteU"}](figpdf_supmat/fig_U20.pdf){width="0.90\columnwidth"} The dynamics of the quasimomentum distribution in the sudden expansion at $U/J<\infty$ starting from the correlated ground state at density one has been investigated in [@rodriguez06]. It has been observed that the position of the maxima moves to smaller values of $\hbar |q| < (\pi/2)(\hbar/d) $. As a heuristic estimate for the peak position [@rigol04] and to account for the conservation of total energy in this quantum quench, one can assume that all particles condense at the point in the single-particle dispersion corresponding to the average energy per particle. This results in the condition $E/N = -2J \cos{(q d)}$. In the hard-core case, the average energy per particle vanishes, and hence one obtains $\hbar q=\pm (\pi /2)(\hbar/d)$. For expansion from the correlated [*ground state*]{} at $U/J=20$, we express the ground-state energy $E_{\rm g.s.}=-4NJ^2/U$ by second-order perturbation theory around the $J/U=0$ limit, arriving at $\hbar q'=\arccos{(2J/U)}(\hbar/d)$. Our tDMRG results for $n(q)$ at $U/J=20$ are shown in Fig. 
\[fig\_finiteU\](a) and the estimate for the position of the peak $\hbar q'= \arccos{(2/20)}(\hbar/d) = 1.47(\hbar/d)$ is in good agreement with the numerical data [@rodriguez06]. In our experiment, we ideally start from the [*product state*]{} (\[idealstate\]) of one boson per site. The quasimomentum distribution for the latter initial state, shown in Fig. \[fig\_finiteU\](a), also exhibits a slight shift of the maxima to smaller absolute values of the peak position. One can understand this shift from the two-component picture discussed in Sec. IV of Ref. [@sorg14] as follows: We separate the gas into the ballistically expanding unbound particles and inert doublons. The doublons form on time scales for which the cloud has not yet considerably expanded into an empty lattice [@ronzheimer13]. Upon opening the trap, the doublons undergo the quantum distillation process [@hm09; @muth12; @bolech12; @Xia2015] that makes them accumulate in the center of the lattice, such that on the time scales probed by the experiment, they do not contribute to the quasicondensation (see the supplemental material of [@ronzheimer13]). The number of doublons $\braket{d} = 4NJ^2/[U(U-6J)]$ can be estimated from second-order perturbation theory (see Eq. (39) in [@sorg14]). The condition for the position of the dynamically formed quasicondensate can then be formulated as $E'/N' = -2J \cos{(q'd)}$, where $E'=-\braket{d}U$ and $N'=N-2\braket{d}$ are the energy and the number of ballistically expanding particles, respectively. For $U/J=20$, we find $\hbar q' = 1.42(\hbar/d)$, in very good agreement with the numerical data shown in Fig. \[fig\_finiteU\](a). The main effect of a finite $U/J=20$ is thus the slight shift of the peak positions in the dynamical quasicondensation of $n(q)$. On the other hand, the shape of the peaks does not change considerably. Small residual potentials also cause a time-dependent shift of the position of the maxima [@mandt13], but do not lead to a broadening of these peaks. Our numerical results for HCBs (not shown here) are consistent with these predictions. From our previous work [@ronzheimer13], we estimate that a small fraction of $\lesssim 5\%$ doublons is present in the initial state. Unfortunately, this regime of small doublon fractions can hardly be accessed with the tDMRG method as it would require large particle numbers, severely restricting the accessible time scales. We have performed tDMRG simulations with one or two doublons in the initial state with an overall particle number of $N=16$. This translates into a fraction of doublons $> 5\%$ and we observe that in these cases (after averaging over many initial configurations) the core expansion velocity $v_{\rm c}$ drops below the experimentally measured value, inconsistent with the results of Ref. [@ronzheimer13]. It is nonetheless obvious from our tDMRG simulations shown in Fig. \[fig\_finiteU\](b) that the presence of doublons in the initial state causes a significant reduction of the visibility of the finite-momentum peaks in $n(q)$. Thus deviations between our experimental data and the results of the numerical simulations for HCBs can be partially attributed to doublons. Finally, we discuss the influence of the existence of some short-range coherence in the system present at $t_{\rm E}=0$ on the expansion dynamics and the TOF distribution. Such $q=0$ short-range coherence is evident in the experimental data shown in Fig. 
\[fig3\](d) of the main text, and results in additional peaks in the TOF distribution beyond those expected from the emergent quasicondensates at finite momenta. This initial $q=0$ coherence, which stems from an imperfect state preparation, clearly decays rather fast and one can see from two examples that it is not detrimental to the formation of quasicondensates at finite momenta at sufficiently long expansion times. First, in the expansion from the correlated ground state of a Mott insulator at $U/J<\infty$, there is some short-range coherence present at $t_{\rm E}=0$ already. Nonetheless, the results of Ref. [@rodriguez06], where the expansion from the correlated Mott-insultating states was studied, clearly show that sharp peaks at finite momenta form dynamically in $n(q)$. Second, consider the sudden expansion of hard-core bosons from a box trap, yet with a density below unity in the initial state and from a correlated state such as the ground state in that box. For this case, we show the time evolution of the quasimomentum distribution $n(q)$ in Fig. \[fig:sm-new\]. As expected, there is a maximum in $n(q)$ at $q=0$ at $t_{\rm E}=0$, which quickly diminishes in favor of the quasicondensation peaks that emerge at $\hbar q=\pm \pi/2 (\hbar/d)$. The quasicondensation peaks ultimately sit at the same position as if the expansion had started from the state given in Eq. , but the distribution is not symmetric around $\hbar q=\pm \pi/2 (\hbar/d)$. Note that the reappearance of the peak at $q=0$ observed at the longest expansion time $t=36\tau$ is a precursor to the dynamical fermionization occurring at very long expansion times [@rigol05; @minguzzi05]. ![ [*Sudden expansion of HCBs from an initial state with an incommensurate density.*]{} We show $n(q)$ for the expansion from a box trap with $N=45$ particles and an initial density $n=0.9$, plotted for expansion times $t_{\rm E}/\tau=0,7,14,22,36$. []{data-label="fig:sm-new"}](figpdf_supmat/fig_hcb_n09.pdf){width="0.93\columnwidth"} Current-carrying states and power-laws -------------------------------------- Both our expanding bosons and the melting domain-walls realize current-carrying states: In the first case, we are dealing with particle currents, in the latter case, with spin currents. Based on the scaling solution for the domain-wall melting [@antal99], one can associate a current $j(\tilde x)$ and a density $n(\tilde x)$ to each point in space where $\tilde x$ is the rescaled coordinate (see the main text). In the limit of large systems, each region described by the rescaled coordinate corresponds to an extended region in terms of the original spatial coordinate $x$, suggesting that one can use the local density approximation to describe the system. In this picture, we seek homogeneous reference systems that have the same density $n=n(\tilde x)$ and the same current $j=j(\tilde x)$ as the expanding cloud at the rescaled position $\tilde x$. We then need two parameters to match these conditions. If we assume that the reference systems have periodic boundary conditions, then these two parameters are the chemical potential $\mu$, fixing the density, and the flux $\phi$ through the ring, fixing the current. By numerical comparison we find that the chemical potential is linear in $x$ while the flux $\phi$ is independent of $\tilde x$ and always given by $\pi/2$ (details will be published elsewhere). 
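A minimal numerical sketch of this local-density matching is given below; it is not taken from [@antal99], and it assumes that the flux enters as the centre of the locally occupied quasimomentum interval (a boosted Fermi sea). Under this assumption the centre is pinned at $\pi/2$ while the chemical potential, i.e. the width of the interval, varies with $\tilde x$, consistent with the statement above.

```python
import numpy as np

def boosted_fermi_sea(n, phi=np.pi / 2, J=1.0):
    """Homogeneous reference state filling quasimomenta in (phi - pi*n, phi + pi*n).
    Returns the density n and the particle current
    j = (1/2pi) * integral_{k_-}^{k_+} 2 J sin(k) dk   (units: hbar = lattice spacing = 1).
    """
    k_minus, k_plus = phi - np.pi * n, phi + np.pi * n
    j = (J / np.pi) * (np.cos(k_minus) - np.cos(k_plus))
    return n, j

# At the centre of the melting domain wall, n = 1/2 and phi = pi/2 give the
# maximal current 2J/pi; lower densities give smaller currents.
print(boosted_fermi_sea(0.5))    # -> (0.5, 0.6366...)
print(boosted_fermi_sea(0.25))   # -> (0.25, 0.4502...)
```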
The question of power-law correlations can thus be reformulated in two ways: First, do the (homogeneous) reference systems on which $\phi$ enforces a nonzero current have power-law correlations and second, does this also apply to inhomogeneous systems. For simple 1D spin systems that allow a mapping onto free fermions and that have conserved currents, the answer to both questions is yes, according to the analysis of [@antal98; @antal99]. The extension to interacting systems with or without exactly conserved currents or including integrability breaking terms will be addressed in a future publication. Examples for such systems are the spin-1/2 XXZ model or the 1D Bose-Hubbard model. Beyond specific spin-1/2 chain models, related questions pertaining to the existence of power-law correlations in excited states that carry a finite current have been addressed in various contexts, including highly excited states in integrable 1D systems [@pantil13; @fokkema14], quantum magnetism [@antal98; @antal97; @gobert05], hydrodynamics [@schmittmann95] and statistical physics [@spohn83; @katz83; @schmittmann95]. Thus our work connects a broad range of branches of theoretical physics.
--- abstract: 'We show that, for many Lie superalgebras admitting a compatible $\mathbb{Z}$-grading, the Kac induction functor gives rise to a bijection between simple supermodules over a Lie superalgebra and simple supermodules over the even part of this Lie superalgebra. This reduces the classification problem for the former to the one for the latter. Our result applies to all classical Lie superalgebras of type $I$, in particular, to the general linear Lie superalgebra ${\mathfrak{gl}}(m|n)$. In the latter case we also show that the rough structure of simple ${\mathfrak{gl}}(m|n)$-supermodules and that of Kac supermodules depend only on the annihilator of the ${\mathfrak}{gl}(m)\oplus {\mathfrak}{gl}(n)$-input and hence can be computed using the combinatorics of BGG category $\mathcal{O}$.' address: - - author: - 'Chih-Whi Chen' - Volodymyr Mazorchuk title: Simple supermodules over Lie superalgebras --- Introduction and description of the results {#sectintro} =========================================== Classification problems are central in representation theory. One of the basic classification problems is the problem of classification of all simple modules for a given algebra. For Lie algebras, this problem is rather difficult. For simple Lie algebras, some kind of solution (more precisely, a reduction theorem which reduces classification of simple modules to classification of equivalence classes of irreducible elements in a certain non-commutative principal ideal domain) exists only for the Lie algebra $\mathfrak{sl}(2)$, see Block’s paper [@Bl]. For a Lie superalgebra $\mathfrak{g}$, classification of simple $\mathfrak{g}$-supermodules is, naturally, at least as hard as classification of simple modules over the even Lie algebra part $\mathfrak{g}_{{{{\overline}0}}}$. In case $\mathfrak{g}_{{{{\overline}0}}}$ is isomorphic to $\mathfrak{sl}(2,\mathbb{C})$ or $\mathfrak{gl}(2,\mathbb{C})$, one could expect some analogue of Block’s classification theorem. For the Lie superalgebra $\mathfrak{osp}(1|2)$, such an analogue was obtained in [@BO] following Block’s approach and, for the Lie superalgebra $\mathfrak{q}(2)$ (and its various subquotients), such an analogue was obtained in [@Ma] using a reduction technique based on application of Harish-Chandra bimodules. There are also some special cases in which much stronger results are known. The most significant one is the equivalence of certain categories of strongly typical $\mathfrak{g}$-supermodules and certain categories of $\mathfrak{g}_{{{{\overline}0}}}$-modules established in [@Go02s] for basic classical Lie superalgebras. This equivalence automatically provides a bijection between isomorphism classes of simple objects in the categories in question and hence reduces the relevant part of the classification problem for $\mathfrak{g}$ to the corresponding problem for $\mathfrak{g}_{{{{\overline}0}}}$. There are also various constructions of certain classes of simple (non highest weight) modules over Lie superalgebras, see e.g. [@DMP; @GG; @FGG; @BCW; @WZZ; @BM; @CZ] and references therein. The main motivation for the present paper was to investigate in which generality one can obtain a complete reduction result which connects classification of simple $\mathfrak{g}$-supermodules and classification of simple $\mathfrak{g}_{{{{\overline}0}}}$-(super)modules. Our first main result is the following.
[**Theorem A.**]{} [*Let ${\mathfrak}g = {\mathfrak}g_{{{\overline}0}}\oplus {\mathfrak}g_{{{{\overline}1}}}$ be a Lie superalgebra with $\text{dim}({\mathfrak}g_{{{{\overline}1}}})< \infty$ admitting a compatible $\mathbb{Z}$-grading ${\mathfrak}g= {\mathfrak}g_{-1}\oplus {\mathfrak}g_0 \oplus {\mathfrak}g_{1}$. Let $K({}_-):\mathfrak{g}_{{{\overline}0}}\text{-}\mathrm{smod}\to \mathfrak{g}\text{-}\mathrm{smod}$ be Kac induction functor. Then, for any simple $\mathfrak{g}_{{{\overline}0}}$-supermodule $V$, the $\mathfrak{g}$-supermodule $K(V)$ has simple top, denoted $L(V)$, and the correspondence $V\mapsto L(V)$ gives rise to a bijection between the sets of isomorphism classes of simple $\mathfrak{g}_{{{\overline}0}}$- and $\mathfrak{g}$-supermodules.*]{} We also provide, in full generality, several criteria for simplicity of Kac modules, with arbitrary simple input, in terms of typicality of the involved central characters. In the case of the general linear Lie superalgebra $\mathfrak{gl}(m|n)$ we also study the rough structure of Kac modules with arbitrary simple input along with the $\mathfrak{g}_{{{\overline}0}}$-rough structure of simple $\mathfrak{g}$-supermodules. Our second main result is the following: [**Theorem B.**]{} [*For $\mathfrak{g}=\mathfrak{gl}(m|n)$, let $L$ and $L'$ be two simple $\mathfrak{g}$-supermodules such that there is a finite-dimensional supermodule $E$ with $E\otimes L\twoheadrightarrow L'$. Let $V$ and $V'$ be their $\mathfrak{g}_{{{\overline}0}}=\mathfrak{gl}(m)\oplus \mathfrak{gl}(n)$-correspondents. Then both multiplicities $[K(V):L']$ and $[\mathrm{Res}^{\mathfrak{g}}_{\mathfrak{g}_{{{\overline}0}}}(L):V']$ are well-defined and finite and can be computed using BGG category $\mathcal{O}$.* ]{} The paper is organized as follows: in Section \[sectPre\] we collected all necessary preliminaries. In Section \[s3\] we compare (induced) Kac modules with their coinduced counterparts. Section \[s4\] studies simple supermodules and, in particular, contains a proof of Theorem A. In this section one can also find detailed examples of all classical Lie superalgebras of type I and various criteria of simplicity for Kac modules. Section \[s5\] is devoted to the study of rough structure and, in particular, establishes Theorem B. The first author is supported by Vergstiftelsen. The second author is supported by the Swedish Research Council and G[ö]{}ran Gustafsson Stiftelser. Preliminaries {#sectPre} ============= {#sectPre.1} Throughout this paper, we let ${\mathfrak}g = {\mathfrak}g_{{{\overline}0}}\oplus {\mathfrak}g_{{{{\overline}1}}}$ be a Lie superalgebra with $\text{dim}( {\mathfrak}g_{{{{\overline}1}}})< \infty$ and assume that ${\mathfrak}g$ has a compatible $\mathbb{Z}$-grading ${\mathfrak}g= {\mathfrak}g_{-1}\oplus {\mathfrak}g_0 \oplus {\mathfrak}g_{1}$. Namely, ${\mathfrak}g_0 ={\mathfrak}g_{{{\overline}0}}$ and ${\mathfrak}g_{{{\overline}1}}= {\mathfrak}g_1 \oplus {\mathfrak}g_{-1}$, where ${\mathfrak}g_{\pm 1}$ are ${\mathfrak}g_{{{\overline}0}}$-submodules of ${\mathfrak}g_{{{\overline}1}}$ with $[{\mathfrak}g_{1},{\mathfrak}g_{ 1}] =[{\mathfrak}g_{-1},{\mathfrak}g_{-1}] =0$. We set ${\mathfrak}g_{\geq 0}: = {\mathfrak}g_0 \oplus {\mathfrak}g_1$ and ${\mathfrak}g_{\leq 0}: = {\mathfrak}g_0 \oplus {\mathfrak}g_{-1}$. {#section} For a given vector superspace $V = V_{{{\overline}0}}\oplus V_{{{\overline}1}}$ and a homogeneous element $v\in V$, we denote the parity of $v$ by ${\overline}v$. 
We recall the parity reversing functor $\Pi$ defined on the category of vector superspaces as follows: $$(\Pi V)_{{{\overline}0}}=V_{{{\overline}1}},~(\Pi V)_{{{\overline}1}}=V_{{{\overline}0}}.$$ Throughout the present paper, all homomorphisms in the category of modules over Lie superalgebras are supposed to be homogeneous of degree zero. Therefore, a module $M$ over a Lie superalgebra is not necessarily isomorphic to $\Pi M$. {#section-1} For a given Lie (super)algebra ${\mathfrak}L$, we denote the universal enveloping algebra of ${\mathfrak}L$ by $U({\mathfrak}L)$. We let $U = U({\mathfrak}g)$ and $U_{{{{\overline}0}}} =U({\mathfrak}g_{{{\overline}0}})$. Observe that $U$ is a finite extension of the ring $U_{{{{\overline}0}}}$ with basis $\Lambda({\mathfrak}g_{\bar{1}})$, the exterior algebra of the vector space ${\mathfrak}g_{\bar{1}}$. Let $Z({\mathfrak}g)$ and $Z({\mathfrak}g_{{{\overline}0}})$ denote the centers of $U$ and $U_{{{\overline}0}}$, respectively. Also, we denote the center of ${\mathfrak}g_{{{\overline}0}}$ by ${\mathfrak}z({\mathfrak}g_{{{\overline}0}})$. For a given ${\mathfrak}g$- (resp. ${\mathfrak}g_{{{\overline}0}}$-) central character $\chi$ and a ${\mathfrak}g$- (resp. ${\mathfrak}g_{{{\overline}0}}$-) module $M$, we set $$M_{\chi} : = \{m\in M \|~ (z-\chi(z))^rm =0, \text{ for all } z\in Z({\mathfrak}g) \,\,(\text{resp. } z\in Z({\mathfrak}g_{{{\overline}0}}))\text{ and all }r\gg 0 \}.$$ Consider the category ${\mathfrak}g$-smod $= U$-smod of finitely generated (left) $U$-supermodules, the category ${\mathfrak}g_{{{{\overline}0}}}$-smod $= U_{{{\overline}0}}$-smod of finitely generated (left) $U_{{{\overline}0}}$-supermodules, and the category $U$-mod-$U$ of finitely generated $U$-$U$-bimodules. As ${\mathfrak}g_{{{{\overline}0}}}$ is even, ${\mathfrak}g_{{{{\overline}0}}}\text{-smod}$ is just a direct sum of two copies of ${\mathfrak}g_{{{{\overline}0}}}\text{-mod}$, the category of finitely generated (left) $U_{{{\overline}0}}$-modules. We have the exact restriction, induction and coinduction functors $$\text{Res}_{{\mathfrak}g_{{{\overline}0}}}^{{\mathfrak}g}:{\mathfrak}g \text{-smod}\to {\mathfrak}g_{{{{\overline}0}}}\text{-smod}\quad\text{ and }\quad \text{Ind}_{{\mathfrak}g_{{{\overline}0}}}^{{\mathfrak}g},~\text{Coind}_{{\mathfrak}g_{{{\overline}0}}}^{{\mathfrak}g}: {\mathfrak}g_{{{{\overline}0}}}\text{-smod}\rightarrow {\mathfrak}g \text{-smod}.$$ By [@BF Theorem 2.2] (also see [@Go]), the functors $\text{Ind}_{{\mathfrak}g_{{{\overline}0}}}^{{\mathfrak}g}$ and $\text{Coind}^{{\mathfrak}g}_{{\mathfrak}g_{{{\overline}0}}}$ are isomorphic up to the equivalence given by tensoring with the one-dimensional ${\mathfrak}g_{{{\overline}0}}$-module on the top degree subspace of $U({\mathfrak}g_{{{\overline}1}}) = \Lambda {\mathfrak}g_{{{{\overline}1}}}$. For a ${\mathfrak}g$-central character $\chi$, we denote by ${\mathfrak}g\text{-smod}_{\chi}$ the full subcategory of ${\mathfrak}g$-smod consisting of all ${\mathfrak}g$-supermodules annihilated by some power of $\chi$. Similarly, for a ${\mathfrak}g_{{{{\overline}0}}}$-central character $\chi$, we denote by ${\mathfrak}g_{{{{\overline}0}}}\text{-smod}_{\chi}$ the full subcategory of ${\mathfrak}g_{{{{\overline}0}}}$-smod consisting of all ${\mathfrak}g_{{{{\overline}0}}}$-supermodules annihilated by some power of $\chi$.
Induced modules {#sectInd} --------------- For a given ${\mathfrak}g_{{{\overline}0}}$-supermodule $V$, we may extend $V$ trivially to a ${\mathfrak}g_{{{\overline}0}}\oplus {\mathfrak}g_{1}$-supermodule and define the [*Kac module*]{} of $V$ as follows: $$K(V) := \text{Ind}_{{\mathfrak}g_{\geq 0}}^{\mathfrak{g}}(V).$$ This defines an exact functor $K(\cdot): {\mathfrak}g_{{{\overline}0}}\text{-smod}\rightarrow {\mathfrak}g\text{-smod}$ which we call the [*Kac functor*]{}. For a given $M\in {\mathfrak}g$-smod, we have the usual adjunction $$\label{eqkac1} \text{Hom}_{{\mathfrak}g}(K(V), M) = \text{Hom}_{{\mathfrak}g_{{{\overline}0}}}(V, M^{{\mathfrak}g_1}),$$ where $M^{{\mathfrak}g_1}:=\{m\in M\| ~ {\mathfrak}g_1\cdot m =0\}.$ Also, for a given ${\mathfrak}g_{{{\overline}0}}$-supermodule $V$, we define the [*opposite Kac module*]{} $K'(V)$ of $V$, see e.g. [@Ge98 Section 3.3], which is given as follows: $$\begin{aligned} \label{DefoppoKac} &K'(V): = \text{Ind}_{{\mathfrak}g_{\leq 0}}^{{\mathfrak}g}(V),\end{aligned}$$ where ${{\mathfrak}g_{-1}}V$ is defined to be zero. Just like in the previous paragraph, this defines an exact functor $K'(\cdot): {\mathfrak}g_{{{\overline}0}}\text{-smod}\rightarrow {\mathfrak}g\text{-smod}$ and we have a similar adjunction $$\label{eqkac2} \text{Hom}_{{\mathfrak}g}(K'(V), M) = \text{Hom}_{{\mathfrak}g_{{{\overline}0}}}(V, M^{{\mathfrak}g_{-1}}),$$ where $M^{{\mathfrak}g_{-1}}:=\{m\in M\| ~ {\mathfrak}g_{-1}\cdot m =0\}.$ We may observe that $K(V) \cong \Lambda({\mathfrak}g_{-1}) \otimes V$ and $K'(V) \cong \Lambda({\mathfrak}g_{1}) \otimes V$ as vector spaces. We set $$\Lambda^{\text{max}}({\mathfrak}g_{-1}) :=\Lambda^{\text{dim}{\mathfrak}g_{-1}}({\mathfrak}g_{-1}) \quad\text{ and }\quad \Lambda^{\text{max}}({\mathfrak}g_{1}) :=\Lambda^{\text{dim}{\mathfrak}g_{1}}({\mathfrak}g_{1}).$$ Coinduced modules {#sectCoind} ----------------- For a given ${\mathfrak}g_{{{\overline}0}}$-supermodule $V$, we may extend $V$ trivially to a ${\mathfrak}g_{{{\overline}0}}\oplus {\mathfrak}g_{1}$-supermodule and define the [*super*]{} coinduced module $\text{{{$\text{Coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$}$(V)$}}$ (cf. [@Ge98] and [@Sc06 Chapter 4, Section 2]) of $V$ as follows: $$\begin{aligned} &\{ f\in \text{Hom}_{\mathbb{C}}(U(\mathfrak{g}),V) |~f(pu) =(-1)^{\overline{p}\overline{f}} pf(u), \text{ for all homogeneous }p\in {\mathfrak}{g}_{\geq 0}, u\in U({\mathfrak}{g})\}, \end{aligned}$$ with the action $(xf)(u):=(-1)^{\overline{x}(\overline{u}+\overline{f})}f(ux)$, for all homogeneous $x\in \mathfrak{g}$, $f\in \text{{{$\text{Coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$}$(V)$}}$ and $u\in U({\mathfrak}{g})$. In particular, we may observe that $\text{{{$\text{Coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$}$(V)$}}\cong \text{Hom}_{\mathbb C}(\Lambda ({\mathfrak}g_{-1}), V)$ as vector spaces. The following adjunction is proved in [@Sc06 Chapter 4, Section 2, Proposition 3].
\[supercoind\] There are natural isomorphisms $$\emph{Hom}_{{\mathfrak}g_{\geq 0}}(\emph{Res}_{{\mathfrak}g_{\geq 0}}^{{\mathfrak}g}(V),W) \cong \emph{Hom}_{{\mathfrak}g}(V,\emph{{{$\text{Coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$}$(W)$}})$$ given by $$\begin{aligned} & \emph{Hom}_{{\mathfrak}g_{\geq 0}}(\emph{Res}_{{\mathfrak}g_{\geq 0}}^{{\mathfrak}g}(V),W) \ni \phi \mapsto \hat\phi \in \emph{Hom}_{{\mathfrak}g}(V,\emph{{{$\text{Coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$}$(W)$}}),\\ & \emph{Hom}_{{\mathfrak}g_{\geq 0}}(\emph{Res}_{{\mathfrak}g_{\geq 0}}^{{\mathfrak}g}(V),W) \ni \tilde\psi \mapsfrom \psi \in \emph{Hom}_{{\mathfrak}g}(V,\emph{{{$\text{Coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$}$(W)$}}), \end{aligned}$$ where $\hat\phi(v)(y): = (-1)^{{\overline}y{\overline}v}\phi(yv),$ and $\tilde\psi(v): = \psi(v)(1),$ for homogeneous $y\in U$ and $v\in V$. Also we define the [*usual*]{} coinduced module $\text{{{$\text{coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$}$(V)$}}$ (cf. [@Go]) of $V$ as the following ${\mathfrak}g$-supermodule: $$\begin{aligned} &\{ f\in \text{Hom}_{\mathbb{C}}(U(\mathfrak{g}),N) |~f(pu) = pf(u), \text{ for all homogeneous }p\in {\mathfrak}{g}_{\geq 0}, u\in U({\mathfrak}{g})\}, \end{aligned}$$ with the action $(xf)(u):=f(ux)$, for all $x\in \mathfrak{g}$, $f\in \text{{{$\text{coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$}$V$}}$ and $u\in U({\mathfrak}{g})$. Also, we have the following adjunction between the restriction functor and usual coinduction functor. \[usualcoind\] There are natural isomorphisms $$\emph{Hom}_{{\mathfrak}g_{\geq 0}}(\emph{Res}_{{\mathfrak}g_{\geq 0}}^{{\mathfrak}g}(V),W) = \emph{Hom}_{{\mathfrak}g}(V,\emph{{{$\text{coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$}$(W)$}})$$ given by $$\begin{aligned} & \emph{Hom}_{{\mathfrak}g_{\geq 0}}(\emph{Res}_{{\mathfrak}g_{\geq 0}}^{{\mathfrak}g}(V),W) \ni \phi \mapsto \hat\phi \in \emph{Hom}_{{\mathfrak}g}(V,\emph{{{$\text{coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$}$(W)$}}),\\ & \emph{Hom}_{{\mathfrak}g_{\geq 0}}(\emph{Res}_{{\mathfrak}g_{\geq 0}}^{{\mathfrak}g}(V),W) \ni \tilde\psi \leftarrow \psi \in \emph{Hom}_{{\mathfrak}g}(V,\emph{{{$\text{coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$}$(W)$}}),\end{aligned}$$ where $\hat\phi(v)(y): = \phi(yv),$ and $\tilde\psi(v): = \psi(v)(1),$ for $y\in U$ and $v\in V$. This follows from the usual adjunction between restriction and coinduction. Since both super coinduction functor [$\text{Coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$]{}$(\bullet)$ and usual coinduction functor $\text{coind}_{{\mathfrak}g_{\geq} 0}^{{\mathfrak}g}(\bullet)$ are right adjoint to the restriction functor by Lemma \[supercoind\] and Lemma \[usualcoind\], it follows that [$\text{Coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$]{}$(\bullet) \cong \text{coind}_{{\mathfrak}g_{\geq 0}}^{{\mathfrak}g}(\bullet)$. The following lemma gives an explicit ${\mathfrak}g$-isomorphism. \[lem::CoindareIso\] There is a natural isomorphism [*[$\text{Coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$]{}$(V)$*]{} $\cong$ [*[$\text{coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$]{}$(V)$*]{}. 
Define a linear isomorphism $(\bullet)^{\dag}: \text{{{$\text{Coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$}$(V)$} $\rightarrow $ {{$\text{coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$}$(V)$}}$ via $$f^{\dag}(y): = (-1)^{{\overline}f \cdot {\overline}y}f(y),$$ for all homogeneous elements $f \in \text{{{$\text{Coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$}$(V)$}}$ and $y \in U$ (also see, e.g., [@ChWa12 Definition 1.1]).  For a given $f \in \text{{{$\text{Coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$}$(V)$}}$, we check that $f^{\dag}\in$ [[$\text{coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$]{}(V)]{}: $$\begin{aligned} &f^{\dag}(yx) = (-1)^{{\overline}f({\overline}y+{\overline}x)}f(yx) = (-1)^{{\overline}f({\overline}x)}yf(x) =y f^{\dag}(x),\end{aligned}$$ for homogeneous $y\in U({\mathfrak}g_{\geq 0})$ and $x \in U$. Therefore $(\bullet)^{\dag}$ is an even linear isomorphism between superspaces. We now show that $(\bullet)^{\dag}$ intertwines the ${\mathfrak}g$-actions. Let $f\in \text{{{$\text{Coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$}$(V)$}}$ and $x\in {\mathfrak}g$ be homogeneous elements. We may note that the parity of $xf$ is ${\overline}x + {\overline}f$. Also, for a given homogeneous element $y\in U$, we have $$\begin{aligned} (xf)^{\dag}(y) & =(-1)^{({\overline}x+{\overline}f){\overline}y} (xf)(y) \\ & =(-1)^{({\overline}x+{\overline}f){\overline}y}(-1)^{({\overline}f+{\overline}y){\overline}x}f(yx) \\ & =(-1)^{({\overline}x+{\overline}y){\overline}f}f(yx) \\ & =f^{\dag}(yx) = (xf^{\dag})(y). \end{aligned}$$ Consequently, we have $(xf)^{\dag} = xf^{\dag}$, for all $x\in {\mathfrak}g$. Therefore $(\bullet)^{\dag}$ is an even isomorphism of ${\mathfrak}g$-supermodules. The naturality of $(\bullet)^{\dag}$ follows directly from the definitions. We note that Lemma \[lem::CoindareIso\] can be used to match adjunctions in Lemma \[supercoind\] and Lemma \[usualcoind\]. In a similar way, one defines the [*opposite*]{} coinduced modules [$\text{Coind}_{{\mathfrak}{g}_{\leq 0}}^{{\mathfrak}{g}}$]{}$(V)$, for a simple ${\mathfrak}g_{{{\overline}0}}$-supermodule $V$. Induced vs coinduced modules {#s3} ============================ In this section we study, in detail, induced modules and coinduced modules and relations between them. Kac modules and opposite Kac modules ------------------------------------ \[newl1\] Let $V$ be any non-zero ${\mathfrak}g_0$-supermodule. Then every non-zero $\mathfrak{g}_{-1}$-submodule of $K(V)$ has a non-zero intersection with $\Lambda^{\emph{max}}({\mathfrak}g_{-1})\otimes V$, in particular, $$(K(V))^{\mathfrak{g}_{-1}}=\Lambda^{\emph{max}}({\mathfrak}g_{-1})\otimes V.$$ We first fix a basis $B$ for ${\mathfrak}g_{-1}$. Recall that $K(V) \cong \Lambda({\mathfrak}g_{-1}) \otimes V$ as a vector space. For a given $0\leq j\leq \text{dim}{\mathfrak}g_{-1}$, let $\{x_1,\ldots ,x_{\ell}\}$ be a basis of $\Lambda^{j}({\mathfrak}g_{-1})$ such that each $x_i$ is a monomial in $B$. Then, for each $x_i$, there is an element $y_k \in \Lambda({\mathfrak}g_{-1})$ such that $y_k\cdot x_i \in \delta_{ik}(\Lambda^{\text{max}}({\mathfrak}g_{-1})\backslash \{0\})$. From this it follows that, for a given non-zero element $x\in K(V)$, we have $\Lambda({\mathfrak}g_{-1})x \cap \big(\Lambda^{\text{max}}({\mathfrak}g_{-1})\otimes V\big) \neq 0$. The claim follows. \[Kacsimplesoc\] Let $V$ be a simple ${\mathfrak}g_0$-supermodule. 
Then every non-zero ${\mathfrak}g$-submodule of $K(V)$ contains $\Lambda^{\emph{max}}({\mathfrak}g_{-1})\otimes V$, in particular, $$\emph{Soc}(K(V)) = U \cdot(\Lambda^{\emph{max}}({\mathfrak}g_{-1})\otimes V)$$ and, moreover, $\emph{Soc}(K(V))$ is a simple module. Similarly, $K'(V)$ has simple socle. In particular, both $K(V)$ and $K'(V)$ are indecomposable. Since $\Lambda^{\text{max}}({\mathfrak}g_{-1})\otimes \Lambda^{\text{max}}({\mathfrak}g_{-1}^*)$ is isomorphic to the trivial ${\mathfrak}g_0$-supermodule, we obtain that $\Lambda^{\text{max}}({\mathfrak}g_{-1})\otimes V$ is a simple ${\mathfrak}g_{{{\overline}0}}$-supermodule. Now the claim follows directly from Lemma \[newl1\]. The observation of Lemma \[Kacsimplesoc\] that Kac modules have simple socles is interesting and slightly unexpected. It would be natural to expect that Kac modules, being [*induced*]{}, have simple tops. The latter statement, however, requires much more effort and we refer the reader to Theorem \[MainThm1\]. \[InvInKac\] By Lemma \[newl1\], for a simple ${\mathfrak}g_0$-supermodule $V$, we have $$K(V)^{{\mathfrak}g_{-1}} = \Lambda^{\text{max}}({\mathfrak}g_{-1})\otimes V$$ and, similarly, that $K'(V)^{{\mathfrak}g_{1}} = \Lambda^{\text{max}}({\mathfrak}g_1 )\otimes V$. Therefore we have $$\begin{aligned} &(\text{Soc}(K(V)))^{{\mathfrak}g_{-1}} = \Lambda^{\text{max}}({\mathfrak}g_{-1}) \otimes V, \label{eqn1} \\ &(\text{Soc}(K'(V)))^{{\mathfrak}g_{1}} = \Lambda^{\text{max}}({\mathfrak}g_{1}) \otimes V.\label{eqn2}\end{aligned}$$ In particular, since $\Lambda^{\text{max}}({\mathfrak}g_i^*) \otimes \Lambda^{\text{max}}({\mathfrak}g_i)$ are trivial ${\mathfrak}g_0$-supermodules, for $i=\pm 1$, we have $$\text{Soc}(K(V)) \cong \text{Soc}(K(W)) \quad\text{ if and only if }\quad V\cong W$$ and $$\text{Soc}(K'(V)) \cong \text{Soc}(K'(W)) \quad\text{ if and only if }\quad V\cong W.$$ \[KacOppoKac\] Let $V$ and $W$ be simple ${\mathfrak}g_0$-supermodules. Then we have: 1. \[KacOppoKac.1\] Existence of a non-zero homomorphism from $K(V)$ to $K'(W)$ is equivalent to the condition $V \cong \Lambda^{\emph{max}}({\mathfrak}g_1) \otimes W$. 2. \[KacOppoKac.2\] Every non-zero ${\mathfrak}g$-homomorphism $f:K(V)\rightarrow K'(W)$ satisfies $f(K(V)) =\emph{Soc}(K'(W))$. 3. \[KacOppoKac.3\] If $K(V)\cong K'(W)$, then both $K(V)$ and $K'(W)$ are simple supermodules. Using the adjunction (\[eqkac1\]) and Remark \[InvInKac\], we have $$\begin{aligned} &\text{Hom}_{{\mathfrak}g}(K(V), K'(W)) = \text{Hom}_{{\mathfrak}g_{{{\overline}0}}}(V, K'(W)^{{\mathfrak}g_1}) = \text{Hom}_{{\mathfrak}g_{{{\overline}0}}}(V, \Lambda^{\text{max}}({\mathfrak}g_1) \otimes W).\end{aligned}$$ As $\Lambda^{\text{max}}({\mathfrak}g_1) \otimes W$ is a simple ${\mathfrak}g_{{{\overline}0}}$-supermodule, claim \[KacOppoKac.1\] follows. From claim \[KacOppoKac.1\] and an analogue of Lemma \[Kacsimplesoc\] for $K'(W)$ we also obtain claim \[KacOppoKac.2\]. Finally, claim \[KacOppoKac.3\] follows from claim \[KacOppoKac.2\]. We note that one can characterize all isomorphisms between Kac modules and opposite Kac modules using Corollary \[KacOppoKac\] and the criteria of simplicity of Kac modules given in Section \[Sect::CriforSimpleKac\]. Isomorphism between induced and coinduced modules ------------------------------------------------- The following theorem is an analog of [@Ge98 Proposition 2.1.1(ii)].
\[Thm::CoindisoInd\] For a given simple ${\mathfrak}g_0$-supermodule $V$, we have $$\begin{aligned} &\emph{Coind}_{{\mathfrak}g_{\geq 0}}^{{\mathfrak}g}(\Lambda^{\emph{max}}({\mathfrak}g_{-1})\otimes V)\cong K(V), \label{Eq::CoindisoInd1} \\ &\emph{Coind}_{{\mathfrak}g_{\geq 0}}^{{\mathfrak}g}(V)\cong K(\Lambda^{\emph{max}}({\mathfrak}g_{-1}^*) \otimes V), \label{Eq::CoindisoInd2}\end{aligned}$$ up to parity change. Similarly, we have $$\begin{aligned} &\emph{Coind}_{{\mathfrak}g_{\leq 0}}^{{\mathfrak}g}(\Lambda^{\emph{max}}({\mathfrak}g_{1})\otimes V)\cong K'(V), \label{Eq::CoindisoInd3} \\ &\emph{Coind}_{{\mathfrak}g_{\leq 0}}^{{\mathfrak}g}(V)\cong K'(\Lambda^{\emph{max}}({\mathfrak}g_{1}^*) \otimes V), \label{Eq::CoindisoInd4}\end{aligned}$$ up to parity change. Note that (\[Eq::CoindisoInd1\]) and (\[Eq::CoindisoInd2\]) are equivalent due to the fact that $\Lambda^{\text{max}}({\mathfrak}g^*_{-1})\otimes \Lambda^{\text{max}}({\mathfrak}g_{-1})$ is isomorphic to the trivial ${\mathfrak}g_0$-supermodule. By [@BF Theorem 2.2], $\mathrm{Coind}_{{\mathfrak}g_{\geq 0}}^{{\mathfrak}g}(V)$ is isomorphic to $K(W)$, for some simple ${\mathfrak}g_0$-supermodule $W$. To identify $W$ it is convenient to look at the category $\mathfrak{g}\text{-}\mathrm{mod}^{\mathbb{Z}}$ of all $\mathbb{Z}$-graded $\mathfrak{g}$-supermodules. By construction both $K(X)$ and $\mathrm{Coind}_{{\mathfrak}g_{\geq 0}}^{{\mathfrak}g}(X)$ are $\mathbb{Z}$-graded, for any ${\mathfrak}g_0$-supermodule $X$ concentrated in a single $\mathbb{Z}$-degree. Note that simple ${\mathfrak}g_0$-supermodules are always concentrated in a single $\mathbb{Z}$-degree. Abusing notation, we consider the [*standard*]{} graded lifts $K(X)$ and $\mathrm{Coind}_{{\mathfrak}g_{\geq 0}}^{{\mathfrak}g}(X)$ in which $X$ is concentrated in degree $0$. Then all non-zero components of $K(X)$ have non-positive degrees with $\Lambda^{\text{max}}(\mathfrak{g}_{-1}) \otimes X$ concentrated in degree $-\dim(\mathfrak{g}_{-1})$. All non-zero components of $\mathrm{Coind}_{{\mathfrak}g_{\geq 0}}^{{\mathfrak}g}(X)$ have non-negative degrees with $\dim(\mathfrak{g}_{-1})$ being the maximal non-zero degree. Therefore $W$ must be isomorphic to the degree $\dim(\mathfrak{g}_{-1})$ component of $\mathrm{Coind}_{{\mathfrak}g_{\geq 0}}^{{\mathfrak}g}(V)$. By construction (cf. [@BF Theorem 2.3]), the latter one is $\Lambda^{\mathrm{max}}({\mathfrak}g_{-1}^*) \otimes V$. This proves isomorphisms (\[Eq::CoindisoInd1\]) and (\[Eq::CoindisoInd2\]). Isomorphisms (\[Eq::CoindisoInd3\]) and (\[Eq::CoindisoInd4\]) are proved in a similar way. \[Coindsimplesoc\] Let $V$ be a simple ${\mathfrak}g_0$-supermodule. Then both $\emph{Coind}_{{\mathfrak}g_{\geq 0}}^{{\mathfrak}{g}}(V)$ and $\emph{Coind}_{{\mathfrak}{g}_{\leq 0}}^{{\mathfrak}{g}}(V)$ have simple socles. The claim follows from Theorem \[Thm::CoindisoInd\] and Lemma \[Kacsimplesoc\]. Simple modules over Lie superalgebras of type I {#s4} ============================================== Classification of simple ${\mathfrak}g$-supermodules ---------------------------------------------------- In this subsection we prove that the Kac functor gives rise to a one-to-one correspondence between simple ${\mathfrak}g$-supermodules and simple ${\mathfrak}g_{{{\overline}0}}$-supermodules. The main theorem in this subsection is the following. \[MainThm1\] Let ${\mathfrak}g$ be as in Subsection \[sectPre.1\]. 1. \[MainThm1.1\] For any simple ${\mathfrak}g_{{{\overline}0}}$-supermodule $V$, the module $K(V)$ has a unique maximal submodule. The unique simple top of $K(V)$ is denoted $L(V)$.
The correspondence $$\label{Eqcortop} V\mapsto L(V)$$ gives rise to a bijection between the set of isomorphism classes of simple ${\mathfrak}g_{{{\overline}0}}$-supermodules and the set of isomorphism classes of simple ${\mathfrak}g$-supermodules. 2. \[MainThm1.2\] The correspondence, $$\begin{aligned} &V\mapsto \emph{Soc}\big(\emph{{$\text{Coind}_{{\mathfrak}{g}_{\geq 0}}^{{\mathfrak}{g}}$}} (V)\big) \cong \emph{Soc}\big(K(V\otimes \Lambda^{\emph{max}}({\mathfrak}g_{-1}^*))\big), \label{Eqcorsoc} \end{aligned}$$ gives rise to a bijection between the set of isomorphism classes of simple ${\mathfrak}g_{{{\overline}0}}$-supermodules and the set of isomorphism classes of simple ${\mathfrak}g$-supermodules. We need some preparation before we can prove Theorem \[MainThm1\]. \[lem::SocleInCoind\] Let $M$ be a simple ${\mathfrak}g$-supermodule. Then there is a simple ${\mathfrak}g_0$-supermodule $N$ such that $$M \hookrightarrow \emph{Coind}_{{\mathfrak}{g}_{\geq 0}}^{\mathfrak{g}}(N).$$ As $U({\mathfrak}g)$ is finitely-generated over $U({\mathfrak}g_{\geq 0})$, we have $$0\neq \text{Hom}_{{\mathfrak}g_{\geq 0}}(\text{Res}_{{\mathfrak}g_{\geq 0}}^{{\mathfrak}g}(M), N) = \text{Hom}_{{\mathfrak}g}(M, \text{Coind}_{{\mathfrak}{g}_{\geq 0}}^{\mathfrak{g}}(N)),$$ for some simple ${\mathfrak}g_{\geq 0}$-supermodule $N$. Since $U({\mathfrak}g_1)=\Lambda ({\mathfrak}g_1)$ is finite dimensional, we have $N^{{\mathfrak}g_1}\neq 0$. Also, $N^{{\mathfrak}g_1}$ is a ${\mathfrak}g_{\geq 0}$-submodule and therefore $N =N^{{\mathfrak}g_1}$. The claim follows. \[SocIndCoind\] For a simple ${\mathfrak}g$-supermodule $M$, there exist simple ${\mathfrak}g_0$-supermodules $V_1$, $V_2$, $V_3$, $V_4$ such that $$M = \emph{Soc}(K(V_1))= \emph{Soc}(K'(V_2))= \emph{Soc}(\emph{Coind}_{{\mathfrak}g_{\geq 0}}^{{\mathfrak}g}(V_3)) = \emph{Soc}(\emph{Coind}_{{\mathfrak}g_{\leq 0}}^{{\mathfrak}g}(V_4)).$$ Since $\Lambda^{\text{max}}({\mathfrak}g_{i}^*)\otimes\Lambda^{\text{max}}({\mathfrak}g_{i})$ is isomorphic to the trivial ${\mathfrak}g_0$-supermodule, for $i=\pm 1$, the claim follows directly from Lemmata \[lem::SocleInCoind\] and \[Kacsimplesoc\] and Theorem \[Thm::CoindisoInd\]. \[nlem5\] Under the assumptions of Theorem \[MainThm1\], the module $K(V)$ has a unique maximal submodule. We first show that all simple quotients of $K(V)$ are isomorphic. Let $f: K(V) \twoheadrightarrow L$ be such that $L$ is simple. By Corollary \[SocIndCoind\] there is a simple ${\mathfrak}g_{{{\overline}0}}$-supermodule $W$ such that $L\cong \text{Soc}(K'(W))$. We would like to show that $W$ is uniquely determined by $V$. By Remark \[InvInKac\], we have $f(V)\subseteq \Lambda^{\text{max}}({\mathfrak}g_{1})\otimes W$ since $V \subseteq K(V)^{{\mathfrak}g_1}$. Now $\Lambda^{\text{max}}({\mathfrak}g_{1})\otimes W$ is simple since $\Lambda^{\text{max}}({\mathfrak}g^*_{1})\otimes \Lambda^{\text{max}}({\mathfrak}g_{1})$ is isomorphic to the trivial ${\mathfrak}g_0$-supermodule. Thus $f|_V: V \cong \Lambda^{\text{max}}({\mathfrak}g_{1})\otimes W$, and hence $W\cong \Lambda^{\text{max}}({\mathfrak}g_{1}^*)\otimes V$ as ${\mathfrak}g_{{{\overline}0}}$-supermodules. In other words, every simple quotient of $K(V)$ is isomorphic to the simple socle of $K'(\Lambda^{\text{max}}({\mathfrak}g_{1}^*)\otimes V)$. 
Using the adjunction and Schur’s lemma, we also have $$\dim\mathrm{Hom}_{\mathfrak{g}}(K(V),K'(W))= \dim\mathrm{Hom}_{\mathfrak{g}_{\geq 0}}(V,\Lambda^{\text{max}}({\mathfrak}g_{1})\otimes W)= \dim\mathrm{Hom}_{\mathfrak{g}_{\geq 0}}(V,V)=1.$$ The claim of the lemma follows. We start by proving claim \[MainThm1.2\]. Recall that $\Lambda^{\text{max}}({\mathfrak}g_{i}^*)\otimes \Lambda^{\text{max}}({\mathfrak}g_{i})$ is isomorphic to the trivial ${\mathfrak}g_0$-supermodule, for $i=\pm 1$. By Theorem \[Thm::CoindisoInd\] and Corollaries \[Coindsimplesoc\] and \[SocIndCoind\], the mapping (\[Eqcorsoc\]) is well-defined and surjective. Also, from Theorem \[Thm::CoindisoInd\] and Remark \[InvInKac\] it follows that (\[Eqcorsoc\]) is injective. This proves claim \[MainThm1.2\]. Now we prove claim \[MainThm1.1\]. By Lemma \[nlem5\], the correspondence (\[Eqcortop\]) is well-defined. Note that, by Lemma \[Kacsimplesoc\], for any simple ${\mathfrak}g_0$-supermodule $X$, the socle of $K(X)$ is $\mathbb{Z}$-graded. From claim \[MainThm1.2\] it thus follows that all simple ${\mathfrak}g$-supermodules are $\mathbb{Z}$-gradeable. In particular, $L(X)$ is $\mathbb{Z}$-gradeable. We fix a $\mathbb{Z}$-grading on $L(X)$ such that the top non-zero graded component is of degree $0$. For $i\in\mathbb{Z}$, denote by $\langle i\rangle$ the shift of grading functor on $\mathfrak{g}\text{-}\mathrm{smod}^\mathbb{Z}$ which maps homogeneous elements of degree $j$ to homogeneous elements of degree $j-i$. If $X$ and $Y$ are simple ${\mathfrak}g_0$-supermodules, then the only chance for $\mathrm{Hom}_{\mathfrak{g}\text{-}\mathrm{smod}^\mathbb{Z}}(L(X),L(Y)\langle i\rangle)$ to be non-zero is when $i=0$ (for any non-zero homomorphism must be an isomorphism and, unless $i=0$, the top non-zero graded components of $L(X)$ and $L(Y)$ would not match). However, as the degree zero component of $L(X)$ is isomorphic to $X$ and the degree zero component of $L(Y)$ is isomorphic to $Y$, in the case $X\not\cong Y$ we have $$\mathrm{Hom}_{\mathfrak{g}\text{-}\mathrm{smod}^\mathbb{Z}}(L(X),L(Y)\langle i\rangle)=0$$ since $\mathrm{Hom}_{\mathfrak{g}_0}(X,Y)=0$. This shows that the correspondence (\[Eqcortop\]) is injective. Finally, let $L$ be a simple ${\mathfrak}g$-supermodule. We can consider it as a $\mathbb{Z}$-graded supermodule such that all non-zero components have non-positive degrees and the degree zero component $X$ is non-zero. Then $X$ is a ${\mathfrak}g_{\geq 0}$-supermodule. Due to exactness of $\mathrm{Ind}_{{\mathfrak}g_{\geq 0}}^{{\mathfrak}g}$, simplicity of $L$ implies simplicity of $X$. As ${{\mathfrak}g_{1}}X=0$, we even get that $X$ is a simple ${\mathfrak}g_{0}$-supermodule. By adjunction, there is a non-zero homomorphism from $K(X)$ to $L$. This shows that the correspondence (\[Eqcortop\]) is surjective, completing the proof. We recall the parity change functor $\Pi$ defined in Section \[sectPre\] and conclude this subsection with the following corollary. We have $L(V)\not \cong \Pi L(V)$, for each simple ${\mathfrak}g_0$-supermodule $V$. Suppose that there is an isomorphism $f: L(V) \cong \Pi L(V)$. By Remark \[InvInKac\] and Corollary \[SocIndCoind\] we may note that $L(V)^{{\mathfrak}g_1} \cong V$, $\Pi L(V)^{{\mathfrak}g_1} \cong \Pi V$, as ${\mathfrak}g_0$-supermodules. Consequently, we have $f|_V:V \cong \Pi V$, a contradiction (a simple ${\mathfrak}g_0$-supermodule is concentrated in a single parity and hence is not isomorphic to its parity shift).
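To make the above concrete in the smallest case, consider ${\mathfrak}g=\mathfrak{gl}(1|1)$ with ${\mathfrak}g_0=\mathbb{C}e_{11}\oplus\mathbb{C}e_{22}$, ${\mathfrak}g_{1}=\mathbb{C}e_{12}$ and ${\mathfrak}g_{-1}=\mathbb{C}e_{21}$; the following computation is only meant as an illustration and is not used in the arguments above. A simple ${\mathfrak}g_0$-supermodule is one-dimensional, say $V=\mathbb{C}v$ with $e_{11}v=av$ and $e_{22}v=bv$ for some $a,b\in\mathbb{C}$. The Kac module $K(V)$ has basis $\{1\otimes v,\ e_{21}\otimes v\}$ and $$e_{12}\cdot(e_{21}\otimes v)=[e_{12},e_{21}]\otimes v-e_{21}\otimes (e_{12}\cdot v)=(e_{11}+e_{22})\otimes v=(a+b)(1\otimes v),$$ using $e_{12}\cdot v=0$. If $a+b\neq 0$, the vector $e_{21}\otimes v$ generates all of $K(V)$, so $K(V)=L(V)$ is simple. If $a+b=0$, then $\mathbb{C}(e_{21}\otimes v)=\Lambda^{\text{max}}({\mathfrak}g_{-1})\otimes V$ is a proper submodule and $L(V)\cong K(V)/\mathbb{C}(e_{21}\otimes v)$ is one-dimensional. In both cases $K(V)$ has simple socle $U\cdot(\Lambda^{\text{max}}({\mathfrak}g_{-1})\otimes V)$ and simple top $L(V)$, in agreement with Lemma \[Kacsimplesoc\] and Theorem \[MainThm1\].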
Examples: simple supermodules over classical Lie superalgebras of type I ----------------------------------------------------------------------- In this subsection, we consider the [*classical Lie superalgebras of type I*]{} (see, e.g., [@Mu12 Chapter 2, 3] and [@ChWa12 Section 1.1]); they are: $$\begin{aligned} &(\text{Type \bf A}): ~{\mathfrak}{gl}(m|n), {\mathfrak}{sl}(m|n) \text{ and } {\mathfrak}{sl}(n|n)/\mathbb{C}I_{n|n};\label{cLIa} \\ &(\text{Type \bf C}): ~{\mathfrak}{osp}(2|2n); \label{cLIc} \\ &(\text{Strange series}): ~{\mathfrak}{p}(n)\text{ and } {\mathfrak}{p}'(n):=[{\mathfrak}{p}(n), {\mathfrak}{p}(n)]; \label{cLIp}\end{aligned}$$ where $m>n \geq 1$ are integers. Each of these classical Lie superalgebras of type I admits a $\mathbb{Z}_2$-compatible $\mathbb{Z}$-grading $$\begin{aligned} &\mathfrak{g}=\mathfrak{g}_{-1} \oplus \mathfrak{g}_0 \oplus \mathfrak{g}_{+1}. \label{Zgrad}\end{aligned}$$ As an application, we may conclude that Theorem \[MainThm1\] holds for all classical Lie superalgebras of type I. ### General linear Lie superalgebra ${\mathfrak}{gl}(m|n)$ {#s4.2.1} Let ${\mathbb C}^{m|n}$ be the standard complex superspace of (graded) dimension $(m|n)$. With respect to a fixed ordered homogeneous basis in ${\mathbb C}^{m|n}$, the [*general linear Lie superalgebra*]{} ${\mathfrak{g}}={\mathfrak{gl}}(m|n) = {\mathfrak}{gl}({\mathbb C}^{m|n})$ can be realized as the space of $(m+n)\times (m+n)$ matrices over ${\mathbb C}$. The even subalgebra ${\mathfrak}g_{{{\overline}0}}$ of ${\mathfrak{g}}$ is isomorphic to ${\mathfrak}{gl}(m)\oplus {\mathfrak}{gl}(n)$. We let $e_{ij}$, $1\le i,j\le m+n$, denote the $(i,j)$-th matrix unit. The [*Cartan subalgebra*]{} of ${\mathfrak{g}}$ consisting of all diagonal matrices is denoted by ${\mathfrak{h}}={\mathfrak{h}}_{m|n} =\text{span} \{e_{ii}|1\le i\le m+n\}$. We denote by $\{{{\varepsilon}}_i|1\le i\le m+n\}$ the basis in ${\mathfrak{h}}^*={\mathfrak{h}}_{m|n}^*$ which is dual to $\{e_{ii}|1\le i\le m+n\}$. Let $\Phi=\{{{\varepsilon}}_i-{{\varepsilon}}_j|1\le i\neq j\le m+n\}$ be the root system of ${\mathfrak{g}}$ and denote by $\Phi_{{{\overline}0}}$ and $\Phi_{\bar 1}$ the set of even roots and the set of odd roots, respectively. We also denote the set of positive roots by $\Phi^+:= \{{{\varepsilon}}_i-{{\varepsilon}}_j|1\le i< j\le m+n\}$ and the set of negative roots by $\Phi^-: = - \Phi^+$. The Weyl group $W={\mathfrak}{S}_m\times{\mathfrak}{S}_n$ acts on ${\mathfrak}h^*$ in the obvious way. For each root $\alpha \in \Phi$, let ${\mathfrak}g_{\alpha}$ be the corresponding root space. We have the triangular decomposition $$\begin{aligned} \label{Eq::TriDecom} &{\mathfrak{g}}={\mathfrak n}^- \oplus {\mathfrak{h}}\oplus {\mathfrak n},\end{aligned}$$ where $\displaystyle {\mathfrak}{n}=\bigoplus_{\alpha\in\Phi^+}{\mathfrak{g}}_\alpha$ and $\displaystyle {\mathfrak}{n}^-=\bigoplus_{\alpha\in-\Phi^+}{\mathfrak{g}}_\alpha$. The subalgebra ${\mathfrak}b:={\mathfrak}h \oplus {\mathfrak}n$ of upper triangular matrices is called the [*(standard) Borel subalgebra*]{}. Also, we let ${\mathfrak}b_{{{\overline}0}}:= ({\mathfrak}b\cap {\mathfrak{g}}_{{{\overline}0}})\subset {\mathfrak}g_{{{\overline}0}}$ be the standard Borel subalgebra of ${\mathfrak}g_{{{\overline}0}}$.
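To illustrate this notation in a small case, let ${\mathfrak{g}}={\mathfrak}{gl}(2|1)$. Then ${\mathfrak{h}}=\text{span}\{e_{11},e_{22},e_{33}\}$, the even and odd roots are $\Phi_{{{\overline}0}}=\{\pm({{\varepsilon}}_1-{{\varepsilon}}_2)\}$ and $\Phi_{\bar 1}=\{\pm({{\varepsilon}}_1-{{\varepsilon}}_3),\pm({{\varepsilon}}_2-{{\varepsilon}}_3)\}$, and $W\cong {\mathfrak}{S}_2$. The triangular decomposition reads $${\mathfrak}{n}={\mathbb C}e_{12}\oplus{\mathbb C}e_{13}\oplus{\mathbb C}e_{23},\qquad {\mathfrak}{n}^-={\mathbb C}e_{21}\oplus{\mathbb C}e_{31}\oplus{\mathbb C}e_{32},$$ so that ${\mathfrak}b$ consists of the upper triangular matrices in ${\mathfrak}{gl}(2|1)$ and ${\mathfrak}b_{{{\overline}0}}$ of the upper triangular matrices in ${\mathfrak}{gl}(2)\oplus{\mathfrak}{gl}(1)$.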
The $\mathbb{Z}_2$-compatible $\mathbb{Z}$-grading of $\mathfrak{g}$ in terms of the matrix realization is given by letting $$\begin{aligned} &\mathfrak{g}_{0} ={\mathfrak}g_{{{\overline}0}}= \left\{ \left( \begin{array}{cc} a & 0\\ 0 & d\\ \end{array} \right) \| ~a\in {\mathbb C}^{m^2}, d\in {\mathbb C}^{n^2} \right\},\label{Zgradg0}\\ &\mathfrak{g}_{+1} =\bigoplus_{\alpha\in \Phi_{\bar 1}^+} {\mathfrak}g_{\alpha} = \bigoplus_{1\leq i\leq m,\, 1\leq j\leq n}{\mathbb C}e_{i,m+j} = \left\{ \left( \begin{array}{cc} 0 & b\\ 0 & 0\\ \end{array} \right) \| ~b\in {\mathbb C}^{mn} \right\},\label{Zgradgl}\\ &\mathfrak{g}_{-1} =\bigoplus_{\alpha\in \Phi_{\bar 1}^-} {\mathfrak}g_{\alpha} = \bigoplus_{1\leq i\leq n,\, 1\leq j\leq m} {\mathbb C}e_{m+i,j} = \left\{ \left( \begin{array}{cc} 0 & 0\\ c & 0\\ \end{array} \right) \| ~c\in {\mathbb C}^{mn} \right\}.\label{Zgradg-l} \end{aligned}$$ In particular, ${\mathfrak}g_1 \cong \mathbb{C}^m\otimes (\mathbb{C}^{n})^*$ and ${\mathfrak}g_{-1} \cong \mathbb{C}^{n}\otimes (\mathbb{C}^{m})^*$ as ${\mathfrak}g_{{{\overline}0}}$-modules. We let $${\mathfrak}s : = [{\mathfrak}g_{{{\overline}0}}, {\mathfrak}g_{{{\overline}0}}] \cong {\mathfrak}{sl}(m)\oplus {\mathfrak}{sl}(n)$$ be the maximal semisimple ideal of ${\mathfrak}g_{{{\overline}0}}$. We define the associated grading operator $d^{{\mathfrak}{gl}(m|n)}$ for ${\mathfrak}{gl}(m|n)$ as follows: $$\begin{aligned} \label{Eq::GrOp} &d^{{\mathfrak}{gl}(m|n)}:= \sum_{m+1\leq i\leq m+n} e_{ii} = \left( \begin{array}{cc} 0 &0 \\ 0 & {I}_{n}\\ \end{array} \right) \in {\mathfrak}z({\mathfrak}g_{{{\overline}0}}).\end{aligned}$$ If $V$ is a simple ${\mathfrak}g_0$-supermodule, then $d^{{\mathfrak}{gl}(m|n)}$ acts on $V$ as a scalar $d^{{\mathfrak}{gl}(m|n)}_V \in \mathbb{C}$ by Dixmier’s theorem, see, e.g., [@Di96 Proposition 2.6.8]. Therefore, $K(V)$ can be decomposed into $d^{{\mathfrak}{gl}(m|n)}$-eigenspaces: $$\begin{aligned} \label{Eq::GrOnKV} K(V) = \bigoplus_{i=0}^{\text{dim}({\mathfrak}g_{-1})} K(V)_{d^{{\mathfrak}{gl}(m|n)}_V -i} \cong \bigoplus_{i=0}^{\text{dim}({\mathfrak}g_{-1})} \Lambda^{i}({\mathfrak}g_{-1})\otimes V, \end{aligned}$$ where $K(V)_{d^{{\mathfrak}{gl}(m|n)}_V-i}$ is the eigenspace of $d^{{\mathfrak}{gl}(m|n)}$ with eigenvalue $d^{{\mathfrak}{gl}(m|n)}_V -i$. This means that the homogeneous components of the $\mathbb{Z}$-grading on $L(V)$ are eigenspaces for $d^{{\mathfrak}{gl}(m|n)}$ with different eigenvalues. For given positive integers $m,n$, the subsuperalgebra $${\mathfrak}{sl}(m|n): = [{\mathfrak}{gl}(m|n), {\mathfrak}{gl}(m|n)]$$ is called the [*special linear Lie superalgebra*]{}. The superalgebra ${\mathfrak}{sl}(m|n)$ is the kernel of the supertrace on ${\mathfrak}{gl}(m|n)$. We have ${\mathfrak}{sl}(m|n)_0 = {\mathfrak}{sl}(m)\oplus {\mathfrak}{sl}(n)\oplus {\mathbb C}I_{m|n},$ where $$I_{m|n}: =\left( \begin{array}{cc} nI_m &0 \\ 0 & m{I}_{n}\\ \end{array} \right),$$ and ${\mathfrak}{sl}(m|n)_1 = {\mathfrak}{gl}(m|n)_1 , {\mathfrak}{sl}(m|n)_{-1} = {\mathfrak}{gl}(m|n)_{-1}.$ Note that ${\mathfrak}{gl}(m|n)_0 = {\mathfrak}{sl}(m|n)_0\oplus {\mathbb C}d^{{\mathfrak}{gl}(m|n)}$. The Lie superalgebra ${\mathfrak}{sl}(m|n)$ is simple if and only if $m\neq n$; moreover, ${\mathfrak}{sl}(n|n)/{\mathbb C}I_{n|n}$ is a simple Lie superalgebra as well.
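As an explicit example, take ${\mathfrak}g={\mathfrak}{gl}(2|1)$. Here $d^{{\mathfrak}{gl}(2|1)}=e_{33}$, ${\mathfrak}g_{-1}={\mathbb C}e_{31}\oplus{\mathbb C}e_{32}$, and the exterior powers $\Lambda^{0}({\mathfrak}g_{-1})$, $\Lambda^{1}({\mathfrak}g_{-1})$, $\Lambda^{2}({\mathfrak}g_{-1})$ have dimensions $1$, $2$, $1$. For any simple ${\mathfrak}g_0$-supermodule $V$, the above decomposition therefore takes the form $$K(V)\cong V\oplus \big({\mathfrak}g_{-1}\otimes V\big)\oplus \big(\Lambda^{2}({\mathfrak}g_{-1})\otimes V\big),$$ and $d^{{\mathfrak}{gl}(2|1)}$ acts on the three summands by three pairwise distinct scalars. Since $d^{{\mathfrak}{gl}(2|1)}\in{\mathfrak}g_0$, every ${\mathfrak}g$-submodule of $K(V)$ is stable under this action and hence splits into its intersections with the three summands. Moreover, ${\mathfrak}s\cong{\mathfrak}{sl}(2)$ and $I_{2|1}=\mathrm{diag}(1,1,2)$ indeed lies in the kernel of the supertrace, as $1+1-2=0$.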
### Orthosymplectic Lie superalgebra ${\mathfrak}{osp}(2|2n)$ The orthosymplectic Lie superalgebra ${\mathfrak}{osp}(m|2n)$ is the subsuperalgebra of ${\mathfrak}{gl}(m|2n)$ preserving a non-degenerate supersymmetric bilinear form (see, e.g., [@ChWa12 Section 1.1.3] and [@Mu12 Section 2.3]). In particular, ${\mathfrak}{osp}(2|2n)$ is a classical Lie superalgebra of type I: $${\mathfrak}{osp}(2|2n)= \left\{ \left( \begin{array}{cccc} c &0 & x &y\\ 0 & -c& v & u\\ -u^t& -y^t & a &b\\ v^t &x^t & d& -a^t \\ \end{array} \right)\|\text{$c\in {\mathbb C}$; $x,y,v,u\in {\mathbb C}^{n}$; $a,b,d\in {\mathbb C}^{n^2}$; $b=b^t$, $d=d^t$} \right\}.$$ The $\mathbb{Z}_2$-compatible $\mathbb{Z}$-grading of ${\mathfrak}{osp}(2|2n)$ is given as follows: $$\begin{aligned} &{\mathfrak}{osp}(2|2n)_0= \left\{ \left( \begin{array}{cccc} c &0 & 0 &0\\ 0 & -c& 0 & 0\\ 0& 0 & a &b\\ 0 &0 & d& -a^t \\ \end{array} \right)\|\text{$c\in {\mathbb C}$; $a,b,d\in {\mathbb C}^{n^2}$; $b=b^t$, $d=d^t$} \right\}, \\ &{\mathfrak}{osp}(2|2n)_1= \left\{ \left( \begin{array}{cccc} 0 &0 & x &y\\ 0 & 0& 0 & 0\\ 0& -y^t & 0 &0\\ 0 &x^t & 0& 0 \\ \end{array} \right)\|\text{$x,y\in {\mathbb C}^{n}$} \right\}, \\ &{\mathfrak}{osp}(2|2n)_{-1}= \left\{ \left( \begin{array}{cccc} 0 &0 & 0 &0\\ 0 & 0& v & u\\ -u^t& 0& 0 &0\\ v^t & 0& 0& 0 \\ \end{array} \right)\|\text{$v,u\in {\mathbb C}^{n}$} \right\}.\end{aligned}$$ We define the associated grading operator $d^{{\mathfrak}{osp}(2|2n)}$ for ${\mathfrak}{osp}(2|2n)$ as follows: $$\begin{aligned} \label{Eq::GrOpforosp} &d^{{\mathfrak}{osp}(2|2n)}:=\left( \begin{array}{cccc} 1 &0 & 0 &0\\ 0 & -1& 0 & 0\\ 0& 0 & 0 &0\\ 0 &0 & 0& 0 \\ \end{array} \right) \in {\mathfrak}z({\mathfrak}{osp}(2|2n)_0).\end{aligned}$$ Observe that ${\mathfrak}{osp}(2|2n)_0 = {\mathbb C}d^{{\mathfrak}{osp}(2|2n)}\oplus {\mathfrak}{sp}(2n)$. If $V$ is a simple ${\mathfrak}g_0$-supermodule, then $d^{{\mathfrak}{osp}(2|2n)}$ acts on $V$ as a scalar $d^{{\mathfrak}{osp}(2|2n)}_V \in \mathbb{C}$ by Dixmier’s theorem ([@Di96 Proposition 2.6.8]). Just like in Subsection \[s4.2.1\], the homogeneous components of the $\mathbb{Z}$-grading on any $K(V)$ or $L(V)$ are eigenspaces for $d^{{\mathfrak}{osp}(2|2n)}$ with different eigenvalues. ### Periplectic Lie superalgebra ${\mathfrak}{p}(n)$ The periplectic Lie superalgebra ${\mathfrak}{p}(n)$ is a subalgebra of ${\mathfrak}{gl}(n|n)$ preserving a non-degenerate odd symmetric bilinear form (see, e.g., [@ChWa12 Section 1.1.5]). The standard matrix realization is given by $${\mathfrak}{p}(n)= \left\{ \left( \begin{array}{cc} a & b\\ c & -a^t\\ \end{array} \right)\| ~ a,b,c\in {\mathbb C}^{n^2},~b=b^t\text{ and }c=-c^t \right\}.$$ The superalgebra ${\mathfrak}p(n)$ admits a $\mathbb{Z}_2$-compatible $\mathbb{Z}$-grading inherited from the $\mathbb{Z}$-grading of ${\mathfrak}{gl}(n|n)$ given above. Namely, $$\begin{aligned} &{\mathfrak}p(n)_0 = {\mathfrak}p(n)_{{{\overline}0}}= \bigoplus_{1\leq i,j \leq n}{\mathbb C}(e_{ij} -e_{n+j,n+i}) = \left\{ \left( \begin{array}{cc} a & 0\\ 0 & -a^t\\ \end{array} \right)\| ~a\in {\mathbb C}^{n^2} \right\}, \\ &{\mathfrak}p(n)_1 = \bigoplus_{1\leq i\leq j \leq n}{\mathbb C}(e_{i,n+j} +e_{j,n+i}) = \left\{ \left( \begin{array}{cc} 0 & b\\ 0 & 0 \\ \end{array} \right)\| ~b\in {\mathbb C}^{n^2}, b=b^t \right\}, \\ &{\mathfrak}p(n)_{-1} = \bigoplus_{1\leq i<j \leq n}{\mathbb C}(e_{n+i,j} - e_{n+j,i}) = \left\{ \left( \begin{array}{cc} 0 & 0\\ c & 0\\ \end{array} \right)\| ~c\in {\mathbb C}^{n^2}, c=-c^t \right\}.
\end{aligned}$$ We may note that ${\mathfrak}p(n)_{-1} \cong \Lambda^2(\mathbb{C}^{n*})$ and ${\mathfrak}p(n)_{+1} \cong S^{2}(\mathbb{C}^n)$, as ${\mathfrak}p(n)_0$-supermodules. The subalgebra ${\mathfrak}p(n)' := [{\mathfrak}p(n), {\mathfrak}p(n)]$ inherits a $\mathbb{Z}_2$-compatible $\mathbb{Z}$-grading as follows: $${\mathfrak}p(n)'_0 \cong {\mathfrak}{sl}(n), \quad{\mathfrak}p(n)'_1 = {\mathfrak}p(n)_1 \quad\text{ and }\quad{\mathfrak}p(n)'_{-1} = {\mathfrak}p(n)_{-1}.$$ We define the associated grading operator $d^{{\mathfrak}p(n)}$ for ${\mathfrak}{p}(n)$ as follows: $$\begin{aligned} \label{Eq::GrOpforp} &d^{{\mathfrak}p(n)}:= \sum_{1\leq i\leq n} (e_{ii} - e_{n+i,n+i})= \left( \begin{array}{cc} I_n &0 \\ 0 & -{I}_{n}\\ \end{array} \right) \in {\mathfrak}z({\mathfrak}p(n)_0).\end{aligned}$$ Then ${\mathfrak}p(n)_0 = {\mathfrak}p(n)'_0\oplus {\mathbb C}d^{{\mathfrak}p(n)}$. If $V$ is a simple ${\mathfrak}g_0$-supermodule, then $d^{{\mathfrak}{p}(n)}$ acts on $V$ as a scalar $d^{{\mathfrak}{p}(n)}_V \in \mathbb{C}$ by Dixmier’s theorem ([@Di96 Proposition 2.6.8]). Just like in Subsection \[s4.2.1\], the homogeneous components of the $\mathbb{Z}$-grading on any $K(V)$ or $L(V)$ are eigenspaces for $d^{{\mathfrak}{p}(n)}$ with different eigenvalues. Simple supermodules for classical Lie superalgebras of type I ------------------------------------------------------------- The following statement is an immediate consequence of the properties of the grading operator mentioned above: Let ${\mathfrak}g = {\mathfrak}{gl}(m|n), {\mathfrak}{osp}(2|2n)$ or ${\mathfrak}p(n)$. Let $V$ be a simple ${\mathfrak}g_0$-supermodule. For any submodule $N \subseteq K(V)$, the decomposition $$N = \bigoplus_{k\geq 0}\left(N\cap (\Lambda^k{\mathfrak}g_{-1} \otimes V)\right)$$ is the eigenspace decomposition with respect to the action of $d^{{\mathfrak}g}$. In particular, if we consider the standard $\mathbb{Z}$-grading on $K(V)$, then all ${\mathfrak}g$-submodules of $K(V)$ are, automatically, $\mathbb{Z}$-graded submodules. The following proposition reduces the study of simple supermodules over all classical Lie superalgebras of type I listed above to the study of simple supermodules over ${\mathfrak}{gl}(m|n)$, ${\mathfrak}{osp}(2|2n)$ and ${\mathfrak}p(n)$. \[nprop7\] Let ${\mathfrak}g= {\mathfrak}{gl}(m|n), {\mathfrak}p(n)$, and set ${\mathfrak}g':= [{\mathfrak}g,{\mathfrak}g] \emph{(}={\mathfrak}{sl}(m|n), {\mathfrak}p(n)'\emph{)}$. 1. \[nprop7.1\] If $M$ is a simple ${\mathfrak}g$-supermodule, then $\emph{Res}_{{\mathfrak}g'}^{{\mathfrak}g}(M)$ is a simple ${\mathfrak}g'$-supermodule. 2. \[nprop7.2\] If $M$ and $N$ are simple ${\mathfrak}g$-supermodules, then $\emph{Res}_{{\mathfrak}g'}^{{\mathfrak}g}(M)\cong \emph{Res}_{{\mathfrak}g'}^{{\mathfrak}g}(N)$ if and only if $M^{{\mathfrak}g_1}$ and $N^{{\mathfrak}g_1}$ are isomorphic as ${\mathfrak}g'_0$-supermodules. 3. \[nprop7.3\] Every simple ${\mathfrak}g'$-supermodule has the form $\emph{Res}_{{\mathfrak}g'}^{{\mathfrak}g}(M)$, for some $M$ as above. Note that the difference between ${\mathfrak}g_0$ and ${\mathfrak}g'_0$ is given by the central element $d^{{\mathfrak}g}$. Therefore Proposition \[nprop7\] says that the elements in each fiber of the map $M\mapsto \mathrm{Res}_{{\mathfrak}g'}^{{\mathfrak}g}(M)$ are indexed by complex numbers which prescribe the scalar with which $d^{{\mathfrak}g}$ acts on the simple ${\mathfrak}g_0$-supermodule $M^{{\mathfrak}g_1}$.
As the difference between ${\mathfrak}g_0$ and ${\mathfrak}g'_0$ is given by the central element $d^{{\mathfrak}g}$, the restriction of a simple ${\mathfrak}g_0$-supermodule to ${\mathfrak}g'_0$ remains simple; moreover, this restriction map is surjective. Let $M$ be a simple ${\mathfrak}g$-supermodule. From Corollary \[SocIndCoind\] we have that there is a simple ${\mathfrak}g_0$-supermodule $V$ such that $M\cong \mathrm{Soc}(\mathrm{Ind}_{\mathfrak{g}_{\geq 0}}^{\mathfrak{g}}(V)) = U({\mathfrak}g)\cdot(\Lambda^{\text{max}}({\mathfrak}g_{-1})\otimes V)$. As $d^{{\mathfrak}g}$ acts as a scalar on the simple ${\mathfrak}g_0$-supermodule $\Lambda^{\text{max}}({\mathfrak}g_{-1})\otimes V$, we have $$U({\mathfrak}g)\cdot(\Lambda^{\text{max}}({\mathfrak}g_{-1})\otimes V)= U({\mathfrak}g')\cdot(\Lambda^{\text{max}}({\mathfrak}g_{-1})\otimes V).$$ As $\Lambda^{\text{max}}({\mathfrak}g_{-1})\otimes V$ is a simple ${\mathfrak}g_0$-supermodule, it follows that $\text{Res}^{{\mathfrak}g_0}_{{\mathfrak}g'_0}\big(\Lambda^{\text{max}}({\mathfrak}g_{-1})\otimes V\big)$ is also simple. Therefore we have $$\text{Res}_{{\mathfrak}g'}^{{\mathfrak}g}(M) \cong \text{Res}_{{\mathfrak}g'}^{{\mathfrak}g}\big(U({\mathfrak}g')\cdot(\Lambda^{\text{max}} ({\mathfrak}g_{-1})\otimes V)\big) = \text{Soc}\big(\text{Ind}^{{\mathfrak}g'}_{{\mathfrak}g'_{\geq 0}}(\text{Res}_{{\mathfrak}g'_0}^{{\mathfrak}g_0}(V))\big),$$ which is a simple ${\mathfrak}g'$-supermodule by Lemma \[Kacsimplesoc\]. This proves claim . Let now $M'$ be a simple ${\mathfrak}g'$-supermodule. Then, by Corollary \[SocIndCoind\], there is a simple ${\mathfrak}g'_0$-supermodule $V'$ such that $$M'\cong \text{Soc}\big(\text{Ind}_{{\mathfrak}g'_{\geq 0}}^{{\mathfrak}g'}(V')\big) = U({\mathfrak}g')\cdot(\Lambda^{\text{max}}({\mathfrak}g_{-1})\otimes V').$$ Let $V$ be any simple ${\mathfrak}g_0$-supermodule such that $\text{Res}_{{\mathfrak}g'_0}^{{\mathfrak}g_0}(V) = V'$; then we have $$\text{Res}_{{\mathfrak}g'}^{{\mathfrak}g}\big(\text{Soc}(\text{Ind}_{{\mathfrak}g_{\geq 0}}^{{\mathfrak}g}(V))\big)= U({\mathfrak}g')\cdot(\Lambda^{\text{max}}({\mathfrak}g_{-1})\otimes V') =\text{Soc}(\text{Ind}_{{\mathfrak}g'_{\geq 0}}^{{\mathfrak}g'}(V'))\cong M'.$$ The remaining two claims follow. Criteria for simplicity of Kac modules {#Sect::CriforSimpleKac} -------------------------------------- In the first parts of this subsection, we assume that ${\mathfrak}g$ is one of the classical Lie superalgebras of type I with $\text{dim}{\mathfrak}g_1=\text{dim}{\mathfrak}g_{-1}$, namely, $${\mathfrak}g = {\mathfrak}{gl}(m|n),\quad {\mathfrak}{sl}(m|n) ,\quad {\mathfrak}{sl}(n|n)/\mathbb{C}I_{n|n}\quad \text{ or } \quad {\mathfrak}{osp}(2|2n).$$ The periplectic Lie superalgebra ${\mathfrak}p(n)$ will be discussed in Subsection \[Criforpn\]. A criterion for simplicity of finite-dimensional Kac modules was given by Kac in [@Ka78] in terms of typicality of highest weights. In this section, we provide criteria for simplicity of Kac modules for arbitrary (simple) input of the Kac functor. ### BGG category of ${\mathfrak}g$-supermodules and Duflo’s theorem Following [@BGG76], consider the BGG category $\mathcal{O}=\mathcal{O}(\mathfrak{g},\mathfrak{h},\mathfrak{n})$ associated to the standard triangular decomposition $$\mathfrak{g}=\mathfrak{n}^-\oplus \mathfrak{h}\oplus \mathfrak{n}$$ of $\mathfrak{g}$. It is the full subcategory of ${\mathfrak}g$-smod consisting of all ${\mathfrak}g$-supermodules on which ${\mathfrak}h$ acts semisimply and ${\mathfrak}b$ acts locally finitely.
We set ${\mathcal}O_{{{\overline}0}}: = {\mathcal}O({\mathfrak}g_{{{\overline}0}},{\mathfrak}h_{{{\overline}0}},{\mathfrak}n_{{{\overline}0}})$. For ${\lambda}\in {\mathfrak}h^*$, we denote by $V({\lambda})$ the simple even ${\mathfrak}b_{{{\overline}0}}$-highest weight ${\mathfrak}g_{{{\overline}0}}$-supermodule with highest weight ${\lambda}$. We set $K({\lambda}):=K(V(\lambda))$. For ${\lambda}\in {\mathfrak}h^*$, the corresponding [*Verma supermodule*]{} $\Delta({\lambda})$ (over ${\mathfrak}g$) is defined by $${\Delta}({\lambda}):=U({\mathfrak}{g})\otimes_{{\mathfrak}b} \mathbb C_{{\lambda}},$$ where $ \mathbb C_{{\lambda}}$ is the even one-dimensional ${\mathfrak}b$-supermodule corresponding to $\lambda$. The unique simple quotient of $\Delta({\lambda})$ is denoted by $L(\lambda)$. We let $\chi_{\lambda}$ (resp. $\chi_{{\lambda}}^{{{\overline}0}}$) be the $U$-central (resp. $U_{{{\overline}0}}$-central) character corresponding to ${\lambda}$. Denote by $\rho\in {\mathfrak}h^*$ the Weyl vector as in [@ChWa12 Remark 1.21]: $$\begin{aligned} \label{Eq::WelyVc} &\rho = \frac{1}{2} \sum_{\alpha \in \Phi_{{{{\overline}0}}}}\alpha - \frac{1}{2}\sum_{\alpha \in \Phi_{{{{\overline}1}}}}\alpha.\end{aligned}$$ Let $(\cdot,\cdot)$ be the non-degenerate $W$-invariant form on ${\mathfrak}h^*$ as defined in [@ChWa12 Section 1.2]. We consider the dot-action of $W$ given by $w\cdot{\lambda}=w({\lambda}+\rho)-\rho$, for all $w\in W$ and $\lambda\in {\mathfrak}h^*$. A weight $\lambda$ is called [*integral*]{} if $(\lambda,\alpha) \in\mathbb{Z}$ for all even roots $\alpha$. An integral weight ${\lambda}$ is called [*dominant*]{} if ${\lambda}$ is dominant for the dot-action of $W$ and [*regular*]{} if $( {\lambda}+\rho,\alpha)\neq 0$, for all simple even roots $\alpha$. A weight ${\lambda}$ is called [*typical*]{} if $(\lambda+\rho,\alpha)\neq 0$, for all odd roots $\alpha$. *(*cf. [@Du77]*)* \[Dufthm\] Let $V$ be a simple ${\mathfrak}g_0$-supermodule. Then there exists ${\lambda}\in {\mathfrak}h^*$ such that $\emph{Ann}_{U_{{{\overline}0}}}(V) = \emph{Ann}_{U_{{{\overline}0}}}(V({\lambda}))$. For a given ${\mathfrak}g$-supermodule (resp. ${\mathfrak}g_{{{\overline}0}}$-supermodule) $X$, we denote its $U$-annihilator (resp. $U_{{{\overline}0}}$-annihilator) by $\text{Ann}_U(X)$ (resp. $\text{Ann}_{U_{{{\overline}0}}}(X)$). Together with Theorem \[Dufthm\], the following lemma shows that annihilators of arbitrary Kac modules are annihilators of Kac modules in ${\mathcal}O$. \[AnnKac\] Let $V$ and $W$ be simple ${\mathfrak}g_{{{\overline}0}}$-supermodules such that $\emph{Ann}_{U_{{{\overline}0}}}(V) = \emph{Ann}_{U_{{{\overline}0}}}(W)$. Then $\emph{Ann}_U (K(V)) =\emph{Ann}_U (K(W))$. Mutatis mutandis the proof of [@Di96 Proposition 5.1.7]. ### Simplicity criteria Fix two non-zero elements $$X^{-}\in \Lambda^{\text{max}}({\mathfrak}g_{-1})\quad\text{ and }\quad X^{+}\in \Lambda^{\text{max}}({\mathfrak}g_{1}).$$ We have the decomposition $$\begin{aligned} \label{Eq::Omega} &X^{+} X^- = \Omega + \sum_i x_i r_i y_i,\end{aligned}$$ for some $x_i \in \Lambda({\mathfrak}g_{ -1})\backslash \mathbb{C}$, $y_i \in \Lambda({\mathfrak}g_{1})\backslash \mathbb{C}$, $r_i\in U({\mathfrak}g_{{{{\overline}0}}})$ and $\Omega \in Z({\mathfrak}g_{{{\overline}0}})$.
As $X^{+}\cdot X^-$ has $\mathfrak{h}$-weight $0$, by the Poincar[é]{}-Birkhoff-Witt Theorem, we have the decomposition above such that $$x_i \in \Lambda({\mathfrak}g_{ -1})\backslash \mathbb{C},\quad y_i \in \Lambda({\mathfrak}g_{1})\backslash \mathbb{C}\quad\text{ and }\quad r_i, \Omega \in U({\mathfrak}g_0).$$ It remains to show that $\Omega \in Z( {\mathfrak}g_{{{\overline}0}})$. For $r\in {\mathfrak}g_{{{\overline}0}}$, we want to show that $r\Omega - \Omega r =0$. The one-dimensional ${\mathfrak}g_{{{\overline}0}}$-representation $\Lambda^{\text{max}}({\mathfrak}g_{1})\otimes \Lambda^{\text{max}}({\mathfrak}g_{-1})$ is concentrated in the $\mathfrak{h}$-weight $0$. Hence, $rX^+X^- = X^+X^-r$. Since $$r( \sum_i x_i r_i y_i) - ( \sum_i x_i r_i y_i)r = \sum_i x'_i r'_i y'_i,$$ for some $r'_i\in U({\mathfrak}g_0)$, $x_i' \in \Lambda({\mathfrak}g_{ -1})\backslash \mathbb{C}$ and $y_i' \in \Lambda({\mathfrak}g_{1})\backslash \mathbb{C}$, the claim follows from the Poincar[é]{}-Birkhoff-Witt Theorem. Let ${\mathfrak}g:={\mathfrak}{gl}(m|1)$ and choose $X^{\pm}$ as follows: $$X^+:= e_{1,m+1}e_{2,m+1}\cdots e_{m,m+1},\qquad X^-:= e_{m+1,1}e_{m+1,2}\cdots e_{m+1,m}.$$ By a direct calculation, we have the expression $$\Omega = \sum_{\sigma \in {\mathfrak}S_m}(-1)^{\sigma} X_{m,\sigma(m)}X_{m-1,\sigma(m-1)}\cdots X_{1,\sigma(1)},$$ where $X_{ij}:=[e_{i,m+1},e_{m+1,j}]+\delta_{ij}(m-i)1_{U}\in U({\mathfrak}g_0)$, for $1\leq i,j\leq m$. Now we are ready to formulate our first simplicity criterion for $K(V)$. \[thm::criforsimpleKac\] Let $V$ be a simple ${\mathfrak}g_{{{\overline}0}}$-supermodule. Then the following assertions are equivalent. 1. \[thmkac.1\] $K(V)$ is simple. 2. \[thmkac.2\] $\Omega$ acts on $V$ as an injective linear operator. 3. \[thmkac.3\] $\Omega$ acts on $V$ as a non-zero scalar. We first prove $\Rightarrow$. Suppose that $K(V)$ is simple but $\Omega v =0$, for some non-zero $v\in V$. On the one hand, simplicity of $K(V)$ and Lemma \[Kacsimplesoc\] imply $$U_{{{\overline}0}}\Lambda({\mathfrak}g_1)\Lambda ({\mathfrak}g_{-1})\cdot (\Lambda^{\text{max}} ({\mathfrak}g_{-1}) \otimes v) = K(V)$$ and hence $U_{{{\overline}0}}\Lambda^{\text{max}}({\mathfrak}g_1)\cdot (\Lambda^{\text{max}} ({\mathfrak}g_{-1}) \otimes v) = V$. On the other hand, $\Omega v =0$ means that $\Lambda^{\text{max}}({\mathfrak}g_1)\cdot (\Lambda^{\text{max}}({\mathfrak}g_{-1}) \otimes v) =0$, a contradiction. We next prove $\Rightarrow$. In this case we have $X^+ X^- (1_U\otimes v) = 1_U\otimes \Omega v\neq 0$ in $V\subset K(V)$, for any non-zero $v \in V$. From Lemma \[Kacsimplesoc\] it thus follows that $\text{Soc}(K(V)) \supseteq K(V)$, that is, $K(V)$ is simple. The equivalence of and follows from Dixmier’s theorem [@Di96 Proposition 2.6.8]. We now give our second criterion for simplicity of Kac modules, which is formulated in terms of $U_{{{\overline}0}}$-annihilators. For a given $\lambda \in {\mathfrak}h^*$, recall that $V(\lambda)$ denotes the simple highest weight ${\mathfrak}g_{{{\overline}0}}$-supermodule of highest weight $\lambda$ with respect to the Borel subalgebra ${\mathfrak}b_{{{\overline}0}}$. The following corollary shows that simplicity of Kac modules can be determined in terms of the annihilator of the simple ${\mathfrak}g_0$-input of the Kac functor. \[aaa1\] Let $V$ and $W$ be two simple ${\mathfrak}g_{{{\overline}0}}$-supermodules. If $\emph{Ann}_{U_{{{\overline}0}}}(V) = \emph{Ann}_{U_{{{\overline}0}}}(W)$, then the following assertions are equivalent: 1. \[aaa1.1\] $K(V)$ is simple. 2.
\[aaa1.2\] $K(W)$ is simple. 3. \[aaa1.3\] $\emph{Ann}_{U_{{{\overline}0}}}(V) = \emph{Ann}_{U_{{{\overline}0}}}(V({\lambda}))$, for some typical ${\lambda}$. By Theorem \[Dufthm\], there exists ${\lambda}\in {\mathfrak}h^*$ such that $$\text{Ann}_{U_{{{\overline}0}}}(V) = \text{Ann}_{U_{{{\overline}0}}}(W) = \text{Ann}_{U_{{{\overline}0}}}(V({\lambda})).$$ In particular, all these annihilators contain $U({\mathfrak}g_{{{\overline}0}})\chi_{{\lambda}}^{{{{\overline}0}}}$. Therefore $\Omega$ acts as the same scalar $\chi_{{\lambda}}^{{{{\overline}0}}}(\Omega)$ on both $V$ and $W$. Therefore and are equivalent by Theorem \[thm::criforsimpleKac\]. The fact that $\chi_{{\lambda}}^{{{{\overline}0}}}(\Omega)\neq 0$ if and only if ${\lambda}$ is typical follows from [@Go Subsection 4.2] which says that the evaluation at ${\lambda}$ of the Harish-Chandra projection of $\Omega$ has the form $$\prod_{\alpha\in\Phi^+_{\bar{1}}}({\lambda}+\rho,\alpha).$$ This completes the proof. It is natural to consider also the simplicity problem for the opposite Kac module $K'(V)$ of $V$. Recall that $2\rho_{{{{\overline}1}}}$ denotes the sum of all odd positive roots. If ${\lambda}$ is such that $V= V({\lambda})$ is finite-dimensional, then it is well-known that the simplicity of $K({\lambda})$ is equivalent to the simplicity of $K'({\lambda}- 2\rho_{{{{\overline}1}}})$, which is equivalent to the typicality of ${\lambda}$, see e.g. [@Ge98 Lemma 3.3.1] and [@Ka78]. The highest weight ${\mathfrak}g_{{{\overline}0}}$-supermodule $V(2\rho_{{{{\overline}1}}})$ is one-dimensional and hence can be denoted by $\mathbb{C}_{2\rho_{{{{\overline}1}}}}$. We now extend the comparison of simplicity of Kac modules and opposite Kac modules to full generality. \[cor::criforsimpleoppoKac\] Let $V$ be a simple ${\mathfrak}g_{{{\overline}0}}$-supermodule. Then the following are equivalent: 1. \[Cor::Cri2::KVsimple\] $K'(V)$ is simple. 2. \[Cor::Cri2::KWsimple\] $K(V\otimes\mathbb{C}_{2\rho_{{{{\overline}1}}}})$ is simple. 3. \[cor::crif3\] $\mathrm{Ann}_{U_{{{\overline}0}}}(V)=\mathrm{Ann}_{U_{{{\overline}0}}}(V({\lambda}-2\rho_{{{{\overline}1}}}))$, for some typical ${\lambda}$. If any of the above conditions is satisfied, then $K'(V) \cong K(V\otimes\mathbb{C}_{2\rho_{{{{\overline}1}}}})$. An analogue of the decomposition for $K'(V)$ yields existence of a non-zero homomorphism $\varphi:K(V\otimes\mathbb{C}_{2\rho_{{{{\overline}1}}}})\to K'(V)$ whose image coincides with the socle of $K'(V)$ by Lemma \[Kacsimplesoc\]. A similar argument gives a non-zero homomorphism $\psi:K'(V)\to K(V\otimes\mathbb{C}_{2\rho_{{{{\overline}1}}}})$ whose image coincides with the socle of $K(V\otimes\mathbb{C}_{2\rho_{{{{\overline}1}}}})$. If either $K'(V)$ or $K(V\otimes\mathbb{C}_{2\rho_{{{{\overline}1}}}})$ is simple, then both $\varphi\circ\psi$ and $\psi\circ\varphi$ are isomorphisms when restricted to the eigenspaces of both extremal eigenvalues of $d^{{\mathfrak}g}$. As $K'(V)$ is generated by one of these extremal eigenspaces and $K(V\otimes\mathbb{C}_{2\rho_{{{{\overline}1}}}})$ is generated by the other one, we obtain that both $\varphi$ and $\psi$ are isomorphisms. The equivalence between and follows. The equivalence between and follows from Corollary \[aaa1\]. ### {#Criforpn} Here we discuss the periplectic Lie superalgebra ${\mathfrak}p(n)$ which was excluded from the previous parts of this subsection.
For ${\mathfrak}p(n)$ we can also define the Cartan subalgebra ${\mathfrak}h_{{\mathfrak}p(n)}:= {\mathfrak}p(n)\cap {\mathfrak{h}}$, the Borel subalgebra ${\mathfrak}b_{{\mathfrak}p(n)} = {\mathfrak}b_{{\mathfrak}p(n)_0} \oplus {\mathfrak}p(n)_1$ and the corresponding BGG categories ${\mathcal}O$ and ${\mathcal}O_{{{\overline}0}}$. For ${\mathfrak}p(n)$, the principal difficulty is the asymmetry of negative and positive roots. However, in [@Se02 Corollary 5.8], it is shown that, for a simple ${\mathfrak}p(n)_0$-supermodule $V$, the corresponding Kac module $K(V)$ is simple if $V$ admits a typical central character, which is a ${\mathfrak}p(n)$-analog of our Corollary \[aaa1\]. This asymmetry of positive and negative roots makes the opposite Kac modules always non-simple. It also enables us to construct indecomposable modules from the difference between Kac and opposite Kac modules. \[OpKpnprop\] Let $V$ be a simple ${\mathfrak}p(n)_0$-supermodule. Then $K'(V)$ is indecomposable but not simple. Let us denote ${\mathfrak}p(n)$ by ${\mathfrak}g$ in this proof. Consider $\mathfrak{g}\text{-}\mathrm{mod}^{\mathbb{Z}}$. By the universal property of induced modules, in $\mathfrak{g}\text{-}\mathrm{mod}^{\mathbb{Z}}$ we have a non-zero homomorphism $f$ from $K(\Lambda^{\text{max}} ({\mathfrak}g_1) \otimes V)$ to $K'(V)\langle\dim({\mathfrak}g_1)\rangle$. Note that the minimal non-zero homogeneous component of $K(\Lambda^{\text{max}} ({\mathfrak}g_1) \otimes V)$ has degree $-\dim({\mathfrak}g_{-1})$ while the minimal non-zero homogeneous component of $K'(V)\langle\dim({\mathfrak}g_1)\rangle$ has degree $-\dim({\mathfrak}g_{1})$ which is strictly smaller than $-\dim({\mathfrak}g_{-1})$. Therefore $f$ cannot be surjective. This implies that $K'(V)$ is not simple. The fact that $K'(V)$ is indecomposable is proved in Lemma \[Kacsimplesoc\]. Rough structure of Kac modules {#s5} ============================== Coker-categories {#s5.1} ---------------- In this section, we assume that ${\mathfrak}g=\mathfrak{gl}(m|n)$. For a ${\mathfrak}g$-supermodule $P$, we denote by ${\mathcal}C_P$ the [*coker*]{}-category of $P$, that is ${\mathcal}C_P$ is the full subcategory of the category of all ${\mathfrak}g$-supermodules, which consists of all modules $M$ which have a presentation $$X\to Y \twoheadrightarrow M,$$ where $X$ and $Y$ are isomorphic to direct summands of $P\otimes E$, for some finite dimensional weight ${\mathfrak}g$-supermodule $E$. Similarly, we define [*coker*]{}-categories for modules over Lie algebras, see e.g. [@MaSt08]. In this section we describe a part of the structure of Kac modules with arbitrary simple input, called the [*rough structure*]{} in [@MaSt08] by comparing it with the rough structure of Kac modules in BGG category ${\mathcal}O$. Harish-Chandra bimodules {#Subsection::HCB} ------------------------ In this subsection we collect all necessary preliminaries about the main technical ingredient in the study of rough structure, namely, about Harish-Chandra bimodules. ### {#section-2} Here we introduce Harish-Chandra bimodules. Let us start with $U_{{{\overline}0}}$. The full subcategory of ${\mathfrak}g_{{{\overline}0}}$-mod which consists of all finite-dimensional weight modules is denoted by ${\mathcal}F_{{{\overline}0}}$. Each $U_{{{\overline}0}}$-$U_{{{\overline}0}}$-bimodule $M$ can be considered as a ${\mathfrak}g_{{{\overline}0}}$-module $M^{{{\text{ad}}}}$ with respect to the adjoint action of ${\mathfrak}g_{{{\overline}0}}$. 
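Explicitly, the adjoint action referred to here is the usual one: for $x\in{\mathfrak}g_{{{\overline}0}}$ and $m\in M$ one sets $$x\cdot m:=xm-mx,$$ where no sign appears because $x$ is even; with this action $M^{{{\text{ad}}}}$ becomes a ${\mathfrak}g_{{{\overline}0}}$-module.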
The category ${\mathcal}H_{{{\overline}0}}$ of [*Harish-Chandra $U_{{{\overline}0}}$-$U_{{{\overline}0}}$-bimodules*]{} is defined as the full subcategory in the category of all finitely generated $U_{{{\overline}0}}$-$U_{{{\overline}0}}$-bimodules which consists of all bimodules $M$ such that the ${\mathfrak}g_{{{\overline}0}}$-module $M^{{{\text{ad}}}}$ is a direct sum of simples in ${\mathcal}F_{{{\overline}0}}$, moreover, each simple appears in $M^{{{\text{ad}}}}$ with a finite multiplicity. For two ${\mathfrak}g_{{{\overline}0}}$-supermodules $M$ and $N$, we denote by $\mathcal{L}(M,N)$ the $U_{{{\overline}0}}$-$U_{{{\overline}0}}$-bimodule of all linear maps from $M$ to $N$ which are locally finite with respect to the adjoint action of ${\mathfrak}g_{{{\overline}0}}$. The category ${\mathcal}H$ of [*Harish-Chandra $U$-$U$-bimodules*]{} is the full subcategory of the category of $U$-$U$-bimodules which consists of all bimodules $M$ whose restriction to $U_{{{\overline}0}}$-$U_{{{\overline}0}}$-bimodules is in ${\mathcal}H_{{{\overline}0}}$, see [@MaMe12 Section 5.1]. Abusing notation, for two ${\mathfrak}g$-supermodules $M$ and $N$, we denote by $\mathcal{L}(M,N)$ the $U$-$U$-bimodule $\mathcal{L}(\text{Res}_{{\mathfrak}g_{{{\overline}0}}}^{{\mathfrak}g}(M),\text{Res}_{{\mathfrak}g_{{{\overline}0}}}^{{\mathfrak}g}(N))$. The full subcategory of ${\mathfrak}g$-smod which consists of all finite-dimensional weight supermodules is denoted by ${\mathcal}F$. For $E \in {\mathcal}F$, we define a ${\mathfrak}g$-bimodule structure on $E\otimes U$ as in [@BeGe Section 2.2] and [@Co16 Section 2.4]: $$X(v\otimes u)Y = (Xv)\otimes (uY) +(-1)^{{\overline}X\cdot {\overline}v}v\otimes (XuY),$$ for all homogeneous $X,Y\in {\mathfrak}g$, $v\in E$ and $u \in U$. The following identity is proved in [@BeGe Section 2.2] in the setup of Lie algebras, however, the same proof works also for Lie superalgebras: $$\begin{aligned} \label{Eq::1stBeGeIdentity} \text{Hom}_{U\text{-mod-}U}(E\otimes U, M)\cong \text{Hom}_{U}(E, M^{{{\text{ad}}}}). \end{aligned}$$ ### {#section-3} Let $M$ be a ${\mathfrak}g$-supermodule, then the ${\mathfrak}g$-action on $M$ defines a $U$-$U$-homomorphism from $U$ to ${\mathcal}L(M,M)$. The kernel of this homomorphism is $\text{Ann}_U(M)$ and we have the following embedding of $U$-$U$-bimodules: $$U/\text{Ann}_U(M) \hookrightarrow {\mathcal}L(M,M).$$ One says that [*Kostant’s problem*]{} for $M$ has a positive solution if the above embedding is an isomorphism, see [@Jo80; @Go02; @MaMe12]. By [@Go02 Proposition 9.4], which can be applied as we assumed ${\mathfrak}g=\mathfrak{gl}(m|n)$, Kostant’s problem has a positive solution for all typical Verma modules. We note that [@Go02 Proposition 9.4] is formulated for [*strongly typical*]{} Verma modules, however, for ${\mathfrak}g=\mathfrak{gl}(m|n)$ the notions of “typical” and “strongly typical” coincide, see [@Go02 Subsection 2.5.5]. Coker categories for Kac modules {#Sect::CateOfKacandEqiv} -------------------------------- This subsection generalizes [@MaSt08 Section 11.6]. Following [@MaSt08 Remark 76], for simplicity, we will work with regular integral central characters. The general case follows from the integral and regular one by standard techniques, in particular using translations out and on the walls and the equivalences from [@ChMaWa13]. Recall that ${{\mathfrak}s}=[{{\mathfrak}g_{{{\overline}0}}},{{\mathfrak}g_{{{\overline}0}}}]$. 
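For orientation, in the case ${\mathfrak}g={\mathfrak}{gl}(2|1)$ we have ${\mathfrak}s\cong{\mathfrak}{sl}(2)$ and $${\mathfrak}z({\mathfrak}g_{{{\overline}0}})={\mathbb C}(e_{11}+e_{22})\oplus{\mathbb C}e_{33},$$ so a simple ${\mathfrak}g_{{{\overline}0}}$-supermodule amounts to a simple ${\mathfrak}{sl}(2)$-supermodule together with the two scalars by which $e_{11}+e_{22}$ and $e_{33}$ act.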
Let $V$ be a simple ${\mathfrak}g_{{{\overline}0}}$-supermodule such that $L:= \text{Res}_{{\mathfrak}s}^{{\mathfrak}g_{{{\overline}0}}}(V)$ admits a regular and integral central character. Observe that every simple ${\mathfrak}g_{{{\overline}0}}$-supermodule $S$ is determined uniquely by the underlying simple ${\mathfrak}s$-supermodule $\text{Res}_{{\mathfrak}s}^{{\mathfrak}g_{{{\overline}0}}}(S)$ and a linear functional (depending on $S$) on ${\mathfrak}z({\mathfrak}g_{{{\overline}0}})$. Abusing notation, we use $\cdot$ to denote the $W$-action for ${\mathfrak}s$, that is, $$w\cdot{\lambda}=w({\lambda}+\rho_{{{{\overline}0}}})-\rho_{{{{\overline}0}}},$$ for all $w\in W$ and ${\lambda}\in {\mathfrak}h^*_{{\mathfrak}s}$. By Theorem \[Dufthm\], there is a dominant weight $\nu$ and $\sigma\in W$ such that $\text{Ann}_{U({\mathfrak}s)} (L) = \text{Ann}_{U({\mathfrak}s)}V(\sigma\cdot \nu)$. We may assume that $\sigma$ is contained in a right cell associated with a parabolic subalgebra ${\mathfrak}p \subseteq {\mathfrak}s$ as in [@MaSt08 Remark 14]. Therefore there is a dominant weight $\mu$ such that the parabolic block ${\mathcal}O^{{\mathfrak}p}_{\mu}$ contains exactly one simple module $V(y\cdot \mu)$, and this module is projective (see, e.g., [@IrSh88 3.1]). Tensoring, if necessary, with finite dimensional modules, without loss of generality we may assume that $\mu$ is typical and [*generic*]{} in the sense of [@MaMe12 Subsection 5.3]. Let $F$ be the projective functor given in [@MaSt08 Proposition 61] and define ${\overline}N$ to be the simple quotient of $FL$ (in fact, as it turns out, ${\overline}N=FL$). We refer the reader to [@MaSt08 Section 11] for more details of our setup. In particular, we have that $$\text{Ann}_{U({\mathfrak}s)}({\overline}N) = \text{Ann}_{U({\mathfrak}s)}( V(y\cdot \mu))= \text{Ann}_{U({\mathfrak}s)}( V(\mu))$$ and, consequently, $\mathrm{Ann}_U(K(\mu))=\mathrm{Ann}_U(K(y\cdot \mu))$, see Lemma \[AnnKac\]. \[thm::g0CokerEquiv\]*(*[@MaSt08 Theorem 66]*)* The functor $$\Xi_{{{\overline}0}}: = {\mathcal}L({\overline}N, -)\otimes_{U({\mathfrak}s)} V(y\cdot \mu): {\mathcal}C_{{\overline}N} \rightarrow {\mathcal}C_{V(y\cdot \mu)},$$ is an equivalence. We extend the categories ${\mathcal}C_{{\overline}N}$ and ${\mathcal}C_{V(y\cdot \mu)}$ of ${\mathfrak}s$-supermodules to categories of ${\mathfrak}g_{{{\overline}0}}$-supermodules by allowing arbitrary scalar actions of ${\mathfrak}z({\mathfrak}g_{{{\overline}0}})$. It is proved in [@MaSt08 Lemma 67] that ${\mathcal}C_{{\overline}N}$ and ${\mathcal}C_{V(y\cdot \mu)}$ are both admissible in the sense of [@MaSt08 Section 6.3]. Let $I:= \text{Ann}_U(K({\overline}N))=\text{Ann}_U(K(y\cdot \mu))=\text{Ann}_U(K(\mu))$, see Lemma \[AnnKac\]. Denote by ${\mathcal}H_I^1$ the full subcategory of ${\mathcal}H$ which consists of all bimodules annihilated by $I$ from the right. Now we can formulate the following equivalence of coker-categories. \[Thm::Equi\] Assume that the weight $\mu$ defined above is typical. Then there are equivalences of categories, $$\begin{aligned} \label{thm::coker} {\mathcal}C_{K(\overline{N})}\cong {\mathcal}H_I^1 \cong {\mathcal}C_{K(y\cdot \mu)},\end{aligned}$$ sending $K({\overline}N)$ to $K(y\cdot \mu)$. Our proof follows [@BeGe Theorem 5.9], [@KhMa04 Theorem 5] and [@MaMe12 Theorem 5.1]. Note that the second equivalence in is just a special case of the first one. So, we just need to prove the first equivalence. \[nlem11\] Kostant’s problem for both $K(\mu)$ and $K(y\cdot \mu)$ has positive solutions. 
For $K(\mu)$, the claim follows from [@Go02 Proposition 9.4] and [@Ja83 6.9 (10)]. For a given simple ${\mathfrak}g$-supermodule $E\in {\mathcal}F$, we have $$\dim\mathrm{Hom}_{{\mathfrak}g}(K(\mu)\otimes E, K(\mu)) = \dim\mathrm{Hom}_{{\mathfrak}g}(K(y\cdot \mu)\otimes E, K(y\cdot \mu)),$$ by a similar argument used in the proof of [@MaSt08 Lemma 70] and [@MaSt08 Theorem 60]. Hence $$\dim\mathrm{Hom}_{{\mathfrak}g}(E, {\mathcal}L(K(\mu), K(\mu))) = \dim\mathrm{Hom}_{{\mathfrak}g}(E, {\mathcal}L(K(y\cdot \mu), K(y\cdot \mu))),$$ by [@Ja83 6.8(3)]. Since Kostant’s problem has a positive solution for $K(\mu)$, it follows that $$\dim\mathrm{Hom}_{{\mathfrak}g}(E, U/I) = \dim\mathrm{Hom}_{{\mathfrak}g}(E, {\mathcal}L(K(y\cdot \mu), K(y\cdot \mu))).$$ As $K(\mu)$ and $K(y\cdot \mu)$ have the same annihilators, it follows that Kostant’s problem has a positive solution for $K(y\cdot \mu)$. \[nlem15\] Kostant’s problem for $K({\overline}N)$ has a positive solution. To see this, for any simple ${\mathfrak}g$-supermodule $E\in {\mathcal}F$, we have $$\begin{aligned} &\dim\mathrm{Hom}_{{\mathfrak}g} (E, {\mathcal}L(K({\overline}N), K({\overline}N)))= \dim\mathrm{Hom}_{{\mathfrak}g}(K({\overline}N)\otimes E, K({\overline}N))= \dim\mathrm{Hom}_{{\mathfrak}s}({\overline}N\otimes E', {\overline}N),\end{aligned}$$ for some finite-dimensional ${\mathfrak}s$-supermodule $E'$. By [@MaSt08 Theorem 60 (iii)] and [@MaSt08 Proposition 65], Kostant’s problem has positive solutions for ${\overline}N$ and $V(y\cdot \mu)$. Therefore $$\begin{aligned} &\dim\mathrm{Hom}_{{\mathfrak}s}({\overline}N\otimes E', {\overline}N) \\ &=[{\mathcal}L({\overline}N, {\overline}N): E'] \hskip 3cm (\text{by \cite[6.8(3)]{Ja83}})\\ &= [U({\mathfrak}s)/\text{Ann}_{U({\mathfrak}s)}({\overline}N): E'] \hskip 1.5cm (\text{by \cite[Proposition 65]{MaSt08}})\\ & = [U({\mathfrak}s)/\text{Ann}_{U({\mathfrak}s)}( V(y\cdot \mu)): E'] \hskip 0.5cm (\text{by \cite[Proposition 65]{MaSt08}}) \\ &= [{\mathcal}L( V(y\cdot \mu), V(y\cdot \mu)): E'] \hskip 1.1cm (\text{by \cite[Theorem 60 (iii)]{MaSt08}})\\ &= \dim\mathrm{Hom}_{{\mathfrak}s}(V(y\cdot \mu)\otimes E', V(y\cdot \mu)).\end{aligned}$$ Consequently, we obtain $$\begin{aligned} \dim\mathrm{Hom}_{{\mathfrak}g} (E, {\mathcal}L(K({\overline}N), K({\overline}N))) = \dim\mathrm{Hom}_{{\mathfrak}g}(E, {\mathcal}L(K( y\cdot \mu), K( y\cdot \mu))).\end{aligned}$$ Since Kostant’s problem has a positive solution for $K(y\cdot \mu)$ by Lemma \[nlem11\] and $K({\overline}N)$ and $K(y\cdot \mu)$ have the same annihilators, we have $$\dim\mathrm{Hom}_{{\mathfrak}g}(E, U/I) = \dim\mathrm{Hom}_{{\mathfrak}g}(E, {\mathcal}L(K(y\cdot \mu), K(y\cdot \mu))).$$ This means that Kostant’s problem has positive solution for $K({\overline}N)$. \[nlem17\] The supermodule $K(\overline{N})$ is projective in ${\mathcal}C_{K(\overline{N})}$. Let $M\in {\mathcal}C_{K(\overline{N})}$. By adjunction, we have $$\text{Hom}_{{\mathfrak}g}(K({\overline}N), M)\cong \text{Hom}_{{\mathfrak}g_{\geq 0}}({\overline}N, (M)^{{\mathfrak}g_1}).$$ Our dominance assumptions on $\mu$ imply that $$\text{Hom}_{{\mathfrak}g_{\geq 0}}({\overline}N, (M)^{{\mathfrak}g_1})\cong \text{Hom}_{{\mathfrak}g_{{{\overline}0}}}({\overline}N, M)$$ and the claim follows from the fact that ${\overline}N$ is projective in $\mathcal{C}_{{\overline}N}$. 
We want to show that the functors $$\label{Eq::MainEquiv_1} F:= -\otimes_U K({\overline}N): {\mathcal}H_I^1 \rightarrow {\mathcal}C_{K(\overline{N})},\qquad G:=\mathcal{L}(K(\overline{N}),{}_-):{\mathcal}C_{K(\overline{N})}\to \mathcal{H}_I^1$$ are mutually inverse equivalences. We have $G(K(\overline{N}))\cong U/I\in \mathcal{H}_I^1$. Moreover, from Lemma \[nlem17\] it follows by the same arguments as in [@Ja83 6.9(9)] that $G$ is exact. As $G$ commutes with tensoring with finite dimensional $\mathfrak{g}$-supermodules and all projectives in $C_{K(\overline{N})}$ have, by definition, the form $E\otimes K(\overline{N})$, for some finite dimensional $\mathfrak{g}$-supermodule $E$, it follows that $G$ sends $C_{K(\overline{N})}$ to $\mathcal{H}_I^1$, in particular, $G$ is well-defined. As in [@Ja83 6.22], the functor $F$ is left adjoint to $G$, in particular, $F$ is also well-defined. Using Lemma \[nlem15\], the claim that $F$ and $G$ are mutually inverse equivalences of categories follows similarly to [@BeGe Theorem 5.9], [@KhMa04 Theorem 5] and [@MaMe12 Theorem 5.1]. Rough structure of Kac modules {#rough-structure-of-kac-modules} ------------------------------ We denote by $\Xi:= {\mathcal}L(K({\overline}N), -)\otimes_U K(y\cdot \mu)$ the equivalence from ${\mathcal}C_{K({\overline}N)}$ to ${\mathcal}C_{K(y\cdot \mu)}$ in Theorem \[Thm::Equi\]. The functor $\Xi$ induces a bijection between the sets $\text{Irr}({\mathcal}C_{K({\overline}N)})$ and $\text{Irr}({\mathcal}C_{K(y\cdot \mu)})$ of isomorphism classes of simple objects in ${\mathcal}C_{K({\overline}N)}$ and ${\mathcal}C_{K(y\cdot \mu)}$, respectively. We note that simple objects in ${\mathcal}C_{K({\overline}N)}$ and ${\mathcal}C_{K(y\cdot \mu)}$ are not necessarily simple as ${\mathfrak}g$-supermodules. However, just as in [@MaSt08 Section 11], every simple object in ${\mathcal}C_{K({\overline}N)}$ and ${\mathcal}C_{K(y\cdot \mu)}$ has simple top, as a ${\mathfrak}g$-supermodule, and the annihilator of the radical of a simple object is strictly bigger than that of the simple top. Consequently, we have an induced bijection $$\hat\Xi:~\text{Irr}^{{\mathfrak}g}({\mathcal}C_{K({\overline}N)})\rightarrow \text{Irr}^{{\mathfrak}g}({\mathcal}C_{K(y\cdot \mu)})$$ between the sets of isomorphism classes of simple ${\mathfrak}g$-supermodule quotients of simple objects in ${\mathcal}C_{K({\overline}N)}$ and ${\mathcal}C_{K(y\cdot \mu)}$. For $L(V) \in \text{Irr}^{{\mathfrak}g}({\mathcal}C_{K({\overline}N)})$, we define $\xi_V\in {\mathfrak}h^*$ via $$\begin{aligned} &L(\xi_V) \cong \hat\Xi(L(V)), \end{aligned}$$ in particular, we have $\xi_{{\overline}N} = y\cdot \mu$ since $\Xi K({\overline}N) =K(y\cdot \mu)$. We are now in a position to state the main result of this section which describes rough structure of Kac modules. \[thmrough\] For $L(V),~L(W) \in \emph{Irr}^{{\mathfrak}g}({\mathcal}C_{K({\overline}N)})$, we have the following multiplicity formula in the category of ${\mathfrak}g$-supermodules: $$\begin{aligned} \label{Eq::RoughStr} [K(V):L(W)] =[K(\xi_V):L(\xi_W)].\end{aligned}$$ The two sides of the equality are matched using $\Xi$, cf. [@MaSt08 Theorem 72]. Theorem \[thmrough\] says that the combinatorics of the rough structure of $K(V)$ only depends on the annihilator of $V$. Depending on $V$, the rough structure of $K(V)$ might coincide, or not, with its fine structure. 
In general, just like in [@MaSt08 Section 11], our approach and, in particular, Theorem \[thmrough\] do not allow us to control possible simple subquotients of $K(V)$ whose annihilator is strictly bigger than that of $K(V)$. Moreover, it is known that this fine structure of $K(V)$ really depends on $V$ and not just on the annihilator of $V$. Furthermore, in general, there is also a chance that the module $K(V)$ might be non-artinian. Rough structure of simple $\mathfrak{gl}(m|n)$-supermodules ----------------------------------------------------------- In this subsection, we obtain a similar description of the rough structure of restrictions to ${\mathfrak}g_{{{\overline}0}}$ of simple ${\mathfrak}g$-supermodules in $\text{Irr}^{{\mathfrak}g}({\mathcal}C_{K({\overline}N)})$. ### {#section-4} Let $\chi: = \chi_{\mu}$ be the ${\mathfrak}g$-central character of the typical weight $\mu$. Then $K({\overline}N) = K({\overline}N)_{\chi }$ and ${\overline}N = {\overline}N_{\chi^{{{{\overline}0}}}}$ since $K(y\cdot \mu)\in {\mathfrak}g\text{-mod}_{\chi }$ and $V(y\cdot \mu) \in {\mathfrak}g_{{{\overline}0}}\text{-mod}_{\chi^{{{{\overline}0}}}}$. \[Thm::GorelikEquivThm\]*(*[@Go02s Theorem 1.3.1]*)* The categories ${\mathfrak}g_{{{{\overline}0}}}\emph{-smod}_{\chi^{{{{\overline}0}}}}$ and ${\mathfrak}g\emph{-smod}_{\chi}$ are equivalent via the functors $(\emph{Ind}_{{\mathfrak}g_{{{\overline}0}}}^{{\mathfrak}g})_{\chi}$ and $(\emph{Res}_{{\mathfrak}g_{{{\overline}0}}}^{{\mathfrak}g})_{\chi^{{{{\overline}0}}}}$. We recall the equivalence $\Xi_{{{\overline}0}}$ between ${\mathcal}C_{K({\overline}N)}$ and ${\mathcal}C_{K(y\cdot \mu)}$ that was defined in Theorem \[thm::g0CokerEquiv\]. \[nlem25\] There is an isomorphism of functors $\emph{Res}_{{\mathfrak}g_{{{\overline}0}}}^{{\mathfrak}g}\circ \Xi \cong \Xi_{{{\overline}0}}\circ \emph{Res}_{{\mathfrak}g_{{{\overline}0}}}^{{\mathfrak}g}$. Applying $(\cdot)_{\chi}$ to $\text{Ind}_{{\mathfrak}g_{{{\overline}0}}}^{{\mathfrak}g}(V(y\cdot \mu)) \twoheadrightarrow K(y\cdot \mu)$, we get $(\text{Ind}_{{\mathfrak}g_{{{\overline}0}}}^{{\mathfrak}g}(V(y\cdot \mu)))_{\chi} \twoheadrightarrow K(y\cdot \mu)$ as $K(y\cdot \mu)$ is indecomposable. Hence Theorem \[Thm::GorelikEquivThm\] gives $(\text{Ind}_{{\mathfrak}g_{{{\overline}0}}}^{{\mathfrak}g}(V(y\cdot \mu)))_{\chi}\cong K(y\cdot \mu)$. Therefore, we have $ \text{Ind}_{{\mathfrak}g_{{{\overline}0}}}^{{\mathfrak}g}(V(y\cdot \mu)) = K(y\cdot \mu)\oplus M$, for some ${\mathfrak}g$-supermodule $M$ with $M_{\chi} =0$. Similarly, $\text{Res}_{{\mathfrak}g_{{{\overline}0}}}^{{\mathfrak}g}(K({\overline}N)) = {\overline}N \oplus M'$, for some ${\mathfrak}g_{{{\overline}0}}$-supermodule $M'$ with $M'_{\chi^{{{{\overline}0}}}} =0$. Recall that $X\otimes_A Y=0$ provided that there is an element $a\in A$ which annihilates $X$ and acts invertibly on $Y$, in particular, if $X$ and $Y$ have different generalized central characters.
Taking this into account, the observations in the previous paragraph allow us to compute as follows: $$\begin{array}{rcl} \text{Res}_{{\mathfrak}g_{{{{\overline}0}}}}^{{\mathfrak}g}\left({\mathcal}L(K({\overline}N),{}_-)\otimes_U K(y\cdot \mu)\right) &\cong&\text{Res}_{{\mathfrak}g_{{{{\overline}0}}}}^{{\mathfrak}g}\left({\mathcal}L(K({\overline}N),{}_-)\otimes_U \text{Ind}_{{\mathfrak}g_{{{\overline}0}}}^{{\mathfrak}g}V(y\cdot \mu) \right)\\ &\cong&{\mathcal}L( \text{Res}_{{\mathfrak}g_{{{{\overline}0}}}}^{{\mathfrak}g}(K({\overline}N)), \text{Res}_{{\mathfrak}g_{{{{\overline}0}}}}^{{\mathfrak}g}({}_-)) \otimes_{U_{{{\overline}0}}} V(y\cdot \mu) \\ &\cong&{\mathcal}L({\overline}N, \text{Res}_{{\mathfrak}g_{{{\overline}0}}}^{{\mathfrak}g}({}_-))\otimes_{U_{{{\overline}0}}} V(y\cdot \mu), \end{array}$$ as desired. Here in the second row we used the obvious analogue of [@Co16 Lemma 3.7(2)]. We have a bijection $$\hat\Xi_{{{\overline}0}}: \text{Irr}^{{\mathfrak}g_{{{\overline}0}}}({\mathcal}C_{{\overline}N})\rightarrow \text{Irr}^{{\mathfrak}g_{{{\overline}0}}}({\mathcal}C_{V(y\cdot \mu)}),$$ induced by $\Xi_{{{\overline}0}}$, between the sets of isomorphism classes of the simple ${\mathfrak}g_{{{\overline}0}}$-quotients of simple objects in ${\mathcal}C_{{\overline}N}$ and ${\mathcal}C_{V(y\cdot \mu)}$. For a given $W \in \text{Irr}^{{\mathfrak}g_{{{\overline}0}}}({\mathcal}C_{{\overline}N})$, we define the related weight $\zeta_W\in {\mathfrak}h^*$ by $$\begin{aligned} &V(\zeta_W) \cong \hat\Xi_{{{\overline}0}}(W).\end{aligned}$$ The next statement describes the ${\mathfrak}g_{{{\overline}0}}$-rough structure of simple ${\mathfrak}g$-supermodules in terms of the combinatorics of category $\mathcal{O}$ for ${\mathfrak}g_{{{\overline}0}}$. For $L(V)\in \mathrm{Irr}^{{\mathfrak}g}({\mathcal}C_{K({\overline}N)})$ and $W\in \mathrm{Irr}^{{\mathfrak}g_{{{\overline}0}}}({\mathcal}C_{{\overline}N})$, we have the following multiplicity formula in the category of ${\mathfrak}g_{{{\overline}0}}$-supermodules: $$\begin{aligned} \label{Eq::RoughStr-s} [\mathrm{Res}^{\mathfrak{g}}_{\mathfrak{g}_{{{\overline}0}}}(L(V)):W] = [\mathrm{Res}^{\mathfrak{g}}_{\mathfrak{g}_{{{\overline}0}}}(L(\xi_V)):V(\zeta_W)].\end{aligned}$$ Equality  is obtained by, first, applying $\Xi_{{{\overline}0}}$ to the left hand side and then using Lemma \[nlem25\]. [99999]{} I. Bagci, K. Christodoulopoulou, E. Wiesner. Whittaker categories and Whittaker modules for Lie superalgebras. Comm. Algebra [**42**]{} (2014), no. 11, 4932–4947. V. Bavula, F. van Oystaeyen. The simple modules of the Lie superalgebra $\mathfrak{osp}(1,2)$. J. Pure Appl. Algebra [**150**]{} (2000), no. 1, 41–52. A. Bell. R. Farnsteiner. On the theory of Frobenius extensions and its application to Lie superalgebras. Trans. Amer. Math. Soc. [**335**]{} (1993), no. 1, 407–424. J. Bernstein, S. Gelfand. Tensor products of finite- and infinite-dimensional representations of semisimple Lie algebras. Compositio Math. [**41**]{} (1980), no. 2, 245–285. I. Bernstein, I. Gelfand, and S. Gelfand. A certain category of $\mathfrak{g}$-supermodules. Funkcional. Anal. i Prilozen. [**10**]{} (1976), 1–8. R. Block. The irreducible representations of the Lie algebra $\mathfrak{sl}(2)$ and of the Weyl algebra. Adv. in Math. [**39**]{} (1981), no. 1, 69–110. C. Boyallian, V. Meinardi. Irreducible continuous representations of the simple linearly compact n-Lie superalgebra of type W. J. Algebra [**490**]{} (2017), 493–517. Y. Cai, K. Zhao. 
Module structure on $U(H)$ for basic Lie superalgebras. Toyama Math. J. [**37**]{} (2015), 55–72. S.-J. Cheng and W. Wang. Dualities and representations of Lie superalgebras. Graduate Studies in Mathematics [**144**]{}. American Mathematical Society, Providence, RI, 2012. S.-J. Cheng, V. Mazorchuk, W. Wang. Equivalence of blocks for the general linear Lie superalgebra. Lett. Math. Phys. [**103**]{} (2013), no. 12, 1313–1327. K. Coulembier. The Primitive Spectrum of a Basic Classical Lie Superalgebra. Communications in Mathematical Physics [**348.2**]{} (2016): 579–602. I. Dimitrov, O. Mathieu, I. Penkov. On the structure of weight modules. Trans. Amer. Math. Soc. [**352**]{} (2000), no. 6, 2857–2869. J. Dixmier. Enveloping algebras. Revised reprint of the 1977 translation. Graduate Studies in Mathematics, [**11**]{}. American Mathematical Society, Providence, RI, 1996. M.  Duflo. Sur la classification des idéaux primitifs dans l’algèbre enveloppante d’une algèbre de Lie semi-simple. Ann. of Math. (2) [**105**]{} (1977), no. 1, 107–120. T. Ferguson, M. Gorelik, D. Grantcharov. Bounded highest weight modules over $\mathfrak{osp}(1,2n)$. Lie algebras, Lie superalgebras, vertex algebras and related topics, 135–144, Proc. Sympos. Pure Math., [**92**]{}, Amer. Math. Soc., Providence, RI, 2016. J. Germoni. Indecomposable representations of special linear Lie superalgebras. Journal of Algebra [**209.2**]{} (1998): 367–401. M. Gorelik. On the ghost centre of Lie superalgebra. Ann. Inst. Fourier (Grenoble) [**50**]{} (2000), no. 6, 1745–1764. M. Gorelik. Annihilation theorem and separation theorem for basic classical Lie superalgebras. Journal of the American Mathematical Society [**15**]{} (2002), no. 1, 113–165. M. Gorelik. Strongly typical representations of the basic classical Lie superalgebras. J. Amer. Math. Soc. [**15**]{} (2002) no. 1, 167–184. M. Gorelik, D. Grantcharov. Bounded highest weight modules over $\mathfrak{q}(n)$. Int. Math. Res. Not. IMRN [**2014**]{}, no. 22, 6111–6154. D. Grantcharov. Coherent families of weight modules of Lie superalgebras and an explicit description of the simple admissible $\mathbf{sl}(n+1|1)$-modules. J. Algebra [**265**]{} (2003), no. 2, 711–733. R. Irving, B. Shelton. Loewy series and simple projective modules in the category ${\mathscr O}\sb S$. Pacific J. Math. [**132**]{} (1988), no. 2, 319–342. J. C. Jantzen. Einh[ü]{}llende Algebren halbeinfacher Lie-Algebren. Proceedings of the International Congress of Mathematicians, (1983). A. Joseph. Kostant’s problem, Goldie rank and the Gelfand-Kirillov conjecture. Invent. Math. [**56**]{} (1980), no. 3, 191–213. V. Kac. Representations of classical Lie superalgebras. Differential geometrical methods in mathematical physics II (1978): 597–626. O. Khomenko, V. Mazorchuk. Structure of modules induced from simple modules with minimal annihilator. Canad. J. Math. [**56**]{} (2004), no. 2, 293–309. V. Mazorchuk. Classification of simple $\mathfrak{q}_2$-supermodules. Tohoku Math. J. (2) [**62**]{} (2010), no. 3, 401–426. V. Mazorchuk, V. Miemietz. Serre functors for Lie algebras and superalgebras. Ann. Inst. Fourier [**62**]{} (2012) no. 1, 47–75. V. Mazorchuk, C. Stroppel. Categorification of (induced) cell modules and the rough structure of generalised Verma modules. Adv. Math. [**219**]{} (2008), no. 4, 1363–1426. I. Musson. Lie superalgebras and enveloping algebras. Graduate Studies in Mathematics, 131. American Mathematical Society, Providence, RI, 2012. M. Scheunert. The Theory of Lie Superalgebras. 
Lecture Notes in Mathematics [**716**]{}, Springer, 1979. V. Serganova. On representations of the Lie superalgebra ${\mathfrak}p(n)$. J. Algebra [**258**]{} (2002), no. 2, 615–630. Z. Wei, Y. Zhang, Q. Zhang. Simple modules for some Cartan-type Lie superalgebras. Hacet. J. Math. Stat. [**44**]{} (2015), no. 1, 129–152. Department of Mathematics, Uppsala University, Box 480, SE-75106, Uppsala, SWEDEN,\ emails: [chih-whi.chen@math.uu.se]{}
--- abstract: | We review recent results on the linear and non-linear optical response of realistic quantum-wire structures. Our theoretical approach is based on a set of generalized semiconductor Bloch equations, and allows a full three-dimensional multisubband description of Coulomb correlation for any shape of the confinement profile, thus permitting a direct comparison with experiments for available state-of-the-art wire structures. Our results show that electron-hole Coulomb correlation removes the one-dimensional band-edge singularities from the absorption spectra, whose shape turns out to be heavily modified with respect to the ideal free-particle case over the whole range of photoexcited carrier densities. address: | Istituto Nazionale Fisica della Materia (INFM) and\ Dipartimento di Fisica, Università di Modena\ via Campi 213/A, I-41100 Modena, Italy author: - 'Fausto Rossi[^1] and Elisa Molinari' title: | Coulomb-correlation effects on the non-linear\ optical properties of realistic quantum wires --- Introduction ============ The dominant role played by Coulomb correlation in the optical response of semiconductors and its dependence on dimensionality has now been long recognized [@Books]. More recently, increasing interest has been devoted to one-dimensional (1D) structures [@Review_Wires], prompted by promising advances in quantum-wire fabrication and application, e.g. in quantum-wire lasers. The main goal of such an effort in basic research as well as technological applications is to achieve structures with improved optical efficiency as compared to two-dimensional (2D) and three-dimensional (3D) ones. A common argument in favour of this effort is based on the well-known van Hove divergence in the 1D joint density-of-states (DOS), which is expected to give rise to very sharp peaks in the optical spectra of 1D structures. Such a prediction is, however, based on free-particle properties of ideal 1D systems and ignores disorder-induced and Coulomb-correlation effects. As pointed out in the pioneering papers by Ogawa et al. [@Ogawa], electron-hole correlation is expected to strongly influence the optical spectra of 1D systems. Their theoretical investigation —based on a single-subband-model solution of the 1D Schrödinger equation in terms of a modified 1D Coulomb potential [@Ogawa]— shows that the inverse-square-root singularity in the 1D DOS at the band edge is smoothed when electron-hole correlation is taken into account. The question is whether one can expect that the above theoretical predictions, obtained for model 1D systems, also apply to the real quantum wires made available by the present technology. Indeed, wires with the best optical quality presently include structures obtained by epitaxial growth on non-planar substrates (V-shaped wires) [@Review_Wires; @Kapon; @Rinaldi], or by cleaved-edge quantum well overgrowth (T-shaped wires) [@Wegscheider; @Sakaki]. Owing to the shape of the confinement potential, these systems are far from an ideal 1D character. While quasi-1D confinement has been demonstrated for the lowest level [@Review_Wires; @Kapon; @Rinaldi; @Wegscheider; @Sakaki], excited states gradually approach a 2D-like behaviour. Moreover, in the available samples the subband separation is still relatively small, so that the coupling between different subbands may be important.
From an experimental point of view, it is a matter of fact that, while 2D features are clearly observed in photoluminescence excitation spectra of quantum wells, so far no “sharp" 1D features have been detected in the corresponding spectra of quantum wires. This is true despite the high quality of some of these structures, whose 1D character has been independently established by other methods [@Review_Wires; @Kapon; @Rinaldi; @Wegscheider]. However, the measured spectra are expected to be also strongly influenced by disorder-induced inhomogeneous broadening [@Vancouver] and, therefore, it has been sofar difficult to identify the role played by electron-hole correlation. From all these considerations, the following questions need to be answered: - [*Are electron-hole correlation effects playing a dominant role also in realistic quantum-wire structures ? If so, are these expected to hinder the possible advantages of the reduced dimensionality for relevant values of temperature and carrier density ?*]{} Theoretical approach ==================== To answer these questions, a full 3D approach for the analysis of Coulomb correlation in realistic quantum wires has been recently proposed [@PRL1; @PRL2]. This theoretical scheme, described in , is based on a generalization of the well-known semiconductor Bloch equations (SBE) [@Books] to the case of a multisubband wire. More specifically, by denoting with $\{k_z\nu^{e/h}\}$ the free electron and hole states ($k_z$ and $\nu^{e/h}$ being, respectively, the wavevector along the wire direction, $z$, and the subband index corresponding to the confinement potential in the $xy$ plane), we consider as kinetic variables the various distribution functions of electrons and holes $f^{e/h}_{k_z\nu}$ as well as the corresponding diagonal ($\nu^e = \nu^h = \nu$) interband polarizations $p^{ }_{k_z\nu}$. This kinetic description is a generalization to 1D systems of a standard approach for the study of bulk semiconductors [@Books] recently applied also to superlattice structures [@PRL_SL]. Within our $k_z\nu$ representation, the SBE, describing the time evolution of the above kinetic variables, are written as $$\begin{aligned} \label{SBE} \frac{\partial}{\partial t} f^{e/h}_{\pm k_z\nu} &=& \frac{1}{i\hbar} \left({\cal U}^{ }_{k_z\nu} p^*_{k_z\nu} - {\cal U}^*_{k_z\nu} p^{ }_{k_z\nu}\right) + \frac{\partial}{\partial t} f^{e/h}_{\pm k_z\nu}\biggl|_{inco} \nonumber \\ \frac{\partial}{\partial t} p^{ }_{k_z\nu} &=& \frac{1}{i\hbar} \left({\cal E}^e_{k_z\nu} + {\cal E}^h_{-k_z\nu} \right) p^{ }_{k_z\nu} + \frac{1}{i\hbar} {\cal U}^{ }_{k_z\nu} \left(1 - f^e_{k_z\nu} - f^h_{-k_z\nu}\right) + \frac{\partial}{\partial t} p^{ }_{k_z\nu}\biggl|_{inco} \ ,\end{aligned}$$ where ${\cal U}^{ }_{k_z\nu}$ and ${\cal E}^{e/h}_{k_z\nu}$ are, respectively, the renormalized fields and subbands, whose explicit form involves the full 3D Coulomb potential [@PRL1]. The $\pm$ sign in Eq. (\[SBE\]) refers to electrons ($e$) and holes ($h$), respectively, while the last terms on the rhs of Eq. (\[SBE\]) denote the contributions due to incoherent processes, e.g. carrier-carrier and carrier-phonon scattering. In this paper we focus on the quasi-equilibrium regime. Therefore, Fermi-Dirac $f^{e/h}_{k_z\nu}$ are assumed and the solution of the set of SBE (\[SBE\]) simply reduces to the solution of the polarization equation. This is performed by means of a direct numerical evaluation of the stationary solutions, i.e. polarization eigenvalues and eigenvectors. 
These two ingredients fully determine the absorption spectrum as well as the exciton wavefunction in 3D real space. In particular, the electron-hole correlation function vs. the relative free coordinate $z = z^e-z^h$ is given by: $g(z) \propto \sum_{k_z k'_z \nu} p^*_{k_z\nu} p^{ }_{k'_z\nu} e^{i(k'_z-k_z) z}$. The main ingredients entering our calculation are then the single-particle energies and wavefunctions, which in turn are numerically computed starting from the real shape of the 2D confinement potential deduced from TEM, as in Ref. [@Rinaldi]. Numerical results ================= The above theoretical scheme has been applied to realistic V- and T-shaped wire structures. In particular, here we show results for the GaAs/AlGaAs V-wires of Ref. [@Rinaldi] and the GaAs/AlAs T-shaped wire of Ref. [@Sakaki] (sample S2 \[$d_1=d_2=53\AA$\]). In order to better illustrate the role played by electron-hole correlation, in Fig. 1 we first show the linear-absorption spectra obtained when taking into account the lowest wire subband only. Here, results of our Coulomb-correlated (CC) approach are compared with those of the free-carrier (FC) model [@foot1]. For both V-shaped \[Fig. 1(a)\] and T-shaped \[Fig. 1(b)\] structures, electron-hole correlation introduces two important effects: First, the excitonic peak arises below the onset of the continuum, with different values of binding energies (about $12$ and $16$meV, respectively) in good agreement with experiments [@Rinaldi]. As discussed in , this difference is mainly ascribed to the different barrier height while the excitonic confinement is found to be shape (V vs. T) independent. Second, we find a strong suppression of the 1D DOS singularity, in agreement with previous investigations based on simplified 1D models [@Ogawa]. Let us now discuss the physical origin of the dramatic suppression of the band-edge singularity in the CC absorption spectrum \[solid lines in Fig. 1\]. Since the optical absorption is proportional to the product of the electron-hole DOS and the oscillator strength (OS), we have studied these two quantities separately. Figure 2(a) shows that the quantity which is mainly modified by CC is the OS. Here, the ratio between the CC and FC OS is plotted as a function of the excess energy with respect to the band edge (solid line). This ratio is always less than one and, in agreement with the results of 1D models [@Ogawa], it goes to zero at the band edge. Such vanishing behaviour is found to dominate the 1D DOS singularity (dashed line) and, as a result, the absorption spectrum exhibits a regular behaviour at the band edge \[solid lines in Fig. 1\]. Since the OS reflects the value of the correlation function $g(z)$ for $z = 0$ [@Ogawa], i.e. the probability of finding electron and hole at the same place, the vanishing behaviour of the OS in Fig. 2(a) seems to indicate a sort of electron-hole “effective repulsion”. This is confirmed by a detailed analysis of the electron-hole correlation function, $g(z)$, reported in Fig. 2(b). Here $g(z)$ (corresponding to the square of the exciton wavefunction in a 1D model) is plotted for three different values of the excess energy. We clearly see that the values of $g$ for $z = 0$ correspond to the values of the OS ratio at the same energies \[solid line in Fig. 2(a)\]. Moreover, we notice the presence of a true “electron-hole correlation hole”, whose spatial extension strongly increases when approaching the band edge. 
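As an aside, the qualitative content of this single-subband picture can be reproduced with a minimal sketch: the linear polarization equation is diagonalized for a regularized 1D Coulomb attraction in the spirit of Ogawa and Takagahara, and $g(z)$ of the bound state is read off from the eigenvector. The GaAs-like effective-mass and dielectric parameters and the cutoff $z_0$ are illustrative assumptions; this is not the full three-dimensional multisubband calculation used for the figures.

```python
import numpy as np

# SI constants
hbar, m0, e, eps0 = 1.054571817e-34, 9.1093837015e-31, 1.602176634e-19, 8.8541878128e-12

# Illustrative GaAs-like parameters (assumptions, not the wire parameters of the text)
mu   = 0.058 * m0        # reduced electron-hole mass
epsr = 12.9              # background dielectric constant
z0   = 3.0e-9            # ad hoc regularization length of the 1D Coulomb potential

# Relative coordinate z = z_e - z_h on a finite-difference grid
N  = 2001
zz = np.linspace(-200e-9, 200e-9, N)
dz = zz[1] - zz[0]

# Single-subband polarization (Wannier) equation in the linear, low-density limit:
# kinetic term plus regularized 1D electron-hole attraction
t = hbar**2 / (2.0 * mu * dz**2)
H = np.diag(2.0 * t - e**2 / (4.0 * np.pi * eps0 * epsr * (np.abs(zz) + z0)))
H += np.diag(-t * np.ones(N - 1), 1) + np.diag(-t * np.ones(N - 1), -1)

E, psi = np.linalg.eigh(H)                       # polarization eigenvalues/eigenvectors
print("exciton binding energy ~ %.1f meV" % (-E[0] / e * 1e3))

# Electron-hole correlation function of the bound state, g(z) ~ |psi(z)|^2;
# g(0) tracks the oscillator strength (probability of finding e and h at the same place)
g = np.abs(psi[:, 0])**2 / dz
print("g(0) = %.3e m^-1" % g[N // 2])
```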
The above analysis provides a positive answer to our first question: also for realistic quantum-wire structures electron-hole correlation leads to a strong suppression of the 1D band-edge singularity in the linear-absorption spectrum. Contrary to the 2D and 3D case, the Sommerfeld factor, i.e. the ratio between the CC and FC absorption, results to be less than unity. Finally, in order to answer our second and more crucial question, we must consider that most of the potential quantum-wire applications, i.e. 1D lasers and modulators, operate in strongly non-linear-response regimes [@Review_Wires]. In such conditions, the above linear-response analysis has to be generalized taking into account additional factors as: (i) screening effects, (ii) band renormalization, (iii) space-phase filling. We want to stress that all these effects are already accounted for in our SBE (\[SBE\]) [@PRL1]. Figure 3 shows the first quantitative analysis of non-linear absorption spectra of realistic V-shaped wire structures for different carrier densities at room temperature. In Fig. 3(a) we show as a reference the results obtained by including the lowest subband only. In the low-density limit (case A: $n = 10^4$ cm$^{-1}$) we clearly recognize the exciton peak. With increasing carrier density, the strength of the excitonic absorption decreases due to phase-space filling and screening of the attractive electron-hole interaction, and moreover the band renormalization leads to a red-shift of the continuum. Above the Mott density (here about $8*10^{5}cm^{-1}$), the exciton completely disappears. At a density of $4*10^6$ cm$^{-1}$ (case D) the spectrum already exhibits a negative region corresponding to stimulated emission, i.e. gain regime. As desired, the well pronounced gain spectrum extends over a limited energy region (smaller than the thermal energy); However, its shape results to differ considerably from the ideal FC one. The FC result is plotted in the same figure and marked with diamonds; note that it has been shifted in energy to align the onset of the absorption, to allow a better comparison of the line-shapes [@foot2]. The typical shape of the band-edge singularity in the ideal FC gain spectrum results to be strongly modified by electron-hole correlation. Also at this relatively high carrier density the OS corresponding to the CC model goes to zero at the band edge as for the low-density limit \[Fig. 2(a)\]. As a consequence, the FC peak is strongly suppressed and only its high-energy part survives. The overall effect is a broader and less pronounced gain region. Finally, Fig. 3(b) shows the non-linear spectra corresponding to the realistic case of a 12-subband V-shaped wire. In comparison with the single-subband case \[Fig3(a)\], the multisubband nature is found to play an important role in modifying the typical shape of the gain spectra, which for both CC and FC models result to extend over a range much larger than that of the single-subband case for the present wire geometry \[Fig. 3(a)\]. In addition, the Coulomb-induced suppression of the single-subband singularities, here also due to intersubband-coupling effects, tends to further reduce the residual structures in the gain profile. Thus, even in the ideal case of a quantum wire with negligible disorder- and scattering-induced broadening, our analysis indicates that, for the typical structure considered, the shape of the absorption spectra over the whole density range differs considerably from the relatively sharp FC spectrum of Fig. 1. 
Conclusions =========== We have presented a theoretical analysis of the linear and non-linear optical properties of realistic quantum wires. Our approach is based on a numerical solution of the semiconductor Bloch equations describing the multisubband 1D system. We have applied such approach to typical T- and V-shaped structures, whose parameters reflect the current state-of-the-art in the quantum-wire fabrication. Our results allow us to reconsider the perspectives of quantum-wire physics and technology. In particular, comparing the non-linear absorption spectra of Fig. 3(a) and (b), we see that the broad gain region in (b) is mainly ascribed to the multisubband nature or, more precisely, to the small intersubband splitting compared to the single-subband gain range in (a). This confirms that, in order to obtain sharp gain profiles, one of the basic steps in quantum-wire technology is to produce structures with increased subband splitting. However, the disorder-induced inhomogeneous broadening, not considered here, is known to increase significantly the spectral broadening [@Vancouver] and this effect is expected to increase with increasing subband splitting. Therefore, extremely-high-quality structures (e.g. single-monolayer control) seem to be the only possible candidates for successful quantum-wire applications. Acknowledgments {#acknowledgments .unnumbered} =============== We thank Roberto Cingolani and Guido Goldoni for stimulating and fruitful discussions. This work was supported in part by the EC Commission through the Network “Ultrafast”. See e.g. J. Shah, [*Ultrafast Spectroscopy of Semiconductors and Semiconductor Nanostructures*]{} (Springer, Berlin, 1996); H. Haug and S.W. Koch, [*Quantum Theory of the Optical and Electronic Properties of Semiconductors*]{}, 3rd Edn., World Scientific, Singapore (1994). For a review see: R. Cingolani and R. Rinaldi, Rivista Nuovo Cimento [**16**]{}, 1 (1993). Tetsuo Ogawa and Toshihide Takagahara, Phys. Rev. [**B43**]{}, 14325 (1991); ibid. [**B44**]{}, 8138 (1991). E. Kapon et al., Phys. Rev. Lett. [**63**]{}, 430 (1989). R. Rinaldi et al., Phys. Rev. Lett. [**73**]{}, 2899 (1994). L. Pfeiffer et al., Appl. Phys. Lett. [**56**]{}, 967 (1990); W. Wegscheider et al., Appl. Phys. Lett. [**65**]{}, 2510 (1994). T. Someya, H. Akiyama, and H. Sakaki, Phys. Rev. Lett. [**76**]{}, 2965 (1996). E. Molinari et al., in [*Proc. 22nd Internat. Conf. on the Physics of Semiconductors*]{}, edited by D.J. Lockwood, World Scientific, Singapore (1994), p. 1707. F. Rossi and E. Molinari, Phys. Rev. Lett. [**76**]{}, 3642 (1996); Phys. Rev. B53, 16462 (1996). F. Rossi, G. Goldoni, and E. Molinari, Phys. Rev. Lett. [**78**]{}, 3527 (1997). T. Meier, F. Rossi, P. Thomas, and S.W. Koch, Phys. Rev. Lett. [**75**]{}, 2558 (1995). All the spectra were computed assuming a phenomenological energy broadening of 2 meV. This small broadening (as compared to that of realistic wire structures [@Vancouver]) allows to better identify the role played by electron-hole correlation. As for the case of 3D and 2D systems (see e.g. R. Cingolani et al., Adv. Phys. [**40**]{}, 535 (1991); Phys. Rev. [**B48**]{}, 14331 (1993), and refs. therein), our analysis seems to indicate a moderate red-shift of the band gap. However, in order to provide a detailed analysis of the band-gap renormalization, a more refined screening model (including non-diagonal terms of the dielectric tensor as well as dynamic-screening effects) is required. 
[^1]: phone: +39 (59) 586072 — fax: +39 (59) 367488 – E-mail: Rossi@UniMo.It
  1.0cm [**Search for energetic cosmic axions utilizing terrestrial/celestial magnetic fields**]{} 1.0cm [K. ZIOUTAS ]{} 0.4cm [*Physics Department, University of Thessaloniki, GR-54006 Thessaloniki, Greece.*]{} 0.5cm [D. J. THOMPSON ]{} 0.4cm [*NASA/Goddard Space Flight Center, Code 660, Greenbelt MD 20771, USA.*]{} 0.5cm 0.5cm [E. A. PASCHOS ]{} 0.4cm [*Institut für Physik, Universität Dortmund, D-44221 Dortmund, Germany.*]{} 1.0cm [*12. 8. 1998* ]{} 0.6cm [**Abstract.**]{} [Orbiting $\gamma$-detectors combined with the magnetic field of the Earth or the Sun can work parasitically as cosmic axion telescopes. The relatively short field lengths allow the axion-to-photon conversion to be coherent for $m_{axion} \sim 10^{-4}$ eV, if the axion kinetic energy is above $\sim 500$ keV (Earth’s field), or, $\sim 50$ MeV (Sun’s field), allowing thus to search for axions from $e^+e^-$ annihilations, from supernova explosions, etc. With a detector angular resolution of $\sim 1^o$, a more efficient sky survey for energetic cosmic axions passing [*through the Sun*]{} can be performed. Axions or other axion-like particles might be created by the interaction of the cosmic radiation with the Sun, similarly to the axion searches in accelerator beam dump experiments; the enormous cosmic energy combined with the built-in coherent Primakoff effect might provide a sensitive detection scheme, being out of reach with accelerators. The axion signal will be an excess in $\gamma$-rays coming either from a specific celestial place behind the Sun, e.g. the Galactic Center, or, from any other direction in the sky being associated with a violent astrophysical event, e.g. a supernova. Earth bound detectors are also of potential interest. The axion scenario also applies to other stars or binary systems in the Universe, in particular to those with superstrong magnetic fields.]{} 1.8cm ———————————————————- [*Emails*]{} : Konstantin.Zioutas@cern.ch             DJT@bartok.gsfc.nasa.gov             Paschos@hal1.physik.uni-dortmund.de 1. Introduction {#introduction .unnumbered} =============== An attractive solution of the strong CP problem invokes a new symmetry, the Peccei-Quinn (PQ) symmetry (U$_{PQ}$(1)). The spontaneous breaking of this new symmetry predicts the existence of a light neutral pseudoscalar particle, the [*axion*]{}, which is closely related to the neutral pion [@pq; @ww]. In fact, there are good reasons to believe that if the PQ mechanism is responsible for preserving $CP$ in the strong interactions, then the [*axion*]{} is the dark matter [@kamion], i.e. [*axions*]{} may exist as primordial cosmic relics copiously produced in the early Universe, and eventually thermalized in a way similar to the 2.7$^o$K cosmic background radiation. The [*axion*]{} also arises in supersymmetry and superstring theories. Thus, the [*axion*]{} is one of the leading and promising non-baryonic candidate for the ubiquitous dark matter in the universe [@ggr]. Astrophysical observations and laboratory experiments leave open an [*axion*]{} rest mass window around $m_a \approx 10^{-4}~$eV (within $\pm$1-2 orders of magnitude). For all these reasons, [*axions*]{} have received much attention in elementary particle (astro)physics. The [*axion*]{} decay into two photons (${\sf \it a \rightarrow \gamma \gamma}$) is the reaction mainly used to search for them. 
Inside matter or in a magnetic field in vacuum, the [*axion*]{} couples to a virtual photon (Primakoff effect), producing a real photon ($\gamma$), which can be detected: $$a ~ +~ \gamma_{virtual} \rightleftharpoons \gamma$$ The [*axion*]{} behaves like a very weakly interacting photon or pion ($\pi^0$), and, in a reaction, it can replace a magnetic dipole $\gamma$ or a $\pi^0$. Energetic [*axions*]{} with mean thermal energy equal to $\sim$4 keV [@dml; @zzz] or $\sim$ 160 MeV [@emrt] are possibly copiously produced via the Primakoff effect inside the Sun or during a Supernova explosion, respectively. They could also be produced in astrophysical beam dumps [@gaisser], similar to the beam dump in accelerators, replacing energetic $\pi^0$’s or $\gamma$’s in the electromagnetic/hadronic cascade reactions involved. Therefore, energetic [*axions*]{} have already been searched for in high energy neutrino experiments [@oldneu]. A not so unusual beaming effect can compensate for the large distance to the Earth. In other words, it is not so unreasonable to expect high energy cosmic [*axions*]{}, beyond those expected to be emitted from a supernova. Finally, the existence of particles beyond the Standard Model with similar couplings cannot be excluded. 2. Previous work {#previous-work .unnumbered} ================ The stimulation for this proposal comes from two recent works, which appeared almost simultaneously, by two groups [@emrt] searching for energetic [*axions*]{} from SN1987A with data from orbiting detectors. However, these searches could have provided an [*axion*]{} signature only for specific parameter values, i.e., for $m_a \leq 10^{-9}$eV and an [*axion*]{}-to-photon coupling constant $g_{a\gamma \gamma} \geq 3\cdot 10^{-12}$ GeV$^{-1}$, assuming a galactic magnetic field of $\sim 2~\mu gauss$ over a coherence length of $\sim 1~kpc~=~3\cdot 10^{19}~m$, where the [*axion*]{}-to-photon conversion via the coherent Primakoff effect can take place. In such a case, an orbiting detector pointing to the SN1987A position in the sky should have measured an excess of energetic photons during this historical supernova observation on Earth, provided the number of emitted [*axions*]{} was sufficient to trigger the detector. These two groups [@emrt] have also estimated the hypothetical thermal [*axion*]{} spectrum from the supernova, with the main unknown being the [*axion*]{}-to-photon coupling constant ($g_{a\gamma \gamma}$). 3. The suggestion {#the-suggestion .unnumbered} ================= The concept of this work is similar to that given in ref. [@emrt]. The main difference is the choice of the magnetic field between the [*axion*]{} source and the $\gamma-$ray detector, where the [*axion*]{} conversion takes place; we suggest using the solar ($\sim 2 ~gauss$) and/or the terrestrial ($\sim 0.5 ~gauss$) magnetic fields. In order to justify this choice, we give below two relations, which describe the [*axion*]{} interaction inside a static magnetic field. Firstly, the [transverse]{} magnetic field strength [B]{}, its length [L]{} and the [*axion*]{}-to-photon coupling constant $g_{a\gamma \gamma}$ are the fundamental parameters in the calculation of the coherent [*axion*]{}-to-photon conversion inside [B]{}.
The probability that an energetic [*axion*]{} entering a 1 T$\cdot$m magnetic field perpendicularly is converted into a photon is [@dml; @zzz; @grls] $$P_{a\rightarrow \gamma} = \left(\frac{gBL}{2}\right)^2= 2.5\times 10^{-21} \left[\frac{ {\sf B}}{1T}\frac{{\sf L}}{1m} \frac{{\sf g_{a\gamma \gamma}}}{10^{-10}GeV^{-1}}\right]^2 .$$ It is interesting to notice the [$(B\cdot L)^2$]{} dependence of the coherent [*axion*]{}-to-photon conversion rate. Secondly, for massive [*axions*]{}, in order to fulfil the coherence condition underlying relation (2), i.e. to exclude destructive [*axion*]{}-photon interference over the magnetic field length ([L]{}), the [*axion*]{} rest mass ($m_a$) and its total energy ($E_a = E_\gamma =\hbar \omega$) must satisfy a second relation [@dml; @zzz] $$L~ \leq~ \frac{(2\pi \hbar c)\cdot (\hbar \omega)} {|m_a^2-m_{\gamma}^2|c^4}$$ In this relation, $m_{\gamma}$ reflects the acquired mass of the photons inside matter, which depends on the electron density: $m_{\gamma}[\mu eV]\approx 0.37\times \sqrt{\rho_e[10^8/cm^3]}$. Thus, for an electron density $\rho_e \leq 10^8/cm^3$ (i.e. $m_{\gamma}\ll m_a$), which can be the case with the considered terrestrial and solar regions [@allen], relation (3) becomes $$L~ \leq~ \frac{(2\pi \hbar c)\cdot (\hbar \omega)} {m_a^2c^4}$$ Inserting $m_a\approx 10^{-4}$eV and $\bar{E_a}\approx 160$ MeV into this relation, the resulting coherence length (for $E_a \geq 50$ MeV) can be as large as $$L ~\approx ~ 4\cdot 10^{9}~m ~\approx~ 6R_{\odot}$$ Similarly, for $m_a\approx 10^{-4}$eV and $E_a~=~511~keV$ it follows: $$L ~\approx ~ 10^{7}~m ~\approx~ 1R_{\oplus}$$ One should notice that the study in ref. [@emrt] is sensitive to [*axions*]{} with rest mass below $10^{-9}$eV, because they used the much longer coherent-galactic-magnetic field ($\sim 1~kpc$); this mass range is far below the open [*axion*]{} mass window ($m_a \approx 10^{-(4\pm 2)}$ eV). Thus, taking into account the coherence lengths given in relations (5) and (6), the Earth’s magnetic field is in this respect just appropriate for an [*axion*]{} threshold energy of $\sim$ 500 keV, while the more efficient solar magnetic field is suited to high energy [*axions*]{} ($E_a \geq 50$ MeV). We take for the terrestrial and solar $B\cdot L$ values $$(B\cdot L)_{\oplus} ~\approx~ 300~T\cdot m, ~~~and,~~~ (B\cdot L)_{\odot}~\approx ~3\cdot 10^4~T\cdot m ~,$$ respectively. It is worth mentioning that solar flares with $\sim kgauss$ magnetic fields and some $10^3~ km$ in size can have $B\cdot L \approx 10^5-10^6~ T\cdot m$. For comparison, one should keep in mind that laboratory magnetic [*axion*]{} spectrometers use magnetic fields with $B\cdot L \approx 10-100~T\cdot m$ at best [@dml; @zzz; @shigetaka]; in spite of the $\sim 2-10$ T field strength, they nevertheless have to be short (see relation (4)), because of the much lower expected solar [*axion*]{} energy ($\sim$ 4 keV).
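For reference, the numerical coefficient of relation (2) can be checked in a few lines of Python (a sketch added here for illustration; the natural-unit conversion factors are standard values):

```python
# Natural units (hbar = c = 1): 1 T ~ 195.35 eV^2, 1 m ~ 5.0677e6 eV^-1
TESLA_TO_EV2, METER_TO_INV_EV = 195.35, 5.0677e6

g  = 1e-10 * 1e-9                             # g_agamma = 1e-10 GeV^-1, expressed in eV^-1
BL = 1.0 * TESLA_TO_EV2 * METER_TO_INV_EV     # B*L = 1 T*m, expressed in eV (~1 GeV)

P = (g * BL / 2.0) ** 2
print("P(a->gamma) for 1 T*m at g = 1e-10 GeV^-1 : %.1e" % P)   # compare with 2.5e-21 in relation (2)
```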
With relation (2) we can estimate the conversion probability $P_{a\rightarrow \gamma}$ for an energetic [*axion*]{} propagating inside the Sun’s, or, Earth’s magnetic field : $$P_{a\rightarrow \gamma}^{\odot} = 2.5\times 10^{-12}\cdot \left[\frac{ {\sf (B\cdot L)}}{(3\cdot 10^{4}~T\cdot m)} \frac{{\sf g_{a\gamma \gamma}}}{10^{-10}GeV^{-1}}\right]^2$$ and $$P_{a\rightarrow \gamma}^{\oplus} = 2.5\times 10^{-16}\cdot \left[\frac{ {\sf (B\cdot L)}}{(300~T\cdot m)} \frac{{\sf g_{a\gamma \gamma}}}{10^{-10}GeV^{-1}}\right]^2$$ The Earth’s magnetic field allows in principle for a simultaneous $\sim 2\pi$ survey of the sky, even though the field of view (f.o.v.) of an individual (orbiting) detector is smaller. This is not the case with the solar field; as the Earth and the Sun change continuously their orientation in space, one can scan ‘[*through the sun*]{}’ a big part of the sky with $\sim 0.5^o$ opening angle. We consider an effective solar [*axion*]{} conversion region of a few solar radii (see relation (5)) including the Sun. To be more specific, we give a numerical example for a [supernova]{}, which might be taken as reference for other violent astrophysical events. The expected integrated [*axion*]{} flux ($\Phi_a$) on Earth from a supernova explosion, which lasts some $\sim 10$-20 seconds and is at a distance $D\approx6~kpc$, is [@emrt] $$\Phi_a(\bar{E_a}\approx 160~MeV)~\approx~2.5\cdot 10^9~axions\cdot cm^{-2}\cdot \left[ \frac{g_{a\gamma \gamma}}{10^{-10}GeV^{-1}} \right]^2 \cdot \left[\frac{6~kpc}{D}\right]^2,$$ with the [*axions*]{} created from the scattering of thermal photons on protons through the Primakoff effect ([*$p\gamma \rightarrow pa$*]{}). The energy released by the [*axions*]{} is $\sim 100$ times smaller than the energy released through the escaping neutrino burst. Combining relations (8), (9) and (10), we estimate the signal $S^{\odot} = P_{a\rightarrow \gamma}^{\odot}\times \Phi_a$ for an orbiting detector. The flux of [*axions*]{} from the supernova converts into photons of $\sim$160 MeV in the solar field and gives the final signal : $$S^{\odot}~\approx~6\cdot 10^{-3}~cm^{-2}\cdot \left[\frac{g_{a\gamma \gamma}}{10^{-10} GeV^{-1}}\right]^4 ~\approx~8~cm^{-2}\cdot \left[\frac{g_{a\gamma \gamma}}{6\cdot 10^{-10} GeV^{-1}}\right]^4$$ Similarly, the intervening terrestrial field yields : $$S^{\oplus}~\approx~6\cdot 10^{-7}~cm^{-2}\cdot \left[\frac{g_{a\gamma \gamma}}{10^{-10} GeV^{-1}}\right]^4 ~\approx~8\cdot 10^{-4}~cm^{-2}\cdot \left[\frac{g_{a\gamma \gamma}}{6\cdot 10^{-10} GeV^{-1}}\right]^4$$ Notice the $(g_{a\gamma \gamma})^4$ dependence of the signal, while $g_{a\gamma \gamma} \leq 6\cdot 10^{-10} GeV^{-1}$ is the presently best experimental limit for the coupling constant [@shigetaka]. In a supernova explosion electrons and [positrons]{} are created by the interacting neutrinos above the neutrinosphere ($\rho\leq 10^{11} g/cm^3$) via the reactions $\nu \bar{\nu} \rightarrow e^+e^-,~ \nu_e p\rightarrow pe^- $ and $\bar{\nu_e} p\rightarrow ne^+$, the dominant processes which generate neutrino opacity [@rcd]. A similar situation might arise in the case of two merging neutron stars [@eichler]. While the annihilation $\gamma$-line of those positrons is completely shielded, [*axions*]{} created in the $e^+e^-$-annihilation can escape. It is worth remembering that 511 keV [*axion*]{} search, following the reaction $e^+e^-\rightarrow \gamma a$, has been performed already in laboratory experiments with positron sources [@carboni]. 
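Folding relation (10) into the conversion probabilities of relations (8) and (9) reproduces the signal estimates (11) and (12); the following self-contained sketch (using the same standard unit conversions as above) makes the $g^4_{a\gamma\gamma}$ scaling explicit:

```python
TESLA_M_TO_GEV = 195.35 * 5.0677e6 * 1e-9       # 1 T*m ~ 0.99 GeV  (hbar = c = 1)

def P_conv(g_per_GeV, BL_Tm):
    """Relation (2): coherent axion-to-photon conversion probability."""
    return (g_per_GeV * BL_Tm * TESLA_M_TO_GEV / 2.0) ** 2

def sn_fluence(g_per_GeV, D_kpc=6.0):
    """Relation (10): integrated supernova axion fluence at Earth (axions/cm^2)."""
    return 2.5e9 * (g_per_GeV / 1e-10) ** 2 * (6.0 / D_kpc) ** 2

for g in (1e-10, 6e-10):
    print("g = %.0e GeV^-1 :  S_sun ~ %.1e cm^-2 ,  S_earth ~ %.1e cm^-2"
          % (g, P_conv(g, 3e4) * sn_fluence(g), P_conv(g, 300.0) * sn_fluence(g)))
```

The small offsets with respect to the numbers quoted in relations (11) and (12) come only from the rounding of the coefficient in relation (2).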
Needless to say, the same annihilation process could occur at those obscured, intense positron sites in the Universe, while the intervening terrestrial or any other magnetic field works as an [*axion*]{}-to-photon converter (see section 5). Obviously, the solar signature ($S^{\odot}$) will show up when the detector, the Sun and the [*axion*]{} source are aligned within $\sim 0.5^o$, in which case the f.o.v. of the detector sees the [*axion*]{} source. The geometry with the Earth’s magnetic field is actually free from such constraints; however, the conversion efficiency is smaller (see relations (8) and (9)). In other words, taking into account the suggested [*axion*]{} conversion inside the terrestrial or solar magnetic field in the evaluation of $\gamma$-ray data, an orbiting gamma detector is also an [*axion*]{} telescope, continuously scanning the sky for such events. [**Background:**]{} The isotropic cosmic $\gamma$-ray flux above $\sim$[100 MeV]{} is $\sim 10^{-5} \gamma ~'s/cm^2\cdot s\cdot sr$ while that from the Galactic plane is by a factor of $\sim 10$ higher [@moon]. Furthermore, the electromagnetic/hadronic interaction of the cosmic radiation with the Sun must give rise to energetic photons, and in addition there are $\gamma$-rays from solar activity, e.g. flares [@flares]. So far, the orbiting EGRET detector has measured an excess of high-energy gamma radiation from the moon. The lunar flux above 100 MeV is $\sim 5\cdot 10^{-7} \gamma ~'s/cm^2\cdot s$, while the limit obtained for the quiet Sun is below $\sim 2\cdot 10^{-7} \gamma ~'s/cm^2\cdot s$ [@moon]. The [511 keV]{} flux from the Galactic Center is $\sim 2\cdot 10^{-4} \gamma ~'s/cm^2\cdot s$, with the $3\gamma$ annihilation continuum below 511 keV being by a factor $\sim 5$ higher [@purcell]. The observed Galactic positron annihilation rate is $\geq 10^{43}/sec$ [@rr]; those positrons can be created through the decay of radioactive nuclei produced by supernovae, novae, and the massive Wolf-Rayet stars with violent surface activity, but also from $\gamma$-$\gamma$ pair production in the vicinity of an accreting black hole [@purcell], whose violence is probably without analog. [**The axion signature**]{}: [a)]{} The [*axion*]{} signal associated with the Earth’s field will be burst-like and in coincidence with some violent astrophysical event, e.g. a supernova. This is the type of signal discussed in ref. [@emrt]. [b)]{} Cosmic [*axions*]{} converted in the solar field can be identified as an excess of $\gamma$-rays coming from the region of the Sun, provided the direction from the detector to the Sun points at the same time to a specific source outside the Sun, e.g. the Galactic Center. If a $\gamma$-ray excess is seen coincident with a radio/optical/x-ray flare, then the gamma radiation could be from the flare. These flares are monitored continuously by the GOES satellite, so screening out solar flare events is straightforward. [^1] 4. Orbiting and Earth bound detectors {#orbiting-and-earth-bound-detectors .unnumbered} ===================================== The requirements for the high energy gamma detector are actually obvious from the previous considerations of the potential cosmic [*axion*]{} signature. We give here in summary the main optimum specifications: 1) threshold energy $\sim 0.5/50~MeV$, with a modest energy resolution being sufficient, 2) background suppression, 3) large detector aperture/f.o.v., 4) angular resolution of a few degrees and 5) photon identification.
For example, EGRET satisfies most of the above requirements. The planned GLAST project [@glast1], with an effective area $\sim 8000~cm^2$ (above 1 GeV), f.o.v. covering $\sim 20\%$ of the sky, angular resolution of $2.5^o $ (at 100 MeV) and $0.1^o$ (above 10 GeV), energy range 10 MeV to 300 GeV, energy resolution $\sim 10\%$, will be a factor of $\sim 30$ more sensitive than EGRET and it can become the best potential high energy cosmic [*axion*]{} antenna in orbit. For the search of the 511 keV line different orbiting detectors come into question. The OSSE detector is certainly the best instrument in orbit since 1991 [@osse]; it has $\sim 2000~cm^2$ aperture at 511 keV, f.o.v. $3.8^o\times 11.4^o$, an energy resolution of 8$\%$ at 661 keV, while its energy range from 50 keV to 10 MeV is just complementary to the GLAST performance. The planned European mission INTEGRAL will also be sensitive to 511 keV [*axions*]{} (energy range 15 keV to 10 MeV); its f.o.v. will be $4.8^o-16^o$ and its targets of observation will include the Galactic Center. Further, one should reconsider data from detectors, which have had within their f.o.v. the region of SN1987A, eventhough there is as yet no [*axion*]{} flux estimate at 511 keV from astrophysical places like a supernova or other source in the sky. Following the same reasoning at high energies, [Earth bound detectors]{} come also into question, provided they have the required photon identification signature, and, the directional reconstruction of the incident photon. However, a sky survey ‘through the Sun’ requires a solar blind $\gamma$-detector, i.e., the detection technique can not use atmospheric Cherenkov or scintillation light in the visible. The high energy $\gamma$-radiation seen from the Moon with EGRET [@moon] and the observed shadowing of cosmic rays by the Sun and the Moon, with surface [@surface] and deep underground detectors [@ambrosio], show the feasibility of this kind of investigations. [^2] 5. Discussion {#discussion .unnumbered} ============= We have used a supernova explosion as a representative astrophysical violent event, for which a possible [*axion*]{} involvement below $\sim 300$ MeV has been estimated already quantitatively. However, it is reasonable to assume that if [*axions*]{} or any other [*axion*]{}-like particles exist, then, they could be copiously created in other flare stars or in transient brightenings, which we know to release comparable or greater energy. This work suggests primarily to utilize the terrestrial and the solar magnetic field as [*axion*]{}-to-photon converters, in order to perform with orbiting detectors a sky survey, searching for cosmic [*axions*]{} with energy above $\sim 0.5/50$ MeV. An [*axion*]{} signature can show-up either as a burst, or as an event rate being proportional to the intervening $(B\cdot L)^2$ value between the hypothetical [*axion*]{} source and the detector. This can be the case, for example, with the Earth’s field by comparing gamma rates observed at different distances from the Earth. For example, the INTEGRAL mission will fly between $\sim 10000~km$ and $\sim 150000~km$. We also mention a few other places in the sky as potential sources for [*axions*]{}. [**a)**]{} [astrophysical ‘beam dumps’]{} [@gaisser], e.g. 
relativistic ‘fireballs’, jets, etc., which seem to be associated with the as yet enigmatic Gamma Ray Bursts (GRBs), the most powerful explosions in the Universe after the Big Bang: the released energy (some $10^{52\pm 2}~ergs$) is probably much more than that from a supernova explosion [@BP; @dar]. [**b)**]{} the [Galactic Center]{} (GC), which is one of the most dynamical regions in our Galaxy, with numerous activities remaining hidden. For example, EGRET observed a $\gamma$-ray source luminosity $L\approx 10^4 L_{\odot}$ in the energy range 30 MeV to 20 GeV [@egret1]. Further, the recent OSSE discovery of a giant cloud of positrons extending $\sim 1~kpc$ above the GC was unexpected, since antimatter is thought to be relatively rare in the Universe. [**c)**]{} [close binaries]{}, e.g. cataclysmic variables, hypernovae [@BP], etc. In spite of the lack of predictive theoretical models for energetic cosmic [*axions*]{} beyond those expected to be emitted from a supernova, we propose to implement this kind of investigation in the different photon detectors in orbit or on Earth. The realization of these investigations requires only an appropriate data re-evaluation and/or trigger. Such sky surveys might unravel novel physical processes occurring deep inside a star or our Galaxy. For cosmic [*axions*]{} with energy far above that expected to be emitted from a supernova, i.e. $E_a\gg 10^8$eV, also the $\sim kpc$ galactic magnetic fields considered in ref. [@emrt] can be very efficient [*axion*]{} converters for an [*axion*]{} rest mass in the open [*axion*]{} mass range (see relation (4)). Of course, no orbiting detector can measure the energy of such very energetic photons. However, for the purpose of this suggestion, a mere photon identification might well be sufficient as a first signature. An additional perspective, of no minor importance, is the possibility of creating and detecting, during the quasi ‘beam dump’ of the cosmic radiation into the Sun, [*axions*]{} or other new weakly interacting particles with similar couplings [@nomad1]. Because of the huge thickness of the Sun, even a very weakly interacting component of the cosmic radiation might interact there, which is beyond reach in accelerator beam dump experiments. In addition, the advantage of this configuration compared with accelerator experiments is the much higher cosmic energy, combined with the built-in highly efficient coherent [*axion*]{}-to-photon conversion inside the solar magnetic field. Finally, inspired by the celebrated microlensing phenomenon [@BP0], it does not escape our attention that the considered alignment between the cosmic [*axion*]{} source, the solar field and the $\gamma$-detector can also happen with another magnetic star in the sky replacing the Sun [@ra]. The [*axion*]{} interaction can be enhanced in stars having strong magnetic fields, e.g. for $B\geq 10^{12}~gauss $ [@BP; @magnetars] around a neutron star, or $B\leq 10^9~gauss$ around a white dwarf [@ra], where the $B\cdot L$-values can be above $\sim 10^{12}~T\cdot m$; for certain parameter values, the [*axion*]{}-to-photon conversion efficiency, and [*vice versa*]{}, might reach reasonable values. In particular, the [*axion*]{} scenario could be present in eclipsing (close) binaries with superstrong magnetic fields, whose configuration might imply a high degree of alignment with the Earth. The small size of a neutron star allows a coherent Primakoff interaction also in the x-ray region [@morris].
Therefore, if [*axions*]{} exist, they could be responsible for some of the time variable or transient cosmic $\gamma$-ray sources, including GRBs and Soft Gamma-ray Repeaters. 1.2cm [**Acknowledgements**]{} 0.4cm Two of us (E.A.P. and K.Z.) like to thank NATO for a cooperative research grant. [99]{} R.D. Peccei, H.R. Quinn, Phys. Rev. [**D16**]{} (1977); Phys. Rev. Lett. [**38**]{} (1977) 1440. F. Wilczek, Phys. Rev. Lett. [**40**]{} (1978) 279; S. Weinberg, [*ibid*]{} [**48**]{} (1978) 223. M. Kamionkowski, CU-TP-866, CAL-648, hep-ph/9710467 (24. 10. 1997). G. G. Raffelt, Proc. XVIII Intern. Conference on Neutrino Physics and Astrophysics, ed. Y. Suzuki and Y. Totsuka, Takayama, Japan (June 1998). See also hep-ph/9806506 (27.6.1998), and references therein. D. M. Lazarus, G. C. Smith, R. Cameron, A. C. Melissinos, G. Ruoso, Y. K. Semertzidis, F. A. Nezrick, Phys. Rev. Lett. [**69**]{} (1992) 2333. K. Zioutas et al., astro-ph/9801176 (18. 1. 1998), N.I.M. A (1998) in press. E. Masso and R. Toldra, Phys. Rev. [**D52**]{} (1995) 1755, J. A. Grifols, E. Masso, R. Toldra, Phys. Rev. Lett. [**77**]{} (1996) 2372, E. Masso, astro-ph/9704056 and J. W. Brockway, E. D. Carlson, G. G. Raffelt, Phys. Lett. [**B383**]{} (1996) 439. T. K. Gaisser, [*Cosmic Rays and Particle Physics*]{}, Cambridge University Press (1990) 177. T. Hansl et al., Phys. Lett. [**74B**]{} (1978) 139, P. Alibran et al., [*ibid*]{} [**74B**]{} (1978) 134, P.C. Bosetti et al., [*ibid*]{} [**74B**]{} (1978) 143, F. Bergsma et al., [*ibid*]{} [**157B**]{} (1985) 458. G. Raffelt and L. Stodolsky, Phys. Rev. [**D37**]{} (1988) 1237; G. Raffelt, Phys. Rev. [**D33**]{} (1986) 897. C. W. Allen, [*Astrophysical Quantities*]{}, 2nd edition, University of London, The Athlone Press (1963) pp. 133, 176. S. Moriyama, M. Minowa, T. Namba, Y. Inone, Y. Takusu, A. Yamamoto, RESCEU-23/98, hep-ex/9805026, submitt. to Phys. Lett. [**B**]{} (1998). R. C. Duncan, S. L. Shapiro, I. Wasserman, ApJ. [**309**]{} (1986) 141; S. E. Woosley, E. Baron, ApJ. [**391**]{} (1992) 228; S. E. Woosley, Astron. $\&$ Astrophys. Suppl. Series [**97**]{} (1993) 205. D. Eichler et al., Nature [**340**]{} (1989) 126. P. S. Sreekumar et al., ApJ. [**494**]{} (1998) 523; D. J. Thompson, D. L. Bertsch, D. J. Morris, R. Mukherjee, J. Geophys. Res. - Space Phys. [**102**]{}, A7 (1997) 14735; S. D. Hunter et al., ApJ. [**481**]{} (1997) 205. E.g. : G. Kanbach et al., Astron. $\&$ Astrophys. Suppl. Series [**97**]{} (1993) 349; N. G. Leikov et al., [*ibid*]{} [**97**]{} (1993) 345. W. R. Purcell et al., ApJ. Lett. [**417**]{} (1993) 738, ApJ. [**491**]{} (1997) 725; R. L. Kinzer et al., Astron. $\&$ Astrophys. [**120**]{} (1996) C317. R. Ramaty, R. E. Lingenfelter, Astron. $\&$ Astrophys. Suppl. Series [**97**]{} (1993) 127. E. D. Carlson and L.-S. Tseng, Report HUTP-95/A025 and hep-ph/9507345. P. F. Michelson, SPIE Proc. [**2806**]{} (1996) 31. W. N. Johnson et al., ApJ. Suppl. [**86**]{} (1993) 693. E.g.: U. Amaldi et al., Phys. Lett. [**153B**]{} (1985) 444; S. Orito et al., Phys. Rev. Lett. [**63**]{} (1989) 597; M. V. Akopyan et al., Phys. Lett. [**B272**]{} (1991) 443; S. Asai et al., Phys. Rev. Lett. [**66**]{} (1991) 2440; T. Mitsui et al., Phys. Rev. Lett. [**70**]{} (1993) 2265; T. Maeno et al., Phys. Lett. [**B351**]{} (1995) 574; M. Skalsey, R. S. Conti Phys. Rev. [**A55**]{} (1997) 984. M. Ambrosio et al., ApJ. [**464**]{} (1996) 954, [*ibid*]{} [**415**]{} (1993) L147, Phys. Rev. [**D47**]{} (1993) 2675; A. Borione et al., Phys. Rev. [**D49**]{} (1994) 1171. M. 
Ambrosio et al., The MACRO collaboration, hep-ex/9807006 (7. 7. 1998). B. Paczyński, astro-ph/9706232 (23. 6. 1997), and, 4th Huntsville GRB Symposium/Huntsville, Sept. 1997, s. also astro-ph/9712123 (8. 12. 1997). A. Dar, ApJ. [**500**]{} (1998) L93. S. N. Gninenko and N. V. Krasnikov, Phys. Lett. [**B427**]{} (1998) 307; J. Altegoer et al., The NOMAD collaboration, Phys. Lett. [**B428**]{} (1998) 197. H. A. Mayer-Hasselwander et al., Astron. $\&$ Astrophys. [**335**]{} (1998) 161; s\. also D. H. Hartmann et al., astro-ph/9709029 (3.9.1997). B. Paczyński, ApJ. [**304**]{} (1986) 1. G. G. Raffelt, Stars as Laboratories for Fundamental Physics, The University of Chicago Press, Chicago $\&$ London (1996) p.185. R. C. Duncan and C. Thompson, ApJ. [**392**]{} (1992) L9. D. E. Morris, Phys. Rev. [**D34**]{} (1986) 843. This work considers X-ray emission ($E_{\gamma}\approx 50~$keV) from the magnetosphere of a pulsar, if [*axions*]{} are thermally created in its core (s. also ref. [@ra]). [^1]: The rather strong magnetic fields in solar flares are also quite suitable [*axion*]{} converters. With better understanding of solar flares the question arises whether $\gamma$-rays from [*axions*]{} could be identified within solar flares observations (s. ref. [@cc] for very low-mass axions in the x-ray region). [^2]: Similarly, in accelerators, high energy detector systems with their inner charged particle tracking magnetic field surrounded by $\sim 4\pi$ electromagnetic calorimeter can also perform this kind of investigations. In fact, they can operate parasitically for this purpose, provided they have a built-in trigger, which allows to register any cosmic ray hitting the detector when the accelerator is OFF, or between beam crossings. The potential [*axion*]{} signature, i.e. isolated energetic photons coming from the magnetic field region, will be of interest independent on the time of occurence or direction of arrival. Fortunately, background measurements can be performed by switching OFF the magnetic field. The effective $B\cdot L$ value is usually $\approx 1-10$ $T\cdot m$, while the $\sim 1~m$ transverse field length makes them coherent high energy [*axion*]{} converter for an [*axion*]{} rest mass up to 1-10 eV; their geometry allows to perform, however, [simultaneously]{} a full sky high-energy [*axion*]{} survey. To the best of our knowledge, none magnetic detector was ON to convert energetic or $\sim$511 keV [*axions*]{} during SN1987A. A search for energetic [*axions*]{} can also be performed with the powerful accelerator bending magnets [@zzz], which have $B\cdot L \approx 100~T\cdot m\approx (B\cdot L)_{\oplus}$ and a built-in angular resolution/acceptance of $\leq 0.5^o$.
--- abstract: 'We have performed a series of three-dimensional simulations of the interaction of a supersonic wind with a non-spherical radiative cloud. These simulations are motivated by our recent three-dimensional model of a starburst-driven galactic wind interacting with an inhomogeneous disk, which shows that an optically emitting filament can be formed by the break-up and acceleration of a cloud into a supersonic wind. In this study we consider the evolution of a cloud with two different geometries (fractal and spherical) and investigate the importance of radiative cooling for the cloud’s survival. We have also undertaken a comprehensive resolution study in order to ascertain the effect of the assumed numerical resolution on the results. We find that the ability of the cloud to radiate heat is crucial for its survival, with a radiative cloud experiencing a lower degree of acceleration and having a higher Mach number relative to the flow than in the adiabatic case. This diminishes the destructive effect of the Kelvin-Helmholtz instability on the cloud. While an adiabatic cloud is destroyed over a short period of time, a radiative cloud is broken up via the Kelvin-Helmholtz instability into numerous small, dense cloudlets, which are drawn into the flow to form a filamentary structure. The degree of fragmentation is highly dependent on the resolution of the simulation, with the number of cloudlets formed increasing as the Kelvin-Helmholtz instability is better resolved. Nevertheless, there is a clear qualitative trend, with the filamentary structure still persistent at high resolution. The geometry of the cloud affects the speed at which the cloud fragments; the wind breaks up the cloud most rapidly in its least dense regions. A cloud with a more inhomogeneous density distribution fragments faster than a cloud with a more uniform structure (e.g. a sphere). We confirm the mechanism behind the formation of the H$\alpha$ emitting filaments found in our global simulations of a starburst-driven wind. Based on our resolution study, we conclude that bow shocks around accelerated gas clouds, and their interaction, are the main source of the soft X-ray emission observed in these galactic-scale winds.' author: - 'Jackie L. Cooper, Geoffrey V. Bicknell, and Ralph S. Sutherland' - 'Joss Bland-Hawthorn' title: 'Starburst-Driven Galactic Winds: Filament Formation and Emission Processes' --- INTRODUCTION ============ The interstellar medium is known to be inhomogeneous, consisting of various gaseous phases, from cool molecular clouds to tenuous million-degree gas [@Cox2005]. A study of the interaction of a supersonic wind with a dense cloud is a problem with many astronomical applications, such as galactic winds, supernova remnants, and broad absorption line quasars, and has received much attention. In the past this interaction has been studied both analytically [e.g. @Mckee1975; @Heathcote1983] and numerically [e.g. @Sgro1975; @Woodward1976; @Nittmann1982]. Over the last two decades, numerous two- and three-dimensional simulations have been performed. Many early attempts assumed an adiabatic interaction [e.g. @Stone1992; @Klein1994; @Xu1995], while later attempts have included radiative cooling [e.g. @Mellema2002; @Melioli2004; @Fragile2004; @Fragile2005; @Marcolini2005; @Melioli2005; @Tenorio-Tagle2006; @Orlando2005; @Orlando2006; @Orlando2008]. Among the plethora of simulations reported in the literature, the effects of thermal conduction [e.g.
@Marcolini2005; @Orlando2005; @Orlando2006; @Orlando2008; @Recchi2007], magnetic fields [e.g. @MacLow1994; @Gregori1999; @Gregori2000; @Fragile2005; @Orlando2008; @Shin2008], photoevaporation [e.g. @Melioli2005; @Raga2005; @Tenorio-Tagle2006], and the presence of multiple clouds [e.g. @Poludnenko2002; @Melioli2005; @Pittard2005; @Tenorio-Tagle2006] have been considered. The wind/cloud interaction has also been investigated in the laboratory via laser experiments [e.g. @Klein2003; @Hansen2007]. In this paper, we report on our high resolution three-dimensional simulations of the interaction of a supersonic wind with a non-uniform radiative cloud. This work is motivated by our recent simulations of a starburst-driven galactic wind [@Cooper2008; hereafter paper [i]{}], which showed that an optically emitting filament could be formed by the break-up and ram-pressure acceleration of a cloud into a supersonic wind. These simulations were three-dimensional, radiative, and incorporated an inhomogeneous disk that allowed us to study the interaction of a galactic scale wind with fractal clouds of disk gas. The interaction of a spherical cloud with a shock wave is often characterized by four evolutionary phases [e.g. @Klein1994; and references therein]: 1. An initial phase where the blast wave interacts with the cloud. A shock passes into the cloud, with another shock reflected into the surrounding medium. 2. A phase of shock compression where the flow around the cloud converges on the axis at the rear. During this phase, the cloud begins to flatten, with its transverse size greatly reduced. A shock is also driven into the back of the cloud. 3. The re-expansion phase where the shock transmitted into the cloud has reached the back surface and produces a strong rarefaction back into the cloud. This leads to expansion of the shocked cloud downstream. 4. A final phase where the cloud fragments and is destroyed by hydrodynamical instabilities (e.g. Kelvin-Helmholtz and Rayleigh-Taylor instabilities). @Klein1994 performed an extensive series of two-dimensional simulations of a spherical adiabatic cloud interacting with a shock wave. They found that, irrespective of the initial parameters, the cloud was destroyed within several cloud crushing times. @Xu1995 confirmed this result in their three-dimensional simulations. However, [@Mellema2002], in a short letter, reported their two-dimensional study of the evolution of a cloud in a radio galaxy cocoon. Their simulations included the effects of radiative cooling and showed that merging of the front and back shocks leads to the formation of an elongated structure. This structure breaks up into fragments which are not immediately destroyed. This increased ability for the cloud to survive has been reproduced by all studies that implement radiative cooling in their simulations. In addition, it has been shown that strong cooling in the cloud can produce a thin dense shell that acts to protect the cloud from ablation [@Fragile2004; @Melioli2005; @Sutherland2007]. Thermal conduction can suppress the hydrodynamical instabilities that act to fragment a cloud [@Orlando2005; @Marcolini2005], while the presence of a magnetic field has been shown to both hasten [e.g. @Gregori1999] and delay [e.g. @MacLow1994; @Fragile2005] the cloud’s destruction. More recently, @Orlando2008 performed three-dimensional simulations of the wind/cloud interaction that included the effects of radiative cooling, thermal conduction, and magnetic fields.
They showed that in the presence of an ambient magnetic field, the effect of thermal conduction in stabilizing the cloud may be diminished depending on the alignment of the field. Clearly the survival of a cloud interacting with a strong wind is a problem of significantly more complexity than indicated by the purely adiabatic scenario investigated by @Klein1994 and others. In order to compare the results of our current study to those of our global model we exclude thermal conduction, magnetic fields, and photoevaporation. However, we consider the possible effect of these phenomena on our results in § \[missing\]. In this work, we further investigate the effects of radiative cooling on the survival of the cloud, by performing high-resolution ($\sim 0.1$ pc) simulations of both a radiative and an adiabatic cloud, allowing us to perform a direct comparison between the two scenarios. Symmetry is [*not*]{} assumed, with the entire cloud being modeled, in contrast to the strategy employed in many previous three-dimensional models. This allows us to fully investigate the effect of the turbulent flow on the evolution of the cloud. In order to understand the effects of the cloud’s structure on the wind/cloud interaction, we consider two different cloud geometries: a more realistic fractal cloud, similar to those that comprised the inhomogeneous disk in paper [i]{}, and the idealized case of a spherical cloud. The effect of the cloud’s geometry on its evolution has been investigated before. @Xu1995 considered the interaction of a shock wave with a spherical and two different prolate cloud geometries, each with a different alignment of the cloud’s major axis. In addition, @Mellema2002 considered both spherical and elliptical cloud geometries. Both studies show that the initial geometry of the cloud can alter the evolution of the cloud significantly. One of the most important results from paper [i]{} was our suggestion of a mechanism for the formation of the filaments seen in starburst-driven winds at optical wavelengths [see @Veilleux2005; for review]. According to paper [i]{}, the filaments are formed from clouds of disk gas that have been accelerated into the outflow by the ram-pressure of the wind. An important question that arises is: Can the clouds survive being immersed in a hot supersonic wind long enough to form a filament and remain sufficiently cool to emit at optical temperatures? Or will they be heated and destroyed by hydrodynamical instabilities? Here we set out to answer this question. Another significant result to arise from paper [i]{} is our proposal of four different mechanisms that would give rise to soft X-ray emission that is spatially correlated with the filamentary optical emission, a major finding of recent [*Chandra*]{} observations [@Strickland2002; @Strickland2004a; @Strickland2004b; @Cecil2002; @Martin2002]. Our global simulations found that soft X-rays can arise from (i) mass-loading from ablated clouds, (ii) the intermediate temperature interface between the hot wind and cool filaments, (iii) bow shocks upstream of clouds accelerated into the outflow, and (iv) interactions between these bow shocks. The first two mechanisms involve the mixing of hot and cold gas, and are possibly caused by numerical diffusion in the simulations, and thus may not be physical. To investigate this possibility, we have performed a detailed resolution study of the wind/cloud interaction. The study also allows us to test the impact of the numerical resolution on the evolution of the cloud.
NUMERICAL METHOD {#method} ================ Description of the Code ----------------------- The simulations were performed using the PPMLR (Piecewise Parabolic Method with a Lagrangian Remap) code utilized in paper [i]{}. PPMLR is a multidimensional hydrodynamics code based on the method described by @Colella1984 and has been extensively modified [e.g. @Sutherland2003a; @Sutherland2003b] from the original VH-1 code [@Blondin1995]. Thermal cooling, based on output from the MAPPINGS III code [@Sutherland1993] has been implemented. This allows for a realistic evolution of a radiatively cooling gas [@Sutherland2003b; @Saxton2005]. The simulations discussed in this paper are three-dimensional and utilize Cartesian (x,y,z) coordinates. They were performed on the SGI Altix computer operated by the the Australian Partnership for Advanced Computing. Problem Setup ------------- In order to study the wind/cloud interaction in sufficient detail, whilst still retaining the ability to follow the evolution and survival of the cloud over a period of approximately 1 million years, the simulations cover a physical spatial range of 50 $\times$ 50 $\times$ 150 pc, with the cloud centered on the origin. As our intent is to compare these simulations to the formation and survival of the clouds found in paper [i]{}, we choose initial conditions for the wind based upon the results of those simulations (Table \[init\_cond\]). In order to understand our choice, we briefly recount the formation and evolution of the starburst-driven winds which we simulated in paper [i]{}: 1. A series of small bubbles of hot ($T \gtrsim 10^7 K$) gas form in the starburst region. As these bubbles expand, they merge and follow the path of least resistance out of the disk of the galaxy, i.e. the tenuous hot gas surrounding the denser disk clouds. 2. As the bubble breaks out of the disk, it begins sweeping up the the surrounding halo gas entering the “snow-plow” phase of its evolution. The structure of the wind in the phase is characterized by 5 different zones: (i) the injection zone, (ii) a supersonic free wind, (iii) a region of hot, shocked turbulent gas, (iv) a cooler “shell” of swept-up halo gas, and (v) the undisturbed ambient gas. 3. Clouds of disk gas inside and surrounding the central injection zone are broken-up by the freely expanding wind; fragments of disk gas are accelerated into the outflow by the ram-pressure of the wind. Since it is the interaction of the clouds with the freely expanding wind that results in their fragmentation, we select the temperature, density and velocity of our supersonic wind to be that of the inner free-wind region, namely $T_{\rm w} = 5 \times 10^{6} \rm~K$, $n_{\rm w} = 0.1 \rm~cm^{-3}$, and $v_{\rm w} = 1200 \rm~km~s^{-1}$ respectively. [lcc]{} Wind Temperature (K) & $T_{\rm w}$ & $5 \times 10^{6}$\ Wind Density ($\rm cm^{-3}$) & $n_{\rm w}$ & 0.1\ Wind Velocity ($\rm km~s^{-1}$) & $v_{\rm w}$ & $1200$\ Wind Mach Number & $\mathcal{M}_{\rm w}$ & 4.6\ Average Cloud Temperature (K) & $T_{\rm c}$ & $5 \times 10^{3}$\ Cloud Velocity ($\rm km~s^{-1}$) & $v_{\rm c}$ & 0\ Cloud Radius (pc) & $r_{\rm c}$ & 5\ Fractal Cloud Volume ($\rm pc^{3}$) & $V_{\rm c\_frc}$ & 1491\ Spherical Cloud Volume ($\rm pc^{3}$) & $V_{\rm c\_sph}$ & 523\ Our simulations make use of two cloud geometries: a fractal shaped and a spherical shaped cloud. 
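Before describing the two cloud geometries in detail, a quick consistency check on the wind parameters listed above (a sketch added here; the mean mass per particle is an assumption of this check, and the quoted $\mathcal{M}_{\rm w} \simeq 4.6$ is recovered for a mean particle mass close to $m_p$):

```python
import numpy as np

k_B, m_p = 1.380649e-23, 1.67262192e-27   # SI
T_w, v_w = 5.0e6, 1.2e6                   # wind temperature (K) and speed (m/s), as in Table 1
gamma = 5.0 / 3.0

for mu in (1.0, 0.6):                     # assumed mean mass per particle, in units of m_p
    c_s = np.sqrt(gamma * k_B * T_w / (mu * m_p))
    print("mu = %.1f : c_s = %.0f km/s, Mach = %.1f" % (mu, c_s / 1e3, v_w / c_s))
```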
The fractal cloud was chosen in order to allow us to compare the break-up of the clouds in this study to the results of our global model in paper [i]{} and has the same form as the clouds in the inhomogeneous disk used in the global study. The use of the fractal cloud also allows us to investigate the effects of inhomogeneities in the clouds’s density distribution on its evolution. A spherical cloud was also modeled in order to allow us to better understand the importance of the assumed cloud geometry. To create the fractal cloud we first created a 1024 $\times$ 1024 $\times$ 1024 sized fractal cube using the method described in @Sutherland2007 and paper [ i]{}. A single cloud was isolated and extracted from this cube using a blob-coloring technique where each cell is examined, and any discontinuous group of non-zero cells given a unique label. An appropriate cloud was then selected and placed at cell number (256,256,256) of a 512 $\times$ 512 $\times$ 1536 sized grid, i.e. the origin of the simulation. The smaller resolution grids used in our fractal simulations were created by downsizing this larger grid (see Table \[sim\_param\]). These arrays represent the density of the cloud. The initial grid was setup by first setting the density ($n_{\rm w}$) and pressure ($P_{\rm w} = n_{w}kT_{w}/{\mu}m_{p}$) to be that of the hot wind. The fractal cloud was then created by adding the density array representing the cloud to the density of the halo gas. To create the spherical cloud, a high density spherical region of radius $r_{\rm c} = 5 \rm~pc$ was centered on the origin of the computational grid. The radial profile of the density of the spherical cloud is described by an exponential with a scaling radius of 3, in order to mimic the tapered density distribution in the fractal cloud’s core. A similar tapered density distribution was considered by @Nakamura2006 for an adiabatic cloud. The boundary condition on the inner z axis was set to have a fixed inflow with the same properties as that of the hot wind (e.g. $T_{\rm w} = 5 \times 10^{6} \rm~K$, $n_{\rm w} = 0.1 \rm~cm^{-3}$, and $v_{\rm w} = 1200 \rm~ km~s^{-1}$). All other boundaries were set to be inflowing/outflowing. Figure \[fig:dens\_profile\] shows the initial density distribution of both the fractal and spherical clouds. The average number density of the fractal cloud is set to be $n_{\rm c} = 63 \rm ~cm^{-3}$ and has a total mass of $M_{\rm c} = 1387 \rm ~M_{\odot}$, occupying a volume of $V_{\rm c\_frc} = 1491 \rm ~pc^{3}$. The spherical cloud is setup to occupy a similar volume to the dense core of the fractal cloud, having a radius of $r_{\rm c} = 5 \rm~pc$ and occupying a volume of $V_{\rm c\_sph} = 523 \rm ~pc^{3}$. The average density of the spherical cloud is $n_{\rm c} = 91 \rm ~cm^{-3}$ and has a total mass of $M_{\rm c} = 523 \rm ~M_{\odot}$. The lower average density of the fractal cloud is due to the large volume of less dense ($n_{\rm c} = 30 \rm ~cm^{-3}$) gas that surrounds the cloud core (Fig. \[fig:dens\_profile\]; left panel). In order to understand the effect of the cloud’s assumed initial density on its evolution, a simulation in which the density of the fractal cloud was doubled ($n_{\rm c} = 126 \rm ~cm^{-3}$) was also performed. In all simulations, the temperature and velocity of each cloud was set to be $T_{\rm c} = 5 \times 10^{3} \rm~K$ and $v_{\rm c} = 0 \rm ~km~s^{-1}$ respectively. 
While these simulations were designed to be applicable to starburst-driven winds and therefore have densities and temperatures typically found in a such an environment, they are also applicable to other astronomical phenomena that involve the interaction of a supersonic wind with a cloud of gas (e.g supernova remnants). When cooling is included, the PPMLR code has a one parameter scaling, which is discussed in @Sutherland2007. In general, the density scale of the simulations is inversely proportional to the spatial scale, so that within reason, these simulations can be adapted to problems on both larger and small scales. The Simulations --------------- The interaction of a supersonic wind with a cloud of dense gas is a problem of some complexity. Whilst there are many factors that could affect a clouds evolution and survival, such as thermal conduction [@Marcolini2005; @Orlando2005], magnetic fields [@Fragile2005] and photoevaporation [@Tenorio-Tagle2006] (see § \[missing\]), here we focus on the importance of radiative cooling and the effect on the clouds initial structure. We have also performed a comprehensive resolution study in order to ascertain the degree as to which the assumed resolution effected our global simulations in paper [i]{}. We adopt the following naming convention for our simulations: An $r$ or an $a$ indicates whether the simulation includes radiative cooling or is adiabatic, while an $f$ or an $s$ indicates if the geometry of the cloud is fractal or spherical respectively. The numerical value indicates the number of cells in the x-plane of the computational grid. For example, the simulation rf384 includes radiative cooling, incorporates a fractal cloud and utilizes a computational grid of size $384 \times 384 \times 1152$ cells. An exception to the naming convention is rfd384, which is identical to rf384, but whose cloud is twice as dense. [ccccccc]{} rf064 & $64 \times 64 \times 192$ & 0.78 & 63 & 1387 & yes & F\ rf096 & $96 \times 96 \times 288$ & 0.52 & 63 & 1387 & yes & F\ rf128 & $128 \times 128 \times 384$ & 0.39 & 63 & 1387 & yes & F\ rf192 & $192 \times 192 \times 576$ & 0.26 & 63 & 1387 & yes & F\ rf256 & $256 \times 256 \times 768$ & 0.20 & 63 & 1387 & yes & F\ rf384 & $384 \times 384 \times 1152$ & 0.13 & 63 & 1387 & yes & F\ rf512 & $512 \times 512 \times 1536$ & 0.10 & 63 & 1387 & yes & F\ af384 & $384 \times 384 \times 1152$ & 0.13 & 63 & 1387 & no & F\ rs384 & $384 \times 384 \times 1152$ & 0.13 & 91 & 703 & yes & S\ as384 & $384 \times 384 \times 1152$ & 0.13 & 91 & 703 & no & S\ rfd384 & $384 \times 384 \times 1152$ & 0.13 & 126 & 2770 & yes & F\ In total, eleven simulations were performed, with the purpose of each falling within the four different categories outlined below. 1. [*Resolution Study*]{} - Models [**rf064**]{}, [**rf096**]{}, [ **rf128**]{}, [**rf192**]{}, [**rf256**]{}, [**rf384**]{}, and [**rf512**]{} form our resolution study. These seven simulations all include radiative cooling and utilize a fractal cloud. The resolution of each simulation is given in Table \[sim\_param\] and ranges from 0.79 to 0.10 pc per cell width. 2. [*Cloud Structure*]{} - Model [**rs384**]{} has a resolution of 0.13 pc per cell width, includes radiative cooling and utilizes a spherical cloud. By comparison to rf384, this model is designed to investigate the effect of the shape of the cloud on its break-up and survival. 3. 
[*Radiative Cooling*]{} - Models [**af384**]{} and [**as384**]{} have the same properties as rf384 and rs384 respectively, but are adiabatic in nature. Both models are designed to help understand the importance of radiative cooling on the evolution and survival of a cloud. 4. [*Cloud Density*]{} - Model [**rfd384**]{} is identical to rf384, but has a larger cloud density and mass of $n_{\rm c} = 126 \rm ~cm^{-3}$ and $M_{\rm c} = 2770 \rm ~M_{\odot}$ respectively. This model is designed to investigate the effect of the clouds initial density on its evolution. A summary of the parameters used in each simulation is given in Table \[sim\_param\]. For each simulation we record the density, temperature, pressure, velocity, emissivity, and a cloud gas tracer in each cell at intervals of 0.01 Myr. With the exception of rf512, each simulation is followed until the time at which the cloud flows off computational grid. In the case of rf512, a computational error occurred while the simulation was in progress. Unfortunately this simulation is too computationally expensive to re-run, and we are therefore only able to follow the evolution of rf512 to a time t = 0.7 Myr. As such, our resolution study only considers the first 0.7 Myr of the evolution. Based on their two-dimensional adiabatic simulations, @Klein1994 suggest that a resolution of 120 cells per cloud radius is necessary in order to sufficiently capture the hydrodynamics of the wind/cloud interaction. While this proposed benchmark is easily achieved for a two-dimensional study, it becomes computationally demanding for a fully three-dimensional simulation that include more complicated physics (e.g. radiative cooling, thermal conduction). In order to overcome this difficulty, other authors have performed two-dimensional axisymmetric simulations [@Orlando2008], assumed symmetry in the solution [@Gregori1999; @Gregori2000; @Melioli2005; @Orlando2005], and neglected the more complex physics in their three-dimensional calculations [@Orlando2005]. We [*do not*]{} assume symmetry as we show in § \[evolution\] that, even in the idealized case of a spherical cloud, the solution becomes asymmetric as the cloud is broken-up and accelerated into the turbulent flow. As the PPMLR code used in this work and paper [i]{} utilizes a uniform grid, the computational resources required to model the entire cloud at the 120 cells per cloud radius resolution suggested by @Klein1994 would be excessive. We are forced to limit the main simulations in this study to a resolution of 38 cells per cloud radius (we consider a resolution of 50 cells per cloud radius in our resolution study). Nevertheless, it has been shown that the global properties of the interaction, as well as the averaged characteristics of cloud ablation process, can be well described at resolutions below this criterion [see, for example, @Gregori2000; @Poludnenko2002; @Melioli2005]. EVOLUTION ========= Description of the Wind/Cloud Interaction ----------------------------------------- ### Spherical Cloud We start with the simple case of the interaction of a spherical radiative cloud (rs384) with a supersonic wind, before describing the evolution of the fractal cloud. In order to illustrate the hydrodynamics of the wind/cloud interaction, throughout this section we will use slices of the density, temperature, pressure and velocity through the central y=0 plane. Figure \[fig:dens\_sph\] shows the evolution of the spherical cloud a 6 different epochs from 0.1 to 1.1 Myr at 0.2 million year intervals. 
Each panel represents the logarithm of the number density ($\rm cm^{-3}$). The top panels of Figure \[fig:var\_sph\] show the logarithm of the temperature ($\rm K$), the middle panels show the logarithm of the pressure ($\rm cm~s^{-2}$), and the bottom panels show the magnitude of the velocity ($\rm km~s^{-1}$). The evolution at two different epochs is given: 0.35 Myr (left panels) and 0.75 Myr (right panels). The spherical cloud is initially at rest and is immersed in a hot ($T = 10^6 ~\rm K$), supersonic ($V = 1200 ~\rm km~s^{-1}$) wind. A bow shock is immediately formed upstream of the cloud. At 0.10 Myr (Fig. \[fig:dens\_sph\]; upper left panel), the “front” of the cloud has been exposed to the high pressure of the shock, while gas is ablated from the “back” of the cloud a result of the Kelvin-Helmholtz instability and the strong rarefaction that is formed. A high density shock ($n = 1000 ~\rm cm^{-3}$) begins to propagate through the cloud, reflecting of the back wall at approximately 0.30 Myr (Fig. \[fig:dens\_sph\]; middle left panel). The Kelvin-Helmholtz instability continues to work, stripping material from the edge of the cloud (e.g. 0.5 - 1.1 Myr). This material is funneled to approximately 5 pc behind the cloud where it combines and condenses forming a tail of dense ($n \sim 10 ~\rm cm^{-3}$), cool ($T ~\sim 10^4 ~\rm K$) cloudlets, which are entrained into the hot turbulent flow downstream from the main cloud (Fig. \[fig:var\_sph\]; upper left panel). As the cloud evolves (Figs. \[fig:dens\_sph\]; right panels), the tail of cool gas continues to grow, becoming thicker as the cloud elongates. The cloud material appears as a filament of cool, low velocity ($v < 400 ~\rm km~s^{-1}$) gas, immersed inside a region of hot ($T > 10^7 ~\rm K$), turbulent gas with velocity $v \sim 400 - 1000 ~\rm km~s^{-1}$. Small cloudlets continue to be broken off the main cloud as the Kelvin-Helmholtz instability acts to shed its outer later. However, at 0.75 Myr only 12% of the mass of material that remains on the computational grid is found mixed into the hot wind. The bulk of the mass is still found in the cloud’s elongated core and tail, and despite the driving of radiative shocks into the cloud(s) there is little increase in the temperature of the cloud material as it radiates heat, remaining cool and cohesive as it leaves the computational grid. ### Fractal Cloud {#frac_cloud} Figure \[fig:dens\_cld\] shows the evolution of the radiative fractal cloud in model rf384 from 0.1 to 1.1 Myr through density slices at intervals of 0.2 Myr. Figure \[fig:dens\_hd\] is identical to Figure \[fig:dens\_cld\], but shows the evolution of a cloud (rfd384) with twice the density and mass than rf384. As with the radiative spherical cloud, a bow shock immediately forms upstream of the cloud. However, a significant effect of the inhomogeneous structure of the cloud is the formation of a shock off each ridge on the cloud’s surface that is exposed to the wind. This has the effect of creating a “web” of interacting shocks. By 0.3 Myr (Fig. \[fig:dens\_cld\]; middle left panel), the density at the front of the cloud has jumped to $n \sim 1000 ~\rm cm^{-3}$, but fallen to approximately $n = 10 ~\rm cm^{-3}$ at the rear of the cloud. Despite the high pressure exerted on the cloud, very little heating occurs, with the cloud maintaining temperatures of $T \sim 10^3 - 10^4 ~\rm K$ (Fig. \[fig:var\_cld\]; left upper two panels). As the cloud evolves (Fig. 
\[fig:dens\_cld\]; right panels), small cloudlets are broken off the main cloud by the Kelvin-Helmholtz instability. These cloudlets form a filamentary structure downstream of the main cloud and have velocities in the range of $0 - 400 ~\rm km~s^{-1}$, somewhat slower than the velocity of the surrounding stream ($v > 800 ~\rm km~s^{-1}$) (Fig. \[fig:var\_cld\]; bottom panels). The cloud is exposed to the high temperature and velocity of the wind at all points along the front surface of the cloud. The cloud breaks up fastest in regions where the density is lowest and the radius of the cloud is at its smallest. The cloud is continually eroded by the Kelvin-Helmholtz instability, the fragments of which are immersed in a low pressure, turbulent gas. At 0.75 Myr, the percentage of the mass of cloud material remaining on the computational grid that has mixed into the hot wind is $\sim$ 25%. The bulk of the original cloud mass is found in the remaining core remnant and the stream of small, dense $\sim$ 1 pc cloudlets. If these cloudlets are exposed to the wind they produce their own high pressure bow shock upstream of their position. However, the majority of the cloudlets are sheltered from the wind by other fragments broken off from the main cloud. These cloudlets survive to leave the computational grid. The higher density of the cloud in model rfd384 (Fig. \[fig:dens\_hd\]) results in the cloud retaining is structural integrity far longer than the lower density cloud in rf384. While the evolution of the higher density cloud is overall very similar to that in rf384, the break-up of the cloud via the Kelvin-Helmholtz instability is slower: At 1.1 Myr (bottom right panel), a significant bulk of the cloud material still remains on the computational grid and displays a similar morphology to the cloud in rf384 at 0.7 Myr (Fig. \[fig:dens\_cld\]; top right panel). In contrast to rf384, at 0.75 Myr only 11% of the clouds mass remaining on the computational grid is found mixed into the hot wind. However, it is likely this amount will increase as the cloud is further eroded by the Kelvin-Helmholtz instability. As observed by @Xu1995, the initial shape of the cloud has an effect on its subsequent evolution, with our fractal cloud fragmenting faster than the spherical cloud. While this is due in part to the lower average density of the fractal cloud, even the high density fractal cloud, whose average density and total mass is greater than that of the spherical cloud (see Table \[sim\_param\]), has a greater degree of fragmentation and is less cohesive when it leaves the computational grid. This is a result of the inhomogeneous nature of the fractal cloud’s initial density distribution and the larger cross-section is presents to the incident wind (Fig. \[fig:dens\_profile\]; left panel). The cloud first fragments along regions where the wind finds paths of least resistance, i.e. regions of low density. As a result the fractal cloud breaks-up into multiple core fragments. As the wind finds no regions of least resistance in the spherical cloud, it is able to retain a single cohesive structure for a longer period of time. Thus, not only is the initial geometry of the cloud important in determining its evolution, the distribution of the cloud’s density determines how quickly the cloud begins to fragment. More homogeneous density distributions would result in less initial fragmentation. 
Effect of Radiative Cooling --------------------------- ### Adiabatic Case In order to understand the degree as to which the inclusion of radiative cooling affects the survival of a cloud, we performed 2 simulations in which cooling was neglected: as384 and af384 for the spherical and fractal case respectively. Figure \[fig:dens\_adi\] shows the density of the adiabatic clouds in as384 (left) and af384 (right) at 0.3, 0.6, 0.9 Myr epochs. It can be immediately seen that in the absence of radiative cooling there is a far greater degree of mixing of cloud material with the surrounding stream, with 40% and 59% of the fractal and spherical clouds’ masses respectively on the computational grid found mixed into the hot wind at 0.75 Myr. The destruction of the adiabatic fractal cloud occurs faster and is more complete than the adiabatic spherical cloud. At 0.9 Myr the cloud has been almost completely destroyed (Fig. \[fig:dens\_adi\]; bottom right panel), with the cloud material mixed into the hot gas. The initial interaction with the wind is the same as in the radiative case, with a bow shock forming upstream of the cloud. However, the cloud gas quickly begins to heat to temperatures of the order of $T \sim 10^6 ~\rm K$ (Fig. \[fig:var\_af\]; upper left panel) and the cloud expands. Again the cloud breaks up first in regions where the density is low and the cloud radius is at its smallest. The high pressure exerted on the cloud (Fig. \[fig:var\_af\]; middle left panel) and the Kelvin-Helmholtz instability act to strip material from the cloud. While this material is able to survive in the radiative model, here the cloudlets broken off the main cloud are quickly heated and destroyed. The bulk velocity of the gas downstream of the main cloud is lower than that found in the radiative case (Fig. \[fig:var\_af\]; lower panels). In the case of the adiabatic spherical cloud, the shedding of the cloud’s outer layer by the Kelvin-Helmholtz instability, also seen in the radiative model, is greatly enhanced. By 0.9 Myr (Fig. \[fig:dens\_adi\]; bottom left panel), only the cloud core remains and is subsequently destroyed by the Kelvin-Helmholtz instability as the simulation progresses. As with the fractal cloud, the bulk velocity of the gas downstream of the main cloud is lower than that of the radiative spherical cloud at the same time (Fig. \[fig:var\_as\]; lower right panel). The evolution of a spherical adiabatic cloud is discussed in more detail below. ### Cloud Survival {#survival} One of the significant effects of the inclusion of radiative cooling is the longer life time of the impacted cloud. While this effect has been observed in the past by other authors [e.g. @Mellema2002; @Melioli2005], our study is of higher resolution and does not assume any symmetry, making it is a useful exercise to directly compare the simple case of the evolution of the spherical cloud in both our adiabatic (as384) and radiative (rs384) models in order to determine the mechanism behind the radiative cloud’s survival. The initial interaction of the wind and cloud is shown via density slices in Figure \[fig:dens\_initial\] at 0.05, 0.20 and 0.35 Myr epochs in both adiabatic (left) and radiative (right) models. It can be clearly seen that while the evolution begins almost identically with a bow shock forming upstream of the cloud, in the adiabatic case cloud material immediately starts being ablated from the back of the cloud. As in the radiative model, a shock propagates through the adiabatic cloud. 
However, the initial density increase observed in the cloud is not as extreme (e.g. $n < 1000 ~\rm cm^{-3}$). The cloud material is heated to temperatures of the order of $T > 10^5 ~\rm K$ and the cloud expands. The shock travels though the cloud, reflecting off the back surface at approximately 0.27 Myr, somewhat faster than in the radiative cloud. The adiabatic cloud expands transversely as it is accelerated downstream. However, this transverse expansion is suppressed in the radiative cloud as a result of the lower degree of heating of the cloud gas (Fig. \[fig:var\_sph\]; upper panels). In both cases, the Kelvin-Helmholtz instability acts to strip material from the edges of the cloud forming a tail of material downstream of the cloud position (Fig. \[fig:dens\_initial\]; middle and lower panels). In the adiabatic model, this tail is geometrically thick with density and temperature $n \sim 1 ~\rm cm^{-3}$ and $T \sim 10^6 ~\rm K$ respectively. In contrast, the tail formed in the radiative model is geometrically thin with density $n \sim 10 ~\rm cm^{-3}$ and temperature $T \sim 10^4 ~\rm K$. In the adiabatic model, the internal cloud shock reflects again off the front the cloud at approximately 0.35 Myr. At this time, the transverse expansion of the cloud persists and the Kelvin-Helmholtz instability continues to strip material from the clouds exterior into the downstream tail of gas. (Fig. \[fig:dens\_initial\]; lower left panel). The transverse expansion (Fig. \[fig:var\_as\]; middle panels) results in a higher rate of acceleration in the adiabatic model. As a consequence, the cloud has a lower relative Mach number relative to the stream (Fig. \[fig:var\_as\]; lower left panel) than the radiative cloud (Fig. \[fig:var\_sph\]; lower left panel). The growth rate of the Kelvin-Helmholtz instability is lower for higher Mach numbers and its effect is strongly diminished for the radiative cloud. The adiabatic cloud is more easily disrupted and destroyed. This can be dramatically seen in the middle left panel of Figure \[fig:dens\_adi\] where the Kelvin-Helmholtz instability has stripped the entire outer layer of the adiabatic spherical cloud [see also Fig. 3 of @Orlando2005]. Since a radiative cloud is broken-up via the Kelvin-Helmholtz instability into a filamentary structure of small $\sim$ 1 pc sized clouds, the survival of these small clouds is of interest. We now compare the cloud crushing time ($t_{\rm crush}$) and the Kelvin-Helmholtz timescale ($t_{\rm KH}$) to the cooling time ($t_{\rm cool}$) of a cloud with radius $R_{\rm c} = 1 ~\rm pc$, density $\rho_{\rm c} = 10 ~\rm cm^{-3}$, temperature $T_{\rm c} = 10^4 ~\rm K$, and velocity $v_{\rm c} = 200 ~\rm km~s^{-1}$. The cloud crushing time of such a cloud is $t_{\rm crush} \approx R_{\rm c}/v_{\rm sh} \approx (\rho_{\rm c}/\rho_{\rm w}) R_{\rm c}/v_{\rm w} = 3 \times 10^{12} ~\rm s$, where the density and velocity of the wind is $\rho_{\rm w} = 0.1 ~\rm cm^{-3}$ and $v_{\rm w} = 1000 ~\rm km~s^{-1}$ respectively. The Kelvin-Helmholtz timescale is $t_{\rm KH} = R_{\rm c}(\rho_{\rm c} + \rho_{\rm w})/(v_{\rm c}-v_{\rm w})(\rho_{\rm c} \rho_{\rm w}) = 3 \times 10^{11} ~\rm s$. The cooling time for a 1 pc cloud in our simulations is of the order of $10^{10}$ seconds, somewhat shorter than the cloud crushing time and the Kelvin-Helmholtz timescale, suggesting that the cloudlets may remain sufficiently stable to ablation and survive to later times. 
In addition, we note that the self-gravity of the clouds may cause them to collapse, becoming more difficult to disrupt [@Mellema2002]. RESOLUTION STUDY {#resolution} ================ Mass Flux --------- In order to test the dependence of our results on the numerical resolution of the code, we have performed seven simulations of the radiative fractal cloud interacting with a supersonic wind at increasing resolutions from 0.78 - 0.10 pc per cell width (see Table \[sim\_param\]). We calculated the flux of mass through a surface at $z = 75 ~\rm pc$ over the first 0.7 million years of each simulation in our resolution study. Figure \[fig:mass\_flux\] shows this mass flux for each simulation as a function of time (left panel), as well as the total mass flux integrated over the first 0.7 Myr of the simulation as a function of resolution (right panel). The mass flux over the first 0.4 Myr of the evolution is similar at all resolutions. This is a result of the well resolved hot stream of gas passing through the flux surface. After this point, the cloud material begins to pass though the surface and the mass flux starts to vary with resolution. The initial drop in the mass flux seen in the left hand panel of Figure \[fig:mass\_flux\] is due to the rarefaction that passes though the flux surface (see Fig. \[fig:dens\_sph\]; top left panel). From approximately 0.2 to 0.4 Myr, the density of the stream increases as mass is ablated from the rear of the cloud resulting in a similar flux at all resolutions. During this time, the mass flux gradually increases as the $n = 0.1 ~\rm cm^{-3}$ mass loaded stream of gas passes through the flux surface. At approximately 0.4 Myr, the mass flux begins to vary rapidly, dramatically increasing in each simulation. The large variation in the mass flux is due to the turbulent nature of the gas passing though the flux surface and the dense cloudlets immersed within this gas. The cloudlets pass through the surface at different times resulting in fluctuations in the mass flux. There is a general trend of increasing mass flux with the resolution of the simulation. This trend can be explained by the increase in fragmentation of the cloud with increasing numerical resolution. Figure \[fig:res\_dens\] shows volume renderings of the projected density at 0.7 Myr in models rf064, rf128, rf256 and rf512, which have resolutions of 0.78, 0.39, 0.20, and 0.10 pc per cell width respectively. The increase in the fragmentation of the cloud with resolution can clearly be seen, and will be discussed in more detail in § \[fragment\]. At the highest resolution attempted in this study (0.1 pc), the filaments resemble a “foam” of cloudlets, while at low resolution the cloud has been broken-up into only a few large fragments. As a consequence, the cross-section of dense material that passes through the flux surface at any given time after 0.4 Myr is larger in the higher resolution simulations. The total integrated mass flux passing through the surface at $z = 0.75 ~\rm pc$ over the first 0.7 Myr also increases with numerical resolution (Fig. \[fig:mass\_flux\]; right panel). Again this is caused by the larger degree of fragmentation at high resolution, resulting in more ablation of cloud material from the back of the cloud. Between our highest and lowest resolution simulations the difference in mass flux is approximately 10%. 
This discrepancy is likely to increase at higher resolutions, although it is possible that convergence may occur at extremely high resolution simulations ($> 0.1~\rm pc$) that utilize an adaptive mesh. For the 2 pc per cell width resolution of our global simulations in paper [i]{}, this error is increased to approximately 20%. We will discuss the impact of the numerical resolution on the results of paper [i]{} in § \[SB\_winds\]. Cloud Fragmentation {#fragment} ------------------- The most significant effect of increasing the numerical resolution of the simulation is the increase in fragmentation of the cloud. The increase in cloud fragmentation can clearly be seen in Figure \[fig:res\_dens\], with the the cloud broken into only a few large fragments in the lowest resolution simulations, but 100’s of fragments at higher resolution. Nevertheless, filamentary structure, where the concentration of cloudlets is higher, can still be made out at high resolution. These “filaments” are located in similar positions to the filaments in the low resolution simulations. We are able to calculate the properties of each cloudlet by using an algorithm which allows us to pick out and select fragments . Note that we impose a minimum mass of $M_c = 10^{-3} ~\rm M_{\odot}$ for a fragment to be selected. Figure \[fig:number\_clouds\] shows the number of cloudlets produced as a function of numerical resolution. There is a general trend from the number of cloudlets produced in the interaction to increase as a power law with increasing resolution. Even with larger computational resources and an adaptive mesh, this trend is likely to continue ad infinitum. The dependence of the degree of fragmentation on the numerical resolution has been observed by other authors in both two-dimensional [@Klein1994] and three-dimensional [@Stone1992] simulations of a spherical cloud. This effect can be explained by the growth rate of Kelvin-Helmholtz instability, which is faster at smaller wavelengths. As the resolution is increased, this instability is increasingly resolved and more fragmentation of the cloud is observed. This is illustrated in Figure \[fig:Kelvin-helmholtz\], where the Kelvin-Helmholtz instability can clearly be seen to increasingly fragment the cloud at higher numerical resolution. ![(a) Velocity histogram of the cloudlets, (b) mass histogram of the cloudlets, (c) total cloudlet mass (M$_{\rm tot}$) as function of the cloudlet mass. (online: 0.78 pc/cell \[navy\], 0.39 pc/cell \[cyan\], 0.20 pc/cell \[gold\], and 0.10 pc/cell \[red\]).[]{data-label="fig:cloud_prop"}](f15a_color.pdf "fig:") ![(a) Velocity histogram of the cloudlets, (b) mass histogram of the cloudlets, (c) total cloudlet mass (M$_{\rm tot}$) as function of the cloudlet mass. (online: 0.78 pc/cell \[navy\], 0.39 pc/cell \[cyan\], 0.20 pc/cell \[gold\], and 0.10 pc/cell \[red\]).[]{data-label="fig:cloud_prop"}](f15b_color.pdf "fig:") ![(a) Velocity histogram of the cloudlets, (b) mass histogram of the cloudlets, (c) total cloudlet mass (M$_{\rm tot}$) as function of the cloudlet mass. (online: 0.78 pc/cell \[navy\], 0.39 pc/cell \[cyan\], 0.20 pc/cell \[gold\], and 0.10 pc/cell \[red\]).[]{data-label="fig:cloud_prop"}](f15c_color.pdf "fig:")\ Figure \[fig:cloud\_prop\] gives velocity (left) and mass (centre) histograms of the cloudlets , as well as the total mass of the cloudlets as function of the cloudlet mass (right), at 0.7 Myr for the 4 resolutions shown in Figure \[fig:res\_dens\]. 
The massive fragments ($M \gtrsim 10^{2} - 10^{3} ~\rm M_{\odot}$) present at all resolutions are the remnants of the cloud core. Since the number of cloudlets increases with resolution, the highest resolution simulations are comprised of numerous low mass cloudlets. In general, there is a trend towards increasing smaller mass fragments at all resolutions. However, in all cases, the majority of mass of the cloudlet system is found in the massive fragments ($>$ 40 %) rather than the smaller fragments. The velocity of the cloudlets does not vary significantly with resolution. Despite the large number of lower mass fragments present in the high resolution simulations, they still fall within the velocity range of $v_{\rm c} = 150 - 400 ~\rm km~s^{-1}$. The bulk of the cloudlets at all resolutions have a velocity in the range of $v_{\rm c} = 180 - 220 ~\rm km~s^{-1}$. This is likely to increase as the evolution progresses and the cloudlets are further accelerated by the wind. Unfortunately, we are unable to fully resolve the interaction of a radiative cloud with a supersonic wind at this time. This limits our ability to draw any reliable conclusions regarding the small scale evolution of the wind/cloud interaction. A significant increase in numerical resolution and/or the use of an adaptive mesh would be required in order for convergence to possibly occur. Nevertheless, there is a clear trend in the large-scale evolution of the wind/cloud interaction at all resolutions considered in this study, allowing us to draw some physical conclusions. For example, the soft X-ray luminosity is sufficiently resolved and is almost constant with numerical resolution (see § \[xray\]). The effect of radiation in keeping the cloud cool, suppressing the transverse expansion and minimizing the effect of the Kelvin-Helmholtz instability, is also not effected by the resolution of the simulation. In all cases, the cloud is not immediately destroyed and mixed into the hot wind as seen in adiabatic models, but is instead broken-up into numerous small cloudlets. The major effect of increasing the resolution is the increased fragmentation of the cloud. However, we still see the same qualitative structure at high resolution, with the cloud breaking-up to form a filamentary structure that becomes finer and finer as more detail is resolved. EMISSION IN STARBURST-DRIVEN WINDS {#SB_winds} ================================== Filamentary H$\alpha$ Emission ------------------------------ At optical wavelengths (such as H$\alpha$), starburst-driven winds appear as spectacular filamentary systems extending several kpc along the minor axis of the host galaxy, e.g. M82 [@Shopbell1998], NGC 3079 [@Veilleux1994], NGC 1569 [@Westmoquette2008]. While it has long been proposed that this filamentary material was expelled from the central region of the galaxy [@Lynds1963; @Bland1988], until now the mechanism behind the formation of the filaments has not been completely understood. In paper [i]{}, we proposed that the filaments were formed via clouds of disk gas that are broken-up and accelerated into the outflow by the ram-pressure of the wind. In order for this mechanism to be viable, the cloud fragments need to survive and remain sufficiently cool to emit at H$\alpha$ temperatures. The simulations presented in this paper allow us to address this important issue. As in paper [i]{}, we define the H$\alpha$ emitting gas to be cloud material with temperatures in the range of $T = 5 \times 10^3 - 3 \times 10^4 ~\rm K$. 
We note that as photoionization is not included in our model, the H$\alpha$ emission discussed in this paper arises solely from shock ionization. However, photoionization is known to play a role in the ionization of the filaments in many winds. For example, the filaments in M82’s wind are known to be photoionized at low distances above and below the galactic plane, with shock ionization becoming dominant at large distances. An investigation into the effects of photoionization on the cloudlets is warranted, but is beyond the scope of this study. Figure \[fig:halpha\] shows a three-dimensional volume rendering of the density of the H$\alpha$ emitting gas at 0.5 Myr in model rf384 (Online: Movie of evolution of the H$\alpha$ emitting gas in rf384 over the first 1.37 Myr). It can be immediately seen in Figure \[fig:halpha\] that the H$\alpha$ emitting material corresponds to the dense cloud material. Thus, the survival mechanism for a cloud proposed in § \[survival\] can be invoked to explain the filaments observed in starburst-driven winds. As discussed in § \[evolution\], the cloud is broken-up via the Kelvin-Helmholtz instability, with the fragments subsequently entrained into the outflow forming a filamentary structure. Figure \[fig:vel\] (left panel) gives the emission weighted histogram of the z-velocity along the filament at 0.7 Myr in model rf384. The majority of the H$\alpha$ emission has velocities in the range of $v \sim 0 - 30 ~\rm km~s^{-1}$, with the velocity increasing with distance along the z-axis. This is consistent with the H$\alpha$ gas reaching higher velocities at larger distances above the galaxy plane, which was observed in the global simulations in paper [i]{}. Note that this result represents only the velocity dispersion at the base of a [*single*]{} filament early in its evolution, with the velocity likely to increase as the cloudlets, which form the filament, are further accelerated in the direction of the flow. As in our global simulations, we again find that it is the ram-pressure of the wind that accelerates the clouds. The main difference between the filaments formed in paper [i]{} and the filaments found here is the number of cloudlets that comprise the filament; a direct result of the higher numerical resolution of the simulations in this work. If we were able to increase the resolution of the global simulations, we would find similar fine structure in the filaments that are formed. Figure \[fig:ha\_mass\_flux\] (left) shows the mass flux of the H$\alpha$ emitting gas through a surface at $z = 75 ~\rm pc$ at each resolution considered in our study. As expected, there is no H$\alpha$ emitting gas passing though the surface until approximately 0.3 Myr, as the dense cloud material has yet to encounter the flux surface. As in Figure \[fig:mass\_flux\], the difference in the mass flux at each resolution is a result of the increasing fragmentation of the H$\alpha$ emitting clouds at high resolution. The right hand panel of Figure \[fig:ha\_mass\_flux\] shows the total integrated mass flux, over the first 0.7 Myr, of the H$\alpha$ emitting gas as a function of resolution. We see an increase in the total mass passing through the flux surface, again caused by the increase in fragmentation as the Kelvin-Helmholtz instability is further resolved. Even at the high resolution of these simulations the H$\alpha$ mass flux is yet to converge. In § \[survival\] we discussed how a cloud could survive the interaction with a hot supersonic wind. 
The ability of a cloud to radiate heat is crucial for the clouds survival, allowing it to remain stable to ablation and emit at H$\alpha$ temperatures. Without this ability, the cloud quickly heats above $T = 10^6 ~\rm K$, expands and becomes susceptible to the Kelvin-Helmholtz instability. While a radiative cloud is still disrupted, the small sized fragments that are broken off the main cloud have cooling times faster than the cloud crushing time and the Kelvin-Helmholtz growth rate, and thus, may possibly survive. In the adiabatic case, the fragments get heated and destroyed, with the cloud material quickly becoming mixed into the hot wind. If the fragments survive, they are drawn-out into strands and form a filament downstream of the original cloud position, reminiscent of the filaments seen in starburst-driven winds. Soft X-Ray Emission {#xray} ------------------- In paper [i]{} we proposed four mechanisms that could give rise to the soft X-ray emission that is observed to be spatially correlated to the H$\alpha$ emitting filaments. This correlation has been observed by [*Chandra*]{} in many starburst-driven winds [e.g @Cecil2002; @Strickland2004a; @Strickland2004b]. Here we summarize the proposed mechanisms: 1. The mass-loaded wind. This is the largest contributor to the soft X-ray emission in the global simulations. As mass is ablated from the clouds it is mixed into the surrounding turbulent gas, creating a region of hot ($T \gtrsim 10^{6} ~\rm K$) rapidly cooling gas that emits strongly at soft X-ray energies. 2. The intermediate temperature interface between the hot wind and cool filaments. Gas at the boundary between the hot and cool gas mixes to produce a thin region of intermediate density and temperature. Like the mass-loaded wind, this mixed gas is a strong emitter of soft X-rays. 3. Bow shocks. Gas is heated to X-ray temperatures as a bow shock is formed upstream of dense clouds accelerated into the flow. 4. Colliding bow shocks. When two bow shocks interact, the gas is further shock heated to X-ray temperatures. The first two processes involve the mixing of hot and cold gas and could be the result of numerical diffusion in the simulations and therefore not physical. Our resolution study allows has to examine the effect of increasing resolution on the soft X-ray emission and determine the realism of the above possible emission processes. As in paper [i]{}, we infer the X-ray luminosity in the soft (0.5 - 2.0 keV) energy band using broadband cooling fractions obtained from the MAPPINGS IIIr code [@Sutherland1993]. Figure \[fig:xray\] shows the soft X-ray emissivity in models rf128 (left) and rf512 (right) at 0.3, 0.5, and 0.7 Myr epochs. The strongest X-ray emitter in both models is the bow shock that immediately forms upstream of the cloud as it interacts with the wind. Regions where bow shocks are interacting result in the highest X-ray emissivities. Apart from a few marginally bright tails coming off some of the cloudlets, particularly in model rf128 (see Fig. \[fig:xray\]; bottom left panels), we see no evidence that mass-loading of the wind by ablation from the cloud is a significant contributor to the soft X-ray luminosity. We also see very little evidence that the intermediate temperature interface plays a significant role. There are a few bright regions upstream of cloudlets that have been exposed to the wind. However, it is likely that this enhanced emission is related to the X-ray emitting bow shocks that have formed around each cloudlet. 
The main difference between the simulations shown in Figure \[fig:xray\] at high (right panel) and low (left panel) resolution is in the structure of the main bow shock and the emission from the cloudlets. While we observe some structure in the low resolution simulations, as the resolution is increased we see clear regions where colliding bow shocks lead to a significant increase in the X-ray emissivity. This is more evident at later times when the main cloud has fragmented and there are many X-ray emitting bow shocks upstream of the resulting cloudlets (Fig. \[fig:xray\]; bottom right panels). It might be expected than that the increase in fragmentation at higher resolutions would lead to an increase in the X-ray luminosity as more bow shocks are formed. However, the majority of the cloudlets formed are sheltered from the impacting wind by the main cloud and do not form a bow shock. Thus, they are not seen at soft X-ray energies and do not contribute to the X-ray luminosity (Fig. \[fig:xray\]; right panels). Indeed, the soft X-ray luminosity hardly varies with resolution. Figure \[fig:xray\_lum\] shows the soft X-ray luminosity as a function of resolution for our radiative fractal cloud at 0.7 Myr. At all resolutions, the luminosity is of the order of $L_{\rm x} \sim 10^{36} ~\rm erg~s^{-1}$. This amount varies only negligibly from low to high resolution. This is a result of the strongest X-ray source being the main bow shock upstream of the original gas cloud, which is present at all resolutions. As discussed above, we see little evidence that the mass-loaded component of the wind, nor the intermediate temperature interface between the hot and cold gas plays a large role in the soft X-ray emission. While high resolution ($< 0.5 ~\rm pc/cell$) global simulations are needed in order to confirm this result, we conclude that bow shocks and their interaction are the main source of the soft X-ray emission in starburst-driven winds. O[vi]{} Emission {#Ovi} ---------------- The importance of radiative cooling in the formation and survival of the filaments was discussed in § \[survival\]. For our proposal of the formation of a filament, via the break-up and acceleration of cool disk gas into the wind, to be a viable mechanism, cooling must be present in the outflow. Observations of the O[vi]{} emission line could be used to detect this cooling in the filaments. @Heckman2001 report the detection of O[ vi]{} emission in the dwarf starburst galaxy NGC 1705, which they associate with cooling in the outflow from gas at temperatures of $T \gtrsim 3 \times 10^5 ~\rm K$. They propose turbulent mixing layers [e.g. @Slavin1993] as a possible origin for this emission. In order to determine were the O[vi]{} emission may arise in our simulations, we have produced a “map” of the predicted O[vi]{} emission in models rf384, rs384, af384, and as384 at 0.7 Myr (Fig. \[fig:Ovi\]). We assume that the O[vi]{} emission in our simulations falls within the temperature range of $T = 1 \times 10^5 - 4 \times 10^5 ~\rm K$. When cooling is neglected (bottom two panels), O[vi]{} emission is only observed around the surviving cloud core. This emission is the result of the high degree of mixing of the hot and cold gas seen in the adiabatic simulations. In the radiative models (top two panels), O[vi]{} emission is observed throughout the flow and is closely aligned to the filamentary gas. This emission is caused by the mixing of hot and cold gas in the vicinity of each cool cloudlet that comprises the filament. 
The distribution of the O[vi]{} emission is most significant in the radiative fractal model (rf384; top panel), where the structure of the filament can still be clearly seen. The emission weighted histogram of the z-velocity for this model at 0.7 Myr is shown in the right hand panel of Figure \[fig:vel\]. The velocity dispersion of the O[vi]{} gas is similar to that of the H$\alpha$ emitting filaments, falling in the range of $v \sim 0 - 40 ~\rm km~s^{-1}$. While our investigation of the O[vi]{} emission is only preliminary, further detection of O[vi]{} kinematics similar to those proposed in this work would lend support the premise of cooling in the filaments of starburst winds. MISSING PHYSICS {#missing} =============== As discussed in § \[method\], the simulations performed in this study and paper [i]{} do not include thermal conduction or magnetic fields. While an investigation into the role these phenomena play in the wind/cloud interaction is beyond the scope of this work, they may influence the evolution and survival of the filaments in starburst winds. As such, we briefly discuss any differences their inclusion may have had on our results. Thermal Conduction ------------------ The effects of thermal conduction on the wind/cloud interaction has been investigated over the last decade by a number of authors [e.g. @Vieser2000; @Hensler2002; @Marcolini2005; @Orlando2005; @Recchi2007]. The immediate effects on the evolution of a radiative cloud has been described in detail by @Marcolini2005 who modeled the interaction of a cloud of radius $R_{\rm c} = 15 ~\rm pc$ and temperature $T_{\rm c} = 10^4 ~\rm K$ with a hot wind of temperature $T_{\rm w} = 10^6 ~\rm K$. Here we summarize the evolution of a radiative cloud in their two-dimensional simulation that includes the effects of thermal conduction: 1. In the early phases of the cloud’s evolution, thermal conduction results in a converging shock forming around the cloud. The cloud shrinks in size until the central pressure in the cloud’s core is high enough to halt the collapse. 2. The cloud begins to re-expand and over time becomes elongated in the downstream direction. 3. As the cloud evolves, it is compressed in a non-uniform manner due to the shape of the cloud and the complexity of the flow. 4. After 1 Myr, the cloud forms a filamentary structure along the symmetry axis of the simulation. The key difference to the evolution of a cloud in a simulation that neglects thermal conduction is that the cloud does not suffer hydrodynamical instabilities and fragment. This is because thermal conduction smooths the density and velocity gradients across the surface of the cloud, inhibiting the growth of instabilities [also see @Vieser2000]. However, @Marcolini2005 note that there is also significant mass loss from the cloud through evaporation in their thermal conduction models. @Orlando2005 also considered the effects of thermal conduction on a cloud’s evolution by investigating the survival of a cloud of radius $R_{\rm c} = 1 ~\rm pc$ being overrun by Mach 30 and 50 shocks. They performed both three-dimensional simulations that ignored radiative cooling and thermal conduction and two-dimensional simulations where they were included. Similar to @Marcolini2005, they found that thermal conduction inhibited the growth of hydrodynamical instabilities. In their simulations, the structure of a cloud consists of a cool dense core surrounded by a “corona” of gas, dominated by thermal conduction, that gradually evaporates as the cloud evolves. 
The cloud does not fragment and there is significant mass loss through evaporation. In the case of the slower Mach 30 shock, the dense core is unaffected by heat conduction and still fragments into multiple cloudlets. They also investigated the X-ray emission that would arise from their simulations [@Orlando2006]. They found that for the slower shock, the soft X-ray emission arose in the thermally conducting corona, while the cold core would likely emit at optical wavelengths. In the case of the faster Mach 50 shock, they suggest that there would be no optical component. In this paper, we find that the soft X-ray emission arises primarily from the main bow shock upstream of the original cloud. The optically emitting filamentary gas originates from the cool cloudlets that are the remnants of this cloud. As the simulations of @Marcolini2005 are the closest in initial conditions to our study, we use them as a basis for our supposition on the effect of thermal conduction on our results. Most significantly, thermal conduction would likely act to inhibit the growth of the Kelvin-Helmholtz instability, which is responsible for the high degree of cloud fragmentation seen in our simulations. While the initial evolution of the cloud would be similar to that discussed in § \[evolution\], the break-up of the cloud would likely be totally or partially suppressed. The initial structure of the cloud (e.g. fractal or spherical) would play a role in determining the degree of any fragmentation that occurs. A more cohesive H$\alpha$ emitting filamentary structure would be formed. In their study, @Marcolini2005 reported the bow shock upstream of the initial cloud to be the main source of the soft X-ray emission in their non thermal conducting simulations. When thermal conduction was considered they found an increase in the soft X-ray emission in the region between the cloud surface and the bow shock. It is likely that our simulations would show a similar result. Note that this emission is still complimentary to the filamentary H$\alpha$ emission, as seen by observations of starburst winds. We conclude that thermal conduction may help the survival time of a radiative cloud overrun by a supersonic wind by stabilizing it against the destructive effect of the Kelvin-Helmholtz instability. While there will be less mixing of cloud material into the wind by hydrodynamical effects, there will still be significant mass loss from the cloud via evaporation. However, as discussed below, the presence of a magnetic field may counteract any benefits to cloud stability gained through thermal conduction. Magnetic Fields --------------- Magnetic fields have been shown by a number of authors to have a varying effect on the wind/cloud interaction. A variety of different circumstances have been taken into consideration, such as the strength and orientation of the magnetic field. Early models were largely two-dimensional and ignored the effects of radiative cooling [e.g. @MacLow1994], while later models have been three-dimensional [e.g. @Gregori1999; @Gregori2000; @Shin2008], and/or included radiative cooling and thermal conduction [e.g. @Fragile2005; @Orlando2008]. These studies have shown that the orientation of the magnetic field (i.e. parallel, perpendicular, or oblique to the flow) has a significant effect on the evolution of the cloud. For example, @Shin2008 found that in the case of a strong oblique magnetic field, the cloud was actually pushed out of the central y=0 plane as it evolved. 
It has also been suggested that magnetic fields can help suppress the hydrodynamical instabilities that shred the cloud in a non-magnetized simulation [@MacLow1994; @Fragile2005], with less fragmentation occurring with increasing field strength. On the other hand, @Gregori1999 show that in a three-dimensional model, the growth of hydrodynamical instabilities can actually be accelerated by the presence of a magnetic field. It is widely thought that the presence of a magnetic field will act to suppress thermal conduction [e.g. @Chandran1998; @Narayan2001]. @Marcolini2005 suggest that the coefficient of thermal conductivity $\kappa$ may be significantly reduced below the Spitzer level in the presence of a magnetic field. To investigate how a reduced degree of conductivity may effect their results, they performed a number of simulations where $\kappa$ was below the standard Spitzer level $\kappa_{\rm sp} = 6.1 \times 10^{-7} ~T^{5/2} ~\rm erg~s~K^{-1}$. They found that at sub-Spitzer levels, thermal conduction is not as efficient at suppressing the hydrodynamical instabilities that fragment the cloud. More recently, @Orlando2008 ran a series of two-dimensional axisymmetric simulations that included radiative cooling, thermal conduction, and magnetic fields. They showed that while thermal conduction was not completely suppressed in the presence of a magnetic field, regardless of the orientation of the field, the cloud would be broken up and destroyed by hydrodynamical instabilities. However, they also found that this effect was lessened in the case of a strong magnetic field. While clearly the interplay between thermal conduction and magnetic fields will influence the evolution and survival of the filaments in starburst winds, without detailed three-dimensional simulations (with initial conditions appropriate to the problem) that consider radiative cooling, thermal conduction and the complex nature of magnetic fields, it is difficult to hypothesize what effect the magnetic field would have on a filament’s formation and survival. Nevertheless, a filamentary structure is still likely to be formed by the wind/cloud interaction, with the most significant difference being in the number of cloudlets that comprise the filament. We do not anticipate that there would be a large effect to the H$\alpha$, soft X-ray, and O[vi]{} emission processes discussed in § \[SB\_winds\]. SUMMARY ======= We have performed a series of three-dimensional simulations of the interaction of a supersonic wind with a radiative cloud. We consider two different cloud geometries (i.e. fractal and spherical), which enable us to investigate the impact of the initial shape and structure of the cloud on its subsequent evolution. This work was motivated by the simulations of the formation of a starburst-driven galactic wind in a inhomogeneous interstellar medium, reported in paper [i]{}. The aim of this work is to investigate the possible survival mechanism of a cloud accelerated by a hot freely expanding wind. We also set out to determine the effect of the numerical resolution of the evolution of the cloud and the implied soft X-ray emission associated with the interaction. The results of this study are as follows: 1. Both the initial geometry and the density distribution of the cloud significantly affect its evolution. A cloud which has a more inhomogeneous distribution of density fragments more than a cloud with a more uniform structure (e.g. a sphere). 
The wind more rapidly breaks the cloud apart in regions where it encounters the least density. 2. A radiative cloud survives longer than an identical adiabatic cloud. This is a result of the lower degree of heating in the radiative cloud, which suppresses the transverse expansion seen in the adiabatic case. The radiative cloud experiences a lower degree of acceleration and has a higher relative Mach number to the flow, diminishing the destructive effect of the Kelvin-Helmholtz instability. 3. The number of fragments formed by the break-up of the cloud increases as a power law with increasing numerical resolution. This is a direct result of further resolving the Kelvin-Helmholtz instability, which grows more quickly at shorter wavelengths. The number of fragments formed increases down to the resolution of the simulation and will not converge. 4. The calculated mass flux increases with numerical resolution. This is due to the turbulent nature of the stream and the increasing fragmentation of the cloud. High ($ < 0.1~ \rm pc/cell$) resolution and an adaptive mesh would be required for convergence to possibly occur. 5. A radiative cloud fragments into numerous cool, small dense cloudlets. These cloudlets are entrained into the turbulent flow, forming an overall filamentary structure, with regions where the concentration of cloudlets is higher. The velocity of the cloudlets at 0.7 Myr falls in the range of $v_{\rm c} = 150 - 400 ~\rm km~s^{-1}$ irrespective of resolution. 6. The filamentary structure that is formed and the range of velocities found are in good agreement with optical observations of starburst-driven winds. Thus, we confirm our conclusion from paper [i]{}, that H$\alpha$ emitting filaments can be formed from clouds accelerated into a supersonic wind by the ram-pressure of the wind. 7. There is little variation in the estimated soft X-ray luminosity of the radiative fractal cloud at all numerical resolutions considered, indicating that the X-ray emission is well resolved. 8. Soft X-ray emission arises primarily from the main bow shock, produced in the initial interaction, and from bow shocks produced upstream of fragments that are directly exposed to the wind. Regions where these bow shock interact are strong X-ray emitters. We see little evidence that the mixing of hot and cold gas (e.g. mass-loading and the boundary between the cool cloud material and the hot wind), contribute significantly to the X-ray emission. 9. The O[vi]{} emission arises is the same vicinity as the H$\alpha$ emission and has comparable emission weighted velocities, suggesting that the detection of O[vi ]{} in an outflow may be indicative of cooling in the filaments. The ability for a cloud to radiate heat is crucial for it to survive immersed inside a hot, turbulent, supersonic wind. While effects such as thermal conduction and magnetic fields will have an effect on the clouds survival, without radiative cooling the cloud is quickly destroyed by the Kelvin-Helmholtz instability, with the cloud’s material completely mixed into the surrounding stream. Thus, for a model of the wind/cloud interaction to be realistic, radiative cooling certainly cannot be neglected. Through the results of this work and paper [i]{}, we have shown that an optically emitting filament is easily formed via the interaction of a cool, dense cloud and a hot, tenuous supersonic wind under the conditions typically found in a starburst wind. We also find soft X-ray emission that has a natural spatial relationship to the filamentary gas. 
A relationship also seen in Chandra observations of these winds. Clearly, the multiphase nature of the interstellar medium is crucial for the formation of the filaments in starburst winds and can help explain much of the optical and soft X-ray emission detected in these complex objects. [47]{} natexlab\#1[\#1]{} , J., & [Tully]{}, B. 1988, Nature, 334, 43 , J. 1995, The VH-1 User’s Guide, Univ. Virginia , G., [Bland-Hawthorn]{}, J., & [Veilleux]{}, S. 2002, ApJ, 576, 745 , B. D. G. & [Cowley]{}, S. C. 1998, Phys. Rev. Lett., 80, 3077 , P., & [Woodward]{}, P. R. 1984, J. Comp. Phys, 54, 174 , J. L., [Bicknell]{}, G. V., [Sutherland]{}, R. S., & [Bland-Hawthorn]{}, J. 2008, ApJ, 674, 157 , D. P. 2005, ARA&A, 43, 337 , P. C., [Anninos]{}, P., [Gustafson]{}, K., & [Murray]{}, S. D. 2005, ApJ, 619, 327 , P. C., [Murray]{}, S. D., [Anninos]{}, P., & [van Breugel]{}, W. 2004, ApJ, 604, 74 , G., [Miniati]{}, F., [Ryu]{}, D., & [Jones]{}, T. W. 1999, ApJL, 527, L113 , G., [Miniati]{}, F., [Ryu]{}, D., & [Jones]{}, T. W. 2000, ApJ, 543, 775 , J. F., [Robey]{}, H. F., [Klein]{}, R. I., & [Miles]{}, A. R. 2007, ApJ, 662, 379 , S. R., & [Brand]{}, P. W. J. L. 1983, MNRAS, 203, 67 , T. M., [Sembach]{}, K. R., [Meurer]{}, G. R., [Strickland]{}, D. K., [Martin]{}, C. L., [Calzetti]{}, D., & [Leitherer]{}, C. 2001, ApJ, 554, 1021 , G. and [Vieser]{}, W. 2002, Ap&SS, 281, 275 , R. I., [Budil]{}, K. S., [Perry]{}, T. S., & [Bach]{}, D. R. 2003, ApJ, 583, 245 , R. I., [McKee]{}, C. F., & [Colella]{}, P. 1994, ApJ, 420, 213 , C. R., & [Sandage]{}, A. R. 1963, ApJ, 137, 1005 , M.-M., [McKee]{}, C. F., [Klein]{}, R. I., [Stone]{}, J. M., & [Norman]{}, M. L. 1994, ApJ, 433, 757 , A., [Strickland]{}, D. K., [D’Ercole]{}, A., [Heckman]{}, T. M., & [Hoopes]{}, C. G. 2005, MNRAS, 362, 626 , C. L., [Kobulnicky]{}, H. A., & [Heckman]{}, T. M. 2002, ApJ, 574, 663 , C. F., & [Cowie]{}, L. L. 1975, ApJ, 195, 715 , C., & [de Gouveia Dal Pino]{}, E. M. 2004, A&A, 424, 817 , C., [de Gouveia dal Pino]{}, E. M., & [Raga]{}, A. 2005, A&A, 443, 495 , G., [Kurk]{}, J. D., & [R[ö]{}ttgering]{}, H. J. A. 2002, A&A, 395, L13 , F., [McKee]{}, C. F., [Klein]{}, R. I., & [Fisher]{}, R. T. 2006, ApJS, 164, 477 , R. & [Medvedev]{}, M. V. 2001, ApJL, 562, L129 , J., [Falle]{}, S. A. E. G., & [Gaskell]{}, P. H. 1982, MNRAS, 201, 833 , S., [Bocchino]{}, F., [Peres]{}, G., [Reale]{}, F., [Plewa]{}, T., & [Rosner]{}, R. 2006, A&A, 457, 545 , S., [Bocchino]{}, F., [Reale]{}, F., [Peres]{}, G., & [Pagano]{}, P. 2008, ApJ, 678, 274 , S., [Peres]{}, G., [Reale]{}, F., [Bocchino]{}, F., [Rosner]{}, R., [Plewa]{}, T., & [Siegel]{}, A. 2005, A&A, 444, 505 , J. M., [Dyson]{}, J. E., [Falle]{}, S. A. E. G., & [Hartquist]{}, T. W. 2005, MNRAS, 361, 1077 , A. Y., [Frank]{}, A., & [Blackman]{}, E. G. 2002, ApJ, 576, 832 , A. C., [Steffen]{}, W., & [Gonz[á]{}lez]{}, R. F. 2005, Revista Mexicana de Astronomia y Astrofisica, 41, 45 , S., & [Hensler]{}, G. 2007, A&, 476, 841 , C. J., [Bicknell]{}, G. V., [Sutherland]{}, R. S., & [Midgley]{}, S. 2005, MNRAS, 359, 781 , A. G. 1975, ApJ, 197, 621 , M.-S., [Stone]{}, J. M., & [Snyder]{}, G. F. 2008, ApJ, 680, 336 , P. L., & [Bland-Hawthorn]{}, J. 1998, ApJ, 493, 129 , J. D., [Shull]{}, J. M., & [Begelman]{}, M. C. 1993, ApJ, 407, 83 , J. M., & [Norman]{}, M. L. 1992, ApJL, 390, L17 , D. K., [Heckman]{}, T. M., [Colbert]{}, E. J. M., [Hoopes]{}, C. G., & [Weaver]{}, K. A. 2004, ApJS, 151, 193 —. 2004, ApJ, 606, 829 , D. K., [Heckman]{}, T. M., [Weaver]{}, K. A., [Hoopes]{}, C. G., & [Dahlem]{}, M. 
2002, ApJ, 568, 689 , R. S., & [Bicknell]{}, G. V. 2007, ApJS, 173, 37 , R. S., [Bicknell]{}, G. V., & [Dopita]{}, M. A. 2003, ApJ, 591, 238 , R. S., [Bisset]{}, D. K., & [Bicknell ]{}, G. V. 2003, ApJS, 147, 187 , R. S., & [Dopita]{}, M. A. 1993, ApJS, 88, 253 , G., [Mu[ñ]{}oz-Tu[ñ]{}[ó]{}n]{}, C., [P[é]{}rez]{}, E., [Silich]{}, S., & [Telles]{}, E. 2006, ApJ, 643, 186 , S., [Cecil]{}, G., & [Bland-Hawthorn]{}, J. 2005, ARA&A, 43, 769 , S., [Cecil]{}, G., [Bland-Hawthorn]{}, J., [Tully]{}, R. B., [Filippenko]{}, A. V., & [Sargent]{}, W. L. W. 1994, ApJ, 433, 48 , W. and [Hensler]{}, G. 2000, Ap&SS, 272, 189 , M. S., [Smith]{}, L. J., & [Gallagher]{}, J. S. 2008, MNRAS, 383, 864 , P. R. 1976, ApJ, 207, 484 , J., & [Stone]{}, J. M. 1995, ApJ, 454, 172
--- abstract: | In this paper, we consider the following elliptic Toda system associated to a general simple Lie algebra with multiple singular sources $$\begin{cases} -\Delta w_i=\sum_{j=1}^na_{i,j}e^{2w_j}+2\pi\sum_{\ell=1}^m\beta_{i,\ell}\delta_{p_\ell} \quad&\mbox{in}\quad\mathbb{R}^2,\\ \\ w_i(x)=-2\log|x|+O(1)~\mbox{as}~|x|\to\infty,\quad &i=1,\cdots,n, \end{cases}$$ where $\beta_{i,\ell}\in[0,1)$. Under suitable assumptions on $\beta_{i,\ell}$ we establish existence and non-existence results. This paper generalizes Luo-Tian’s [@Luo-Tian] and Hyder-Lin-Wei’s [@hlw] results to the general Toda system. address: - 'Department of Mathematics, University of British Columbia, Vancouver, B.C., Canada, V6T 1Z2' - 'Department of Mathematics, University of British Columbia, Vancouver, B.C., Canada, V6T 1Z2' - 'Wen  Yang, Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, P.O. Box 71010, Wuhan 430071, P. R. China' author: - Ali Hyder - Juncheng Wei - Wen Yang title: On the general Toda system with multiple singular points --- Introduction ============ In this paper, we shall consider the following singular Toda system with multiple singular sources $$\label{1.su} -\Delta w_i =\sum_{j=1}^na_{i,j}e^{2w_j}+2\pi\sum_{\ell=1}^m\beta_{i,\ell}\delta_{p_\ell}\quad\mbox{in}\quad\mathbb{R}^2,$$ where $(a_{i,j})$ is the Cartan matrix associated to a simple Lie algebra, $\beta_{i,\ell}\in[0,1)$, $p_1,\cdots,p_m$ are distinct points in ${\mathbb{R}^2}$ and $\delta_{p_\ell}$ denotes the Dirac measure at $p_\ell,~\ell=1,\cdots,m.$ When the Lie algebra is $\mathbf{A}_1=\mathfrak{sl}_2$, the system becomes the Liouville equation $$\label{1.liouville} \Delta u+e^{2u}=-2\pi\sum_{\ell=1}^m\gamma_{\ell}\delta_{p_\ell}\quad\mbox{in}\quad{\mathbb{R}^2}.$$ The Toda system and the Liouville equation arise in many physical and geometric problems. On the geometric side, the Liouville equation is related to the Nirenberg problem of finding a conformal metric with prescribed Gaussian curvature when $\{p_1,\cdots,p_m\}=\emptyset$, and to the existence of metrics of prescribed curvature with conical singularities at $\{p_1,\cdots,p_m\}$ otherwise. When the Lie algebra is $\mathbf{A}_n$, the Toda system is closely related to holomorphic curves in projective spaces [@Doliwa] and the Plücker formulas [@gh], while the periodic Toda systems are related to harmonic maps [@M-Guest]. In physics, the Toda system is a well-known integrable system and is closely related to the $\mathcal{W}$-algebra in conformal field theory, see [@bfr; @frrtw] and references therein. The Liouville equation and the Toda system also play an important role in Chern-Simons gauge theory. For example, the $\mathbf{A}_n$ ($n=2$) Toda system governs the limiting equations as physical parameters tend to $0$ and is used to explain the physics of high temperature superconductivity; we refer the readers to [@Dunne; @Yang; @Yang1] for more background. For the Liouville equation, Chen and Li [@CL] classified all solutions when there is no singular source, provided the total integral of $e^{2u}$ over $\mathbb{R}^2$ is finite. Under the same integrability condition, Prajapat and Tarantello [@PT] completed the classification with one singular point. The question of conformal metrics with multiple conical singularities has been widely studied using various viewpoints. 
When $m=2$, equation is equivalent to a Mean Field equation on $\mathbb{S}^2$ with three singularities, which can be chosen as $0,1$ and $\infty$ by Möbius transformation, Eremenko’s [@Eremenko] work gives the necessary and sufficient condition for the existence of a conformal metric of constant Gaussian curvature by studying the monodromy of the corresponding second order hypergeometric equation. For equation with general $m\geq3$, Troyanov [@Tr] proved that there exists a solution provided $\gamma_{\ell},~\ell=1,2,\cdots,m$ satisfies the following condition $$\label{1.troyanov} 0<2-\sum_{\ell=1}^m\gamma_{\ell}<2\min\{1,\min_{1\leq \ell\leq m}(1-\gamma_{\ell})\}.$$ Later on, Luo-Tian [@Luo-Tian] proved that if the condition $0<\gamma_{\ell}<1$ is satisfied for $1\leq \ell\leq m$, then is necessary and sufficient, and the solution is unique. Precisely, we state it as the following theorem \[th.lt\] Let $m\geq 3$ and $p_1,\cdots,p_m$ be $m$ distinct points in ${\mathbb{R}^2}$. Then there exists a solution to verifying the following behavior $$\begin{aligned} \label{1.ltbehavior}\left\{ \begin{array}{ll} u(x)=-\gamma_\ell\log|x-p_\ell|+\mbox{bounded continuous function}\quad &\mbox{around each}~p_{\ell}, \\ \rule{0cm}{.5cm} u(x)=-2\log|x|+\mbox{bounded continuous function} &\mbox{as}~|x|\to\infty,\\ \rule{0cm}{.5cm} \gamma_\ell\in(0,1),&\ell=1,\cdots,m, \end{array}\right.\end{aligned}$$ if and only if holds. Moreover, the solution is unique. We rewrite as follows $$\label{1.condition-1} \sum_{\ell=1}^m\gamma_\ell<2\quad\mathrm{and}\quad\sum_{\ell\neq j}\gamma_\ell>\gamma_j~\mbox{for every}~ j=1,\cdots,m.$$ It is interesting to get a counterpart result of Theorem \[th.lt\] for the Toda system of general simple Lie algebra. When the Lie algebra is $\mathfrak{sl}_3$, the first two authors of this paper and Lin [@hlw] deduce an existence result of provided $\beta_{i,\ell},~i=1,2,~\ell=1,\cdots,m$ satisfies $$\label{1.condition-2} 3(1+\beta_{i,j})<2\sum_{\ell=1}^m\beta_{i,\ell}+\sum_{\ell=1}^m\beta_{3-i,\ell},~ \sum_{l=1}^3\beta_{i,\ell}<2~\mbox{for}~i=1,2,~j=1,\cdots,m.$$ While the authors also show that an equivalent condition of for $\mathbf{A}_2$ Toda system is not sufficient for the existence of solutions to satisfying the behavior . In this paper, we shall study the same problem for general Toda system. Precisely, we shall consider the existence and non-existence of solutions $(w_1,\cdots,w_n)$ to satisfying $$\begin{aligned} \label{1.asy} \left\{ \begin{array}{ll} w_i(x)=-\beta_{i,\ell}\log|x-p_\ell|+h_{i,\ell}\quad &\mbox{around each point}~p_\ell,\\ \rule{0cm}{0.4cm} w_i(x)=-2\log|x|+h_{i,m+1}\quad &\mbox{as}~|x|\to+\infty,\\ \rule{.0cm}{0.4cm} h_{i,\ell}(x)~\mbox{is continuous in a neighbourhood of}~p_\ell,\end{array}\right. $$ for $i=1,\cdots,n$ and $\ell=1,\cdots,m$ and $h_{i,m+1}$ is bounded outside a compact set. We set $$u_i(x)=w_i(x)+\sum_{\ell=1}^m\beta_{i,\ell}\log|x-p_\ell|,\quad i=1,\cdots,n.$$ We see that $w_i$ solves if and only if $u_i$ solves $$\begin{aligned} \label{1.u} \left\{ \begin{array}{ll}-\Delta u_i=\sum_{j=1}^na_{i,j}K_je^{2u_j}\quad\mbox{in}\quad {\mathbb{R}^2},\\ \rule{0cm}{.5cm} K_i(x):=\prod_{\ell=1}^m\frac{1}{|x-p_{\ell}|^{2\beta_{i,\ell}}}. 
\end{array}\right.\end{aligned}$$ The condition in terms of $u_i$ is $$\label{1.newasy} u_i(x)=-{\beta_i} \log|x|+\mbox{ a bounded continuous function on}~B_{1}^c,$$ where $$\label{1.betai} \beta_i:=2-\sum_{\ell=1}^m\beta_{i,\ell}.$$ Our first result of this paper is on the existence of solutions to : \[th1.1\] Let $m\geq 3$. Suppose $\{\beta_{i,\ell},~i=1,\cdots,n,~\ell=1,\cdots,m\}$ satisfies $$\label{1.condition} 2\sum_{j=1}^na^{i,j}-1+\beta_{i,\ell}<\sum_{j=1}^n\sum_{s=1}^ma^{i,j}\beta_{j,s},\quad\forall i=1,\cdots,n,~\ell=1,\cdots,m,$$ where $(a^{i,j})_{n\times n}$ is the inverse matrix of $(a_{i,j})_{n\times n}.$ Then given any $m$ distinct points $\{p_{\ell}\}_{\ell=1}^m\subset{\mathbb{R}^2}$ there exists a continuous solution $(u_1,\cdots,u_n)$ to satisfying with $\beta_i$ as in . We notice that when $n=2$ and $(a_{i,j})$ is Cartan matrix for $\mathbf{A}_2$, then is equivalent to the condition . Next, we shall show that the equivalent condition proposed by Luo-Tian for single Liouville equation could not work for general Toda systems, namely a condition of the following form can not guarantee the existence of solutions to - $$\label{1.ltcondition} \sum_{s=1,~s\neq \ell}^m\beta_{i,s}>\max_i\beta_{i,\ell}\quad\mbox{for every}\quad i=1,\cdots,n,~\ell=1,\cdots,m.$$ The second result of this paper is the following \[th1.2\] There exist a tuple of points $\{p_{\ell}\}_{\ell=1}^m\subset{\mathbb{R}^2}$ and $\beta_{i,\ell},~i=1,\cdots,n,~\ell=1,\cdots,m$ satisfying , such that equation has no solution satisfying . Let us close the introduction by mentioning the idea used in the proof of Theorem \[th1.2\]. Our main tools are the induction method and Pohozaev identity. Based on the previous result, we already find points such that has no solution when the coefficient matrix is $\mathbf{A}_2$ and $\beta_{i,\ell}$ satisfies . This is the starting point of our approach. By assuming the non-existence result of a low rank Toda system, we obtain that the non-existence result also holds for a higher rank (with rank plus one) by choosing suitable points. The crucial thing of our argument is to exclude a blow-up phenomena for the higher rank Toda system, where the Pohozaev identity plays an important role. The blow-up phenomena for a general Toda system is very complicated, one of the fundamental issue concerning is the computation of the local mass at the blow-up point. Until now, we can only compute it for $\mathbf{A}_n,~\mathbf{B}_n$, $\mathbf{C}_n$ and $\mathbf{G}_2$, see [@lyz]. This paper is organized as follows. We study the existence result (Theorem \[th1.1\]) and non-existence result (Theorem \[th1.2\]) in sections 2 and 3 respectively. In the last section, we present all the necessary lemmas and facts, including the Cartan matrix of all the simple Lie algebra and their inverse matrices. Proof of Theorem \[th1.1\] ========================== In this section we shall prove Theorem \[th1.1\]. We notice that if $u_1,\cdots,u_n$ is a continuous solution to - with $\beta_{i,\ell}<1$ for all $i=1,\cdots,n$ and $\ell=1,\cdots,m$, then $$K_i(x)e^{2u_i}\in L^1({\mathbb{R}^2})$$ and $u_i$ has the following representation formula $$u_i(x)=\frac{1}{2\pi}\sum_{j=1}^na_{i,j}\int_{{\mathbb{R}^2}}\log\left(\frac{1}{|x-y|}\right)K_j(y)e^{2u_j(y)}dy+c_i, ~i=1,\cdots,n,$$ for some $c_i\in{\mathbb{R}}$. 
Moreover, using the asymptotic behavior , we have $$\sum_{j=1}^na_{i,j}\int_{{\mathbb{R}^2}}K_je^{2u_j}dx=2\pi\beta_i,\quad i=1,\cdots,n,$$ that is $$\label{2.rel-1} \int_{{\mathbb{R}^2}}K_ie^{2u_i}=2\pi\bar\beta_i,\quad \bar\beta_i=\sum_{j=1}^na^{i,j}\beta_j,\quad i=1,\cdots,n.$$ Then Theorem \[th1.1\] is equivalent to the existence of solutions $(u_1,\cdots,u_n)$ to satisfying and $\bar\beta_i$ verifies $$\label{2.rel-2} \bar\beta_i>0,\quad \bar\beta_i<1-\beta_{i,\ell}\quad \mbox{for every}~i=1,\cdots,n,~\ell=1,\cdots,m.$$ As the paper [@hlw], we shall use a fixed point argument to prove the existence. To set up our argument, we introduce the following functional space $$\mathbb{X}=\underbrace{C_0({\mathbb{R}^2})\times\cdots\times C_0({\mathbb{R}^2})}_n,\quad \|\mathbf{v}\|_\mathbb{X}=:\max_i\left\{\|v_i\|_{L^\infty({\mathbb{R}^2})}~\mbox{for}~\mathbf{v}\in\mathbb{X}\right\},$$ where ${\bf{v}}=(v_1,\dots,v_n) $, and $C_0({\mathbb{R}^2})$ denotes the space of continuous functions vanishing at infinity. We fix $u_0\in C^\infty({\mathbb{R}^2})$ such that $$u_0(x)=-\log|x|\quad\mbox{on}\quad B_{1}^c. $$ For $v\in C_0({\mathbb{R}^2})$ we set $c_{i,v}$ to be the unique number such that $$\int_{{\mathbb{R}}^2}\bar{K_i}e^{2(v+c_{i,v})}dx=2\pi\bar\beta_i,\quad \bar K_i:=K_ie^{2\beta_i u_0},\quad i=1,\cdots,n,$$ where $\bar\beta_i$ is defined in . Now we define $$\mathcal{T}:\mathbb{X}\to\mathbb{X},~(v_1,\cdots,v_n)\mapsto(\bar v_1,\cdots,\bar v_n),$$ where we have set $$\label{2.barv} \bar v_i(x):=\frac{1}{2\pi}\sum_{j=1}^na_{i,j}\int_{{\mathbb{R}}^2}\log\left(\frac{1}{|x-y|}\right)\bar K_j(y)e^{2(v_j(y)+c_{j,v_j})}dy-\beta_i u_0(x),\quad i=1,\cdots,n.$$ As $\beta_i=\sum_{j=1}^na_{i,j}\bar\beta_j$, for $x\in B_{1}^c$, can be written as $$\bar v_i(x)=\frac{1}{2\pi}\sum_{j=1}^na_{i,j}\int_{{\mathbb{R}}^2}\log\left(\frac{|x|}{|x-y|}\right)\bar K_j(y)e^{2(v_j(y)+c_{j,v_j})}dy,\quad i=1,\cdots,n.$$ Using the fact that $$\bar K_i=K_ie^{2\beta_iu_0}=O(|x|^{-4})\quad\mbox{for}~|x|~\mbox{large},$$ one can prove that $$\bar v_i(x)\to 0~\mbox{as}~ |x|\to\infty,\quad i=1,\cdots,n.$$ Moreover, the operator $\mathcal{T}$ is compact, see [@HMM Lemma 4.1]. To find a fixed point of the map $\mathcal{T}$, it suffices to show that $\mbox{deg}(I-\mathcal{T},\mathbb{X},0)\neq0$. We shall use a homotopy type argument to prove the latter fact. In our homotopy type argument, we need the result below. 
\[pr2.1\] There exists a constant $C>0$ such that $$\|\mathbf{v}\|_{\mathbb{X}}\leq C~ \mbox{for every}~ (\mathbf{v},t)\in\mathbb{X}\times[0,1]~ \mbox{satisfying}~ {\mathbf{v}}=t\mathcal{T}({\mathbf{v}}).$$ We assume by contradiction that the result is false; then there exist $${\mathbf{v}}^k=(v_1^k,\cdots,v_n^k)\quad\mbox{and}\quad t^k\in(0,1]$$ with $${\mathbf{v}}^k=t^k\mathcal{T}({\mathbf{v}}^k)\quad\mbox{and}\quad \|{\mathbf{v}}^k\|_\mathbb{X}\to\infty.$$ We set $$\psi_i^k:=v_i^k+c_i^k,\quad c_i^k=c_{i,v_i^k}+\frac12\log t^k.$$ Then we have $$\psi_i^k(x)=\frac{1}{2\pi}\int_{{\mathbb{R}^2}}\sum_{j=1}^n\log\left(\frac{1}{|x-y|}\right)a_{i,j}\bar K_j(y)e^{2\psi_{j}^k(y)}dy-t^k\beta_iu_0(x)+c_i^k,\quad i=1,\cdots,n.$$ For $|x|\geq 1$ the above equation can be written as $$\psi_i^k(x)=\frac{1}{2\pi}\int_{{\mathbb{R}^2}}\sum_{j=1}^n\log\left(\frac{|x|}{|x-y|}\right)a_{i,j}\bar K_j(y)e^{2\psi_{j}^k(y)}dy+c_i^k,\quad i=1,\cdots,n.$$ Next we claim that $$\label{2.claim} \max_i\left\{\sup\psi_i^k(x)\right\}\xrightarrow{k\to\infty}\infty.$$ Indeed, if the claim were not true, we could use the Green representation together with $\max_i\sup\psi_i^k(x)\leq C$ to obtain that $\|{\mathbf{v}}^k\|_{{\mathbb{X}}}\leq C$, a contradiction. Thus the claim holds. Without loss of generality we may assume that [^1] $$\sup\psi_1^k(x)=\max_i\left\{\sup\psi_i^k(x)\right\}.$$ Let $x^k\in{\mathbb{R}^2}$ be a point such that $$\sup\psi_1^k(x)<\psi_1^k(x^k)+1.$$ If $x^k$ is bounded then, up to a subsequence, $x^k\to x^\infty.$ We consider the following two cases. Case 1. $|x^k|$ is uniformly bounded. We notice that $\psi_i^k,~i=1,\cdots,n$ satisfies the following equation $$\Delta\psi_i^k+\sum_{j=1}^na_{i,j}\bar K_je^{2\psi_j^k}{+t^k\beta_i{\Delta}u_0}=0.$$ Let $$\mathcal{S}=\{p\in {\mathbb{R}}^2\mid \max_i\psi_i^k(p_k)\to\infty~\mbox{ for some }~p_k\to p\}.$$ For $p\in\mathcal{S}$, we define $$\sigma_i(p)=\frac{1}{2\pi}\lim_{r\to0}\lim_{k\to\infty}\int_{B_r(p)}\bar K_ie^{2\psi_i^k},\quad i=1,\cdots,n.$$ By Lemma \[le4.1\], we have $$\label{2-i} \sigma_i(p)\geq \mu_i(p)\quad\mbox{holds at least for one}~i\in\{1,\cdots,n\},$$ where $$\begin{aligned} \mu_i(p):=\left\{ \begin{array}{ll} 1&\quad\text{if }p\not\in\{p_1,\dots,p_m\},\\ \rule{0cm}{.4cm} 1-\beta_{i,\ell}&\quad\text{if }p=p_\ell,\quad \ell=1,\dots,m. \end{array}\right.\end{aligned}$$ Using the above inequality and the fact $$\frac{1}{2\pi}\int_{{\mathbb{R}^2}}\bar K_ie^{2\psi_i^k}=\bar\beta_i,$$ we have for some $i\in\{1,\dots,n\}$ $$\bar\beta_i\geq \sigma_i\geq \mu_i\geq \min_{\ell}\{1,1-\beta_{i,\ell}\}.$$ This contradicts the condition imposed on $\bar\beta_i$. Thus $\{|x^k|\}$ is unbounded.\ Case 2. $|x^k|\to\infty$. We set $$\tilde\psi_i^k(x)=\psi_i^k(\frac{x}{|x|^2}),\quad \tilde{K}_i(x)=\frac{1}{|x|^4}\bar K_i(\frac{x}{|x|^2}) \quad\mathrm{in}\quad{\mathbb{R}^2}\setminus\{0\},~i=1,\cdots,n,$$ and we extend them continuously at the origin. Then $\tilde\psi_i^k$ satisfies $$\Delta\tilde\psi_i^k+\sum_{j=1}^na_{i,j}\tilde K_je^{2\tilde\psi_j^k}=0\quad\mathrm{in}\quad B_{1}.$$ Since $\tilde K_i$ is continuous, $\tilde{K}_i(0)>0,~i=1,\cdots,n$ and $$\tilde\psi_i^k(\tilde x^k)\to\infty,\quad \tilde x^k=\frac{x^k}{|x^k|^2}\to0,$$ repeating the arguments of Case 1, we get $$\bar\beta_i\geq\frac{1}{2\pi}\int_{B_{1}}\tilde K_ie^{2\tilde\psi_i^k}\geq 1\geq1-\beta_{i,\ell}\quad\mbox{for any}~\ell=1,\cdots,m.$$ Contradiction arises again. Therefore there is no blow up for $\{\psi_i^k\}$, and we finish the proof. 
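For the reader's convenience, we record the elementary identity behind the inversion used in Case 2 (this remark is not part of the original argument). If $v(x)=u\big(\frac{x}{|x|^2}\big)$, then in two dimensions $$\Delta v(x)=\frac{1}{|x|^4}\,(\Delta u)\Big(\frac{x}{|x|^2}\Big),\qquad x\neq0,$$ because the inversion $x\mapsto x/|x|^2$ is a conformal map with conformal factor $|x|^{-2}$. Applying this to $\psi_i^k$, which for $|x|>1$ satisfies $\Delta\psi_i^k+\sum_{j=1}^na_{i,j}\bar K_je^{2\psi_j^k}=0$ (there $\Delta u_0=0$), yields exactly $\Delta\tilde\psi_i^k+\sum_{j=1}^na_{i,j}\tilde K_je^{2\tilde\psi_j^k}=0$ in $B_1\setminus\{0\}$ with $\tilde K_j(x)=|x|^{-4}\bar K_j(x/|x|^2)$, as used above.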
By Proposition \[pr2.1\], we get that $$\mbox{deg}(I-\mathcal{T},\mathbb{X},0)=\mbox{deg}(I,\mathbb{X},0)=1.$$ From which we derive the existence of solution to - . Proof of Theorem \[th1.2\] ========================== In this section we shall prove Theorem \[th1.2\]. We provide the details for $\mathbf{A}_n$ only, and state the differences for other cases at the end of this section. Before the proof, we make the following preparation. For $n\geq 2$ we consider the following tuple of positive numbers: $$\mathcal{B}=\left\{b_1,\cdots,b_{4n+1}\in(0,1)\mid\{b_i\}_{i=1}^{4n+1}~\mbox{satisfies the assumption}~\mathcal{D}\right\},$$ where the assumption $\mathcal{D}$ is: $$\mbox{Assumption}~\mathcal{D}: \begin{cases} \mbox{(d1)}~ &\sum_{i=1}^5b_i=2,~b_2=b_3<\frac12 b_1,~b_4=b_5,~2b_1+b_4<2,\\ \mbox{(d2)}~ &\sum_{i=1}^4b_{4l-3+i}=2,~l=2,\cdots,n,\\ \mbox{(d3)}~ &b_{4l-2}=b_{4l-1}=b_{4l},~l=2,\cdots,n,\\ \mbox{(d4)}~ &n^2b_{4i+1}<\frac{1}{400},~i=1,\cdots,n,\\ \mbox{(d5)}~ &b_4=b_9. \end{cases}$$ Let us point out that the set $\mathcal{B}$ is not empty, we can choose $$\begin{aligned} &b_1=1-\frac23{\varepsilon},\quad b_2=b_3=\frac12-\frac23{\varepsilon},\quad b_4=b_5={\varepsilon},\\ &b_{4l-2}=b_{4l-1}=b_{4l}=\frac23-\frac{1}{3}{\varepsilon}, ~b_{4l+1}={\varepsilon},~l=2,\cdots,n. \end{aligned}$$ Then it is easy to see that the above $\{b_i\}_{i=1}^{4n+1}$ satisfies the assumption $\mathcal{D}$ provided ${\varepsilon}<\frac{1}{400n^2}$. From (d1) and (d4), we see that $$\label{3.con1} b_4+b_1>1\quad\mbox{and}\quad b_4+b_2=b_4+b_3<1,$$ and $$\label{3.con2} b_{4i+1}<\frac{1}{50},\quad i=1,\cdots,n.$$ We shall show a non-existence result to the Toda system with $m=3n+1$ and $\beta_{i,\ell}$ satisfying the following: $$\label{3.con3} \begin{aligned} &\beta_{i,3i-2+\ell}=b_{4i-3+\ell},~i=2,\cdots,n,~\ell=1,2,3,\\ &\beta_{1,\ell}=b_\ell,~\ell=1,\cdots,4,\quad\mbox{and}\quad \beta_{i,\ell}=0~\mbox{for the other}~i,\ell. \end{aligned}$$ Then using (d1) and (d2), we get $$\beta_i:=2-\sum_{\ell=1}^{3n+1}\beta_{i,\ell}=b_{4i+1}.$$ While from (d1) and (d3), it is not difficult to see that $$\beta_{1,1}<\beta_{1,2}+\beta_{1,3}+\beta_{1,4},$$ and $$\beta_{i,j}<\sum_{\ell\neq j}\beta_{i,\ell},\quad i=2,\cdots,n.$$ Hence $\beta_{i,\ell}$ satisfies . Now let us state the main result of this section: \[pr3.1\] Let $n\geq 2$ and $\beta_{i,\ell}$ be as in with $\{b_1,\cdots,b_{4n+1}\}$ satisfying the assumption $\mathcal{D}$, then there exists points $\{p_\ell\}_{\ell=1}^{3n+1}$ such that equation (with the corresponding Lie algebra matrix $\mathbf{A}_n$) has no solution satisfying the asymptotic condition . We shall apply the induction method to prove the conclusion. When $n=2$, the problem becomes $$\begin{aligned} \left\{ \begin{array}{ll} \Delta u_1+a_{1,1}K_1e^{2u_1}-a_{1,2}K_2e^{2u_2}=0\quad\text{in }{\mathbb{R}}^2\\ \rule{0cm}{0.5cm} \Delta u_2-a_{2,1}K_1e^{2u_1}+a_{2,2}K_2e^{2u_2}=0 \quad\text{in }{\mathbb{R}}^2 \end{array} \right. $$ where $$K_1(x)=\prod_{i=1}^4\frac{1}{|x-p_i|^{2\beta_{1,i}}},\quad K_2(x)=\prod_{i=5}^7\frac{1}{|x-p_i|^{2\beta_{2,i}}}.$$ One can check that $b_1,b_2,b_3,b_4,b_6,b_7,b_8$ satisfies the assumptions $\mathcal{A}1)$ to $\mathcal{A}5)$ (with $\beta_i=b_i$ for $i=1,2,3,4$ and $\beta_i=b_{i+1}$ for $i=5,6,7$), then by [@hlw Lemma 3.2], we can find points $p_1,\cdots,p_7$ such that has no solution with the asymptotic behavior . 
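Before turning to the induction step, we note that the explicit choice of $b_1,\cdots,b_{4n+1}$ displayed above can be checked directly against assumption $\mathcal{D}$. The short script below (our own sketch, not part of the paper; the function names, the tolerance and the tested values of $n$ are arbitrary) performs this verification numerically:

```python
import numpy as np

def example_b(n, eps):
    """Explicit choice of b_1,...,b_{4n+1} given in the text (requires eps < 1/(400 n^2))."""
    b = np.zeros(4 * n + 2)                      # 1-based indexing; b[0] is unused
    b[1] = 1 - 2 * eps / 3
    b[2] = b[3] = 0.5 - 2 * eps / 3
    b[4] = b[5] = eps
    for l in range(2, n + 1):
        b[4*l - 2] = b[4*l - 1] = b[4*l] = 2/3 - eps/3
        b[4*l + 1] = eps
    return b

def check_assumption_D(b, n, tol=1e-12):
    d1 = (abs(b[1:6].sum() - 2) < tol and b[2] == b[3] < b[1] / 2
          and b[4] == b[5] and 2 * b[1] + b[4] < 2)
    d2 = all(abs(b[4*l - 2:4*l + 2].sum() - 2) < tol for l in range(2, n + 1))
    d3 = all(b[4*l - 2] == b[4*l - 1] == b[4*l] for l in range(2, n + 1))
    d4 = all(n**2 * b[4*i + 1] < 1/400 for i in range(1, n + 1))
    d5 = (b[4] == b[9])
    return d1 and d2 and d3 and d4 and d5

for n in (2, 3, 5, 10):
    eps = 1 / (500 * n**2)                       # any eps < 1/(400 n^2) works here
    print(n, check_assumption_D(example_b(n, eps), n))
```

For every $n$ tested the check returns `True`, consistent with the claim that the set $\mathcal{B}$ is non-empty.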
Suppose the result holds for $n_0$ with $2\leq n_0$, and let $p_1,\cdots,p_{3n_0+1}$ be the points such that the following equation has no solution $$\begin{aligned} \label{3.m-1} \left\{\begin{array}{ll}\Delta u_i+\sum_{j=1}^{n_0}a_{i,j}K_je^{2u_j}=0, &\quad i=1,\cdots,n_0,\\ \rule{0cm}{.5cm} u_i(x)=-(2-\sum_{\ell=1}^3\beta_{i,3i-2+\ell})\log|x|~\mbox{as}~|x|\to\infty, &\quad i=1,\cdots,n_0.\end{array}\right.\end{aligned}$$ Now let us find points [$\{p_i\}_{i=3n_0+2}^{3n_0+4}$]{} so that the conclusion holds for $\{p_i\}_{i=1}^{3n_0+4}$. Let $p_{3n_0+2}$ be a fixed point (different from $p_1,\cdots,p_{3n_0+1}$). We claim that for $|p_{3n_0+3}|,|p_{3n_0+4}|$ large and $p_{3n_0+3}\neq p_{3n_0+4}$ there exists no solution to with $n=n_0+1$ having the asymptotic behavior $$u_i(x)=-\beta_i\log|x|+O(1)\quad\mbox{as}\quad|x|\to+\infty,~i=1,\cdots,n_0+1.$$ We shall prove the claim by contradiction. Suppose there is a sequence of solutions $\{u_i^k\}_{i=1}^{n_0+1}$ of - ($n$ is replaced by $n_0+1$) with $$p_{\ell}=p_{\ell,k},\quad |p_{\ell}|\to\infty\quad\mbox{for}\quad \ell=3n_0+3,3n_0+4.$$ Equivalently, we have $u_i^k,~i=1,\cdots,n_0+1$ satisfies $$\begin{aligned} \left\{\begin{array}{ll} \Delta u_i^k+\sum_{j=1}^{n_0+1}a_{i,j}\widetilde K_{j}e^{2u_j^k}=0 \quad\mbox{in} &\quad {\mathbb{R}^2},\quad i=1,\cdots,n_0+1,\\ \rule{0cm}{.5cm} \int_{{\mathbb{R}^2}}{\widetilde}K_ie^{2u_i^k}=2\pi\bar\beta_i, &\quad i=1,\cdots,n_0+1,\\ \rule{0cm}{.5cm} |p_{3n_0+3}|,|p_{3n_0+4}|\to+\infty,\end{array}\right.\end{aligned}$$ where ${\widetilde}K_i(x)=K_i(x),~i=1,\cdots,n_0,$ and $${\widetilde}K_{n_0+1}(x)=|p_{3n_0+3}|^{2\beta_{n_0+1,3n_0+3}}|p_{3n_0+4}|^{2\beta_{n_0+1,3n_0+4}} \prod_{\ell=1}^{3}\frac{1}{|x-p_{3n_0+1+\ell}|^{2\beta_{n_0+1,3n_0+1+\ell}}}.$$ Notice that ${\widetilde}K_i,~i=1,\cdots,n_0$ is independent of $k$ and is integrable due to . We see that $${\widetilde}K_{n_0+1}(x)\xrightarrow{k\to\infty}\frac{1}{|x-p_{3n_0+2}|^{2\beta_{n_0+1,3n_0+2}}}~\mbox{locally uniformly in}~{\mathbb{R}^2}\setminus\{p_{3n_0+2}\}.$$ We shall divide our proof into the following steps: Step 1. We prove that $$\label{3.pr-3} \lim_{R\to\infty}\lim_{k\to\infty}\int_{B_R^c}{\widetilde}K_ie^{2u_i}=0,\quad i=1,\cdots,n_0.$$ We set $$\hat u_i^k(x)=u_i^k(\frac{x}{|x|^2})-\beta_i\log|x|,\quad i=1,\cdots,n_0+1,$$ then setting $$q_{\ell}:=\frac{p_{\ell}}{|p_{\ell}|^2},\quad \ell=1,\cdots,3n_0+4,$$ we see that $\hat u_i^k$ satisfies $$\Delta\hat u_i^k(x)+\sum_{j=1}^{n_0+1}a_{i,j}\hat K_je^{2\hat u_{j}^k}=0,\quad i=1,\cdots,n_0,n_0+1,$$ where $$\hat K_i(x)=\prod_{\ell=1}^{3n_0+1}\frac{|q_{\ell}|^{2\beta_{i,\ell}}}{|x-q_{\ell}|^{2\beta_{i,\ell}}}, \quad i=1,\cdots,n_0,$$ and $$\hat K_{n_0+1}=|q_{3n_0+2}|^{2\beta_{n_0+1,3n_0+2}} \prod_{\ell=1}^3\frac{1}{|x-q_{3n_0+1+\ell}|^{2\beta_{n_0+1,3n_0+1+\ell}}}.$$ We set $$\bar\beta_i=\sum_{j=1}^{n_0+1}c^{i,j}\beta_j,\quad i=1,\cdots,n_0+1,$$ where $c^{i,j}$ is the inverse matrix of $a_{i,j}$ of rank $n_0+1$. Using Lemma \[le4.inverse\], we have $$\sum_{j=1}^{n_0+1}c^{i,j}<4(n_0+1)^2.$$ Combined with (d4), it is easy to check that $$\label{3.pr-6} \frac{1}{2\pi}\int_{{\mathbb{R}^2}}\hat K_ie^{2\hat u_i^k} =\bar\beta_i<\frac{1}{100},\quad i=1,\cdots,n_0+1,$$ and $$\hat K_i(x)\to 1\quad\mbox{as}\quad |x|\to0,\quad i=1,\dots,n_0.$$ Now we can apply Lemma \[le4.bm\] with $\alpha=0$, to get $$\hat u_i^k(x)\leq C~\mbox{in a neighborhood of origin},\quad i=1,\cdots,n_0.$$ Thus follows.\ Step 2. 
Set $$\mathfrak{S}_i=\left\{x\in{\mathbb{R}^2}:\mbox{there is a sequence}~x^k\to x~\mbox{such that}~u_i^k(x^k)\to\infty\right\},~i=1,\cdots,n_0+1.$$ We shall show that $$\mathfrak{S}=\bigcup_{i=1}^{n_0+1}\mathfrak{S}_i=\emptyset.$$ For any $p\in\mathfrak{S},$ we set $$\label{3-sigma-p} \sigma_i(p)=\frac{1}{2\pi}\lim_{r\to0}\lim_{k\to\infty}\int_{B_r(p)}{\widetilde}K_ie^{2u_i^k}dx,\quad i=1,\cdots,n_0+1.$$ It is well known that $u_i^k$ satisfies the integral equation $$u_i^k(x)=\frac{1}{2\pi}\int_{{\mathbb{R}^2}}\log\left(\frac{1+|y|}{|x-y|}\right) \sum_{j=1}^{n_0+1}a_{i,j}{\widetilde}K_je^{2u_j^k(y)}dy+C_{i}^k,~i=1,\cdots,n_0+1.$$ For any $p\in\mathfrak{S}$, let $r_0>0$ be such that $B_{r_0}(p)\cap\mathfrak{S}=\{p\}$. Then, from the above integral representation, one can show that $$|u_i^k(x)-u_i^k(y)|\leq C\quad\mbox{for every}\quad x,y\in\partial B_{r_0}(p),~\quad i=1,\cdots,n_0+1.$$ Therefore, in a neighborhood of each blow-up point, $u_i^k$ satisfies the bounded oscillation property, which implies that $\sigma_1(p),\cdots,\sigma_{n_0+1}(p)$ satisfy the Pohozaev identity (see Lemma \[le4.1\]) $$\label{3.poho} \sum_{i=1}^{n_0+1}\sigma_i^2(p)-\sum_{i=1}^{n_0}\sigma_i(p)\sigma_{i+1}(p)=\sum_{i=1}^{n_0+1}\mu_i(p)\sigma_i(p),$$ where $$\mu_i(p)=\begin{cases} 1,\quad &\mbox{if}~p\notin\{p_1,\cdots,p_{3n_0+1}\},\\ 1-\beta_{i,\ell},\quad &\mbox{if}~p=p_{\ell}. \end{cases}$$ Using this identity, we conclude that for at least one index $i\in\{1,\cdots,n_0+1\}$, $\sigma_i(p)\geq\mu_i(p)$. As a consequence, $$\label{3.2-1} \bar\beta_i\geq \sigma_i(p)\geq \mu_i(p).$$ For $p\in{\mathbb{R}^2}\setminus\{p_1\}$, we shall show that this is impossible. Using the conditions established above, we have $$\begin{cases} \beta_{1,\ell}+\bar\beta_1<1~\mbox{for}~\ell=2,3,4,\\ \beta_{i,3i-2+\ell}+\bar\beta_i<1~\mbox{for}~i=2,\cdots,n_0,~\ell=1,2,3,\\ \beta_{n_0+1,3n_0+2}+\bar\beta_{n_0+1}<1. \end{cases}$$ This implies that the inequality can never hold if $p\in{\mathbb{R}^2}\setminus\{p_1\}$. If $p=p_1$, then we can apply Lemma \[le4.bm\] to conclude that $u_i^k,~i=2,\cdots,n_0+1$ are uniformly bounded above in a neighborhood of $p_1$; otherwise, we would get $$\bar\beta_i\geq\sigma_i(p)\geq 1,$$ which contradicts (d4). Then we get $$\label{3.2-3} \bar\beta_1\geq\sigma_1(p)=1-\beta_{1,1}.$$ In fact, [as $\beta_{1,4}=b_4=b_5=b_9=\beta_2$, $\beta_1=b_5$,]{} we have $$\label{3.2-3-0} \bar\beta_1=\sum_{j=1}^{n_0+1}c^{1,j}\beta_j\geq\frac{n_0+1}{n_0+2}\beta_{1,4}+\frac{n_0}{n_0+2}\beta_{1,4} \geq \beta_{1,4},$$ where we used $c^{i,j}=\frac{\min\{i,j\}(n_0+2-\max\{i,j\})}{n_0+2}$, see Lemma \[le4.inverse\]. Using (d1) and the above estimate, we have $$\bar\beta_1+\beta_{1,1}\geq b_1+b_4>1.$$ Therefore, the inequality must in fact be strict, i.e., $$\bar\beta_1>\sigma_1(p)=1-\beta_{1,1}.$$ Then we can apply the arguments of [@Brezis-Merle Theorem 3] to get that the concentration property holds for $u_1^k$, that is, $$u_1^k(x)\to-\infty\quad\mbox{locally uniformly in}~{\mathbb{R}^2}\setminus\mathfrak{S}_1.$$ Since no mass can escape to infinity by Step 1, we must then have that the cardinality of $\mathfrak{S}_1$ is at least 2. However, we have already shown that $$\mathfrak{S}\setminus\{p_1\}=\emptyset.$$ Thus a contradiction arises again, and $\mathfrak{S}=\emptyset.$\ Step 3. $u_i^k\to \bar u_i$ in $C_{\mathrm{loc}}^2({\mathbb{R}^2}),~i=1,\cdots,n_0$, where $\bar u_1,\cdots,\bar u_{n_0}$ solves the limiting system. Since $$\mathfrak{S}=\emptyset\quad\mbox{and}\quad \bar\beta_i>0,~i=1,\cdots,n_0,$$ one of the following holds: passing to a subsequence if necessary, 1. $u_i^k\to\bar u_i$ in $C_{\mathrm{loc}}^2({\mathbb{R}^2})$ for $i=1,\cdots,n_0+1,$ 2. 
$u_i^k\to\bar u_i$ in $C_{\mathrm{loc}}^2({\mathbb{R}^2})$ for $i=1,\cdots,n_0,$ and $u_{n_0+1}^k\to-\infty$ locally uniformly in ${\mathbb{R}^2}$. Now we assume by contradiction that (i) happens. Then we get that the limit functions $(\bar u_1,\cdots,\bar u_{n_0+1})$ satisfy the system $$\left\{\begin{array}{ll}\Delta\bar u_i+\sum_{j=1}^{n_0+1}a_{i,j}\bar K_je^{\bar u_j}=0~\mbox{in}~{\mathbb{R}^2},\quad i=1,\cdots,n_0+1,\\ \rule{.0cm}{.5cm} \int_{{\mathbb{R}^2}}\bar K_ie^{2\bar u_i}=2\pi\bar\beta_i,~i=1,\cdots,n_0,\quad \int_{{\mathbb{R}^2}}\bar K_{n_0+1}e^{2\bar u_{n_0+1}}=2\pi\gamma\leq 2\pi\bar\beta_{n_0+1},\end{array}\right. $$ where $$\bar K_i(x)={\widetilde}K_i(x),~i=1,\cdots,n_0,\quad\mbox{and}\quad \bar K_{n_0+1}=\frac{1}{|x-p_{3n_0+2}|^{2\beta_{n_0+1,3n_0+2}}}.$$ Then one has $$\lim_{|x|\to\infty}\frac{\bar u_{n_0+1}(x)}{\log|x|}=-(2\gamma-\bar\beta_{n_0}),$$ and together with $\bar K_{n_0+1}e^{\bar u_{n_0+1}}\in L^1({\mathbb{R}^2})$ we have $$\label{3.3-2} \beta_{n_0+1,3n_0+2}+2\gamma-\bar\beta_{n_0}>1,$$ and this is impossible due to (d2)-(d4) in assumption $\mathcal{D}$. Therefore, (ii) holds, and we get that it reduces to the equation . However, has no solution and contradiction arises. Thus, the conclusion holds also for $n_0+1$ and we finish the whole proof. This is a direct consequence of Proposition \[pr3.1\]. For $\mathbf{E}_6$, $\mathbf{E}_7$, $\mathbf{E}_8$, $\mathbf{B}_n,$ $\mathbf{C}_n,$ and $\mathbf{D}_n$ with $n\geq 3$, , we can derive the counterpart non-existence results from $\mathbf{A}_5$, $\mathbf{A}_6$, $\mathbf{A}_7$, $\mathbf{A}_{n-1}$ through almost the same argument of Proposition \[pr3.1\]. Indeed, we use $\mathbf{D}_n,~n\geq 4$ ($\mathbf{D}_n$ only make sense for $n\geq3$ and $\mathbf{D}_3=\mathbf{A}_3$) as an example to explain it. Let $\beta_{i,\ell}$ be as in with $\{b_1,\cdots,b_{4n+1}\}$ satisfying the assumption $\mathcal{D}$. Using Proposition \[pr3.1\], we can find points $\{p_{\ell}\}_{\ell=1}^{3n-2}$ such that equation has no solution satisfying the asymptotic behavior . Then we prove the non-existence result by contradiction. Following the Step 1 of the proof of Proposition \[pr3.1\], we define the same sequence of solutions of with $n_0+1$ replaced by $n$ and $\mathbf{A}_{n_0+1}$ by $\mathbf{D}_n$, and reach the same conclusion for first $n-1$ components. In Step 2, we prove that the blow-up phenomena can not happen. To show that the blow-up point $p\notin\mathbb{R}^2\setminus\{p_1\}$, the only thing we used is the Pohozaev identity and there exists at least one index $i\in\{1,\cdots,n\}$ such that $\sigma_i(p)\geq \mu_i(p)$. Using Lemma \[le4.1\], we see that it holds for general simple Lie algebra matrix. To show $p\neq p_1$, we only use and it is easy to check that it holds also for the other cases by and . In Step 3, we used when we exclude the case that the limit function $\bar u_n$ can not be bounded uniformly. While for $\mathbf{D}_n$ case, we get $$\lim_{|x|\to+\infty}\frac{\bar u_n(x)}{\log|x|}=-(2\tilde\gamma-\bar\beta_{n-2}),\quad \tilde\gamma=\frac{1}{2\pi}\int_{{\mathbb{R}^2}}\bar K_{n}e^{2\bar u_n},$$ and $\bar K_{n}e^{\bar u_n}\in L^1(\mathbb{R}^2)$ implies that $$\beta_{n,3n-1}+2\tilde\gamma-\bar\beta_{n-2}>1.$$ Using (d2)-(d4) in the assumption $\mathcal{D}$, we can show that the above inequality is still not true and the case that $\bar u_n$ is uniformly bounded can also be excluded. For $\mathbf{C}_2$ ($\mathbf{B}_2$ is equivalent to $\mathbf{C}_2$) and $\mathbf{G}_2$, we need to use a different tuple of numbers. 
Let us fix $b_1,\cdots,b_7\in(0,1)$ satisfies the following assumption: $$\mbox{Assumption}~\mathcal{D}_1: \begin{cases} \mbox{(d6)}~ &\sum_{\ell=1}^4b_{\ell}+b_4=2,~2b_1+b_4<2,\\ \mbox{(d7)}~ &\sum_{\ell=5}^7b_{\ell}+b_4=2,~b_5=b_6=b_7,\\ \mbox{(d8)}~ &b_2=b_3<\frac12b_1,~b_{4}<\frac{1}{1000}. \end{cases}$$ A typical example of $(b_1,\cdots,b_7)$ satisfying assumption $\mathcal{D}_1$ is $$\begin{aligned} &b_1=1-\frac23{\varepsilon},~b_2=\frac12-\frac23{\varepsilon},~b_3=\frac12-\frac23{\varepsilon},~b_4={\varepsilon},\\ &b_5=b_6=b_7=\frac{2}{3}-\frac13{\varepsilon},\quad {\varepsilon}<\frac{1}{1000}. \end{aligned}$$ We set $$\label{3.bcg-2} \beta_{1,\ell}=\begin{cases} b_{\ell},\quad&\mbox{if}~\ell=1,2,3,4\\ 0,&\mbox{if}~\ell=5,6,7 \end{cases},\qquad \beta_{2,\ell}=\begin{cases} 0,\quad&\mbox{if}~\ell=1,2,3,4\\ b_{\ell},&\mbox{if}~\ell=5,6,7 \end{cases}.$$ Then we can follow the arguments of [@hlw Lemma 3.1 and Lemma 3.2] to find points $\{p_{\ell}\}_{\ell=1}^7$ such that has no solution verifying the asymptotic behavior . For $\mathbf{F}_4$ we can use the non-existence result of $\mathbf{A}_2$ to find points $\{p_{\ell}\}_{\ell=1}^{10}$ such that $$\label{3.todaf4-3} \left\{ \begin{array}{ll}\Delta u_1+2K_1e^{2u_1}-K_2e^{2u_2}=0~&\mbox{in}~{\mathbb{R}^2},\\ \rule{0cm}{.5cm} \Delta u_2-K_1e^{2u_1}+2K_2e^{2u_2}-2K_3e^{2u_3}=0\quad &\mbox{in}~{\mathbb{R}^2},\\ \rule{0cm}{.5cm} \Delta u_3-K_2e^{2u_2}+2K_3e^{2u_3}=0~&\mbox{in}~{\mathbb{R}^2},\\ \rule{0cm}{.5cm} u_i(x)=-(2-\sum_{\ell=1}^m\beta_{i,\ell})\log|x|+O(1)~\mbox{as}~|x|\to\infty,\quad \quad & i=1,2,3, \end{array}\right. $$ has no solution. By letting $\hat u_3=u_3+\frac12\log2$, we can make to a new system with a symmetric coefficient matrix. In this case, we can derive the corresponding Pohozaev identity of from [@LWZ Proposition 3.1], $$\sum_{i=1}^2\sigma_i^2(p)+2\sigma_3^2(p)-\sigma_1(p)\sigma_2(p)-2\sigma_2(p)\sigma_3(p) =\sum_{i=1}^2\mu_i\sigma_i(p)+2\mu_3\sigma_3(p),$$ where $\sigma_i(p)$ is defined in the same spirit of . We can easily see that there exists at least one index $i\in\{1,2,3\}$ such that $\sigma_i(p)\geq\mu_i(p).$ Then we follow the proof of Proposition \[pr3.1\] to deduce the non-existence result of for suitable points $\{p_{\ell}\}_{\ell=1}^{10}$. [Based on the non-existence result of , we fix the points $\{p_{\ell}\}_{\ell=1}^{10}.$ Next, we repeat the argument of Proposition \[pr3.1\] to derive the non-existence result of $\mathbf{F}_4$ Toda system by choosing appropriate points $\{p_{\ell}\}_{\ell=11}^{13}.$]{} Some useful results =================== In this section, we shall present several useful facts which are used in previous section. 
The first one is on the matrices of the general simple Lie algebras: $$\begin{aligned} &\mathbf{A}_n=:\scriptsize{\left(\begin{matrix} 2&-1&0&\cdots&0&0&0\\ -1&2&-1&\cdots&0&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&0&\cdots&2&-1&0\\ 0&0&0&\cdots&-1&2&-1\\ 0&0&0&\cdots&0&-1&2 \end{matrix}\right)},\quad \mathbf{B}_n=:\scriptsize{\left(\begin{matrix} 2&-1&0&\cdots&0&0&0\\ -1&2&-1&\cdots&0&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&0&\cdots&2&-1&0\\ 0&0&0&\cdots&-1&2&-2\\ 0&0&0&\cdots&0&-1&2 \end{matrix}\right)},\\ &\mathbf{C}_n=:\scriptsize{\left(\begin{matrix} 2&-1&0&\cdots&0&0&0\\ -1&2&-1&\cdots&0&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&0&\cdots&2&-1&0\\ 0&0&0&\cdots&-1&2&-1\\ 0&0&0&\cdots&0&-2&2 \end{matrix}\right)},\quad \mathbf{D}_n=:\scriptsize{\left(\begin{matrix} 2&-1&0&\cdots&0&0&0\\ -1&2&-1&\cdots&0&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&0&\cdots&2&-1&-1\\ 0&0&0&\cdots&-1&2&0\\ 0&0&0&\cdots&-1&0&2 \end{matrix}\right)},\\ &\mathbf{E}_6=:\scriptsize{\left(\begin{matrix} 2&-1&0&0&0&0\\ -1&2&-1&0&0&0\\ 0&-1&2&-1&0&-1\\ 0&0&-1&2&-1&0\\ 0&0&0&-1&2&0\\ 0&0&-1&0&0&2 \end{matrix}\right)},\quad \mathbf{E}_7=:\scriptsize{\left(\begin{matrix} 2&-1&0&0&0&0&0\\ -1&2&-1&0&0&0&0\\ 0&-1&2&-1&0&0&0\\ 0&0&-1&2&-1&0&-1\\ 0&0&0&-1&2&-1&0\\ 0&0&0&0&-1&2&0\\ 0&0&0&-1&0&0&2 \end{matrix}\right)},\\ &\mathbf{E}_8=:\scriptsize{\left(\begin{matrix} 2&-1&0&0&0&0&0&0\\ -1&2&-1&0&0&0&0&0\\ 0&-1&2&-1&0&0&0&0\\ 0&0&-1&2&-1&0&0&0\\ 0&0&0&-1&2&-1&0&-1\\ 0&0&0&0&-1&2&-1&0\\ 0&0&0&0&0&-1&2&0\\ 0&0&0&0&-1&0&0&2 \end{matrix}\right)},~ \mathbf{F}_4=:\scriptsize{\left(\begin{matrix} 2&-1&0&0\\ -1&2&-2&0\\ 0&-1&2&-1\\ 0&0&-1&2 \end{matrix}\right)},~ \mathbf{G}_2=:\scriptsize{\left(\begin{matrix} 2&-1\\ -3&2 \end{matrix}\right)}.\end{aligned}$$ We shall derive an estimate on each entry of the inverse matrix of above matrices. For $\mathbf{A}_n,~\mathbf{B}_n,~\mathbf{C}_n$ and $\mathbf{D}_n$ type matrices, we get their inverse matrices as follows, see [@RS section 4] or [@wz]. $$\label{4.abcd} \begin{aligned} &\left(\mathbf{A}_{n}^{-1}\right)_{i,j}=\frac{\min\{i,j\}(n+1-\max\{i,j\})}{n+1},~1\leq i,j\leq n.\\ &\left(\mathbf{B}_n^{-1}\right)_{i,j}=\begin{cases} \min\{i,j\},~&1\leq i\leq n-1,~1\leq j\leq n,\\ \frac12j,~&i=n,~1\leq j\leq n. \end{cases}\\ &\left(\mathbf{C}_n^{-1}\right)_{i,j}=\begin{cases} \min\{i,j\},~&1\leq i\leq n,~1\leq j\leq n-1,\\ \frac12i,~&1\leq i\leq n,~j=n. \end{cases}\\ &\left(\mathbf{D}_n^{-1}\right)_{i,j}=\begin{cases} \min\{i,j\},~&1\leq i,j\leq n-2,\\ \frac12\min\{i,j\},~&1\leq\min\{i,j\}\leq n-2<\max\{i,j\}\leq n,\\ \frac14(n-2),~&i=n,j=n-1,~\mathrm{or}~i=n-1,j=n,\\ \frac{1}{4}n,~&i=n-1,j=n-1,~\mathrm{or}~i=n,j=n. 
\end{cases} \end{aligned}$$ By straightforward computation, we have $$\label{4.efg} \begin{aligned} &\mathbf{E}_6^{-1}=\scriptsize{\left(\begin{matrix} 4/3&5/3&2&4/3&2/3&1\\ 5/3&10/3&4&8/3&4/3&2\\ 2&4&6&4&2&3\\ 4/3&8/3&4&10/3&5/3&2\\ 2/3&4/3&2&5/3&4/3&1\\ 1&2&3&2&1&2 \end{matrix}\right)}, \quad \mathbf{E}_7^{-1}=\scriptsize{\left(\begin{matrix} 3/2&2&5/2&3&2&1&3/2\\ 2&4&5&6&4&2&3\\ 5/2&5&15/2&9&6&3&9/2\\ 3&6&9&12&8&4&6\\ 2&4&6&8&6&3&4\\ 1&2&3&4&3&2&2\\ 3/2&3&9/2&6&4&2&7/2 \end{matrix}\right)},\\ &\mathbf{E}_8^{-1}=\scriptsize{\left(\begin{matrix} 2&3&4&5&6&4&2&3\\ 3&6&8&10&12&8&4&6\\ 4&8&12&15&18&12&6&9\\ 5&10&15&20&24&16&8&12\\ 6&12&18&24&30&20&10&15\\ 4&8&12&16&20&14&7&10\\ 2&4&6&8&10&7&4&5\\ 3&6&9&12&15&10&5&8 \end{matrix}\right)},~ \mathbf{F}_4^{-1}=\scriptsize{\left(\begin{matrix} 2&3&4&2\\ 3&6&8&4\\ 2&4&6&3\\ 1&2&3&2 \end{matrix}\right)},~ \mathbf{G}_2^{-1}=\scriptsize{\left(\begin{matrix} 2&1\\ 3&2 \end{matrix}\right)}. \end{aligned}$$ From these formulas, we get the following conclusion \[le4.inverse\] For the Cartan matrix $(a_{i,j})_{n\times n}$ of each type, let $(c_{i,j})_{n\times n}$ be its inverse matrix. Then we have the following estimate on $c_{i,j},~1\leq i,j\leq n,$ $$0<c_{i,j}<4n.$$ The following lemma is a generalization of a Brezis-Merle [@Brezis-Merle] type result; we refer the readers to [@hlw Lemma 5.1] for a proof. \[le4.bm\] Let $u^k$ be a sequence of solutions to $$\left\{ \begin{array}{ll} \Delta u^k+\frac{f^k(x)}{|x|^{2\alpha}}e^{2u^k}=g^k~\mbox{in}~B_1,\\ \rule{0cm}{.5cm} \int_{B_1}\frac{f^k(x)}{|x|^{2\alpha}}{e^{2u^k}}dx\leq 2\pi(1-\alpha-\delta), \end{array}\right. $$ where $\delta>0$, $\alpha\in[0,1)$, and $g^k$ is a family of non-negative functions such that $\|g^k\|_{L^1(B_1)}\leq C$. Suppose that $0\leq f^k\leq C$ and $\inf_{B_1\setminus B_{\tau}}f^k\geq C_{\tau}$ for some $\tau\in(0,\frac13)$. Then $\{u^k\}$ is locally uniformly bounded from above in $B_1.$ The last result of this section is about the Pohozaev type identity for the singular Toda system. See [@LWYZ; @LWZ; @lyz] for related results. \[le4.1\] Let $(u_1^k,\cdots,u_n^k)$ be a sequence of solutions to $$\begin{cases} \Delta u_i^k+\sum_{j=1}^na_{i,j}\frac{h_j^k}{|x|^{2\alpha_j}}e^{2u_j^k}=0~\mbox{in}~B_1,&i=1,\cdots,n,\\ \int_{B_1}\frac{h_i^k}{|x|^{2\alpha_i}}e^{2u_i^k}dx\leq C,\quad &i=1,\cdots,n,\\ |u_i^k(x)-u_i^k(y)|\leq C~\mbox{for every}~x,y\in\partial B_1,~&i=1,\cdots,n,\\ \|h_i^k(x)\|_{C^3(B_1)}\leq C,\quad 0<\frac1C\leq h_i^k(x)~\mathrm{in}~B_1,~&i=1,\cdots,n, \end{cases}$$ for some $\alpha_i<1,~i=1,\cdots,n$, where $B_1$ is the unit ball in ${\mathbb{R}^2}$. 
Assume that $0$ is the only blow up point, that is, $$\sup_{B_1\setminus B_{{\varepsilon}}} u_i^k(x)\leq C({\varepsilon})~\ \mbox{for every}~\ 0<{\varepsilon}<1,\quad i=1,\cdots,n.$$ Then setting $\mu_i=1-\alpha_i$ and $$\sigma_i:=\frac{1}{2\pi}\lim_{r\to0}\lim_{k\to\infty}\int_{B_r}\frac{h_i^k(x)}{|x|^{2\alpha_i}}e^{2u_i^k(x)}dx, \quad i=1,\cdots,n,$$ we have $$\begin{aligned} &\mathbf{A}_n:~\sum_{i=1}^n\sigma_i^2-\sum_{i=1}^{n-1}\sigma_i\sigma_{i+1}=\sum_{i=1}^n\mu_i\sigma_i,\\ &\mathbf{B}_n:~\sum_{i=1}^{n-1}\sigma_i^2+2\sigma_n^2-\sum_{i=1}^{n-2}\sigma_{i}\sigma_{i+1}- 2\sigma_{n-1}\sigma_n=\sum_{i=1}^{n-1}\mu_i\sigma_i+2\mu_n\sigma_n,\\ &\mathbf{C}_n:~2\sum_{i=1}^{n-1}\sigma_i^2+\sigma_n^2-2\sum_{i=1}^{n-1}\sigma_i\sigma_{i+1}= 2\sum_{i=1}^{n-1}\mu_i\sigma_i+\mu_n\sigma_n,\\ &\mathbf{D}_n:~\sum_{i=1}^n\sigma_i^2-\sum_{i=1}^{n-2}\sigma_{i}\sigma_{i+1}-\sigma_{n-2}\sigma_n= \sum_{i=1}^n\mu_i\sigma_i,\\ &\mathbf{E}_n:~\sum_{i=1}^n\sigma_i^2-\sum_{i=1}^{n-2}\sigma_{i}\sigma_{i+1}-\sigma_{n-3}\sigma_n= \sum_{i=1}^n\mu_i\sigma_i,~n=6,7,8,\\ &\mathbf{F}_4:~\sum_{i=1}^2\sigma_i^2+2\sum_{i=3}^4\sigma_i^2-\sigma_1\sigma_2 -2\sum_{i=2}^3\sigma_i\sigma_{i+1}=\sum_{i=1}^2\mu_i\sigma_i+2\sum_{i=3}^4\mu_i\sigma_i,\\ &\mathbf{G}_2:~3\sigma_1^2-3\sigma_1\sigma_2+\sigma_2^2=3\mu_1\sigma_1+\mu_2\sigma_2.\end{aligned}$$ In particular, for each type Toda system, if $(\sigma_1,\cdots,\sigma_n)\neq(0,\cdots,0)$, then there exists at least one index $i\in\{1,\cdots,n\}$ such that $$\label{4.comparison} \sigma_i\geq \mu_i=1-\alpha_i.$$ Since $\mathbf{A}_n$, $\mathbf{D}_n$, $\mathbf{E}_6,~\mathbf{E}_7,$ and $\mathbf{E}_8$ are symmetric matrix, we get the corresponding Pohozaev identity from [@LWZ Proposition 3.1] directly. For $\mathbf{B}_{n}$, $\mathbf{C}_{n}$ and $\mathbf{G}_2$, we derive their Pohozaev identities from $\mathbf{A}_{2n}$, $\mathbf{A}_{2n-1}$, $\mathbf{A}_6$ type Toda system respectively, see [@nie1 Lemma 4.1 and Lemma 4.2] and [@nie2 Example 3.4]. While for $\mathbf{F}_4$, let $\hat u_i=u_i,~i=1,2$ and $\hat u_i=u_i+\frac12\log2,~i=3,4$, we get $(\hat u_1,\hat u_2,\hat u_3,\hat u_4)$ satisfies $$\label{4.modifiedf4} \begin{cases} \Delta \hat u_1^k+2\frac{h_1^k}{|x|^{2\alpha_1}}e^{2\hat u_1^k}-\frac{h_2^k}{|x|^{2\alpha_2}}e^{2\hat u_2^k}=0,\\ \Delta \hat u_2^k-\frac{h_1^k}{|x|^{2\alpha_1}}e^{2\hat u_1^k}+2\frac{h_2^k}{|x|^{2\alpha_2}}e^{2\hat u_2^k} -\frac{h_3^k}{|x|^{2\alpha_3}}e^{2\hat u_3^k} =0,\\ \Delta \hat u_3^k-\frac{h_2^k}{|x|^{2\alpha_2}}e^{2\hat u_2^k}+\frac{h_3^k}{|x|^{2\alpha_3}}e^{2\hat u_3^k} -\frac12\frac{h_4^k}{|x|^{2\alpha_4}}e^{2\hat u_4^k}=0,\\ \Delta \hat u_4^k-\frac12\frac{h_3^k}{|x|^{2\alpha_3}}e^{2\hat u_3^k} +\frac{h_4^k}{|x|^{2\alpha_4}}e^{2\hat u_4^k}=0. \end{cases}$$ We see that the coefficient matrix of is symmetric. By applying [@LWZ Proposition 3.1], we get the related Pohozaev identity. We shall prove for $\mathbf{A}_n$ only, the other cases can be proved similarly. The Pohozaev identity for $A_n$ can be written as $$\sum_{i=1}^n\sigma_i(\sigma_i-\mu_i)=\sum_{i=1}^{n-1}\sigma_i\sigma_{i+1}\geq 0.$$ This shows that holds for at least one index $i\in \{1,\dots, n\}$. [10]{} <span style="font-variant:small-caps;">J. Balog, L. Féher, L. O’Raifeartaigh:</span> *Toda theory and $\mathcal{W}$-algebra from a gauged WZNW point of view,* Ann. Physics, [**203**]{} (1990) 76-136. <span style="font-variant:small-caps;">H. Brezis, F. Merle:</span> *Uniform estimates and blow-up behaviour for solutions of $-\Delta u=V(x)e^u$ in two dimensions*, Comm. 
Partial Differential Equations **16** (1991), 1223-1253. <span style="font-variant:small-caps;">W. Chen, C. Li:</span> *Classification of solutions of some nonlinear elliptic equations*, Duke Math. J. **63** (3) (1991), 615-622. <span style="font-variant:small-caps;">A. Doliwa:</span> *Holomorphic curves and Toda systems* Lett. Math. Phys. 39(1), 21-32 (1997). <span style="font-variant:small-caps;">G. Dunne:</span> *Self-dual Chern-Simons Theories*, Lecture Notes in Physics. Springer, Berlin (1995). <span style="font-variant:small-caps;">A. Eremenko:</span> *Metrics of positive curvature with conic singularities on the sphere*, Proc. Amer. Math. Soc. **132** (2004), no. 11, 3349-3355. <span style="font-variant:small-caps;">L. Féher, L. OˇRaifeartaigh, P. Ruelle, I. Tsutsui and A. Wipf:</span> *Generalized Toda theories and $\mathcal{W}$-algebras associated with integral gradings,* Ann. Physics, [**213**]{} (1992) 1-20. <span style="font-variant:small-caps;">P. Griffiths, J. Harris:</span> *Principles of algebraic geometry,* John Wiley, (2014). <span style="font-variant:small-caps;">M. Guest:</span> *Harmonic maps, loop groups, and integrable systems*, London Mathematical Society Student Texts, vol.38, Cambridge University Press, Cambridge, 1997. <span style="font-variant:small-caps;">A. Hyder, C.S. Lin, J.C. Wei:</span> *On $SU(3)$ Toda system with multiple singular sources*, preprint. <span style="font-variant:small-caps;">A. Hyder, G. Mancini, L. Martinazzi:</span> *Local and nonlocal singular Liouville equations in Euclidean spaces*, arXiv: 1808.03624 (2018). <span style="font-variant:small-caps;">C. S. Lin, Z. Nie, J. Wei:</span> *Toda system and hypergeometric equations*, Transcations of American Math Society **370** (2018), no. 11, 7605-7626. <span style="font-variant:small-caps;">C. S. Lin, Z. Nie, J. Wei:</span> *Classification of solutions to general Toda systems with singular sources*, preprint. <span style="font-variant:small-caps;">C. S. Lin, J.C. Wei, W. Yang, L. Zhang:</span> *On rank-$2$ Toda systems with arbitrary singularities: local mass and new estimates* Anal. PDE **11** (2018), no. 4, 873-898. <span style="font-variant:small-caps;">C. S. Lin, J.C. Wei, D. Ye:</span> *Classification and nondegeneracy of $SU(n+1)$ Toda system with singular sources*, Invent. Math. **190** (2012), no. 1, 169-207. <span style="font-variant:small-caps;">C. S. Lin, J.C. Wei, L. Zhang:</span> *Classification of blowup limits for $SU(3)$ singular Toda systems*, Anal. PDE **8** (2015), no. 4, 807-837. <span style="font-variant:small-caps;">C. S. Lin, W. Yang, X. Zhong:</span> *Apriori Estimates of Toda systems, I: the types of ${A}_n,{B}_n,{C}_n$ and ${G}_2$*, to appear in J. Differential Geometry. <span style="font-variant:small-caps;">M. Lucia, M. Nolasco:</span> *$SU(3)$ Chern-Simons vortex theory and Toda systems*, J. Diff. Equations **184** (2002), 443-474. <span style="font-variant:small-caps;">F. Luo, G. Tian:</span> *Liouville equation and spherical convex polytopes*, Proc. Amer. Math. Soc. **116** (1992) no. 4, 1119-1129. <span style="font-variant:small-caps;">M. Musso, A. Pistoia, J.C. Wei:</span> *New blow-up phenomena for $SU(n+1)$ Toda system*, J. Differential Equations **260** (2016), no. 7, 6232-6266. <span style="font-variant:small-caps;">Z. H. Nie:</span>*Classification of solutions to Toda system of types $C$ and $B$ with singular sources*, Calc. Var. Partial Differential Equations, 55(2016) no.3 23pp. <span style="font-variant:small-caps;">Z. H. 
Nie:</span>*On characteristic integrals of Toda field theories*, J. Nonlinear Math. Phys. 21 (2014) no.1 120-131. <span style="font-variant:small-caps;">M. Nolasco, G. Tarantello:</span>*Vortex condensates for the $SU(3)$ Chern-Simons theory*, Commun. Math. Phys. 213(3), 599-639 (2000). <span style="font-variant:small-caps;">A. V. Razumov, M.V. Saveliev:</span>*Lie algebras, geometry, and Toda-type systems*, Cambridge University Press, (1997). <span style="font-variant:small-caps;">J. Prajapat, G. Tarantello:</span> *On a class of elliptic problems in ${\mathbb{R}}^2$: symmetry and uniqueness results*, Proc. Royal Soc. Edinburgh **131 A** (2001), 967-985. <span style="font-variant:small-caps;">G. Tarantello:</span> *Multiple condensate solutions for the Chern-Simons-Higgs theory*, J. Math. Phys. **37** (1996) 3769-3796. <span style="font-variant:small-caps;">M. Troyanov</span>, *Prescribing curvature on compact surfaces with conical singularities*, Trans. Am. Math. Soc. **324** (1991) 793-821. <span style="font-variant:small-caps;">M. Troyanov:</span> *Metric of constant curvature on a sphere with two conical singularities*, in Differential geometry, Lect. Notes in Math., vol. 1410, Springer-Verlag, 1989, pp. 296-306. <span style="font-variant:small-caps;">M. Umehara, K. Yamada:</span> *Metrics of constant curvature $1$ with three conical singularities on the $2$-sphere*, Illinois J. Math. **44** (2000), no. 1, 72-94. <span style="font-variant:small-caps;">Y. J. Wei, Y.M. Zou:</span> *Inverses of Cartan matrices of Lie algebras and Lie superalgebras,* Linear Algebra and its Applications, 521 (2017), 283-298. <span style="font-variant:small-caps;">Y. Yang:</span> *The relativistic non-abelian Chern-Simons equation*, Commun. Phys. 186(1), 199-218 (1999). <span style="font-variant:small-caps;">Y. Yang:</span> *Solitons in Field Theory and Nonlinear Analysis,* Springer Monographs in Mathematics, Springer, New York (2001) [^1]: Even though the equation satisfied by $\psi^k_i$ for $2\leq i\leq n-1$ looks slightly different form the one $\psi_1^k$, the proof is same.
--- abstract: | We consider wave propagation in a complex structure coupled to a finite number $N$ of scattering channels, such as chaotic cavities or quantum dots with external leads. Temporal aspects of the scattering process are analysed through the concept of time delays, related to the energy (or frequency) derivative of the scattering matrix $\mathcal{S}$. We develop a random matrix approach to study the statistical properties of the symmetrised Wigner-Smith time-delay matrix $\mathcal{Q}_s = -\mathrm{i}\hbar\,\mathcal{S}^{-1/2}\big(\partial_\varepsilon\mathcal{S}\big)\,\mathcal{S}^{-1/2}$, and obtain the joint distribution of $\mathcal{S}$ and $\mathcal{Q}_s$ for the system with non-ideal contacts, characterised by a finite transmission probability (per channel) $0<T{\leqslant}1$. We derive two representations of the distribution of $\mathcal{Q}_s$ in terms of matrix integrals specified by the Dyson symmetry index $\beta=1,\,2,\,4$ (the general case of unequally coupled channels is also discussed). We apply this to the Wigner time delay $\tau_\mathrm{W}=(1/N)\,\mathrm{tr}\big\{\mathcal{Q}_s\big\}$, which is an important quantity providing the density of states of the open system. Using the obtained results, we determine the distribution $\mathscr{P}_{N,\beta}(\tau)$ of the Wigner time delay in the weak coupling limit $NT\ll1$ and identify the following three regimes. (i) The large deviations at small times (measured in units of the Heisenberg time) are characterised by the limiting behaviour $\mathscr{P}_{N,\beta}(\tau)\sim\tau^{-\beta N^2/2-3/2}\,\exp\big\{-\beta N T/(8\tau)\big\}$ for $\tau\lesssim T$. (ii) The distribution shows the universal $\tau^{-3/2}$ behaviour in some intermediate range $T\lesssim\tau\lesssim1/(TN^2)$. (iii) It has a power law decay $\mathscr{P}_{N,\beta}(\tau)\sim T^2N^3(TN^2\tau)^{-2-\beta N/2}$ for large $\tau\gtrsim1/(TN^2)$.\ Keywords: random matrix theory, scattering theory, quantum chaos, delay times\ Published 5 September 2018 in [*J. Phys. A: Math. Theor. **51** (2018) 404001*]() address: - 'LPTMS, CNRS, Univ. Paris-Sud, Université Paris-Saclay, 91405 Orsay cedex, France' - 'Department of Mathematics, Brunel University London, Uxbridge, UB8 3PH, United Kingdom' - 'LPTMS, CNRS, Univ. Paris-Sud, Université Paris-Saclay, 91405 Orsay cedex, France' author: - Aurélien Grabsch - Dmitry V Savin - Christophe Texier title: 'Wigner-Smith time-delay matrix in chaotic cavities with non-ideal contacts' --- =1 Introduction ============ Scattering of waves in complex systems has been a subject of intensive studies, with motivations ranging from compound-nucleus reactions [@VerWeiZir85; @MitRicWei10], coherent electronic transport [@Bee97] to propagation of electromagnetic waves in random media [@SebGen98] and chaotic billiards [@GuhMulWei98]. In a scattering setting, the central object is the on-shell scattering matrix $\Sm(\varepsilon)$ whose matrix elements provide the probability amplitudes of transitions (reflection and transmission) between scattering channels open at the given energy $\varepsilon$ [@MelKum04]. The total number $N$ of open channels is typically finite (for example, $N$ is fixed by transverse quantisation for the modes propagating in the electronic wave guides attached to a quantum dot). For an energy and flux conserving system (i.e. 
without losses or gain), the $\Sm$-matrix is unitary and can therefore be diagonalised as follows $$\label{eq:S} \Sm(\varepsilon) =\mathcal{U}(\varepsilon)\,{\mathrm{e}^{\I\Theta(\varepsilon)}}\,\mathcal{U}^\dagger(\varepsilon) \:, \hspace{1cm} \Theta=\mathrm{diag}(\theta_1,\cdots,\theta_\Nc)\,.$$ The diagonal matrix $\Theta$ gathers the scattering phase shifts (eigenphases) and $\mathcal{U}$ is a $\Nc\times\Nc$ unitary matrix of the corresponding eigenvectors (associated with a specific basis of solutions of the wave equation known as the *partial* scattering waves). For systems invariant under time reversal, the reciprocity principle dictates $\Sm$ to be also symmetric, implying that $\mathcal{U}$ becomes an orthogonal matrix in this case. Complementary to such a stationary description, the temporal aspects of the scattering process may also be characterised in terms of the $\Sm$-matrix by several means. The most well-known concept is probably that of resonance widths, which are related to finite lifetimes of resonance states formed at the intermediate stage of the scattering event [@Kot05; @FyoSav11]. Such resonances are formally defined through the analytical structure of $\Sm(\varepsilon)$ in the complex $\varepsilon$ plane, corresponding to the poles $\mathcal{E}_n=E_n-\frac{\I}{2}\Gamma_n$, where $E_n$ and $\Gamma_n>0$ are the energy and width of the $n$th resonance, respectively. Practically, they are accessible by performing the spectroscopy analysis of relevant decay spectra [@GluKolKor02; @KuhHohMaiSto08; @DiFKraFra12]. The *time delay* is another important notion used to quantify the duration of the scattering event. Following Wigner [@Wig55] and Smith [@Smi60], the time spent by an incident wave in the scattering region can be characterized in terms of the following matrix: $$\label{eq:Q_WS} \WSm(\varepsilon) = -\I\hbar \Sm^\dagger(\varepsilon) {\frac{\partial \Sm(\varepsilon)}{\partial \varepsilon}}.$$ Below we set $\hbar=1$. The Wigner-Smith matrix (\[eq:Q\_WS\]) is Hermitian by construction (for unitary $\Sm$) and thus has all real eigenvalues $\{\tau_1,\cdots,\tau_\Nc\}$ that are commonly referred to as *proper time delays*. They provide the lifetimes of metastable states. On the other hand, the diagonal elements $\{\WSm_{11},\cdots,\WSm_{\Nc\Nc}\}$ of (\[eq:Q\_WS\]) are also real and serve to characterise the time delay in a given entrance channel [@Smi60; @Lyu77]. Taking the trace, one arrives at the simple averaged characteristic, the so-called Wigner time delay [@Lyu77; @LewWei91; @LehSavSokSom95; @FyoSom97] $$\label{eq:Wig_def} \Wt(\varepsilon) \equiv\frac{1}{\Nc}{\mathop{\mathrm{tr}}\nolimits\left\{ \WSm \right\}} = -\frac{\I}{\Nc} {\frac{\partial \ln\det\Sm(\varepsilon)}{\partial \varepsilon}} \,.$$ This quantity plays an important role in practical applications [@CarNus02; @KolVos13; @Tex16]. In particular, it provides a measure of the density of states of the open system, thus being essential for the description of electronic and transport properties of coherent conductors [@Tex16]. In view of representations (\[eq:S\]) and (\[eq:Wig\_def\]), it is also instructive to consider the so-called *partial time delays* $\{\tilde{\tau}_a = \partial\theta_a/\partial\varepsilon$, $a=1,\cdots,\Nc\}$ that are defined by the energy derivative of the scattering eigenphases [@FyoSom97]. They may be treated as the time delay of a “narrow” wave packet (with a weak energy dispersion around $\varepsilon$) prepared with respect to a given scattering eigenchannel. 
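To make these definitions concrete, the following small numerical sketch (ours, not taken from the paper; the three-channel toy parametrisation below is an arbitrary Breit-Wigner-like choice) builds a unitary $\Sm(\varepsilon)$ of the form (\[eq:S\]) with a fixed eigenbasis, evaluates (\[eq:Q\_WS\]) by finite differences, and checks that the Wigner time delay (\[eq:Wig\_def\]) coincides with the mean of the partial time delays $\partial\theta_a/\partial\varepsilon$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-channel S-matrix: S(E) = U exp(i Theta(E)) U^dagger with a fixed,
# energy-independent eigenbasis U and Breit-Wigner-like eigenphases.
N = 3
U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
E_res = np.array([-0.5, 0.1, 0.7])      # positions of the phase jumps (arbitrary)
Gamma = np.array([0.3, 0.2, 0.5])       # their widths (arbitrary)

def theta(E):                            # eigenphases theta_a(E), increasing with E
    return 2.0 * np.arctan2(Gamma / 2.0, E_res - E)

def S(E):
    return U @ np.diag(np.exp(1j * theta(E))) @ U.conj().T

def wigner_smith(E, dE=1e-6):            # Q = -i S^dagger dS/dE   (hbar = 1)
    dS = (S(E + dE) - S(E - dE)) / (2.0 * dE)
    return -1j * S(E).conj().T @ dS

E0 = 0.2
Q = wigner_smith(E0)
print("Hermitian:", np.allclose(Q, Q.conj().T, atol=1e-6))

tau_W = np.trace(Q).real / N                                     # (1/N) tr Q
tau_bar = np.mean(Gamma / ((E0 - E_res)**2 + Gamma**2 / 4.0))    # mean of d(theta_a)/dE
print(tau_W, tau_bar)                    # agree up to finite-difference accuracy
```

Since the eigenbasis is energy independent in this toy model, $\WSm$ reduces to $\mathcal{U}\,\partial_\varepsilon\Theta\,\mathcal{U}^\dagger$; more generally, $\mathrm{tr}\,\WSm=\sum_a\partial_\varepsilon\theta_a$ still holds because $\det\Sm={\mathrm{e}^{\I\sum_a\theta_a}}$.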
Although there is a connection between those three time delay sets [@SavFyoSom01], they generally characterise different aspects of the problem [@Tex16]. Other characteristic times can also be introduced using certain derivatives of the $\Sm$-matrix elements, see reviews [@CarNus02; @KolVos13; @Tex16; @HauSto89; @But90; @LanMar94; @But02a] for relevant studies. Generally, time delays are known to satisfy certain inequalities (essentially imposed by causality); in particular, they cannot take arbitrary large negative values [@CarNus02; @KolVos13]. In the resonance approximation, however, one can neglect a smooth energy dependence associated with potential scattering and direct reactions. The whole dependence of $\Sm(\varepsilon)$ on the energy is then due to its complex poles (resonances). Under such assumptions, the Wigner-Smith matrix becomes strictly positive [@SokZel97]. The Wigner time delay is then determined entirely by the resonance spectrum of the open system [@Lyu77; @LehSavSokSom95] $$\label{eq:Wig_res} \Wt(\varepsilon) = \frac{1}{\Nc}\sum_n\frac{\Gamma_n}{(\varepsilon-E_n)^2+\Gamma_n^2/4}\,.$$ This important expression is valid at arbitrary degree of the resonance overlap, leading to an interpretation of the Wigner time delay (\[eq:Wig\_res\]) as the density of states in open systems, see [@FyoSom97; @CarNus02; @Tex16] for further discussion. The spectral average of $\Wt$ over a narrow energy window is given by $\overline{\tau}_\mathrm{W} = 2\pi/(N\Delta)$, where $\Delta$ is the mean level spacing (which carries a smooth $\varepsilon$-dependence in general). This relates the mean time delay $\overline{\tau}_\mathrm{W}=\Ht/N$ to the fundamental timescale of quantum systems, the Heisenberg time $\Ht=2\pi/\Delta$. When these concepts are applied to complex quantum or wave systems, such as quantum dots or microwave billiards with classically chaotic dynamics, a statistical analysis is required in order to characterise strong fluctuations that arise in scattering. There are two main approaches to describe such fluctuations: the semiclassical method (see [@KuiSavSie14] for recent advances and further references) and random matrix theory (RMT). The latter proved to be extremely successful in describing universal patterns of chaotic wave phenomena [@GuhMulWei98], being also the most suitable in order to provide the full statistical information in terms of both correlations and distributions. There are two possible variants of the RMT formulation of chaotic scattering. The *stochastic approach* (see [@Bee97; @MelKum04] for reviews) treats the $\Sm$-matrix as the prime statistical object without any reference to the system Hamiltonian. The probability distribution of $\Sm=\Sm(\varepsilon)$ (at the fixed energy $\varepsilon$) is deduced from a maximum entropy principle subject to the global constraints imposed on $\Sm$ by its symmetry and analyticity. It is given by the so-called Poisson kernel [@MelPerSel85; @Bro95; @MelKum04] $$\label{eq:PoissonKernel} P_{\Sm}(\Sm)\propto|\det({\mathbf{1}}_\Nc-\Sbar^*\Sm)|^{-2-\beta(\Nc-1)}\,,$$ which is parameterised by the mean (“optical”) scattering matrix $\Sbar$. In the absence of direct reactions, $\Sbar$ can always be chosen as a constant diagonal matrix [@EngWei73]. The symmetry index $\beta=1$ ($\beta=2$) corresponds to the systems with preserved (broken) time-reversal symmetry ($\beta=4$ is to be taken when spin rotational symmetry is broken). 
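The structure of the Poisson kernel is particularly transparent in the single-channel case: for $\Nc=1$ and $\beta=2$ it reduces to the classical Poisson kernel of the unit disc, whose mean-value property gives $\langle\Sm^n\rangle=\Sbar^n$. A short numerical sketch of this property is given below (Python/NumPy; the chosen value of $\Sbar$ and the quadrature grid are arbitrary).

```python
import numpy as np

# One-channel (N=1, beta=2) illustration of the Poisson kernel: with
# P(theta) proportional to |1 - Sbar^* e^{i theta}|^{-2}, the moments of
# S = e^{i theta} reproduce powers of the optical matrix, <S^n> = Sbar^n.
# The value of Sbar and the quadrature grid are arbitrary choices.

Sbar = 0.4 + 0.3j
theta = np.linspace(0, 2 * np.pi, 200_000, endpoint=False)
weight = 1.0 / np.abs(1 - np.conj(Sbar) * np.exp(1j * theta)) ** 2
weight /= weight.sum()                       # normalised Poisson kernel

for n in (1, 2, 3):
    moment = np.sum(weight * np.exp(1j * n * theta))
    print(f"<S^{n}> =", np.round(moment, 6), "  Sbar^n =", np.round(Sbar ** n, 6))
```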
The approach proved to be very useful, in particular, for studying coherent electronic transport in mesoscopic systems [@Bee97]. However, correlations at different energies as well as other spectral properties of open systems related to the resonances turn out to be inaccessible in such an approach because of its fixed-energy nature (in this respect, see [@BroBut97] for an extension to address the energy dependence). The *Hamiltonian approach* [@MahWei69; @VerWeiZir85] is the other and more general formulation that is well adapted to treat both scattering and spectral characteristics on equal footing [@SokZel89; @FyoSav11]. Within the resonance approximation considered, the starting point is the following representation of the $\Sm$-matrix in terms of the Wigner’s reaction matrix $\Kmat$: $$\label{eq:SandK} \Sm(\varepsilon) = \frac{{\mathbf{1}}_\Nc-\I\,\Kmat(\varepsilon)}{{\mathbf{1}}_\Nc+\I\,\Kmat(\varepsilon)}\,, \qquad \Kmat(\varepsilon) = \Wmat^\dagger (\varepsilon-\mathcal{H})^{-1}\Wmat\,.$$ Here, the Hermitian matrix $\mathcal{H}$ of size $\Nint$ represents the internal Hamiltonian of the closed system, whereas the rectangular $\Nint\times\Nc$ matrix $\Wmat$ consists of the constant coupling amplitudes between $\Nint$ internal and $\Nc$ channel states. In the chaotic regime, $\mathcal{H}$ is modelled by an RMT ensemble of appropriate symmetry [@GuhMulWei98]. In the RMT limit $\Nint\gg1$, spectral fluctuations become universal (model-independent) on the local scale of the mean level spacing $\Delta$. Similarly, the results turn out to be insensitive to particular statistical assumptions on the amplitudes $\{\Wmat_{na}\}$ provided that $\Nc\ll\Nint$ [@LehSavSokSom95; @LehSahSokSom95]. These amplitudes appear in the final expressions only through the transmission coefficients $$\label{eq:RelationBetweenCouplingParameters} \coupl_a \equiv 1-|\Sbar_{aa}|^2 = \frac{4\fss_a}{(1+\fss_a)^2}\,, \qquad \fss_a = \frac{2\pi\|\Wmat_a\|^2}{\Nint\Delta}\,.$$ $\coupl_a$ describes the probability of entering the system through channel $a$ (thus characterizing the contact quality), with $\coupl_a\ll1$ ($\coupl_a=1$) corresponding to weak (perfect) coupling. The Hamiltonian approach, especially when combined with the supersymmetry technique to perform statistical averages [@VerWeiZir85], offers the powerful method to derive exact non-perturbative results for various correlation and distribution functions at any channel coupling, see [@MitRicWei10; @FyoSav11; @FyoSavSom05] for details. It was actually possible to derive the Poisson kernel (\[eq:PoissonKernel\]) starting from representation (\[eq:SandK\]), thus proving equivalence of the two approaches for the $\Sm$-matrix distribution [@Bro95]. As to the Wigner-Smith matrix, a number of exact results are already known for various time delays at any $\coupl_a{\leqslant}1$, which we will briefly overview below. However, the distribution of the whole $\WSm$ matrix (in its symmetrised form) is only known for the special case of perfect coupling (all $T_a=1$) [@BroFraBee97; @BroFraBee99]. It is the aim of this work to fill in this gap and to provide the corresponding distribution at arbitrary coupling. We derive the exact result in terms of certain matrix integrals and further analyse the relevant marginal densities in the weak coupling limit. The outline of the paper is as follows. In the next section we state the main results of this work. 
In Section \[sec:Motivations\], we first provide a heuristic analysis of the Wigner time delay distribution in the weak coupling limit, providing some physical intuition on the nature of the results; then we give an overview of the known exact results at arbitrary $\coupl$ (this overview is complemented by \[app:PartialProper\]). Section \[sec:model\] develops a resonance representation of the Wigner-Smith matrix. The mapping between the perfect and arbitrary coupling is established and used in Section \[sec:WSMdistribution\] to derive a general representation for the Wigner-Smith matrix distribution at arbitrary coupling in terms of a matrix integral over Hermitian matrices. In Section \[sec:IntegrationOverCUE\], we work out an alternative form of the distribution in terms of a matrix integral over the unitary group, which turns out to be more useful for numerics. Based on these results, we study the characteristic function of the Wigner time delay in the weak coupling limit in Section \[sec:characfct\] and deduce the limiting behaviours of its distribution. Some numerical analysis is presented in Section \[sec:Numerics\]. Finally, we provide several appendices with more technical details of our calculations, which we believe may be helpful for further developments and applications. Statement of the main results ============================= We consider the symmetrised Wigner-Smith matrix defined by $$\label{eq:WS_sym} \WSm_{s} = \Sm^{1/2}\WSm\Sm^{-1/2} = -\I\,\Sm^{-1/2} \partial_\varepsilon\Sm\,\Sm^{-1/2}\,,$$ which clearly has the same spectrum as $\WSm$. Our first main result is the joint matrix distribution for the scattering matrix $\Sm$ and the inverse matrix $\invQ=\WSm_s^{-1}$ at arbitrary transmission of $\Nc$ channels. To this end, we develop a resonance representation for the Wigner-Smith matrix to establish a relation between this matrix at arbitrary and perfect coupling. This enables us to apply the known joint distribution at perfect coupling [@BroFraBee97; @BroFraBee99] to that at arbitrary coupling. When all channels have the same transmission coefficient $T=1-|\Sbar|^2$, our result reads $$\begin{aligned} \label{eq:JointMatrixDistribIntro} &&\hspace{-1cm} \mathrm{D}\Sm\,\,\mathrm{D}\invQ\, P_{\Sm,\invQ}(\Sm,\invQ) = \mathrm{D}\Sm\,\,\mathrm{D}\invQ\, c_{\Nc,\beta} \, \Theta(\invQ)\, \left|\det\big({\mathbf{1}}_\Nc-\Sbar^*\Sm\big)\right|^{\beta\Nc}\, \\\nonumber &&\times \big(\det\invQ\big)^{\beta\Nc/2} \, \exp\left[ -\frac{\beta}{2(1-|\Sbar|^2)}\, {\mathop{\mathrm{tr}}\nolimits\left\{ ({\mathbf{1}}_\Nc-\Sbar^*\Sm)({\mathbf{1}}_\Nc-\Sbar\Sm^\dagger)\invQ \right\}} \right],\end{aligned}$$ where $c_{\Nc,\beta}$ is a normalisation constant. $\mathrm{D}\Sm$ is the Haar measure (uniform measure over unitary matrices) and $\mathrm{D}\invQ$ the Lebesgue measure over the set of Hermitian matrices. The matrix theta function is $\Theta(\invQ)=1$ when all eigenvalues of $\invQ$ are positive and zero otherwise. The result has relied on the following conservation of the measure when mapping the $\Sm$ and $\invQ$ matrices for the ideal and non-ideal contacts: $$\nonumber \mathrm{D}\Sm_0\,\,\mathrm{D}\invQ_0=\mathrm{D}\Sm\,\,\mathrm{D}\invQ \:.$$ The representation may be regarded as an extension of the Poisson kernel for the time-delay problem. One can then deduce the distribution of the matrix $\invQ$ in terms of a matrix integral over the unitary group. 
We have preferred a more convenient form, induced by , in terms of a matrix integral over Hermitian matrices $$\begin{aligned} \label{eq:DistribInvQintro} &&\hspace{-1cm} P_\invQ(\invQ) = b_{\Nc,\beta} \Theta(\invQ)\, ( \det\invQ )^{\beta\Nc/2} \\\nonumber &&\times \int\mathrm{D}\Kmat\, \frac{\det({\mathbf{1}}_\Nc+\Kmat^2)^{\beta\Nc/2}} {\det({\mathbf{1}}_\Nc+\fss^2\Kmat^2)^{1-\frac{\beta}{2}+\beta\Nc}}\, \exp\left[ -\frac{\beta}{2}\fss\,{\mathop{\mathrm{tr}}\nolimits\left\{ \frac{{\mathbf{1}}_\Nc+\Kmat^2}{{\mathbf{1}}_\Nc+\fss^2\Kmat^2}\invQ \right\}} \right] \end{aligned}$$ where $b_{\Nc,\beta}$ is a normalisation constant and the coupling constant $\fss>0$ is related to the transmission coefficient . For $\fss=1$ ($\Sbar=0$), Eq.  reduces to the Laguerre ensemble corresponding to the known result at perfect coupling [@BroFraBee97; @BroFraBee99]. We have also generalised this expression to the most general case when channels are not equally coupled, see equation  below in the text. The matrix distribution is further used to study the distribution ${\mathscr{P}_{\Nc,\beta}}(\tau)$ of the Wigner time delay $\Wt=(1/\Nc){\mathop{\mathrm{tr}}\nolimits\left\{ \invQ^{-1} \right\}}$. Defining the characteristic function (Laplace transform of the distribution) as $ \mathcal{Z}_{\Nc,\beta}(p)\propto {\left\langle \exp\big\{ -(2p/\beta\fss){\mathop{\mathrm{tr}}\nolimits\left\{ \invQ^{-1} \right\}} \big\} \right\rangle} $, which involves in principle two matrix integrals (over $\invQ$ and $\Kmat$), we have finally obtained a ratio of two $\Nc\times\Nc$ determinants integrated over the eigenvalues of one matrix only $$\begin{aligned} \label{eq:Zn2intro} \hspace{-1.5cm} \mathcal{Z}_{\Nc,2}(p) =\int_{\mathbb{R}^\Nc} \frac{ \D k_1\cdots\D k_\Nc\, \Delta_\Nc(k)^2}{ \prod_n(1+k_n^2)^{\Nc} } \, \frac{ \det\left[ \left( p\, \frac{1+\fss^2k_j^2}{1+k_j^2} \right)^{\frac{\Nc+i}{2}} K_{\Nc+i} \left(2\sqrt{p\frac{1+k_j^2}{1+\fss^2k_j^2}}\right) \right] }{ \det\left[ \left( \frac{1+k_j^2}{1+\fss^2k_j^2} \right)^{-\Nc-i} \right] } \:.\end{aligned}$$ The result holds in the unitary ($\beta=2$) case. Here $\Delta_\Nc(k)=\prod_{i<j}(k_i-k_j)$ denotes the Vandermonde determinant and $K_\nu(x)$ is the MacDonald function (modified Bessel function of the third kind). Because has a finite limit when $\fss\to0$, this result shows, in particular, that when rescaled by the factor $\fss\simeq\coupl/4\to0$, the Wigner time delay distribution has a limit $\lim_{\fss\to0}\fss\,{\mathscr{P}_{\Nc,2}}(\fss\,\rt)={\mathscr{Q}_{\Nc,2}}(\rt)$ independent of $\fss$. We have also verified numerically that this holds for all $\beta$. Finally, we have obtained the limiting behaviours of the distribution ${\mathscr{P}_{\Nc,\beta}}(\tau)$ in the weak coupling limit, $\Nc\coupl\ll1$. The large deviations $\tau\to0$ are characterised by $${\mathscr{P}_{\Nc,\beta}}(\tau) \sim \tau^{-\frac{\beta\Nc^2}{2}-\frac32}\, {\mathrm{e}^{-\beta\Nc\coupl/(8\tau)}} \hspace{1cm} \mbox{for } \tau\ll \coupl\,,$$ which is obtained by extending the steepest descent method to the matrix integrals (for arbitrary symmetry class). For $\beta=2$, we have also deduced this behaviour from . Then, analysing in detail the limit first $\fss\to0$ and then $p\to0$ of the characteristic function , we have obtained the power law $${\mathscr{P}_{\Nc,\beta}}(\tau) \sim \frac{1}{\coupl}(\coupl/\tau)^{3/2} \hspace{1cm} \mbox{for } \coupl \ll \tau \ll 1/(\Nc^2\coupl) \:,$$ which holds independently of $\beta$.
Finally, we have provided a simple argument in terms of isolated resonances to get the large time asymptotic as follows $${\mathscr{P}_{\Nc,\beta}}(\tau) \sim \coupl^2\Nc^3 \, \left( \coupl\Nc^2\,\tau\right)^{-2-\beta\Nc/2}$$ All these limiting behaviours have been verified numerically. Background and motivations {#sec:Motivations} ========================== Heuristic analysis (single resonance approximation) {#Subsec:Heuristic} --------------------------------------------------- Before entering into the detailed analysis, it is instructive to give a qualitative discussion of the typical behaviour of time delay distributions when all $\coupl_a=\coupl\ll1$. The channel coupling can be treated perturbatively in such a case, enabling us to estimate the mean width (decay rate) by the Fermi’s golden rule as $\bG=\Nc\coupl\frac{\Delta}{2\pi}=\Nc\coupl/\Ht$. (This is known as the Weisskopf estimate in nuclear physics and as the inverse of the dwell time in mesoscopics.) The distribution of the resonance widths (rescaled in units of $\bG$) is then given by the well known $\chi^2$ distribution with $\beta\Nc$ degrees of freedom, $$\label{eq:DistribResonanceWidth} p (y) = \frac{ (\beta\Nc/2)^{\beta\Nc/2} }{\Gamma(\beta\Nc/2)} \, y^{\frac{\beta\Nc}{2}-1}{\mathrm{e}^{-\beta\Nc y /2}}\,, \qquad y\equiv\Gamma/\bG=\Gamma\Ht/(\Nc\coupl)\,.$$ The related moments are ${\langle y^k\rangle} = \big(\frac{2}{\beta\Nc}\big)^{k} \big(\frac{\beta\Nc}{2}\big)_k$, where ${\left\langle \cdots\right\rangle}$ denotes the statistical averaging and $(a)_k$ is the Pochhammer symbol. Expression is a many-channel generalisation of the famous Porter-Thomas result at $\Nc=1$ and $\beta=1$ [@PorTho56]. It must be emphasised that this distribution arises from a perturbative treatment of the channel coupling, resulting essentially from Gaussian statistics of the chaotic wave functions of the closed system. Thus it is valid in the *weak* coupling limit only. (Notably at perfect coupling, the exact distribution of resonance widths is known [@FyoSom97] to develop the power law decay $p(y)\propto y^{-2}$ at $y\gg1$. See Ref. [@FyoSav15] for further discussion of the weak coupling limit beyond the perturbative regime.) We now consider the important case of *isolated* (well-separated) resonances when typical widths $\Gamma_n\ll|E_n-E_{n+1}|\sim\Delta$, corresponding to $2\pi\bG/\Delta=\Nc\coupl\ll1$. Scattering patterns in such a regime are dominated by a single resonance with energy $E_n\approx\varepsilon$ closest to the scattering energy. Accordingly, we may approximate the Wigner time delay (\[eq:Wig\_res\]) as $\Wt(\varepsilon)\simeq(1/\Nc)\,\Gamma_n/\big[(\varepsilon-E_n)^2+\Gamma_n^2/4\big]$ and assume the statistically uncorrelated energies and widths [@FyoSom97]. This leads to the following form of the time delay distribution: $$\label{eq:heuristic0} {\mathscr{P}_{\Nc,\beta}}\left( \tau \right) \approx \int_0^\infty \frac{\D\Gamma}{\bG}\, p\left(\frac{\Gamma}{\bG}\right) \int_{-\Delta/2}^{\Delta/2}\frac{\D E}{\Delta}\, \delta\!\left( \tau - \frac{1}{\Nc}\,\frac{\Gamma}{E^2+\Gamma^2/4} \right) \:.$$ Since such a Lorentzian profile is the most natural shape of the energy dependence in the vicinity of the resonance, one may generally expect that approximation (\[eq:heuristic0\]) describes adequately the other types of time delays in the limit considered as well. In the regime $\tau>2/(\Nc\Delta)=\Ht/(\Nc\pi)$, the cutoff of the integration over $E$ at $\Delta/2$ plays no role and can be replaced by infinity. 
We obtain the useful representation $${\mathscr{P}_{\Nc,\beta}}\left( \tau \right) \approx \frac{1}{\Delta\sqrt{\Nc}\,\tau^{3/2}} \int_0^{4/(\Nc\tau)} \frac{\D\Gamma}{\bG}\, p\left(\frac{\Gamma}{\bG}\right) \frac{\sqrt{\Gamma}}{\sqrt{1-\Gamma\Nc\tau/4}} \:.$$ In the intermediate regime $\Ht/\Nc\ll\tau\ll1/(\Nc\bG)=\Ht/(\Nc^2\coupl)$, the main contribution comes from the most probable (typical) resonances of width $\Gamma\sim\bG$, resulting in $$\label{eq:heuristic1} {\mathscr{P}_{\Nc,\beta}}(\tau) \sim \frac{1}{\coupl\Ht} \left(\frac{\coupl\,\Ht}{\tau}\right)^{3/2}$$ independently of the symmetry index $\beta$. Such a behaviour was already observed in previous studies of both partial [@FyoSom97; @FyoSavSom97] and proper time delays [@SomSavSok01]. It is believed that this $\tau^{-3/2}$ law is the most robust feature of the distribution in the regime of isolated resonances. Finally, the far tail of the distribution at $\tau\gg1/(\Nc\bG)=\Ht/(\Nc^2\coupl)$ is controlled by rare narrow resonances of width $\Gamma\ll\bG$, yielding the asymptotic behaviour $$\label{eq:heuristic2} {\mathscr{P}_{\Nc,\beta}}(\tau) \sim \frac{\coupl^2\Nc^3}{\Ht} \, \left(\frac{\Ht}{\coupl\Nc^2\,\tau}\right)^{2+\beta\Nc/2}.$$ The universal exponent $2+\beta\Nc/2$ can be simply understood from the limiting behaviour of the resonance width distribution (\[eq:DistribResonanceWidth\]) at $\Gamma\to0$, as explained in Ref. [@FyoSom97]. As a check, we can estimate the moments from this distribution. The (positive) moments are controlled by the upper cutoff $\tau_*\sim\Ht/(\Nc^2\coupl)$ of the power law . Thus we obtain ${\left\langle \Wt^k\right\rangle}\sim\sqrt{\coupl\Ht}\,\tau_*^{k-1/2}$ i.e. $${\left\langle \Wt^k\right\rangle} \sim \Ht^k/\big(\coupl^{k-1}\Nc^{2k-1}\big), \qquad\mbox{for } k<1+\beta\Nc/2 \:,$$ whereas all moments of higher order $k{\geqslant}1+\beta\Nc/2$ diverge because of Eq. (\[eq:heuristic2\]). In the $\Nc\coupl\ll1$ limit, this reproduces the known exact results (for $k=1,\,2$) discussed below. One of our aims will be to settle these results and analysis on rigorous grounds and in particular to characterise the large deviations for $\tau\to0$. Known exact results ------------------- As mentioned in the Introduction, the three time-delay sets in question do not coincide in general and have different statistical properties. We note that the formal order of the two operations, the diagonalisation and taking the energy derivative of the scattering matrix, is reversed when dealing with the proper or partial time delays. The connection between the two sets can in principle be found from the general expression [@SavFyoSom01] $$\label{eq:Q_prop-part} \WSm = \mathcal{U}\,\partial_\varepsilon\Theta\,\mathcal{U}^\dagger + \I\,\Sm^\dagger\left[\mathcal{U}\,\partial_\varepsilon\mathcal{U}^\dagger,\,\Sm \right],$$ where $\partial_\varepsilon\equiv{\frac{\partial }{\partial \varepsilon}}$ and $[\,,\,]$ stands for a commutator. This clearly shows that the differences between the proper and partial time delays are due to the second term in (\[eq:Q\_prop-part\]), which essentially accounts for the different bases chosen to express the $\Sm$-matrix [@Tex16]. 
Clearly, the time delays satisfy the following sum rule: $$\label{eq:SumRulePPW} \frac{1}{\Nc}\sum_{a=1}^\Nc\tau_a =\frac{1}{\Nc}\sum_{a=1}^\Nc\tilde{\tau}_a =\frac{1}{\Nc}\sum_{a=1}^\Nc\WSm_{aa} =\frac{1}{\Nc} {\mathop{\mathrm{tr}}\nolimits\left\{ \WSm \right\}} = \Wt\,.$$ In the case of the equivalent channels, this sum rule implies the following equality for the mean time delays: ${\langle \tau_a\rangle}={\langle \tilde\tau_a\rangle}={\langle \WSm_{aa}\rangle}={\langle \Wt\rangle}=\frac{\Ht}{\Nc}$. It is therefore useful to measure all times in units of the Heisenberg time, and simply set $\Ht=1$ below. ### Perfect coupling, $T=1$. In the special case of one open channel, $\Nc=1$, all time delays reduce to a single quantity, the energy derivative of the scattering phase. Its distribution was first derived for $\beta=2$ (but any $\coupl$) in Ref. [@FyoSom96] and independently for any $\beta$ (but $\coupl=1$) in Ref. [@GopMelBut96]. The matrix generalisation of the latter approach to arbitrary $\Nc>1$ was presented in the influential work [@BroFraBee97; @BroFraBee99] by Brouwer, Frahm, and Beenakker (BFB) who showed that the proper time delays (more precisely, their inverses) are distributed according to the Laguerre ensemble of random matrices. This provided a route for applying powerful RMT techniques (like orthogonal polynomials and the Coulomb gas method) to study various densities, moments and correlators built on the Wigner-Smith matrix [@SavFyoSom01; @MezSim11; @MezSim12; @MezSim13; @TexMaj13; @MarMarGar14; @GraTex15; @Cun15; @Nov15a; @CunMezSimViv16; @CunMezSimViv16b; @GraMajTex17b]. We refer to Ref. [@Tex16] for the most recent review and briefly discuss below the qualitative differences in the behaviour of the relevant distribution functions (more details on the marginal distributions of both proper and partial time delays can be found in \[app:PartialProper\]). The many-channel distribution of the Wigner time delay is explicitly known only for $\Nc=2$ [@SavFyoSom01] or $\Nc\gg1$ [@TexMaj13]. However, its variance can be found exactly at any $\Nc$ and is represented by the following form (valid for arbitrary $\beta$ considered) [@MezSim13]: $$\label{eq:VarianceWTDperfect} \mathrm{var}(\Wt) = \frac{4}{\Nc^2(\Nc+1)(\beta\Nc-2)} \simeq \frac{4}{\beta\Nc^4} \:.$$ Here and below the symbol $\simeq$ is used to show the leading asymptotic at $\Nc\gg1$. 
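For $\beta=2$ and perfect coupling, the BFB result just mentioned makes such formulae easy to test numerically: the matrix of inverse proper time delays can be generated as a complex Wishart (Laguerre) matrix. In the sketch below (Python/NumPy; the realisation of the Laguerre weight by $XX^\dagger$ with $X$ an $\Nc\times2\Nc$ standard complex Gaussian block, and the convention $\Ht=1$, are our own bookkeeping) the mean and variance of $\Wt$ are estimated by Monte Carlo and can be compared with $\Ht/\Nc$ and Eq. (\[eq:VarianceWTDperfect\]).

```python
import numpy as np

# Monte-Carlo check of the Wigner time delay statistics at perfect coupling,
# beta = 2 (times in units of t_H = 1). Following BFB, Gamma = Q_s^{-1} is
# Laguerre-distributed; for beta = 2 the weight (det Gamma)^N exp(-tr Gamma)
# is realised here by a complex Wishart matrix X X^dag, with X an N x 2N
# standard complex Gaussian block (this identification is our own bookkeeping).

rng = np.random.default_rng(2)
N, beta, samples = 4, 2, 200_000
tw = np.empty(samples)
for s in range(samples):
    X = (rng.standard_normal((N, 2 * N))
         + 1j * rng.standard_normal((N, 2 * N))) / np.sqrt(2)
    Gamma = X @ X.conj().T                              # inverse proper delays
    tw[s] = np.trace(np.linalg.inv(Gamma)).real / N     # Wigner time delay

print("mean tau_W:", tw.mean(), "  expected t_H/N =", 1 / N)
print("var  tau_W:", tw.var(), "  expected:",
      4 / (N**2 * (N + 1) * (beta * N - 2)))
```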
For the partial time delays, the variances and covariances can be derived from the exactly known marginal [@FyoSom96; @FyoSom97; @FyoSavSom97] and joint (two-point) [@SavFyoSom01] densities and read as follows [@KuiSavSie14] : $$\begin{aligned} \label{eq:var_part} \mathrm{var}(\tilde{\tau}_a) &= \frac{2}{\Nc^2(\beta\Nc-2)}\simeq\frac{2}{\beta\Nc^3} \:, \\ \label{eq:CovPartial} \mathrm{cov}(\tilde{\tau}_a,\tilde{\tau}_b) &= + \frac{\mathrm{var}(\tilde{\tau}_a)}{\Nc+1} \simeq +\frac{2}{\beta\Nc^4} \simeq +\frac{1}{\Nc}\,\mathrm{var}(\tilde{\tau}_a) \:.\end{aligned}$$ The corresponding expressions are also available for the proper time delays  [@MezSim11; @MarMarGar14], $$\begin{aligned} \mathrm{var}(\tau_a) &= \frac{\Nc[\beta(\Nc-1)+2]+2}{\Nc^2(\Nc+1)(\beta\Nc-2)}\simeq\frac{1}{\Nc^2} \:, \\ \label{eq:CovProper} \mathrm{cov}(\tau_a,\tau_b) &= - \frac{1}{\Nc^2(\Nc+1)}\simeq-\frac{1}{\Nc^3} \simeq -\frac{1}{\Nc}\,\mathrm{var}(\tau_a) \:.\end{aligned}$$ As to the diagonal elements $\WSm_{aa}$, less is known about their statistical properties except for the $\beta=2$ case (unitary symmetry), when it can be shown that the distributions of $\WSm_{aa}$ and $\tilde{\tau}_a$ coincide [@SavFyoSom01]. This follows from the general relation $\tilde{\tau}_a=\big[\mathcal{U}^\dagger\,\WSm\,\mathcal{U}\big]_{aa}$, implied by (\[eq:Q\_prop-part\]), and from the statistical independence of $\mathcal{U}$ and $\WSm$ in that case. For the $\beta=1$ case (orthogonal symmetry), these two matrices become statistically correlated, resulting in different statistics, in particular, $\mathrm{var}(\tilde{\tau}_a)\neq\mathrm{var}(\WSm_{aa})$. The latter variance was recently computed at any $\Nc$ in [@KuiSavSie14] using semiclassical methods, yielding $\mathrm{var}(\WSm_{aa})\simeq1/\Nc^3$ unlike the $2/\Nc^3$ dependence of (\[eq:var\_part\]) (but the exact RMT result is still lacking). It is also worth mentioning the recent study of the distribution of $\sum_{a=1}^K\WSm_{aa}$, where the sum is restricted to a fraction of terms $K<\Nc$ (cf. Appendix of Ref. [@GraMajTex17b]). The time delays in question thus show different scaling with $\Nc$, different dependence on $\beta$, and even different signs of the correlations at perfect coupling. This leads to profound differences between the corresponding distributions, which are schematically illustrated in Fig. \[fig:SketchPC\]. ![ *Sketch of the time delay distributions for $\Nc\gg1$ equivalent channels at perfect (left) or weak coupling (right), where $T$ is the channel transmission coefficient. Shown are the distributions of the proper (continuous red line) and partial time delays (dashed green line) as well as the Wigner time delay distribution (dotted blue line), with all times being measured in units of the Heisenberg time. For perfect coupling, the proper time delay distribution is close to the Marčenko-Pastur law (with additional large deviation tails out of the interval $[(\sqrt{2}-1)^2/\Nc,(\sqrt{2}+1)^2/\Nc]$; cf. [@Tex16]). For weak coupling, the distribution of the Wigner time delay is not shown, as it is still unknown and will be determined in the present paper.*[]{data-label="fig:SketchPC"}](sketch_wwP_pc-v2 "fig:") ![ *Sketch of the time delay distributions for $\Nc\gg1$ equivalent channels at perfect (left) or weak coupling (right), where $T$ is the channel transmission coefficient.
Shown are the distributions of the proper (continuous red line) and partial time delays (dashed green line) as well as the Wigner time delay distribution (dotted blue line), with all times being measured in units of the Heisenberg time. For perfect coupling, the proper time delay distribution is close to the Marčenko-Pastur law (with additional large deviation tails out of the interval $[(\sqrt{2}-1)^2/\Nc,(\sqrt{2}+1)^2/\Nc]$; cf. [@Tex16]). For weak coupling, the distribution of the Wigner time delay is not shown, as it is still unknown and will be determined in the present paper.*[]{data-label="fig:SketchPC"}](sketch_wwP_wc-v2 "fig:") ### Non-ideal contacts, $T<1$. The general case of arbitrary transmission is more challenging for rigorous analysis, with studies being restricted to certain correlation functions and marginal distributions only. Most of the results have been derived within a nonperturbative approach developed in [@LehSavSokSom95; @FyoSom97; @SomSavSok01], which can be also extended to include effects of finite absorption [@SavSom03] and disorder [@FyoSavSom05; @OssFyo05]. In particular, the variance of the Wigner time delay follows from the autocorrelation function of $\Wt(\varepsilon)$ which is known exactly for both orthogonal ($\beta=1$) [@LehSavSokSom95] and unitary ($\beta=2$) symmetry [@FyoSom96] as well as in the whole crossover region between the two cases [@FyoSavSom97]. The variance is found to take a simple explicit form only in the $\beta=2$ case [@FyoSom96; @FyoSom97], being given then by $$\label{eq:FyodorovSommers1997Eq195} \mathrm{var}(\Wt) = \frac{2\left[1-(1-\coupl)^{\Nc+1}\right]}{\coupl^2\Nc^2(\Nc^2-1)}\,.$$ Considering the limit of weak transmission per channel $\coupl\ll1$, we get from two different possible behaviours, which depend on the product $\Nc\coupl$ describing the degree of the resonance overlap (thus controlling the overall coupling to the continuum) : $$\label{eq:VarianceWT} \frac{ \mathrm{var}(\Wt) }{ {\left\langle \Wt\right\rangle}^2 } \underset{\coupl\ll1}{\simeq} \left\{ \begin{array}{ll} \displaystyle \frac{4}{\beta(\Nc\coupl)^2} \ll 1 & \mbox{for } \Nc\coupl\gg1 \\[0.5cm] \displaystyle \frac{2}{\Nc\coupl} \gg 1 & \mbox{for } \Nc\coupl\ll1 \end{array} \right. .$$ Here we have reintroduced $\beta$ to match the known $\beta=1$ result [@LehSavSokSom95]. The marginal distribution of the partial time delays is known exactly at any $\beta$ [@FyoSom97; @FyoSavSom97]. The corresponding variance is also given by a simple explicit expression for $\beta=2$ [@FyoSom97] : $$\label{eq:FyodorovSommers1997Eq167} \mathrm{var}(\tilde{\tau}_a)= \frac{2\Nc(\coupl^{-1}-1)+1}{\Nc^2(\Nc-1)} \underset{\coupl\ll1}{\simeq} \frac{2}{\coupl\,\Nc^2} \:.$$ By combining and , we readily deduce an exact result for the covariance : $$\begin{aligned} \label{eq:CovPartialWeakCoupling} \hspace{-2cm} \mathrm{cov}(\tilde{\tau}_a,\tilde{\tau}_b) &= \frac{1}{\coupl^2\Nc(\Nc-1)^2} \left[ \frac{2\left[1-(1-\coupl)^{\Nc+1}\right]}{\Nc+1} -2\coupl(1-\coupl) - \frac{\coupl^2}{\Nc} \right] \\ & \label{eq:CovPartialWeakCoupling1} \underset{\coupl\ll1}{\simeq} -\frac{1}{\Nc^2}\times \left\{ \begin{array}{ll} \displaystyle \frac{2}{\coupl \Nc} & \mbox{for } \Nc\coupl\gg1 \\[0.25cm] \displaystyle 1 & \mbox{for } \Nc\coupl\ll1 \end{array} \right.\end{aligned}$$ It is worth noting that when compared to Eq. , the covariances change both in sign and scaling with $\Nc$ as transmission crosses over from perfect to weak coupling. 
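The two regimes in (\[eq:VarianceWT\]) are conveniently visualised by evaluating the exact $\beta=2$ expression (\[eq:FyodorovSommers1997Eq195\]) at fixed small transmission as a function of the channel number. The short sketch below (times in units of $\Ht$; the value of $\coupl$ and the list of $\Nc$ are arbitrary choices) prints the relative fluctuations next to the two limiting estimates.

```python
import numpy as np

# Relative fluctuations of the Wigner time delay for beta = 2 (t_H = 1):
# exact finite-N formula versus the two weak-transmission asymptotics,
# controlled by the resonance-overlap parameter N*T.

def var_exact(N, T):
    return 2 * (1 - (1 - T) ** (N + 1)) / (T**2 * N**2 * (N**2 - 1))

T = 0.01
for N in (20, 100, 500, 2000):
    ratio = var_exact(N, T) * N**2          # var / <tau_W>^2 with <tau_W> = 1/N
    weak, strong = 2 / (N * T), 4 / (2 * (N * T) ** 2)
    print(f"N={N:5d}  NT={N*T:6.2f}  exact={ratio:8.4f}  "
          f"2/(NT)={weak:8.4f}  4/(beta(NT)^2)={strong:8.4f}")
```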
Finally, the marginal distribution of the proper time delays at arbitrary $\coupl$ was obtained in [@SomSavSok01]. As will be shown in \[app:PartialProper\], the distributions of the proper and partial time delays become almost identical to each other in the weak coupling limit. To close this brief overview, we emphasize an important difference between the partial (or proper) time delays and the Wigner time delay. As is clear from the above expressions (for $\beta=2$, but the conclusion holds for any symmetry), the relative fluctuations of the partial/proper time delays are always large at weak transmission $\coupl\ll1$, $$\frac{ \mathrm{var}(\tau_a) }{ {\left\langle \tau_a\right\rangle}^2 } \simeq \frac{ \mathrm{var}(\tilde{\tau}_a) }{ {\left\langle \tilde{\tau}_a\right\rangle}^2 } \simeq \frac{2}{\coupl} \gg1.$$ Thus one expects a broad distribution in this limit independently of the channel number $\Nc$ (see Fig. \[fig:SketchPC\]), as was indeed shown for the exact distributions [@FyoSom97; @SomSavSok01] (see also \[app:PartialProper\] below). On the other hand, the relative fluctuations of the Wigner time delay are not necessarily large because of the specific $N$ dependence according to . This can be understood from Eq. (\[eq:SumRulePPW\]) defining the Wigner time delay as a linear statistics on $\{\tau_a\}$ and by noting that their correlations diminish rapidly when the parameter $\Nc\coupl$ grows, see (\[eq:CovPartialWeakCoupling1\]). Although the full distribution of the Wigner time delay is still unknown at arbitrary $T$, this discussion and (\[eq:VarianceWT\]) suggest that it converges to the Gaussian distribution in the strong coupling limit $\Nc\coupl\gg1$. In the opposite case of weak coupling, $\Nc\coupl\ll1$, the distribution becomes broad with nontrivial behaviour. One of our purposes here is to study in much detail this broad distribution. Resonance representation for the Wigner-Smith matrix {#sec:model} ==================================================== General considerations ---------------------- Our starting point is the following well-known representation for the $\Sm$-matrix in terms of an effective (non-Hermitian) Hamiltonian $\Heff$ of the open system [@SokZel89; @VerWeiZir85]: $$\label{eq:HamiltonianApproach} \Sm(\varepsilon) = {\mathbf{1}}_\Nc-2\I\,\Wmat^\dagger \, \left( \varepsilon-\Heff\right)^{-1} \, \Wmat\,, \qquad \Heff = \mathcal{H}-\I\,\Wmat\Wmat^\dagger \:.$$ This expression follows from by simple algebra, but it has an advantage in making explicit the resonance energy dependence associated with the $\Sm$-matrix poles. Indeed, the latter are just given by the eigenvalue problem on $\Heff$, $\Heff{|\kern.3exR_n\kern.3ex\rangle}=\mathcal{E}_n{|\kern.3exR_n\kern.3ex\rangle}$ and ${\langle\kern.3ex L_n \kern.3ex|}\Heff=\mathcal{E}_n{\langle\kern.3ex L_n \kern.3ex|}$, which can be further used to construct a pole expansion over the biorthogonal set of the (left and right) eigenfunctions corresponding to the same eigenvalue $\mathcal{E}_n=E_n-\frac{\I}{2}\Gamma_n$. 
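A minimal numerical sketch of this construction is given below (Python/NumPy; the GUE modelling of $\mathcal{H}$ and the Gaussian coupling amplitudes follow the text, while the specific sizes and normalisations are arbitrary choices). It checks that (\[eq:HamiltonianApproach\]) produces a unitary $\Sm(\varepsilon)$ on the real axis which coincides with the $\Kmat$-matrix form (\[eq:SandK\]), and that the poles, obtained as the eigenvalues of $\Heff$, all lie in the lower half of the complex plane.

```python
import numpy as np

# Sketch of the resonance (effective-Hamiltonian) representation: check that
# S from the H_eff form is unitary on the real axis, coincides with the
# K-matrix form, and that all eigenvalues of H_eff = H - i W W^dag have
# negative imaginary parts (resonances E_n - i Gamma_n / 2).
# Sizes and normalisations are arbitrary illustration choices.

rng = np.random.default_rng(3)
M, N, eps = 300, 3, 0.0
G = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
H = (G + G.conj().T) / (2 * np.sqrt(M))        # GUE-like closed-system Hamiltonian
W = 0.05 * rng.standard_normal((M, N))         # real coupling amplitudes

Heff = H - 1j * W @ W.T                        # W real here, so W W^dag = W W^T
S_eff = np.eye(N) - 2j * W.T @ np.linalg.solve(eps * np.eye(M) - Heff, W)

K = W.T @ np.linalg.solve(eps * np.eye(M) - H, W)
S_K = (np.eye(N) - 1j * K) @ np.linalg.inv(np.eye(N) + 1j * K)

poles = np.linalg.eigvals(Heff)
print("unitarity error :", np.abs(S_eff @ S_eff.conj().T - np.eye(N)).max())
print("|S_eff - S_K|   :", np.abs(S_eff - S_K).max())
print("all widths > 0  :", bool(np.all(poles.imag < 0)))
```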
Since in the resonance approximation considered $\Wmat$ is assumed to be energy independent, the energy derivative of $\Sm(\varepsilon)$ can be easily taken, leading to the following convenient representation for the Wigner-Smith matrix [@SokZel97] $$\label{eq:UsefulRepresent0} \WSm(\varepsilon) = 2\pi \, \Psi^\dagger(\varepsilon) \, \Psi(\varepsilon)\,, \qquad \Psi(\varepsilon) =\frac{1}{\sqrt{\pi}}\, (\varepsilon-\Heff)^{-1}\, \Wmat \,.$$ The $a$th column $\Psi_a$ of the $\Nint{\times}\Nc$ matrix $\Psi(\varepsilon)$ may be treated [@SokZel97] as the internal part of the scattering wave function initiated in channel $a$ at the scattering energy $\varepsilon$. The norm of $\Psi_a$ gives the diagonal element $\WSm_{aa}$, thus providing their interpretation as the average time delay of a wave packet in a given channel [@Smi60]. Using the eigenbasis of $\Heff$ and noting its completeness, we find a pole expansion of $\WSm$ as follows $$\label{eq:WS_pole} \WSm_{ab}(\varepsilon) = 2\sum_{n,m} \frac{ U_{mn} \widetilde{\Wmat}^*_{ma} \widetilde{\Wmat}_{nb} }{ (\varepsilon-\mathcal{E}^{*}_m)(\varepsilon-\mathcal{E}_n) }\,,$$ where $\widetilde{\Wmat}_{na}={\langle\kern.3ex L_n \kern.3ex|}\Wmat_{a}$ and $U_{mn}=\langle{R_m}|R_n\rangle$ is the so-called Bell-Steinberger matrix. Note that $U_{mn}\neq\delta_{mn}$ in general, so this matrix serves as a sensitive indicator of the nonorthogonality of the resonance states [@FyoSav12]. It is worth discussing the physical meaning of the matrix $\Psi$ on an example of a quantum dot modelled by a potential. For simplicity, we assume a discrete model and write the Hamiltonian as $\mathcal{H}_{x,x'}=-\Delta_{x,x'}+V_x\,\delta_{x,x'}$, where $\Delta$ is the discretised Laplacian matrix. Following the same steps which have led to , we get $$\I \left( \Sm^\dagger {\frac{\partial \Sm}{\partial V_x}} \right)_{a,b} =2\pi\,\left(\Psi^\dagger\right)_{a,x}\, \Psi_{x,b}$$ for the derivative with respect to the potential. Summation over $x$ inside the quantum dot gives . Actually, such a formula was derived in other contexts [@But00; @TexBut03; @TexDeg03] within a continuum model, where it was shown that $ -(2\I\pi)^{-1} \left( \Sm^\dagger\, \delta\Sm/\delta V(x) \right)_{a,b} =\psi^{(a)}_\varepsilon(x)^*\psi^{(b)}_\varepsilon(x) $, with $\psi^{(a)}_\varepsilon(x)$ being the stationary scattering state incoming in channel $a$. This leads to the correspondence $ \Psi_{x,a} =\frac{1}{\sqrt{\pi}}\big[(\varepsilon-\Heff)^{-1}\, \Wmat \big]_{xa} \equiv \psi^{(a)}_\varepsilon(x) $ between the two models. We note, however, that taking the derivative with respect to the energy or the potential does not necessarily lead to the same result. In particular, the continuum model is known [@TexBut03; @TexDeg03] to have the exact relation $ \int_\mathrm{QD}\D x\,\Sm^\dagger\, \delta\Sm/\delta V(x) = \Sm^\dagger\, \partial_\varepsilon\Sm + \big(\Sm-\Sm^\dagger\big)/(4\varepsilon) $, where integration is over the scattering region (the quantum dot). We conclude that an exact representation of $\WSm$ should not only involve $\Psi^\dagger\Psi$ like in , but also the contribution $\big(\Sm-\Sm^\dagger\big)/(4\varepsilon)$, which is due to non-resonant effects neglected here. Finally, it is convenient to express the Wigner-Smith matrix in terms of the reaction matrix $\Kmat$. 
Some algebra gives $ \Psi=\frac{1}{\sqrt{\pi}}\,(\varepsilon-\mathcal{H})^{-1}\Wmat\,({\mathbf{1}}_\Nc+\I\,\Kmat)^{-1} $, resulting in [@SokZel97] $$\label{eq:UsefulRepresent1} \WSm = -2\, ({\mathbf{1}}_\Nc-\I\,\Kmat)^{-1}\,{\frac{\partial \Kmat}{\partial \varepsilon}}\,({\mathbf{1}}_\Nc+\I\,\Kmat)^{-1} \:.$$ This representation will prove to be useful for the RMT analysis developed below. RMT for perfect coupling ------------------------ The case of perfect coupling corresponds to the situation when the mean ${\left\langle \Sm\right\rangle}=0$. The $\Sm$ matrix is then distributed in one of the circular ensembles, C$\beta$E, of random orthogonal (COE, $\beta=1$), unitary ($\mathrm{CUE}\equiv\mathrm{U}(N)$, $\beta=2$) or symplectic (CSE, $\beta=4$) unitary matrices [@Bee97]: $$P_\Sm^{(0)} (\Sm) \, \mathrm{D}\Sm = C_N\,\mathrm{D}\Sm\,,$$ where $\mathrm{D}\Sm$ is the Haar measure and $C_N$ a normalisation constant (the superscript “$^{(0)}$” stands for perfect coupling). Correspondingly, the reaction matrix belongs to one of the three Cauchy ensembles (orthogonal, unitary or symplectic) in this case [@MelPerSel85; @Bro95] $$\label{eq:CauchyDistribution} P_\Kmat^{(0)} (\Kmat) \propto \big[\det({\mathbf{1}}_\Nc+\Kmat^2)\big]^{-1-\beta(\Nc-1)/2} \:.$$ This follows from the relation and noting that the associated Jacobian is given by $$\label{eq:JacobianSK} \mathrm{D}\Sm = \mathrm{D}\Kmat\, \frac{2^{\Nc(1+\beta(\Nc-1)/2)}}{\left[\det({\mathbf{1}}_\Nc+\Kmat^2)\right]^{1+\beta(\Nc-1)/2}} \:,$$ where $\mathrm{D}\Kmat$ is the Lebesgue measure over the set of Hermitian matrices. In order to derive the distribution of the Wigner-Smith matrix, we also require the statistics of the energy derivative $\partial\Kmat/\partial\varepsilon$. Following BFB [@BroFraBee97; @BroFraBee99], it is convenient to symmetrize the Wigner-Smith matrix according to , which can be written as $$\WSm_{s} = -2\, ({\mathbf{1}}_\Nc+\Kmat^2)^{-1/2}\,{\frac{\partial \Kmat}{\partial \varepsilon}}\,({\mathbf{1}}_\Nc+\Kmat^2)^{-1/2} \:.$$ which clearly has the same spectrum as $\WSm$. BFB’s approach has shown the statistical independence of $\Kmat$ and $\partial\Kmat/\partial\varepsilon$ and, hence, that of $\Sm$ and $\WSm_{s}$, with the joint distribution $$\label{eq:BFB1997} P_{\Sm,\WSm_s}^{(0)}(\Sm,\WSm_{s}) = P_{\Sm}^{(0)}(\Sm)\,P_{\WSm_s}^{(0)}(\WSm_{s}) \:.$$ The distribution $P_{\WSm_s}^{(0)}(\WSm_s)$ turns out to correspond to a specific instance of the so-called inverse-Wishart matrices (Laguerre ensemble) for $\invQ = \WSm_s^{-1}$, $$\label{eq:Laguerre} P_\invQ^{(0)}(\invQ) \propto \Theta(\invQ)\, \left(\det\invQ\right)^{\beta\Nc/2} \,{\mathrm{e}^{-(\beta/2)\,{\mathop{\mathrm{tr}}\nolimits\left\{ \invQ \right\}}}} \:.$$ An explicit form provided by BFB for the distribution $P_{\WSm_s}^{(0)}(\WSm_s)$ of $\WSm_{s}$ then follows from the above by making use of $\mathrm{D}\invQ=(\det\WSm_s)^{-2-\beta(\Nc-1)}\,\mathrm{D}\WSm_s$. Wigner-Smith matrix distribution for non-ideal contacts {#sec:WSMdistribution} ======================================================= Uniform couplings {#subsec:WSMdistrib1} ----------------- We consider first a simple model of tunable contacts where all the channels are equally coupled and characterised by the same transmission coefficient $\coupl=4\fss/(1+\fss)^2$, where the coupling constant $\fss>0$ is defined in . The case of perfect coupling hence corresponds to $\fss=1$. 
In view of the resonance representation , it is clear that the model with arbitrary coupling can be mapped to that with perfect one by performing the substitution $\Wmat\longrightarrow \sqrt{\fss}\,\Wmat$. Note that the results should depend on $\fss$ only through the transmission coefficient $T$, thus implying a symmetry $\fss\leftrightarrow1/\fss$. Such a symmetry can be understood from representation and the known invariance of the Cauchy distribution under $\Kmat\leftrightarrow\Kmat^{-1}$. Therefore, it will be sufficient to consider $0<\fss{\leqslant}1$. Keeping the notation $\Kmat$ for the reaction matrix at perfect coupling, distributed according to the Cauchy distribution , we rewrite as follows $$\WSm = -2\fss\, ({\mathbf{1}}_\Nc-\I\,\fss\,\Kmat)^{-1}\, {\frac{\partial \Kmat}{\partial \varepsilon}}\, ({\mathbf{1}}_\Nc+\I\,\fss\,\Kmat)^{-1} \,.$$ Denoting the symmetrised Wigner-Smith matrix at perfect coupling by $\WSm_{s0}$, we have $$\label{eq:RelationQsQs0} \WSm_s= A\, \WSm_{s0}\, A\,, \qquad A = \sqrt{\fss}\,({\mathbf{1}}_\Nc+\fss^2\Kmat^2)^{-1/2}\,({\mathbf{1}}_\Nc+\Kmat^2)^{1/2} \:,$$ and note also that $A=A^\dagger$. The matrix $\WSm_{s0}$ is distributed according to . Therefore, the required distribution of $\invQ =\WSm_s^{-1} = A^{-1} \invQ_0 A^{-1}$ can then be rewritten in terms of two integrals over Hermitian matrices from the Cauchy and Laguerre ensembles: $$\begin{aligned} &\hspace{-2cm} P_\invQ(\invQ) = {\left\langle \delta\left( \invQ - A^{-1} \invQ_0 A^{-1} \right) \right\rangle}_{\Kmat,\,\invQ_0} \\ \nonumber &\hspace{-2.5cm} \propto \int\mathrm{D}\Kmat\,\det({\mathbf{1}}_\Nc+\Kmat^2)^{-1-\beta(\Nc-1)/2} \int_{\invQ_0>0}\mathrm{D}\invQ_0\,(\det \invQ_0)^{\beta\Nc/2}\,{\mathrm{e}^{-(\beta/2){\mathop{\mathrm{tr}}\nolimits\left\{ \invQ_0 \right\}}}} \delta\left( \invQ - A^{-1} \invQ_0 A^{-1} \right) \:,\end{aligned}$$ where the second integral runs over Hermitian matrices with positive eigenvalues. We can eliminate one matrix integral by using the general expression of the Jacobian [@Mat97] $$\label{eq:UsefulJacobian} \mathrm{D}\invQ_0=\mathrm{D}Y\,\big[\det(A^\dagger A)\big]^{1+\beta(\Nc-1)/2} \hspace{1cm}\mbox{for } \invQ_0=A^\dagger YA \:,$$ where $A$ must be real for $\beta=1$. We finally obtain the representation $$\label{eq:DistributionZGeneral} \hspace{-2.5cm} P_\invQ(\invQ) \propto \Theta(\invQ)\, ( \det\invQ )^{\beta\Nc/2} \int\mathrm{D}\Kmat\, \frac{\det({\mathbf{1}}_\Nc+\Kmat^2)^{\beta\Nc/2}} {\det({\mathbf{1}}_\Nc+\fss^2\Kmat^2)^{1-\frac{\beta}{2}+\beta\Nc}}\, \exp\left( -\frac{\beta}{2}\fss\,{\mathop{\mathrm{tr}}\nolimits\left\{ \frac{{\mathbf{1}}_\Nc+\Kmat^2}{{\mathbf{1}}_\Nc+\fss^2\Kmat^2}\invQ \right\}} \right)$$ where the integration is over the set of Hermitian matrices with real ($\beta=1$), complex ($\beta=2$) or quaternionic ($\beta=4$) entries. Setting $\fss=1$ (perfect coupling) we obviously recover the Laguerre distribution . We note that in the unitary case ($\beta=2$) one can use the invariance under unitary transformations to show that the distribution of the Wigner-Smith matrix $\WSm$ is the same as the distribution of the symmetrised matrix $\WSm_s$ [@BroFraBee99]. However, this is not the case in the orthogonal and symplectic cases. It is tempting to perform a similar calculation as above for the $\WSm$ matrix, starting from $\WSm = B\, \WSm_{s0}\, B^\dagger$ with $ B = \sqrt{\fss}\,({\mathbf{1}}_\Nc-\I\,\fss\,\Kmat)^{-1}\,({\mathbf{1}}_\Nc+\Kmat^2)^{1/2} $. 
This shows that the analysis done for $\WSm_s$ cannot be reproduced for $\WSm$ in the orthogonal case ($\beta=1$) because it is not clear that the change of variable $\invQ_0=B^\dagger YB$ is compatible with the constraints $\invQ_0=\invQ_0^\mathrm{T}$ and $\invQ=\invQ^\mathrm{T}$, since $\Kmat=\Kmat^\mathrm{T}$ (besides, $B$ is not real for $\beta=1$, hence cannot be used). Joint distribution of the eigenvalues in the unitary case ($\beta=2$) --------------------------------------------------------------------- In the unitary case, the joint distribution of eigenvalues can be deduced from by an integration over the unitary group. We decompose the matrices as $\invQ=V\,A\,V^\dagger$ and $\fss\frac{1+\Kmat^2}{1+\fss^2\Kmat^2}=W\,B\,W^\dagger$ where $V$ and $W$ are two unitary matrices and $A=\mathrm{diag}(\einvQ_1,\cdots,\einvQ_\Nc)$ and $B=\fss\,\mathrm{diag}(\frac{1+k_1^2}{1+\fss^2k_1^2},\cdots,\frac{1+k_\Nc^2}{1+\fss^2k_\Nc^2})$. We have $$\mathrm{D}\invQ\,P_\invQ(\invQ) = \mathrm{D}V \, \D \einvQ_1\cdots\D \einvQ_\Nc\, P(\einvQ_1,\cdots,\einvQ_\Nc)\,\Delta_\Nc(\einvQ)^2$$ where $\Delta_\Nc(\einvQ)=\prod_{i<j}(\einvQ_i-\einvQ_j)$ is the Vandermonde and $\mathrm{D}V$ the Haar measure. A similar decomposition holds for $\mathrm{D}\Kmat$, thus $$\begin{aligned} \hspace{-2cm} P(\einvQ_1,\cdots,\einvQ_\Nc) \propto & \Delta_\Nc(\einvQ)^2 \prod_n {\theta_\mathrm{H}}(\einvQ_n)\,\einvQ_n^\Nc \, \int_{\mathbb{R}^\Nc}\D k_1\cdots\D k_\Nc\, \Delta_\Nc(k)^2 \prod_n \frac{(1+k_n^2)^{\Nc}}{(1+\fss^2k_n^2)^{2\Nc}} \nonumber\\ &\hspace{3cm} \times \int_{\mathrm{U}(\Nc)}\mathrm{D}V \int_{\mathrm{U}(\Nc)}\mathrm{D}W {\mathrm{e}^{-{\mathop{\mathrm{tr}}\nolimits\left\{ A\,V^\dagger W\,B\,W^\dagger V \right\}} }}\end{aligned}$$ where ${\theta_\mathrm{H}}(\einvQ)$ is the usual Heaviside function. Using Harish-Chandra-Itzykson-Zuber integral (see \[app:HCIZ\]), we obtain $$\begin{aligned} \label{eq:PzArbitraryCoupling} \hspace{-2cm} P(\einvQ_1,\cdots,\einvQ_\Nc) \propto \Delta_\Nc(\einvQ) \,\prod_n {\theta_\mathrm{H}}(\einvQ_n)\,\einvQ_n^\Nc \, \int_{\mathbb{R}^\Nc}\D k_1\cdots\D k_\Nc\, \frac{\Delta_\Nc(k)^2}{\Delta_\Nc\left(\fss\,\frac{1+k^2}{1+\fss^2k^2}\right)} \prod_n \frac{(1+k_n^2)^{\Nc}}{(1+\fss^2k_n^2)^{2\Nc}} \nonumber\\ \hspace{6cm} \times \det\left[ \exp\left( -\fss\,\frac{1+k_i^2}{1+\fss^2k_i^2}\,\einvQ_j \right) \right]\end{aligned}$$ which will be used in Section \[sec:characfct\]. [^1] Channels with different coupling parameters {#subsec:DifferentCouplings} -------------------------------------------- It is clear from the above discussion how to extend the obtained results to a general case of arbitrary and nonequal channel couplings. Exploiting representation again, we can now substitute the reaction matrix $\Kmat$ at perfect couplings by $$\Kmat \longrightarrow \mathcal{U}_C^\dagger \, C \, \Kmat \, C\, \mathcal{U}_C \,, \qquad C=\mathrm{diag}(\sqrt{\fss_1},\cdots,\sqrt{\fss_\Nc})\,,$$ where $\mathcal{U}_C$ is a unitary matrix and $\fss_a$ correspond to different transmission coefficients . Following the same lines as in section \[subsec:WSMdistrib1\], we have $\WSm_s=A\,\WSm_{s0}\,A^\dagger$ with $$A = \mathcal{U}_C^\dagger\, \left( {\mathbf{1}}_\Nc + \left(C\,\Kmat\,C\right)^2 \right)^{-1/2}\, C\, \left( {\mathbf{1}}_\Nc+\Kmat^2 \right)^{1/2} \:.$$ (We have used $\big[\mathcal{U}_C^\dagger M\mathcal{U}_C\big]^{-1/2}=\mathcal{U}_C^\dagger M^{-1/2}\mathcal{U}_C$, but note that $(AB)^{-1/2}\neq B^{-1/2}A^{-1/2}$ in general). 
As before, we assume that all energy dependence is carried by the reaction matrix $\Kmat$, while the matrices $\mathcal{U}_C$ and $C$ of the coupling parameters are energy independent. This leads to the following generalisation of equation : $$\begin{aligned} \label{eq:DistribGammaArbitraryCouplings} \hspace{-2cm} P_\invQ(\invQ) \propto \Theta(\invQ)\, ( \det\invQ )^{\beta\Nc/2} \int\mathrm{D}\Kmat\, \frac{\det({\mathbf{1}}_\Nc+\Kmat^2)^{\beta\Nc/2}} {\det\left({\mathbf{1}}_\Nc+\left(C\Kmat C\right)^2\right)^{1-\frac{\beta}{2}+\beta\Nc}} \\ \hspace{-2cm} \nonumber \times \exp\left( -\frac{\beta}{2} {\mathop{\mathrm{tr}}\nolimits\left\{ \left( {\mathbf{1}}_\Nc + \left(C\,\Kmat\,C\right)^2 \right)^{-1/2}\,C\, \left( {\mathbf{1}}_\Nc + \Kmat^2\right)\, C\, \left( {\mathbf{1}}_\Nc + \left(C\,\Kmat\,C\right)^2 \right)^{-1/2}\, \mathcal{U}_C\,\invQ\,\mathcal{U}_C^\dagger \right\}} \right).\end{aligned}$$ One obviously recovers at $\mathcal{U}_C={\mathbf{1}}_\Nc$ and $C=\sqrt{\fss}\,{\mathbf{1}}_\Nc$. Joint distribution of $\Sm$ and $\WSm_s$ (uniform couplings) {#sec:IntegrationOverCUE} ============================================================ We now derive another instructive integral representation for the distribution $P_\invQ(\invQ)$ in terms of an integral over the unitary group. Our purpose here is not simply technical but also to shed new light on the derivation of Eq. . This second formulation will allow us to obtain more straightforwardly the joint distribution $P(\Sm,\invQ)$ of the matrices $\Sm$ and $\invQ$. It will also be useful for the numerical calculations presented in Section \[sec:Numerics\]. The starting point is to reformulate the model introduced above, according to Brouwer’s construction [@Bro95] of the distribution of the $\Sm$ matrix for tunable couplings. We introduce the $\Nc\times\Nc$ scattering matrix $\Sm_0$ belonging to one of the circular ensembles C$\beta$E, describing the quantum dot for perfect contacts. The non-ideal nature of the contact is then accounted for through the $2\Nc\times2\Nc$ scattering matrix (see Fig. \[fig:QDcouplings\]) $$\Sm_\mathrm{barrier} = \left( \begin{array}{cc} r_b & t_b' \\ t_b & r_b' \end{array} \right)$$ gathering the transmission/reflection through the region between the lead and the dot. ![*Quantum dots coupled to contacts through which the electronic wave is injected. The scattering matrix $\Sm_0$ describes the dynamics of the perfectly coupled quantum dots and the scattering matrix $\Sm_\mathrm{barrier}$ the scattering through each barrier.* []{data-label="fig:QDcouplings"}](QDweaklycoupled "fig:"){height="3.5cm"} ![*Quantum dots coupled to contacts through which the electronic wave is injected. The scattering matrix $\Sm_0$ describes the dynamics of the perfectly coupled quantum dots and the scattering matrix $\Sm_\mathrm{barrier}$ the scattering through each barrier.* []{data-label="fig:QDcouplings"}](QDweaklycoupled2 "fig:"){height="3.5cm"} The scattering matrix --------------------- The matrix ${\Sm_0}$ is such that ${\left\langle \Sm_0\right\rangle}=0$ by construction, while the matrix $\Sm_\mathrm{barrier}$ is assumed to be fixed. The scattering matrix describing the quantum dot with arbitrary couplings is $$\Sm = r_b + t_b' \, \left( \Sm_0^\dagger - r_b' \right)^{-1} \, t_b \:.$$ Because ${\left\langle \Sm_0^n\right\rangle}=0$ for any positive integer $n$, we have ${\left\langle \Sm\right\rangle}=r_b$.
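The statement ${\left\langle \Sm\right\rangle}=r_b$ is easily checked by direct sampling over the circular ensemble. The sketch below (Python/NumPy, unitary case $\beta=2$) uses the uniform barrier parametrisation introduced just below; the value of $\Sbar$, the sample size and the QR-based Haar sampling are implementation choices for this illustration only.

```python
import numpy as np

# Monte-Carlo check that <S> = r_b for a CUE cavity seen through a barrier.
# Uniform barrier (as used just below): r_b = Sbar*1, r_b' = -Sbar^* 1,
# t_b = t_b' = sqrt(1-|Sbar|^2) 1. Parameter values are arbitrary choices.

rng = np.random.default_rng(4)
N, Sbar, samples = 3, 0.6, 20_000
one = np.eye(N)
rb, rbp = Sbar * one, -np.conj(Sbar) * one
tb = np.sqrt(1 - abs(Sbar) ** 2) * one

def haar_unitary(n):
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # phase fix -> Haar measure

Savg = np.zeros((N, N), dtype=complex)
for _ in range(samples):
    S0 = haar_unitary(N)
    Savg += (rb + tb @ np.linalg.inv(S0.conj().T - rbp) @ tb) / samples

print("max |<S> - r_b| :", np.abs(Savg - rb).max())   # decreases as 1/sqrt(samples)
```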
We still consider the case of uniform couplings, when the average ${\left\langle \Sm\right\rangle}\equiv\mathbf{1}_\Nc\,\Sbar$ is proportional to the identity matrix, $$r_b=-\left(r_b'\right)^\dagger=\mathbf{1}_\Nc\,\Sbar \hspace{1cm}\mbox{and}\hspace{1cm} t_b=t_b'=\mathbf{1}_\Nc\,\sqrt{1-|\Sbar|^2} \:,$$ leading to the simpler representation $$\label{eq:TheModel} \Sm = \Big( \Sbar\,\mathbf{1}_\Nc + \Sm_0 \Big)\, \Big( \mathbf{1}_\Nc + \Sbar^*\,\Sm_0 \Big)^{-1} \:.$$ Introducing the transmission probability $\coupl=1-|\Sbar|^2$ of the barrier, we note that the case of perfect couplings, $\coupl=1$, corresponds to $\Sm=\Sm_0$ with ${\left\langle \Sm\right\rangle}\equiv\Sbar=0$. Given these results, we can now obtain the distribution of $\Sm$ at arbitrary coupling by evaluating the Jacobian of transformation . Note that the two scattering matrices have the same eigenvectors. Then establishing a relation between the two measures only requires relating the Vandermonde determinants constructed from their eigenvalues. Using $$\Sm_0=\Big( \Sm - \Sbar\,\mathbf{1}_\Nc \Big)\,\Big( \mathbf{1}_\Nc-\Sbar^*\,\Sm \Big)^{-1} \:,$$ we deduce the following relation between eigenvalues $${\mathrm{e}^{\I\theta_a^{(0)}}} - {\mathrm{e}^{\I\theta_b^{(0)}}} = \frac{1-|\Sbar|^2}{(1-\Sbar^*\,{\mathrm{e}^{\I\theta_a}})(1-\Sbar^*\,{\mathrm{e}^{\I\theta_b}})} \big[ {\mathrm{e}^{\I\theta_b}} - {\mathrm{e}^{\I\theta_a}} \big] \:.$$ As a consequence, the Vandermondes built from the two sets of eigenvalues are related by $$\hspace{-2cm} \Delta_\Nc\big( {\mathrm{e}^{\I\theta_1^{(0)}}},\cdots,{\mathrm{e}^{\I\theta_\Nc^{(0)}}} \big) = \big(1-|\Sbar|^2\big)^{\Nc(\Nc-1)/2} \prod_a\big(1-\Sbar\,{\mathrm{e}^{\I\theta_a}}\big)^{-\Nc+1}\, \Delta_\Nc\big( {\mathrm{e}^{\I\theta_1}},\cdots,{\mathrm{e}^{\I\theta_\Nc}} \big)\,.$$ Using $\D\theta_a^{(0)}=\D\theta_a\,(1-|\Sbar|^2)/|1-\Sbar^*\,{\mathrm{e}^{\I\theta_a}}|^2$, we finally relate the two measures as follows $$\mathrm{D}\Sm_0 = (1-|\Sbar|^2)^{\Nc+\beta\Nc(\Nc-1)/2}\, \mathrm{D}\Sm \,|\det({\mathbf{1}}_\Nc-\Sbar^*\Sm)|^{-2-\beta(\Nc-1)}$$ from which we can read out the distribution $P_{\Sm}(\Sm)$ of $\Sm$, as $P_{\Sm}^{(0)}(\Sm)$ is just uniform. We have thus recovered the Poisson kernel (\[eq:PoissonKernel\]) (reproducing the proof of Ref. [@Bro95]). We now consider the Wigner-Smith time-delay matrix. Assuming as before $\partial_\varepsilon\Sbar=0$, we have $ \partial_\varepsilon\Sm =\left(1-|\Sbar|^2\right)\, \big( {\mathbf{1}}_\Nc + \Sbar^*\Sm_0 \big)^{-1} \, \partial_\varepsilon\Sm_0\, \big( {\mathbf{1}}_\Nc + \Sbar^*\Sm_0 \big)^{-1} $, thus yielding $$\WSm = \left( 1-|\Sbar|^2 \right) \, \left( {\mathbf{1}}_\Nc + \Sbar\Sm_0^\dagger \right)^{-1} \, \WSm_0\, \left( {\mathbf{1}}_\Nc + \Sbar^*\Sm_0 \right)^{-1} \:.$$ The symmetrised Wigner-Smith matrix can again be written as $\WSm_s = A\,\WSm_{s0}\,A$, where $$\label{eq:QA} A = \sqrt{1-|\Sbar|^2} \left( {\mathbf{1}}_\Nc + \Sbar^*\Sm_0 \right) ^{-1/2} \left( {\mathbf{1}}_\Nc + \Sbar\Sm_0^\dagger \right) ^{-1/2}$$ is Hermitian. One can easily check that this expression is equivalent to . It will be useful in what follows to also determine the Jacobian of the transformation $\invQ_0\to\invQ=A^{-1}\invQ_0(A^\dagger)^{-1}$.
Using we obtain $$\mathrm{D}\invQ_0 = (1-|\Sbar|^2)^{\Nc+\beta\Nc(\Nc-1)/2} \,\left|\det\left({\mathbf{1}}_\Nc+\Sbar^*\Sm_0)\right)\right|^{-2-\beta(\Nc-1)}\, \mathrm{D}\invQ$$ which can be re-expressed in terms of $\Sm$, leading to $$\mathrm{D}\invQ_0 = (1-|\Sbar|^2)^{-\Nc-\beta\Nc(\Nc-1)/2}\, \left|\det\left({\mathbf{1}}_\Nc-\Sbar^*\Sm\right)\right|^{2+\beta(\Nc-1)}\, \mathrm{D}\invQ \:.$$ Remarkably, this shows that the measure is invariant, $$\label{eq:ConservationMeasureSinvQ} \mathrm{D}\Sm_0\, \mathrm{D}\invQ_0 = \mathrm{D}\Sm\, \mathrm{D}\invQ$$ It is tempting to regard this equation as a matrix extension of Liouville’s theorem, although further study would be needed to support this statement (e.g., by investigating parametric evolution of the associated matrix flow with regard to coupling changes). Joint distribution of $\Sm$ and $\invQ=\WSm_s^{-1}$ --------------------------------------------------- Our starting point is again the BFB result for ideal contacts , rewritten with the inverse Wigner-Smith matrix $$P^{(0)}_{\Sm,\invQ}(S_0,\invQ_0) \propto \Theta(\invQ_0)\, \left(\det\invQ_0\right)^{\beta\Nc/2} \,{\mathrm{e}^{-(\beta/2)\,{\mathop{\mathrm{tr}}\nolimits\left\{ \invQ_0 \right\}}}}$$ Using the two transformations (\[eq:TheModel\]) and (\[eq:QA\]), and the conservation of the measure , we finally obtain the joint distribution $$\begin{aligned} \label{eq:JointPDFSinvQ} && \hspace{-1.75cm} P_{\Sm,\invQ}(\Sm,\invQ) \propto \Theta(\invQ)\, \left|\det\big({\mathbf{1}}_\Nc-\Sbar^*\Sm\big)\right|^{\beta\Nc}\, \\\nonumber &&\times \big(\det\invQ\big)^{\beta\Nc/2} \, \exp\left[ -\frac{\beta}{2(1-|\Sbar|^2)}\, {\mathop{\mathrm{tr}}\nolimits\left\{ ({\mathbf{1}}_\Nc-\Sbar^*\Sm)({\mathbf{1}}_\Nc-\Sbar\Sm^\dagger)\invQ \right\}} \right]\end{aligned}$$ A similar structure was given in a recent paper [@MarSchBee16], including the other four (“BdG”) symmetry classes relevant for scattering in an Andreev billiard. The case of the three chiral symmetry classes remains an open problem. Distribution of the inverse of the Wigner-Smith matrix ------------------------------------------------------ The distribution of the matrix $\invQ = \WSm_s^{-1}$ can be deduced by integrating over $\Sm$. In order to make the connection with the representation more clear, we prefer to write a matrix integral over the scattering matrix of the cavity with perfect contacts: $$P_\invQ(\invQ) = {\left\langle \delta\left( \invQ - (A^{-1})^\dagger \invQ_0 A^{-1} \right) \right\rangle}_{\Sm_0,\,\invQ_0} \:,$$ where $\Sm_0$ belongs to the circular ensemble and $\invQ_0=\WSm_{s0}^{-1}$ is uncorrelated from the scattering matrix and distributed according to . Using , we finally obtain $$\begin{aligned} \label{eq:DistribWSoverUnitaryGroup} \hspace{-1cm} P_\invQ(\invQ) \propto \Theta(\invQ)\, &( \det\invQ )^{\beta\Nc/2} \int_{\mathrm{C\beta E}} \mathrm{D}\Sm_0\, \left|\det\left({\mathbf{1}}_\Nc+\Sbar^*\Sm_0\right)\right|^{\beta-2-2\beta\Nc} \\ \nonumber &\times \exp\left[ -\frac{\beta}{2}\,(1-|\Sbar|^2) {\mathop{\mathrm{tr}}\nolimits\left\{ ( {\mathbf{1}}_\Nc + \Sbar^*\Sm_0)^{-1}( {\mathbf{1}}_\Nc + \Sbar\Sm_0^\dagger)^{-1} \invQ \right\}} \right] \:,\end{aligned}$$ where the integral runs over the circular ensemble. Note that it is also possible to go more directly from to by using and . The generalisation of this result to the case of channels with different couplings, as it has been done in Section \[subsec:DifferentCouplings\], is also possible. 
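The mappings above also provide a simple sampling algorithm for the pair $(\Sm,\WSm_s)$ at arbitrary uniform coupling, in the spirit of the numerics of Section \[sec:Numerics\]. A minimal $\beta=2$ sketch is given below (Python/NumPy/SciPy); the realisation of the Laguerre weight (\[eq:Laguerre\]) by a complex Wishart block, the convention $\Ht=1$ and the parameter values are our own bookkeeping. The estimated mean and variance of $\Wt$ can be compared with ${\left\langle \Wt\right\rangle}=\Ht/\Nc$ and Eq. (\[eq:FyodorovSommers1997Eq195\]); convergence is slow because of the heavy tails of the distribution.

```python
import numpy as np
from scipy.linalg import sqrtm

# Sampling sketch for (S, Q_s) at uniform coupling, beta = 2 (t_H = 1):
# draw S0 from the CUE and Gamma_0 = Q_s0^{-1} from the Laguerre weight
# (complex Wishart block, our bookkeeping), then map with
# A = sqrt(1-|Sbar|^2) (1+Sbar^* S0)^{-1/2} (1+Sbar S0^dag)^{-1/2}.

rng = np.random.default_rng(5)
N, T, samples = 4, 0.5, 20_000
Sbar, one = np.sqrt(1 - T), np.eye(N)          # real Sbar, transmission T

def haar_unitary(n):
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

tw = np.empty(samples)
for s in range(samples):
    S0 = haar_unitary(N)
    X = (rng.standard_normal((N, 2 * N)) + 1j * rng.standard_normal((N, 2 * N))) / np.sqrt(2)
    Gamma0 = X @ X.conj().T                                    # Q_s0^{-1}
    A = np.sqrt(T) * (np.linalg.inv(sqrtm(one + np.conj(Sbar) * S0))
                      @ np.linalg.inv(sqrtm(one + Sbar * S0.conj().T)))
    Qs = A @ np.linalg.inv(Gamma0) @ A.conj().T                # Q_s = A Q_s0 A
    tw[s] = np.trace(Qs).real / N                              # Wigner time delay

var_exact = 2 * (1 - (1 - T) ** (N + 1)) / (T**2 * N**2 * (N**2 - 1))
print("mean tau_W:", tw.mean(), "  expected t_H/N =", 1 / N)
print("var  tau_W:", tw.var(), "  expected (beta=2):", var_exact)
```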
Characteristic function of the Wigner time delay {#sec:characfct} ================================================ As is already mentioned in the introduction, the trace of the Wigner-Smith matrix $$\Wt = \frac{1}{\Nc}\sum_a\tau_a =\frac{1}{\Nc}{\mathop{\mathrm{tr}}\nolimits\left\{ \invQ^{-1} \right\}}\,,$$ i.e. the Wigner time delay, is of special interest due to its practical applications. The distribution and moments of $\Wt$ were studied in much detail for perfect coupling $\coupl=1$ [@GopMelBut96; @FyoSom97; @SavFyoSom01; @MezSim13; @TexMaj13]. Our aim now is to determine the distribution ${\mathscr{P}_{\Nc,\beta}}(\tau)$ of the Wigner time delay in the weak coupling limit $\coupl\approx 4\fss\to0$. We find it convenient to introduce the rescaled variable $t = 2\Wt/(\beta\fss)$, with the rescaled distribution being $$\label{eq:DefRescaledDistWTD} {\mathscr{Q}_{\Nc,\beta}}(t)=(\beta\fss/2)\,{\mathscr{P}_{\Nc,\beta}}(\tau=(\beta\fss/2)\,t) $$ (we will see in Section \[sec:Numerics\] and \[app:PartialProper\] that a more natural scaling variable is $|1/\fss-\fss|\,\tau$ rather than $\tau/\fss$, however this makes no difference in the weak coupling limit). We introduce the characteristic function for the Wigner time delay $$\label{eq:DefZN} \mathcal{Z}_{\Nc,\beta}(p) = \mathcal{Z}_{\Nc,\beta}(0)\, {\left\langle {\mathrm{e}^{ -(2p/\beta\fss){\mathop{\mathrm{tr}}\nolimits\left\{ \invQ^{-1} \right\}} }} \right\rangle}$$ (the normalisation $\mathcal{Z}_{\Nc,\beta}(0)$ will be chosen for convenience below). The characteristic function is related to the distribution of the rescaled time delay as $$\frac{\mathcal{Z}_{\Nc,\beta}(p)}{\mathcal{Z}_{\Nc,\beta}(0)} = \int_0^\infty\D\rt\, {\mathscr{Q}_{\Nc,\beta}}(t)\, {\mathrm{e}^{-\Nc p\rt}} \:.$$ In the following, we mostly consider the unitary case $\beta=2$. The last subsection will discuss the large deviation for arbitrary symmetry class. The characteristic function can be written as a matrix integral with . 
Using expression for the joint distribution of the eigenvalues $\{\einvQ_1,\cdots,\einvQ_\Nc\}$, we get $$\begin{aligned} \hspace{-2cm} \mathcal{Z}_{\Nc,2}(p) \propto \int_{\mathbb{R}_+^\Nc}\D \einvQ_1\cdots\D \einvQ_\Nc\, &\Delta_\Nc(\einvQ) \, \prod_n \einvQ_n^\Nc \,{\mathrm{e}^{-p/(\fss \einvQ_n)}} \int_{\mathbb{R}^\Nc}\D k_1\cdots\D k_\Nc\, \frac{\Delta_\Nc(k)^2}{\Delta_\Nc\left(\fss\,\frac{1+k^2}{1+\fss^2k^2}\right)} \nonumber\\ &\times \prod_n \frac{(1+k_n^2)^{\Nc}}{(1+\fss^2k_n^2)^{2\Nc}} \det\left[ \exp\left( -\fss\,\frac{1+k_i^2}{1+\fss^2k_i^2}\,\einvQ_j \right) \right]\end{aligned}$$ Integrals over the $\einvQ_k$ can be performed thanks to the Andreief formula (see \[app:Andreief\]) $$\begin{aligned} \hspace{-1cm} \int_{\mathbb{R}_+^\Nc} \prod_k\left(\D \einvQ_k\, \einvQ_k^\Nc \,{\mathrm{e}^{-p/(\fss \einvQ_k)}} \right) \det\left[ \einvQ_k^{i-1} \right] \, \det\left[ \exp\left( -\fss\,\einvQ_k\,\frac{1+k_j^2}{1+\fss^2k_j^2} \right) \right] \nonumber\\ =\Nc!\, \det\left[ \int_0^\infty\D \einvQ\, \einvQ^\Nc \,{\mathrm{e}^{-p/(\fss \einvQ)}} \einvQ^{i-1} \, {\mathrm{e}^{-\fss\,\einvQ\,(1+k_j^2)/(1+\fss^2k_j^2)}} \right]\end{aligned}$$ leading to $$\begin{aligned} \hspace{-2cm} \mathcal{Z}_{\Nc,2}(p) =\int_{\mathbb{R}^\Nc}\D k_1\cdots\D k_\Nc\, \frac{\Delta_\Nc(k)^2}{\Delta_\Nc(\frac{1+k^2}{1+\fss^2k^2})}\, \prod_n \frac{(1+k_n^2)^{\Nc}}{(1+\fss^2k_n^2)^{2\Nc}}\, \nonumber\\ \hspace{2cm}\times \det\left[ \left( p\, \frac{1+\fss^2k_j^2}{1+k_j^2} \right)^{\frac{\Nc+i}{2}} K_{\Nc+i} \left(2\sqrt{p\frac{1+k_j^2}{1+\fss^2k_j^2}}\right) \right] \:.\end{aligned}$$ This expression can be simplified further by noticing the obvious relation $$\label{eq:UsefulRelation} \prod_j\xi_j^{2\Nc}\,\det\left[ \xi_j^{-\Nc-i} \right] = \Delta_\Nc(\xi) \:, \qquad \xi_j \equiv \frac{1+k_j^2}{1+\fss^2k_j^2} \,.$$ Collecting everything, we arrive at the final result $$\begin{aligned} \label{eq:MainResult1} \hspace{-2.5cm} \mathcal{Z}_{\Nc,2}(p) =\int_{\mathbb{R}^\Nc}\D k_1\cdots\D k_\Nc\, \frac{ \Delta_\Nc(k)^2 }{ \prod_n (1+k_n^2)^{\Nc} } \, \frac{ \det\left[ \left( p\, \frac{1+\fss^2k_j^2}{1+k_j^2} \right)^{\frac{\Nc+i}{2}} K_{\Nc+i} \left(2\sqrt{p\frac{1+k_j^2}{1+\fss^2k_j^2}}\right) \right] }{ \det\left[ \left( \frac{1+k_j^2}{1+\fss^2k_j^2} \right)^{-\Nc-i} \right] } \:.\end{aligned}$$ #### Normalisation constant. Using the asymptotics of the MacDonald function, $K_\nu(x)\simeq\big[\Gamma(\nu)/2\big]\,(2/x)^\nu$ for $x\to0$, we get the normalisation constant in the form $$\mathcal{Z}_{\Nc,2}(0) =2^{-\Nc}\prod_{n=1}^\Nc\Gamma(\Nc+n) \int_{\mathbb{R}^\Nc}\D k_1\cdots\D k_\Nc\, \Delta_\Nc(k)^2\prod_n(1+k_n^2)^{-\Nc} \:,$$ which is surprisingly independent of $\fss$. We recognize the normalisation of the Cauchy ensemble, Eq.  of \[app:tmi\], hence we get $$ \mathcal{Z}_{\Nc,2}(0) =\pi^\Nc 2^{-\Nc^2}\Nc! \prod_{n=1}^\Nc\Gamma(\Nc+n) \:.$$ Perfect coupling ---------------- Eq.  shows that the limit of perfect coupling, $\fss\to1$, is singular as the determinant in the denominator vanishes.
For this reason it is easier to start from the definition with and apply the Andreief formula with $$\mathcal{Z}_{\Nc,2}(p) \propto \int_{\mathbb{R}_+^\Nc}\D\einvQ_1\cdots\D\einvQ_\Nc\, \Delta_\Nc(\einvQ)^2\,\prod_k\left( \einvQ_k^\Nc \, {\mathrm{e}^{-\einvQ_k-p/\einvQ_k}} \right)$$ leading to $$\mathcal{Z}_{\Nc,2}(p) \propto \det\left[ p ^{\frac{\Nc+i+j-1}{2}} \, K_{\Nc+i+j-1}(2\sqrt{p}) \right] \hspace{1cm}\mbox{for } \fss=1 \:.$$ For the two other symmetry classes ($\beta=1$, $4$), one can also obtain a certain Pfaffian representation (analogous to the one derived in a different context in Ref. [@GraTex16], cf. supplementary material to this paper as well as [@Gra18]). Limiting behaviours of the characteristic function in the weak coupling limit {#subsec:UniversalLimitWC} ----------------------------------------------------------------------------- The form is appropriate for considering the weak coupling limit $\fss\to0$ : the characteristic function simplifies as $$\label{eq:CharacFctWC} \hspace{-1cm} \mathcal{Z}_{\Nc,2}(p) =\int_{\mathbb{R}^\Nc}\D k_1\cdots\D k_\Nc\, \Delta_\Nc(k)^2 \: \frac{ \det\left[ \left( \frac{p}{1+k_j^2} \right)^{\frac{\Nc+i}{2}} K_{\Nc+i} \left(2\sqrt{p(1+k_j^2)}\right) \right] }{ \det\left[ (1+k_j^2)^{-i} \right] } \:.$$ The existence of a finite limit for $\fss\to0$ shows that the distribution ${\mathscr{P}_{\Nc,\beta}}(\tau)$ admits a universal form (independent of the coupling) after proper rescaling $\tau\sim\fss$, i.e. the rescaled distribution ${\mathscr{Q}_{\Nc,\beta}}(t)$ has a limit. A similar observation is made for the marginal distributions of both partial and proper time delays in \[sec:Partial\] for arbitrary symmetry class. ### Limit $p\to\infty$. In the limit $p\to\infty$, using the large-argument asymptotics of the MacDonald function, $K_\nu(x)\simeq\sqrt{\pi/(2x)}\,{\mathrm{e}^{-x}}$ for $x\to\infty$, the expression simplifies to $$\label{eq:PinftyStep1} \hspace{-2cm} \mathcal{Z}_{\Nc,2}(p) \simeq \left(\frac{\pi}{4}\right)^{\Nc/2} p^{\frac{3\Nc^2}{4}} \int\D k_1\cdots\D k_\Nc\, \Delta_\Nc(k)^2 \, \prod_n\left[ \frac{ {\mathrm{e}^{-2\sqrt{p(1+k_n^2)}}} }{(1+k_n^2)^{\frac\Nc2+\frac14}} \right] \frac{ \det\left[ (1+k_j^2)^{-i/2} \right] }{ \det\left[ (1+k_j^2)^{-i} \right] } \:.$$ The exponentials constrain the variables to the region $k_n^2\lesssim1/\sqrt{p}\to0$, thus we can write ${\mathrm{e}^{-2\sqrt{p(1+k_n^2)}}}\simeq{\mathrm{e}^{-2\sqrt{p}-\sqrt{p}k_n^2}}$ and expand the remaining functions. We now analyse the ratio of the two determinants in the limit $k_j\to0$. For this purpose, we use the following convenient relation $$\label{eq:UsefulRelationWithDet} \det\left[ \phi_i(k_j) \right]_{1{\leqslant}i,\, j{\leqslant}N} \underset{k_j\to0}{\simeq} \Delta_N(k) \, \det\left[ \phi_i^{(n-1)}(0)/(n-1)! \right]_{1{\leqslant}i,\, n{\leqslant}N}$$ where $\{\phi_i(k)\}$ is a set of regular functions (differentiable at least $N$ times). The proof of the relation is simple: replacing the functions by a Taylor expansion, we notice that the lowest order in $k_j$’s is provided by the first $N$ terms of the series $$\det\left[ \sum_{n=1}^{N}\frac{\phi_i^{(n-1)}(0)}{(n-1)!} \, k_j^{n-1} \right]_{1{\leqslant}i,\, j{\leqslant}N}\,.$$ This is readily recognized as the determinant of a product of matrices, yielding . We apply to the ratio of determinants in . The corresponding Taylor expansion is given by $(1+x)^{-\alpha}=\sum_{n=0}^\infty\frac{(\alpha)_n}{n!}(-x)^n$, where $(\alpha)_n=\Gamma(\alpha+n)/\Gamma(\alpha)$ is the Pochhammer symbol.
Thus the ratio of the two determinants has a finite limit $$\frac{ \det\left[ (1+k_j^2)^{-i/2} \right] }{ \det\left[ (1+k_j^2)^{-i} \right] } \underset{ k_j\to0 }{\longrightarrow} \mathscr{B}_\Nc = \frac{ \det\left[ (-1)^{j-1}\,\frac{\Gamma(i/2+j-1)}{\Gamma(i/2)\Gamma(j)} \right] } { \det\left[ (-1)^{j-1}\,\frac{\Gamma(i+j-1)}{\Gamma(i)\Gamma(j)} \right] }\,,$$ which after further simplifications reduces to $$\mathscr{B}_\Nc = \frac{ \det\left[ \Gamma(i/2+j-1) \right] }{ \det\left[ \Gamma(i+j-1) \right] } \,\prod_{n=1}^\Nc\frac{\Gamma(n)}{\Gamma(n/2)}\,.$$ We can now write $$\mathcal{Z}_{\Nc,2}(p) \simeq \left(\frac{\pi}{4}\right)^{\Nc/2}\mathscr{B}_\Nc\, p^{\frac{3\Nc^2}{4}} {\mathrm{e}^{-2\Nc\sqrt{p}}} \int\D k_1\cdots\D k_\Nc\,\Delta_\Nc(k)^2 \prod_n {\mathrm{e}^{-\sqrt{p}k_n^2}}\,.$$ Using $\Delta_N(\alpha\,x)=\alpha^{N(N-1)/2}\,\Delta_N(x)$ and the integral , we finally obtain $$\label{eq:CaraFctWTDBeta2} \mathcal{Z}_{\Nc,2}(p) \underset{p\to\infty}{\simeq} \mathscr{A}_\Nc\, p^{\frac{\Nc^2}{2}} \, {\mathrm{e}^{-2\Nc\sqrt{p}}}\,,$$ where $\mathscr{A}_\Nc = 2^{-\frac{\Nc^2}{2}}\left(\frac{\pi}{4}\right)^{\Nc/2} \mathscr{B}_\Nc \int\D x_1\cdots\D x_\Nc\,\Delta_\Nc(x)^2\prod_n {\mathrm{e}^{-x_n^2/2}}$ can also be written as $$\begin{aligned} \mathscr{A}_\Nc = 2^{-\frac{\Nc}{2}(\Nc+1)}\pi^{\Nc}\,G(\Nc+2)\,\mathscr{B}_\Nc\end{aligned}$$ in terms of the Barnes $G$-function. Correspondingly, the (rescaled) Wigner time delay distribution reads $${\mathscr{Q}_{\Nc,2}}(\rt) \underset{t\to0}{\simeq} \mathscr{C}_\Nc\, \rt^{-\Nc^2-3/2}\,{\mathrm{e}^{-\Nc/t}}\,, \qquad \mathscr{C}_\Nc =\sqrt{\frac{\Nc}{\pi}} \frac{\mathscr{A}_\Nc}{\mathcal{Z}_{\Nc,2}(0)} \:,$$ thus yielding the asymptotic behaviour $${\mathscr{P}_{\Nc,2}}(\tau) \sim \coupl^{-1}\,(\coupl/\tau)^{\Nc^2+3/2} \,{\mathrm{e}^{-\Nc\coupl/(4\tau)}} \qquad\mbox{for } \tau\ll\coupl \:.$$ ### Limit $p\to0$. The limit of small $p$ is more delicate. First, it must be recognised that the dominant contribution to the multiple integral comes from the expansion of the MacDonald functions within a window $|k_n|\lesssim1/\sqrt{p}$ : $$\mathcal{Z}_{\Nc,2}(p) = \frac{\prod_n\Gamma(\Nc+n)}{2^{\Nc}} \int\D k_1\cdots\D k_\Nc\, \frac{ \Delta_\Nc(k)^2 }{ \prod_n (1+k_n^2)^{\Nc} } \, \frac{ \det\left[ \frac{1}{(1+k_j^2)^i} \left( 1 - p\, \frac{1+k_j^2}{\Nc+i-1} + \mathcal{O}(p^2) \right) \right] }{\det\left[ (1+k_j^2)^{-i} \right]}$$ Now we use that the $p$-dependent determinant here can be further written as $$\det(A - p\,B) \underset{p\to0}{\simeq} \det(A) \left( 1 - p\, {\mathop{\mathrm{tr}}\nolimits\left\{ A^{-1}B \right\}} \right)\,,$$ where the matrices $A$ and $B$ are defined by $$A_{ij}=(1+k_j^2)^{-i}\equiv(X_j)^{i} \quad\mbox{and}\quad B_{ij}=\frac{(1+k_j^2)^{-i+1}}{\Nc+i-1}\equiv\frac{(X_j)^{i-1}}{\Nc+i-1} \:.$$ Making use of the relation $${\mathop{\mathrm{tr}}\nolimits\left\{ A^{-1}B \right\}} = \frac{1}{\Nc} \sum_n X_n^{-1} = \frac{1}{\Nc} \sum_n (1+k_n^2) \:,$$ the leading order term of the characteristic function is then found as follows $$\begin{aligned} \hspace{-2cm} \mathcal{Z}_{\Nc,2}(0) - \mathcal{Z}_{\Nc,2}(p) \sim p \int_{|k_i|\lesssim1/\sqrt{p}}\D k_1\cdots\D k_\Nc\, \Delta_\Nc(k)^2 \prod_{n}(1+k_n^2)^{-\Nc} \frac{1}{\Nc}\sum_n (1+k_n^2)\end{aligned}$$ where we have used once again $\det\big[(1+k_j^2)^{-i}\big]=\Delta_\Nc(k^2)\, \prod_{n}(1+k_n^2)^{-\Nc}$. By symmetry we can make the replacement $(1/\Nc)\sum_n(1+k_n^2)\to 1+ k_\Nc^2$.
As $p\to0$, the dominant contribution comes from the term $$\Delta_\Nc(k)^2 \simeq k_\Nc^{2(\Nc-1)} \Delta_{\Nc-1}(k)^2 + \mathcal{O}( k_\Nc^{2\Nc-3} )\,.$$ By inspecting the integral, we can write $$\mathcal{Z}_{\Nc,2}(0) - \mathcal{Z}_{\Nc,2}(p) \sim p \underbrace{ \int_{|k_i|\lesssim1/\sqrt{p}}\D k_1 \cdots\D k_{\Nc-1}\, \frac{ \Delta_{\Nc-1}(k)^2}{ \prod_{n=1}^{\Nc-1}(1+k_n^2)^{\Nc} } }_{\to \mathrm{const.} \ \mathrm{as}\ p\to0} \underbrace{ \int_{|k_\Nc|\lesssim1/\sqrt{p}}\D k_\Nc \, \frac{ k_\Nc^{2(\Nc-1)}\,(1+k_\Nc^2)}{(1+k_\Nc^2)^\Nc} }_{\sim 1/\sqrt{p} \ \mathrm{as}\ p\to0}$$ and, therefore, conclude that $$\label{eq:CharacteristicFctWTDlimit1} \frac{\mathcal{Z}_{\Nc,2}(p)}{\mathcal{Z}_{\Nc,2}(0)} \underset{p\to0}{\simeq} 1 - B_\Nc\,\sqrt{p} \:,$$ where $B_\Nc$ is some constant. This behaviour can now be related to the distribution by using a Tauberian theorem. Assuming the tail ${\mathscr{Q}_{\Nc,2}}(t)\simeq c\,\rt^{-3/2}$, we have $$\begin{aligned} \hspace{-2cm} \int_0^\infty\D\rt\, {\mathscr{Q}_{\Nc,2}}(t)\, {\mathrm{e}^{-\Nc p\rt}} =1 - \int_0^\infty\D\rt\, {\mathscr{Q}_{\Nc,2}}(t)\, (1- {\mathrm{e}^{-\Nc p\rt}} ) \\ \nonumber \hspace{-2cm} \underset{p\to0}{\simeq } 1 - c \int_0^\infty\frac{\D\rt}{\rt^{3/2}}\, (1- {\mathrm{e}^{-\Nc p\rt}} ) = 1 - 2\,c \,\Nc\,p \int_0^\infty\D\rt\,\rt^{-1/2}\, {\mathrm{e}^{-\Nc p\rt}} = 1 - 2\,c \,\sqrt{\pi\,\Nc\,p }\end{aligned}$$ Thus $c=B_\Nc/(2\sqrt{\pi\Nc})$. A precise determination of $B_\Nc$ would be interesting, in particular in order to clarify the precise scaling with $\Nc$ of the typical values of the random variable $\Wt$, however it goes beyond the present analysis. We conclude that in the limit of small transmission, $\coupl\ll1$, the Wigner time delay distribution shows the universal $\tau^{-3/2}$ behaviour $$\label{eq:UniversalTailDistWTD} {\mathscr{P}_{\Nc,2}}(\tau) \sim \coupl^{-1}\,(\coupl/\tau)^{3/2} \hspace{1cm}\mbox{for } \coupl \ll \tau \ll 1/\coupl \:.$$ In the next section, we will see that the upper cutoff also carries a $\Nc$-dependence. This behaviour coincides with the one obtained by a heuristic argument, see Eq. , which is based on the picture of isolated resonances. It is worth stressing that the order of the limits $p\to0$ and $\fss\to0$ is important. For finite coupling the first moments are finite. Using that the second moment is [@LehSavSokSom95; @FyoSom97] ${\left\langle \Wt^2\right\rangle}\simeq1/(2\fss\Nc^3)$, cf. Eq. , one expects $$\frac{\mathcal{Z}_{\Nc,\beta}(p)}{\mathcal{Z}_{\Nc,\beta}(0)} \underset{p\to0}{\simeq} 1 - \frac{2p}{\beta\fss} + \frac{p^2}{\beta^2\fss^3\Nc} +\cdots$$ for small but finite $\fss$ (and large $\Nc$). The behaviour is obtained by sending first $\fss\to0$ and *then* $p\to0$. For finite $\fss$ the non-analyticity of the characteristic function appears at higher order in $p$, corresponding to the divergence of the moments of high order, ${\langle \Wt^k\rangle}=\infty$ for $k{\geqslant}1+\beta\Nc/2$. For finite $\coupl$, the distribution ${\mathscr{P}_{\Nc,\beta}}(\tau)$ should be in correspondence with the marginal distribution of the proper (or partial) times in the limit $\tau\to\infty$, with ${w_{\Nc,\beta}}(\tau)\sim\tau^{-2-\beta\Nc/2}$, as we expect that one proper time dominates the sum $\Wt=(1/\Nc)\sum_a\tau_a$. Inspection of the matrix distribution shows that if one resonance is much more narrow than all others, $\gamma_1\to0$, we expect the vanishing of the density as $P_\invQ(\Gamma)\sim(\det\invQ)^{\beta\Nc/2}\sim\gamma_1^{\beta\Nc/2}$. 
Correspondingly, the distribution of $\Wt=(1/\Nc)\sum_a\gamma_a^{-1}\simeq1/(\Nc\,\gamma_1)$ presents the tail ${\mathscr{P}_{\Nc,\beta}}(\tau) \sim \tau^{-2-\beta\Nc/2}$. We can reintroduce the dependence on $\coupl$ by matching the behaviour with  : $${\mathscr{P}_{\Nc,\beta}}(\tau) \underset{\tau\gg1/\coupl}{\sim } \coupl^2\,(\coupl\tau)^{-2-\beta\Nc/2} \:,$$ for $\tau\gtrsim1/\coupl$. A similar decoupling of the eigenvalues was demonstrated for perfect contacts in Ref. [@TexMaj13]. Note that the $\Nc$-dependence has not been included above. This will be discussed in Section \[sec:Numerics\] (see also section \[Subsec:Heuristic\], where such a behaviour has been related to isolated resonances with atypically narrow width). Large deviations for $\tau\to0$ for arbitrary symmetry class ------------------------------------------------------------ In this last subsection, we study the limiting behaviour of the distribution ${\mathscr{P}_{\Nc,\beta}}(\tau)$ for $\tau\ll\fss/\Nc$ by a steepest descent analysis of the matrix integral, which allows us to consider any symmetry class. Our starting point is $$\begin{aligned} \hspace{-2cm} \mathcal{Z}_{\Nc,\beta}(p) \propto \int_{\invQ>0}\mathrm{D}\invQ\, (\det \invQ )^{\beta\Nc/2} &\int\mathrm{D}\Kmat\, \frac{ \det({\mathbf{1}}_\Nc+\Kmat^2)^{ \beta\Nc/2 } } { \det({\mathbf{1}}_\Nc+\fss^2\Kmat^2)^{\beta\Nc+1-\beta/2} } \\ \nonumber &\times \exp\left( - \frac{\beta}{2}\fss\, {\mathop{\mathrm{tr}}\nolimits\left\{ \frac{{\mathbf{1}}_\Nc+\Kmat^2}{{\mathbf{1}}_\Nc+\fss^2\Kmat^2} \invQ \right\}} - \frac{2p}{\beta\fss}\,{\mathop{\mathrm{tr}}\nolimits\left\{ \invQ^{-1} \right\}} \right) \:.\end{aligned}$$ The integral over the matrix $\invQ$ is of the form of the Bessel function with matrix argument introduced in Ref. [@Her55], generalising the MacDonald function as $$\label{eq:MatrixMacDonald} B_{\nu,\beta}(Z) = \int_{X>0}\mathrm{D}X\, \left(\det X\right)^{-\nu-1-\beta(N-1)/2} \,{\mathrm{e}^{-{\mathop{\mathrm{tr}}\nolimits\left\{ X+Z\,X^{-1} \right\}}}} \:,$$ where $Z$ is a Hermitian matrix. The relation with the characteristic function reads explicitly $$\begin{aligned} \label{eq:RepresZMartrixBessel} \hspace{-1cm} \mathcal{Z}_{\Nc,\beta}(p) \propto \int\mathrm{D}\Kmat\, \frac{ \det({\mathbf{1}}_\Nc+\Kmat^2)^{ \beta\Nc/2 } } { \det({\mathbf{1}}_\Nc+\fss^2\Kmat^2)^{\beta\Nc+1-\beta/2} } \: B_{1+\frac{\beta\Nc}{2},\beta}\left(p\,\frac{{\mathbf{1}}_\Nc+\Kmat^2}{{\mathbf{1}}_\Nc+\fss^2\Kmat^2}\right) \:.\end{aligned}$$ The limiting behaviour of integrals such as was recently studied by the Laplace method in [@ButWoo03] for real symmetric matrices. Here we generalise this analysis to the unitary class, which allows us to compute easily the remaining matrix integral (over $\Kmat$). Using the invariance under unitary transformations, we can always choose one of the two matrices in diagonal form. We choose $\Kmat=\mathrm{diag}(k_1,\cdots,k_\Nc)$.
Next we perform the change of variable $$\invQ\longrightarrow \frac{2\sqrt{p}}{\beta\fss} \left(\frac{{\mathbf{1}}_\Nc+\Kmat^2}{{\mathbf{1}}_\Nc+\fss^2\Kmat^2}\right)^{-1/4} X \left(\frac{{\mathbf{1}}_\Nc+\Kmat^2}{{\mathbf{1}}_\Nc+\fss^2\Kmat^2}\right)^{-1/4}$$ Thus $$\begin{aligned} \label{eq:84} \hspace{-2cm} \mathcal{Z}_{\Nc,\beta}(p) \propto p^{\frac{\beta\Nc^2}{2}+\frac{\Nc}{2}(1-\beta/2)} \int\D k_1\cdots\D k_\Nc\,|\Delta_\Nc(k)|^\beta \prod_n \frac{(1+k_n^2)^{(\beta/2-1)/2}}{(1+\fss^2k_n^2)^{\beta\Nc/2-\beta/4+1/2}} \nonumber\\ \times \int_{X>0}\mathrm{D}X\, (\det X )^{\beta\Nc/2} \exp\left( - \sqrt{p}\, {\mathop{\mathrm{tr}}\nolimits\left\{ \sqrt{\frac{{\mathbf{1}}_\Nc+\Kmat^2}{{\mathbf{1}}_\Nc+\fss^2\Kmat^2}} \left(X+X^{-1}\right) \right\}} \right) \:.\end{aligned}$$ Then, we introduce $R=({\mathbf{1}}_\Nc+\Kmat^2)^{1/2}({\mathbf{1}}_\Nc+\fss^2\Kmat^2)^{-1/2}$. The integral is dominated by the position of the saddle point, minimum of ${\mathop{\mathrm{tr}}\nolimits\left\{ R\,(X+X^{-1}) \right\}}$, which is found to be $X_*={\mathbf{1}}_\Nc$. The Hessian matrix has the form $\mathscr{H}_{(i,j),(k,l)}=2(R_{jl}\delta_{ik}+R_{ik}\delta_{jl})$, so that we obtain the form $$\begin{aligned} \int_{X>0}\mathrm{D}X\, (\det X )^{\beta\Nc/2} {\mathrm{e}^{ - \Lambda\, {\mathop{\mathrm{tr}}\nolimits\left\{ R\, \left(X+X^{-1}\right) \right\}} }} \nonumber \\ \underset{\Lambda\to\infty}\simeq \left(\frac{\pi}{\Lambda}\right)^{\Nc(1+\beta(\Nc-1)/2)} (\det R)^{-1/2} \prod_{i<j}(R_{ii}+R_{jj})^{-\beta/2} \,{\mathrm{e}^{-2\Lambda\,{\mathop{\mathrm{tr}}\nolimits\left\{ R \right\}}}}\end{aligned}$$ After some algebra we eventually get the limiting behaviour (assuming $\fss\to0$) $$\mathcal{Z}_{\Nc,\beta}(p) \propto p^{\beta\Nc^2/4}\,{\mathrm{e}^{-2\Nc\sqrt{p}}} \hspace{1cm} \mbox{for } p\to\infty$$ which agrees with for $\beta=2$. Correspondingly, we obtain the limiting behaviour for the distribution of the Wigner time delay $$\label{eq:LargeDevSmallTauWC} {\mathscr{P}_{\Nc,\beta}}(\tau) \sim \tau^{-\frac{\beta\Nc^2}{2}-\frac32}\, {\mathrm{e}^{-\beta\Nc\fss/(2\tau)}} \hspace{1cm} \mbox{for } \tau\to0 \mbox{ and }\fss\ll1 \:.$$ As a check, we can compare this behaviour with the limiting behaviour of the marginal distribution for proper times and partial times as the three distributions coincide for one channel, ${\mathscr{P}_{1,\beta}}(\tau)={w_{1,\beta}}(\tau)={\tilde{w}_{1,\beta}}(\tau)$. From Eq.  we have ${\tilde{w}_{\Nc,\beta}}(\tau)\sim\tau^{-\beta\Nc/2-3/2}\,\exp\big\{-\beta\fss/(2\tau)\big\}$ and from Eq. , ${w_{\Nc,2}}(\tau)\sim\tau^{-2\Nc-1/2}\,\exp\big\{-\fss/\tau\big\}$. The three limiting behaviours indeed coincide when $\Nc=1$, as it should. For reference, we can compare this behaviour to the corresponding one for ideal couplings (see Ref. [@TexMaj13] and Section 5 of Ref. [@GraTex16b], and also [@Gra18]) $$\label{eq:LargeDevSmallTauPC} {\mathscr{P}_{\Nc,\beta}}^{(0)}(\tau) \sim \tau^{-\frac{3\beta\Nc^2}{4}-\frac{\Nc}{2}(1-\frac{\beta}{2})-\frac32}\, {\mathrm{e}^{-\beta\Nc/(2\tau)}} \hspace{1cm} \mbox{for } \tau\to0 \mbox{ and }\fss=1 \:.$$ Although the leading exponential terms in and coincide, the pre-exponential factors there have different power law dependencies. Numerical analysis {#sec:Numerics} ================== We have performed numerical simulations in order to study the weak coupling limit. 
For this purpose we use the formulation presented in Section \[sec:IntegrationOverCUE\] : we generate the matrix $\Sm_0$ in the circular ensemble and the matrix $\invQ_0=\WSm_{s0}^{-1}$ in the Laguerre ensemble. The Wigner-Smith matrix is then constructed making use of the expression $$\WSm = \left( 1-|\Sbar|^2 \right) \, \left( {\mathbf{1}}_\Nc + \Sbar\Sm_0^\dagger \right)^{-1} \, \Sm_0^{-1/2} \WSm_{s0}\, \, \Sm_0^{1/2} \left( {\mathbf{1}}_\Nc + \Sbar^*\Sm_0 \right)^{-1} \:.$$ ![ *Cumulative distribution of the proper time delays in the unitary case ($\beta=2$) for different channel numbers $\Nc$ and couplings $\fss$ (the latter controls the transmission probability through the contact, $\coupl\simeq4\fss$ at small $\fss\ll1$). The dashed black line corresponds to the exact analytical expression .* []{data-label="fig:CumulativeProperTimes"}](Cumulative_PropTimes_beta2_rescaled_comp_analytic){height="6cm"} Check: marginal distribution of the proper times ------------------------------------------------ As a first check, we have computed the cumulative (marginal) distribution of the proper times for different $\Nc$ and $\fss$. This distribution is shown in Fig. \[fig:CumulativeProperTimes\], where it is plotted in terms of the scaling variable $s=\Nc|1/\fss-\fss|\,\tau$, which is a natural choice describing the full range of couplings (see \[app:PartialProper\]). We have generated $10^{5}$ matrices each time. For weak coupling $\fss\to0$, the main behaviours of the distribution are $$\frac{\fss}{\Nc}\,{w_{\Nc,\beta}}\left(\tau=\frac{\fss}{\Nc}\,s\right) \sim \left\{ \begin{array}{ll} s^{-3/2} & \mbox{for } 1 \lesssim s \lesssim 1/\fss^2 \\[0.25cm] \fss^3 \left(\fss^2 s \right)^{-2-\beta\Nc/2} & \mbox{for } s \gtrsim 1/\fss^2 \\ \end{array} \right.\,,$$ which are deduced in \[app:PartialProper\] from the known exact result [@SomSavSok01]. We can see that in the limit $\fss\to0$ all curves collapse onto each other (after proper rescaling). Changing $\fss$ then only shifts the upper cutoff of the $s^{-3/2}$ tail. The positions of the lower and upper cutoffs of this power law perfectly coincide with the two cutoffs $\tup$ and $\tlow$ defined by Eqs. (\[eq:DefUpperCutoffProper\]) and (\[eq:DefLowerCutoffProper\]). We have also compared the numerics with the exact distribution for $\Nc=2$ (in practice, this is only possible for small $\Nc\lesssim5$ and not too small $\fss\gtrsim0.01$, otherwise the exact expression appears too involved to be plotted with standard software like [Mathematica]{}): the agreement is excellent. ![*Cumulative distribution of the Wigner time delay in the unitary case ($\beta=2$) for different channel numbers and different couplings. The dashed black lines are $\tau^{-1/2}$ and $\tau^{-1-\Nc}$. The distributions for different channel numbers are plotted for $\fss=0.01$ on the bottom part of the figure.* []{data-label="fig:CumulativeWignerTime"}](Cumulative_TauW_beta2_rescaled "fig:"){width="60.00000%"} ![*Cumulative distribution of the Wigner time delay in the unitary case ($\beta=2$) for different channel numbers and different couplings. The dashed black lines are $\tau^{-1/2}$ and $\tau^{-1-\Nc}$.
The distributions for different channel numbers are plotted for $\fss=0.01$ on the bottom part of the figure.* []{data-label="fig:CumulativeWignerTime"}](Distr_TauW_beta2_rescaled "fig:"){width="49.00000%"} Distribution of the Wigner time delay ------------------------------------- Next, we have considered the distribution of the Wigner time delay in the weakly coupled regime, $\Nc\coupl\ll1$. We draw several conclusions from such a numerical analysis. ![*Comparison of the cumulative distribution of the Wigner time delay in the orthogonal and unitary case.*[]{data-label="fig:ComparisonOrthoUnitary"}](TauW_comp_N10_kappa01){width="55.00000%"} - Taking again $s=|1/\fss-\fss|\,\tau$ as the scaling variable, we see that the different distributions collapse onto each other and show the intermediate $s^{-3/2}$ behaviour for different $\Nc$ and $\fss$ (Fig. \[fig:CumulativeWignerTime\]). - The lower cutoff of the $s^{-3/2}$ law is almost independent of $\Nc$. - The upper cutoff depends on both $\fss$ and $\Nc$, with numerics supporting the scaling $\tau_*\sim1/(\fss\Nc^2)$. (This can be clearly seen, e.g., by comparing the two curves for $\Nc=5$ and $\Nc=50$ in Fig. \[fig:CumulativeWignerTime\] for the same value of $\fss$.) - The power law $\tau^{-3/2}$ is observed in both the unitary and orthogonal cases (Fig. \[fig:ComparisonOrthoUnitary\]). (This is consistent with the earlier study [@FyoSavSom97] of the crossover regime). - For $\tau\gtrsim\tau_*$, the distribution exhibits a power law tail with the universal exponent $2+\beta\Nc/2$, which is anticipated theoretically and confirmed here numerically. These findings, together with the outcome of Section \[sec:characfct\], can be summarised as follows : $$\fss\, {\mathscr{P}_{\Nc,\beta}}\left(\tau=\fss\,s\right) \sim \left\{ \begin{array}{ll} s^{-3/2} & \mbox{for } 1 \lesssim s \lesssim 1/(\fss\Nc)^2 \\[0.25cm] (\Nc\fss)^3 \left(\Nc^2\fss^2s\right)^{-2-\beta\Nc/2} & \mbox{for } s \gtrsim 1/(\fss\Nc)^2 \\ \end{array} \right.$$ Furthermore, we have argued in Section \[subsec:UniversalLimitWC\] that $\lim_{\fss\to0}\fss\, {\mathscr{P}_{\Nc,\beta}}\left(\tau=\fss\,s\right)$ is a universal function, although we have not been able to determine its precise form. Finally, we have also studied the transition from strong coupling ($\Nc\coupl\gg1$) to weak coupling ($\Nc\coupl\ll1$), for large $\Nc$, and shown that the distribution crosses over from a narrow distribution to a broad distribution when $\Nc\coupl\sim1$ (Fig. \[fig:DWTfromSCtoWC\]). ![*Distribution of the Wigner time delay for $\Nc=50$ channels : from strong coupling regime ($\Nc\coupl\gg1$) to weak coupling ($\Nc\coupl\ll1$).*[]{data-label="fig:DWTfromSCtoWC"}](Distr_TauW_beta2_evol){width="60.00000%"} Conclusion ========== In this article, we have considered the scattering of waves by a chaotic cavity coupled to $\Nc$ channels characterised by arbitrary transmission coefficients $T$. Within a random matrix approach, we have derived the joint distribution of the scattering matrix $\Sm$ and the symmetrised time-delay matrix $\WSm_s$ at arbitrary channel couplings. This extends the result obtained by Brouwer, Frahm and Beenakker [@BroFraBee97; @BroFraBee99] at $\coupl=1$ to the general case of non-ideal coupling, $\coupl<1$. This has allowed us to obtain two representations for the distribution of $\WSm_s$ (or more precisely, its inverse) in terms of certain matrix integrals.
Then we have applied our results to study the statistical properties of the Wigner time delay $\Wt=\frac{1}{\Nc}{\mathop{\mathrm{tr}}\nolimits\left\{ \WSm_s \right\}}$. Specifically, we have derived the exact representation of the characteristic function of $\Wt$ as a multiple integral involving Bessel functions of matrix argument. This expression has been further used to obtain, after inverse Laplace transform, the asymptotic behaviours of the Wigner time delay distribution in the limit $\coupl\to0$ (weak coupling per channel), keeping $\Nc\coupl\ll1$. Physically, this corresponds to the regime of isolated resonances (the system weakly coupled to the external). In such a case, the Wigner time delay distribution becomes broad with an intermediate behaviour described by the universal $\tau^{-3/2}$ law. We have also established the left and right tails of the distribution up to the constant prefactors that have not been computed. The knowledge of these constants would however be needed for determining the precise positions of the crossovers between the limiting behaviours. These cutoffs are of interest, as they control the positive and negative moments, but they have only been deduced here from a numerical analysis. We have also compared such a behaviour with the one derived from the known marginal distributions of the partial and proper time delays, which become almost identical to each other in the weak coupling limit (see \[app:PartialProper\]). In particular, the distribution of the partial time delays (rescaled properly by $T$) is found to have a simple universal form in the limit $\coupl\to0$ at any $\Nc$. We have argued that the distribution of the Wigner time delay should be described by a universal function in the limit $\coupl\to0$ as well. The analysis of the other regime with $\Nc\coupl\gg1$ (the strongly coupled system with overlapping resonances) suggests that the Wigner time delay distribution becomes narrow, with a Gaussian-like bulk behaviour. The crossover between the two limiting forms occurs quite sharply at $\Nc\coupl\sim1$ (cf. Fig. \[fig:DWTfromSCtoWC\]). Determining the precise universal function describing such a crossover is still an outstanding problem and a challenging one to consider in future study. Acknowledgements {#acknowledgements .unnumbered} ================ We thank Pierpaolo Vivo for stimulating discussions. DVS gratefully acknowledges University Paris-Sud for financial support and LPTMS in Orsay for hospitality during his stay there. Some matrix integrals ===================== Harish-Chandra - Itzykson - Zuber Integrals {#app:HCIZ} ------------------------------------------- Consider two Hermitian matrices $A$ and $B$ with spectra $\{a_i\}$ and $\{b_i\}$. Then [@ItzZub80a; @ZinZub03] $$\hspace{-1cm} \int_{\mathrm{U}(N)} \mathrm{D}U\, \exp\left(t\,{\mathop{\mathrm{tr}}\nolimits\left\{ AUBU^\dagger \right\}} \right) = G(N+1)\, t^{-N(N-1)/2} \frac{\det\left({\mathrm{e}^{t\,a_ib_j}}\right)_{1{\leqslant}i,\,j{\leqslant}N}}{\Delta_\Nc(a)\,\Delta_\Nc(b)}$$ where $$\Delta_\Nc(a) = \det(a_i^{j-1})_{1{\leqslant}i,\,j{\leqslant}N} =\prod_{i<j}(a_i-a_j)$$ is the Vandermonde and $G(z)$ is the Barnes’ $G$ function (double gamma function) [@DLMF §5.17] defined by $G(z+1)=\Gamma(z)G(z)$, i.e. $G(N+1)=(N-1)!(N-2)!\cdots3!2!1!$. Two normalisation constants {#app:tmi} --------------------------- We state two matrix integrals provided in Forrester’s book [@For10], which are used in the article. The normalisation for the Cauchy ensemble is given by Eqs. 
(4.4) and (4.145) of [@For10]: $$\label{eq:mi1} \hspace{-2cm} \int\D x_1\cdots\D x_N\,|\Delta_N(x)|^\beta\, \prod_n (1+x_n^2)^{-\alpha} =2^{\beta N(N-1)/2-2(\alpha-1)N} \pi^N\, M_N(a,a,\beta/2)$$ where $a=\alpha-1-\beta(N-1)/2$ and $$M_N(a,b,\lambda) =\frac{1}{\Gamma(1+\lambda)^N}\prod_{j=0}^{N-1} \frac{\Gamma(\lambda j +a+b+1) \Gamma(\lambda(j+1)+1)} {\Gamma(\lambda j +a +1)\Gamma(\lambda j +b+1)} \:.$$ The normalisation for the Gaussian ensemble is given on p. 173 of [@For10]: $$\label{eq:mi2} \int\D x_1\cdots\D x_N\,|\Delta_N(x)|^\beta\, \prod_n {\mathrm{e}^{-x_n^2/2}} = \frac{(2\pi)^{N/2}}{\Gamma(1+\beta/2)^{N}}\, \prod_{j=1}^{N}\Gamma(1+j\beta/2) \:.$$ Andréief formula {#app:Andreief} ================ A formula due to Andréief [@And86] (see also the recent historical note [@For18]) is $$\hspace{-1.5cm} \int \left(\prod_{n=1}^N\D\mu(x_n)\right) \det(A_i(x_j))\, \det(B_k(x_l))\, =N!\, \det\left[\int\D\mu(x)\,A_i(x)\,B_j(x)\right].$$ For $\beta=2$, writing the Vandermonde as $$\Delta_\Nc(\lambda)^2= \prod_{i<j}(\lambda_i-\lambda_j)^2 = \underbrace{ \det(\lambda_i^{k-1}) }_{\prod_{i<j}(\lambda_i-\lambda_j)} \det(\lambda_j^{k-1})$$ we deduce the representation of the matrix integral as a Hankel determinant $$\int \left(\prod_{i=1}^N\D\mu(\lambda_i)\right) \prod_{i<j}(\lambda_i-\lambda_j)^2 =N!\: \det\left( a_{ij} \right)_{1{\leqslant}i,\,j{\leqslant}N} \:,$$ where the matrix elements are $$a_{ij} = \int\D\mu(\lambda)\,\lambda^{i+j-2} \hspace{1cm}\mbox{for } 1{\leqslant}i,\:j{\leqslant}N \:.$$ Partial and proper time delays {#app:PartialProper} ============================== The marginal distributions of the partial and proper time delays were obtained in several papers by Fyodorov, Sommers and collaborators [@FyoSom96; @FyoSom97; @FyoSavSom97; @SavFyoSom01] (partial times) and [@SomSavSok01] (proper times). These explicit results are however expressed in complicated forms, with the transmission coefficient entering through the following parameter : $$g = \frac{2}{\coupl} - 1 = \frac{1}{2}\left( \fss +\frac{1}{\fss} \right) {\geqslant}1\,.$$ It is the purpose of this appendix to derive the precise limiting behaviours of these distributions in the weak coupling limit $\coupl\approx2/g\to0$. It will be convenient to rescale the time delays and relevant distributions as follows $$\label{eq:rescaling} \hspace{-1cm} \tau \simeq \frac{\beta}{4g}\,\rt \simeq \frac{\beta\fss}{2}\,\rt \simeq \frac{\beta\coupl}{8}\,\rt \hspace{1cm}\mbox{and} \hspace{1cm} {q_{\Nc,\beta}}(\rt) \underset{g\gg1}{\simeq } \frac{\beta}{4g}\, {w_{\Nc,\beta}}\left(\tau\simeq\frac{\beta}{4g}\rt \right) \:,$$ with a similar form for ${\tilde{q}_{\Nc,\beta}}(\tau)$. Marginal distribution of the partial time delays in the unitary case -------------------------------------------------------------------- The marginal distribution of partial time delays was first derived by Fyodorov and Sommers [@FyoSom96; @FyoSom97] in the unitary case: $$\hspace{-1cm} {\tilde{w}_{\Nc,2}}(\tau) = \frac{1}{\tau^2}\,\tilde{p}_\Nc^{(2)}(1/\tau)\,, \hspace{0.5cm}\mbox{where } \tilde{p}_{\Nc}(\gamma) = \frac{\gamma^\Nc}{\Nc!}\left(-\partial_\gamma\right)^\Nc \left[ I_0(\sqrt{g^2-1}\,\gamma){\mathrm{e}^{-g\gamma}} \right] \:.$$ In order to find limiting behaviours we rescale the distribution by introducing $\rt=2g\tau$ or $z=\gamma/(2g)$. We first consider the domain $z\ll1$ (i.e. $\tau\ll g$). 
Using that $$I_0(\sqrt{g^2-1}\,\gamma){\mathrm{e}^{-g\gamma}} \simeq \frac{1}{2g}\,\phi\left( z=\frac{\gamma}{2g}\right) \hspace{1cm}\mbox{with } \phi(z) \eqdef \frac{1}{\sqrt{\pi z}}{\mathrm{e}^{-z}}$$ we write $$\tilde\pi_{\Nc}(z) =\lim_{g\to\infty} 2g \:\tilde{p}_{\Nc}(\gamma=2g\,z) = \frac{z^\Nc}{\Nc!}\left(-\partial_z\right)^\Nc \left[ \phi(z) \right]$$ Using $(-\partial_z)^n(1/\sqrt{z})=(1/2)_n z^{-1/2-n}=2^{-n}(2n-1)!!\,z^{-1/2-n}$, where $(a)_n=a(a+1)\cdots(a+n-1)=\Gamma(a+n)/\Gamma(a)$ is the Pochhammer symbol, we deduce $$\tilde\pi_{\Nc}(z) = \frac{1}{\Nc!}\left( \sum_{n=0}^\Nc C_\Nc^n \,\frac{(2n-1)!!}{2^n}\,z^{\Nc-n} \right) \,\phi(z) \:.$$ We obtain $$\label{eq:MarginalPartialTimesWC} \lim_{\fss\to0} {\tilde{q}_{\Nc,2}}(\rt) = \frac{{\mathrm{e}^{-1/\rt}}}{\sqrt{\pi}\,\rt^{3/2}}\, \frac{1}{\Nc!}\sum_{n=0}^\Nc \frac{C_\Nc^n(2\Nc-2n-1)!!}{2^{\Nc-n}} \, \rt^{-n}$$ In particular, for $t\gg1$, we get $$\label{eq:FyodorovSommersLimiting1} \lim_{\fss\to0} {\tilde{q}_{\Nc,2}}(\rt) \simeq \frac{(2\Nc-1)!!}{\sqrt{\pi}\,2^\Nc\,\Nc!}\,\rt^{-3/2} \:.$$ We now turn to the study of the far tail ($\tau\gg1/\fss$). We expand $I_0(\sqrt{g^2-1}\gamma){\mathrm{e}^{-g\gamma}}$ in powers of $\gamma$ and identify the coefficient of the term $\gamma^\Nc$. Some algebra gives the form $$\label{eq:FyodorovSommersLimiting2} {\tilde{q}_{\Nc,2}}(\rt) \simeq \frac{a_\Nc}{g^3} \, (\rt/g^2)^{-2-\Nc}$$ where $$\hspace{-1cm} a_\Nc = 2 \sum_{m=0}^{\lfloor\Nc/2\rfloor} \frac{2^{\Nc-2m}}{(m!)^2(\Nc-2m)!} = \frac{2^{1+2\Nc}\,\Gamma(\frac{1}{2}+\Nc)}{\sqrt{\pi}\,(\Nc!)^2} = \frac{2^{1+\Nc}\,(2\Nc-1)!!}{(\Nc!)^2}$$ (\[eq:FyodorovSommersLimiting1\]) and (\[eq:FyodorovSommersLimiting2\]) match exactly the limiting forms derived in Eqs. 165 and 166 of Ref. [@FyoSom97]. Marginal distribution of the partial time delays for arbitrary symmetry class {#sec:Partial} ----------------------------------------------------------------------------- We now consider the marginal distribution of the partial time delays for arbitrary symmetry class and show that it takes a rather simple form in the weak coupling limit. We follow the formulation introduced by Gopar and Mello [@GopMel98] for $\Nc=1$ and further generalised in [@SavFyoSom01] for arbitrary $\Nc>1$, although these papers did not consider specifically the weak coupling limit. When all channels are equally coupled, Eq.  implies a relation between the eigenvalues of the two scattering matrices $ {\mathrm{e}^{\I\theta_a}} = \big(\Sbar+{\mathrm{e}^{\I\theta_a^{(0)}}}\big)\big(1+\Sbar^*\,{\mathrm{e}^{\I\theta_a^{(0)}}}\big)^{-1}$. This leads to the following relation between the partial times $\tilde{\tau}_a=\partial_\varepsilon\theta_a$ and $\tilde{\tau}_a^{(0)}=\partial_\varepsilon\theta_a^{(0)}$ [@GopMel98; @SavFyoSom01]: $$\tilde{\tau}_a = f(\theta_{a}^{(0)})\,\tilde{\tau}_a^{(0)}$$ where $$f(\theta)=\frac{1-|\Sbar|^2}{\big| 1+\Sbar^*{\mathrm{e}^{\I\theta}}\big|^2} = \frac{ \frac{2\fss}{1-\fss^2} }{ \frac{1+\fss^2}{1-\fss^2} + \cos\theta } = \frac{1}{g+\sqrt{g^2-1}\,\cos\theta} \:.$$ (We have used $\Sbar=(1-\fss)/(1+\fss)$, choosing $\fss\in[0,1]$). We can therefore write the distribution as ${\tilde{w}_{\Nc,\beta}}(\tau) = {\langle \delta(\tau - f(\theta_{a}^{(0)})\,\tilde{\tau}_a^{(0)}) \rangle}_{\theta_{a}^{(0)},\tilde{\tau}_a^{(0)}} $. Now we use the fact that for perfect coupling, the phase shifts are uniformly distributed and uncorrelated with the partial time delays.
As a consequence : $$\label{eq:GMandSFS} {\tilde{w}_{\Nc,\beta}}(\tau) = \int_0^{2\pi}\frac{\D\theta}{2\pi}\,\frac{1}{f(\theta)}\, {\tilde{w}_{\Nc,\beta}}^{(0)}(\tau/f(\theta)) \:,$$ where [@SavFyoSom01] : $$\label{eq:SavinFyodorovSommers2001} {\tilde{w}_{\Nc,\beta}}^{(0)}(\tau) =\frac{1}{\Nc}\sum_{a=1}^\Nc {\left\langle \delta( \tau - \tilde{\tau}_a^{(0)} ) \right\rangle} =\frac{(\beta/2)^{1+\beta\Nc/2}}{\Gamma(1+\beta\Nc/2)}\, \frac{{\mathrm{e}^{-\beta/(2\tau)}}}{\tau^{2+\beta\Nc/2}}$$ The representation , written in a slightly different form in [@SavFyoSom01], generalizes the one obtained by Gopar and Mello for $\Nc=1$ [@GopMel98]. We can make this integral representation more explicit through the rescaling $${\tilde{q}_{\Nc,\beta}}(\rt) = \frac{\beta}{4\sqrt{g^2-1}}\: {\tilde{w}_{\Nc,\beta}}\left(\tau=\frac{\beta\,\rt}{4\sqrt{g^2-1}} \right)$$ with $2\sqrt{g^2-1}=1/\fss-\fss$. Some algebra gives the form $$\begin{aligned} \label{eq:MarginalPartialForallBeta} & \hspace{-2cm} {\tilde{q}_{\Nc,\beta}}(\rt) = \frac{1}{\Gamma(1+\frac{\beta\Nc}{2})}\,\rt^{-2-\beta\Nc/2} \\\nonumber & \times \int_{0}^{\pi}\frac{\D\theta}{\pi}\, \left(\frac{2\sqrt{g^2-1}}{g+\sqrt{g^2-1}\,\cos\theta}\right)^{1+\beta\Nc/2}\, \exp\left\{-\frac{2\sqrt{g^2-1}}{\rt\,(g+\sqrt{g^2-1}\,\cos\theta)}\right\} \:.\end{aligned}$$ Before taking the limit of weak coupling ($g\to\infty$), we find it more convenient to change variables to $u=\tan^2(\theta/2)$, leading to the exact expression $$\begin{aligned} \label{eq:MarginalPartialTimesGeneralForm} & \hspace{-2cm} {\tilde{q}_{\Nc,\beta}}(\rt) = \frac{1}{\pi\,\Gamma(1+\frac{\beta\Nc}{2})} \, \rt^{-2-\beta\Nc/2} \\\nonumber &\times \int_0^\infty\frac{\D u}{\sqrt{u}}\,(1+u)^{\beta\Nc/2}\, \left(\frac{ 1-\fss^2 }{ 1+\fss^2\,u }\right)^{1+\beta\Nc/2}\, \exp\left\{ -\frac{1+u}{\rt}\,\frac{1-\fss^2}{1+\fss^2\,u} \right\} \:.\end{aligned}$$ It will also be convenient to express the cumulative distribution $$\label{eq:CumulativeMarginalPartial} \int_{\rt}^\infty\D y\,{\tilde{q}_{\Nc,\beta}}(y) =\frac{1}{\pi} \int_0^\infty\frac{\D u}{\sqrt{u}(1+u)}\, \frac{\gamma\left(1+\frac{\beta\Nc}{2},\frac{1+u}{\rt}\,\frac{1-\fss^2}{1+\fss^2\,u}\right)} {\Gamma(1+\frac{\beta\Nc}{2})}$$ where $\gamma(a,z)$ is the incomplete Gamma function [@gragra]. ### Limit $\fss\to0$. The integral representation is the most appropriate for studying the limit of weak coupling $\fss\to0$. It makes it clear that the distribution takes the simple form in this limit : $$\begin{aligned} \label{eq:MarginalPartialTimeGoparMelloMethod} \lim_{\fss\to0}{\tilde{q}_{\Nc,\beta}}(\rt) = \frac{1}{\sqrt{\pi}\,\Gamma(1+\beta\Nc/2)} \frac{{\mathrm{e}^{-1/\rt}}}{\rt^{2+\beta\Nc/2}}\, U\left( \frac{1}{2} , \frac{\beta\Nc+3}{2} , \frac{1}{\rt} \right) \:,\end{aligned}$$ where $U(a,c,z)$ is the Kummer function [@AbrSte64]. It is quite remarkable to obtain a universal form describing the full distribution in this limit.
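As a quick numerical cross-check of this universal form (not part of the original derivation), it can be compared with the finite-coupling integral representation evaluated at a small value of $\fss$. A minimal Python sketch relying on `scipy`, with helper names of our own choosing:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyperu

def q_partial(t, N, fss, beta=2):
    """Rescaled marginal of the partial time delays from the u-integral
    representation at finite coupling parameter fss."""
    def integrand(u):
        r = (1.0 - fss**2) / (1.0 + fss**2 * u)
        return ((1.0 + u)**(beta * N / 2) * r**(1 + beta * N / 2)
                * np.exp(-(1.0 + u) * r / t) / np.sqrt(u))
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return val * t**(-2 - beta * N / 2) / (np.pi * gamma(1 + beta * N / 2))

def q_partial_weak(t, N, beta=2):
    """Weak-coupling limit in terms of the Kummer function U(1/2,(beta*N+3)/2,1/t)."""
    return (np.exp(-1.0 / t) * t**(-2 - beta * N / 2)
            * hyperu(0.5, (beta * N + 3) / 2, 1.0 / t)
            / (np.sqrt(np.pi) * gamma(1 + beta * N / 2)))

for t in (0.5, 2.0, 10.0):
    print(t, q_partial(t, N=3, fss=1e-3), q_partial_weak(t, N=3))
```

As long as $\rt\ll1/\fss^2$, the two evaluations should agree to within the quadrature accuracy; the far-tail regime is discussed below.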
The expression further simplifies in the unitary case ($\beta=2$) as the Kummer function can be expressed as a sum $$\begin{aligned} \hspace{-2.5cm} U\left( \frac{1}{2} ,\Nc+ \frac{3}{2} , \frac{1}{\rt} \right) = \frac{1}{\sqrt{\pi}} \int_0^\infty\frac{\D u}{\sqrt{u}}(1+u)^{\Nc}{\mathrm{e}^{-u/\rt}} = \frac{1}{\sqrt{\pi}}\sum_{n=0}^\Nc C_\Nc^n\,\Gamma(n+1/2)\,\rt^{n+1/2} \:.\end{aligned}$$ Using $\Gamma(n+1/2)=2^{-n}(2n-1)!!\sqrt{\pi}$, we get $$\label{eq:MPartialLimitBeta2} \lim_{\fss\to0} {\tilde{q}_{\Nc,2}}(\rt) = \frac{{\mathrm{e}^{-1/\rt}}}{\sqrt{\pi}\,\rt^{3/2}}\, \sum_{n=0}^\Nc \frac{(2n-1)!!}{n!\,(\Nc-n)!\,2^n} \,\rt^{-\Nc+n} \:,$$ which is in exact correspondence with , as it should, although the two derivations are quite different. ### Far tail ($\rt\gg1/\fss^2$). A more careful analysis of the integral shows that for small but finite $\fss$, the distribution presents a different behaviour for $t\gg1/\fss^2$. In this case, noticing that $ (1-\fss^2)(1+u)/\big[t\,(1+\fss^2u)\big] \ll 1$, we can replace the exponential in by unity, which shows that the distribution has a power law tail with large exponent ${\tilde{q}_{\Nc,\beta}}(\rt)\simeq(a_\Nc/g^3)(\rt/g^2)^{-2-\beta\Nc/2}$, with $2\fss\simeq1/g$ and where the coefficient $a_\Nc$ can be easily found and is given below. Marginal distribution of the proper time delays in the unitary case {#sec:Proper} ------------------------------------------------------------------- The exact explicit form for the marginal distribution of the proper time delays was only found for the unitary case in Ref. [@SomSavSok01] : $$\label{eq:SommersSavinSokolov2001a} {w_{\Nc,2}}(\tau) = \frac{1}{\Nc\tau}\sum_{n=0}^{\Nc-1} \left( F_n{\frac{\partial B_n}{\partial \tau}} - B_n{\frac{\partial F_n}{\partial \tau}} \right)$$ where $$\begin{aligned} \label{eq:SommersSavinSokolov2001b} B_n &= \frac{1}{n!} \left(-{\frac{\partial }{\partial g}}\right)^n \left[ I_0(\sqrt{g^2-1}/\tau)\, {\mathrm{e}^{-g/\tau}} \right] \\ \label{eq:SommersSavinSokolov2001c} F_n &= \sum_{m=0}^n\frac{1}{(2m+1)!} \left({\frac{\partial ^2}{\partial g^2}}-\frac{2}{\tau}{\frac{\partial }{\partial g}}\right)^m g^n \:.\end{aligned}$$ This analytic solution is still quite complicated. Already for $\Nc=5$, the plot with the software [Mathematica]{} shows some irregularities (cf. Fig. \[fig:ComparisonProperPartial\]). The above explicit expressions become of limited use for plotting the distribution at larger $\Nc$. (Note, however, that one can alternatively use an integral representation of the exact distribution that can be inferred from the analysis of [@SomSavSok01]). Hence it is instructive to extract limiting behaviours directly from Eq. . Let us now study this point. ![*Comparison between the marginal distributions for proper times (continuous blue line) and partial times (dashed red line) for $\Nc=2$ and $\Nc=5$. Coupling parameter is $g=2/\coupl-1=10$. The dotted lines are the two power laws with exponents $3/2$ and $2+\Nc$.* []{data-label="fig:ComparisonProperPartial"}](marginal-proper-partial-N2-bis "fig:"){width="45.00000%"} ![*Comparison between the marginal distributions for proper times (continuous blue line) and partial times (dashed red line) for $\Nc=2$ and $\Nc=5$. Coupling parameter is $g=2/\coupl-1=10$.
The dotted lines are the two power laws with exponents $3/2$ and $2+\Nc$.* []{data-label="fig:ComparisonProperPartial"}](marginal-proper-partial-N5 "fig:"){width="45.00000%"} The expression can be rewritten in terms of the scaling variable as $$\label{eq:SommersSavinSokolov2001aBIS} {q_{\Nc,2}}(\rt) = \frac{2g}{\Nc\rt}\sum_{n=0}^{\Nc-1} \left( F_n{\frac{\partial B_n}{\partial \rt}} - B_n{\frac{\partial F_n}{\partial \rt}} \right) \:.$$ We now discuss the structure of the functions $B_n$ and $F_n$. The functions $F_n$ can be computed systematically from  : $$\begin{aligned} F_0 &= 1 \\ F_1 &= g \, \left( 1 - \frac{2}{3\rt} \right) \\ F_2 &= g^2 \, \left( 1 - \frac{4}{3\rt}+\frac{4}{15\rt^2} \right) + \frac{1}{3} \\ F_3 &= g^3 \, \left( 1 - \frac{2}{\rt}+\frac{4}{5\rt^2} - \frac{8}{105\rt^3} \right) + g \, \left( 1 - \frac{2}{5\rt} \right) \\ & \vdots \hspace{2cm} \vdots\end{aligned}$$ For the following, it is sufficient to identify the first and last terms in the contribution of order $g^n$ : $$\begin{aligned} F_n = g^n\,\left( 1 - \frac{(\cdots)}{\rt} + \cdots + (-1)^n\frac{2^{2n}n!}{(2n+1)!\,\rt^n} \right) + g^{n-2} \, (\cdots) + \cdots\end{aligned}$$ Note that the term $t^{-n}$ corresponds to $(-2/\tau)^n\partial_g^ng^n$. We now focus on the functions $B_n$’s in the large $g$ limit and restrict ourselves to the regime $\tau\ll g$, i.e. $t\ll g^2$. In this case we can write $$B_0 \simeq \frac{1}{2g} \psi(\rt=2g\tau)\,, \hspace{1cm}\mbox{with } \psi(\rt) \eqdef \sqrt{\frac{\rt}{\pi}}{\mathrm{e}^{-1/\rt}}$$ which considerably simplifies Eq.  $$B_n \simeq \frac{1}{2g^{n+1}} \sum_{m=0}^n \frac{(-\rt)^m}{m!}\,\psi^{(m)}(\rt) \:.$$ We now remark that the calculation of the derivatives $\psi^{(m)}(\rt)$ can be simplified in the two limiting cases $\rt\gg1$ or $\rt\ll1$. #### Limit $\rt\gg1$ (and $\rt\ll g^2$). For large $\rt$, the derivatives of $\psi(\rt)$ are dominated by derivation of the power law $\sqrt{\rt}$ in $\psi(\rt)$, hence $$B_n \simeq \alpha_n\frac{\psi(\rt)}{2g^{n+1}} \hspace{1cm}\mbox{with } \alpha_n\eqdef\sum_{m=0}^n \frac{(-1)^m(\frac12-m+1)_m}{m!}$$ Then the distribution is dominated by the term $${q_{\Nc,2}}(\rt) \simeq \frac{2g}{\Nc\rt}\sum_{n=0}^{\Nc-1} F_n{\frac{\partial B_n}{\partial \rt}} \simeq \frac{\psi(\rt)}{2\Nc\rt^2} \sum_{n=0}^{\Nc-1}\alpha_n \:.$$ Finally we can write $$\label{eq:C35} {q_{\Nc,2}}(\rt) \simeq b_\Nc \, \rt^{-3/2}$$ where $$b_\Nc = \frac{1}{2\sqrt{\pi}\,\Nc} \sum_{n=0}^{\Nc-1} (\Nc-n) \frac{(-1)^n(\frac12-n+1)_n}{n!} = \frac{(2\Nc-1)!!}{\sqrt{\pi}\,\Nc!2^\Nc} \:.$$ This is precisely the coefficient of the marginal for partial times, Eq.  for $\beta=2$. We have also checked that it coincides with the precise behaviour given in Ref. [@SomSavSok01] for large $\Nc$ $${w_{\Nc,\beta}}(\tau) \simeq \frac{1}{\pi\sqrt{2\Nc g}} \tau^{-3/2}\qquad \mbox{ for } 1/g\ll \tau \ll g \:.$$ #### Limit $\rt\ll1$. The derivatives of $\psi(\rt)$ are dominated by derivation of the exponential, hence $$B_n \simeq \frac{\psi(\rt)}{2g^{n+1}} \sum_{m=0}^n \frac{(-1)^m}{m!\rt^m} \simeq \frac{\psi(\rt)}{2g^{n+1}}\frac{(-1)^n}{n!\rt^n}$$ Using the expansion of $F_n$, we get $${q_{\Nc,2}}(\rt) \simeq \frac{2g}{\Nc\rt} F_{\Nc-1}{\frac{\partial B_{\Nc-1}}{\partial \rt}} \simeq \frac{2^{2(\Nc-1)}}{\sqrt{\pi}\,\Nc(2\Nc-1)!}\,\frac{{\mathrm{e}^{-1/\rt}}}{\rt^{2N+1/2}} \:.$$ This behaviour is different from the one obtained for partial times, cf. Eq. . 
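Since generating these expressions by hand quickly becomes impractical, they may instead be built symbolically. A minimal `sympy` sketch of Eqs. (\[eq:SommersSavinSokolov2001a\],\[eq:SommersSavinSokolov2001b\],\[eq:SommersSavinSokolov2001c\]) for small $\Nc$, written by us only to reproduce curves such as those of Fig. \[fig:ComparisonProperPartial\]:

```python
import sympy as sp

g, tau = sp.symbols('g tau', positive=True)

def B(n):
    # B_n = (1/n!) (-d/dg)^n [ I_0(sqrt(g^2-1)/tau) exp(-g/tau) ]
    f = sp.besseli(0, sp.sqrt(g**2 - 1) / tau) * sp.exp(-g / tau)
    return f if n == 0 else (-1)**n * sp.diff(f, g, n) / sp.factorial(n)

def F(n):
    # F_n = sum_{m=0}^n 1/(2m+1)! (d^2/dg^2 - (2/tau) d/dg)^m g^n
    total = sp.Integer(0)
    for m in range(n + 1):
        h = g**n
        for _ in range(m):
            h = sp.diff(h, g, 2) - (2 / tau) * sp.diff(h, g)
        total += h / sp.factorial(2 * m + 1)
    return sp.expand(total)

def w_proper(N):
    # marginal density of the proper time delays in the unitary case (beta = 2)
    s = sum(F(n) * sp.diff(B(n), tau) - B(n) * sp.diff(F(n), tau) for n in range(N))
    return s / (N * tau)

w2 = sp.lambdify((tau, g), w_proper(2), modules='mpmath')
print(w2(0.3, 10))     # coupling parameter g = 2/T - 1 = 10, as in the figure
```

For larger $\Nc$ the symbolic expressions grow quickly, in line with the remark made above.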
Comparison of the two marginal distributions -------------------------------------------- Although the two distributions ${\tilde{w}_{\Nc,2}}(\tau)$ and ${w_{\Nc,2}}(\tau)$ look at first sight quite different (see plots in linear scale in Fig. \[fig:ComparisonProperPartial\]), we have shown that they precisely coincide as soon as $\rt\gg1$ : not only the $\rt^{-3/2}$ power law coincides, but also the precise coefficient. We interpret this as a manifestation of the fact that, for $\rt\gg1$, the two distributions are dominated by isolated resonances. Although we have not extracted from (\[eq:SommersSavinSokolov2001a\],\[eq:SommersSavinSokolov2001b\],\[eq:SommersSavinSokolov2001c\]) the behaviour for $\rt\gg g^2$, based on the isolated resonance picture, we assume that the distributions coincide in this regime as well. We write ${q_{\Nc,\beta}}(\rt)\simeq{\tilde{q}_{\Nc,\beta}}(\rt)$ for $\rt\gg1$, i.e. $${w_{\Nc,\beta}}(\tau) \simeq {\tilde{w}_{\Nc,\beta}}(\tau) \hspace{1cm}\mbox{for } \tau \gtrsim \tlow$$ where $\tlow\sim1/g\sim\fss$, as long as resonances can be considered isolated, according to the discussion of the introduction (see Fig. \[fig:ComparisonProperPartial\]). The dependence of the cutoff on the channel number is determined below. Hence this is a strong difference between the weak coupling and perfect coupling regimes: while the two marginals strongly differ in the latter, they almost coincide in the former (see Fig. \[fig:SketchPC\]). ### Crossovers. Before summarizing the different limiting behaviours, we determine the precise value where the distribution crosses over from one limiting behaviour to another in the limit of large $\Nc$. The asymptotic form of the coefficients will be useful (we only consider the unitary case) : $$a_\Nc \simeq \sqrt{2}\,\left(\frac{4}{\Nc}\right)^\Nc \frac{{\mathrm{e}^{\Nc}}}{\pi\Nc} \:,\hspace{1cm} b_\Nc \simeq \frac{1}{\pi\sqrt{\Nc}} \:,\hspace{1cm} c_\Nc \simeq \frac{{\mathrm{e}^{2\Nc}}}{4\pi\,\Nc^{2\Nc+1/2}} \:.$$ Let us denote by $\tup$ the crossover position between the last two limiting behaviours : we write $ b_\Nc \, (g/\tup)^{3/2} = a_\Nc\, (g/\tup)^{2+\Nc} $. Using the asymptotics of the coefficients, one gets the upper cutoff (in units of $\Ht$) $$\label{eq:DefUpperCutoffProper} \tup \simeq 4\,\mathrm{e}\, \frac{g}{\Nc}$$ Similarly, we determine the position where the distribution crosses over between the universal $\tau^{-3/2}$ power law and the $\tau\to0$ behaviour. As the two distributions ${q_{\Nc,2}}(\rt)$ and ${\tilde{q}_{\Nc,2}}(\rt)$ differ in this regime, we have to discuss separately the cases of partial and proper times. We consider first the case of partial times : we write $ \tilde{c}_\Nc\,\rt^{-\Nc-3/2}\,{\mathrm{e}^{-1/\rt}} = b_\Nc\, \rt^{-3/2} $ leading to the equation $ {1}/{\rt} + \Nc \,\ln t = \Nc - \Nc \,\ln\Nc -(1/2)\ln2 $. Thus we obtain the lower cutoff $\tilde\rt_\mathrm{lower}\simeq1/\Nc$, i.e. $$\tilde\tlow \simeq \frac{1}{\Nc g} \:.$$ For the proper times we write $ c_\Nc\,\rt^{-2\Nc-1/2}\,{\mathrm{e}^{-1/\rt}} = b_\Nc\, \rt^{-3/2} $, leading to the equation $ {1}/{\rt} + 2\Nc \,\ln t = 2\Nc - 2\Nc \,\ln\Nc -2\ln2 $, i.e. $\rt_\mathrm{lower}\simeq1/(2\Nc)$. The cutoff for the proper time is half the cutoff for the partial times $$\label{eq:DefLowerCutoffProper} \tlow \simeq \frac{1}{2\Nc g} \simeq \frac{1}{2}\,\tilde\tlow \:.$$ ### Summary of the limiting behaviours.
In conclusion, we have seen that the marginal distribution presents three limiting behaviours : $$\label{eq:LimitsMarginalPartialTimes} \hspace{-2cm} {\tilde{q}_{\Nc,\beta}}(\rt) \simeq \frac{1}{2g} {\tilde{w}_{\Nc,\beta}}\left(\tau\simeq\frac{t}{2g}\right) \underset{g\to\infty}{\simeq} \left\{ \begin{array}{ll} \displaystyle \tilde{c}_\Nc\,\rt^{-\beta\Nc/2-3/2}\,{\mathrm{e}^{-1/\rt}} & \mbox{for } \rt\lesssim 1/\Nc \\[0.25cm] \displaystyle b_\Nc \,\rt^{-3/2} & \mbox{for } 1/\Nc \lesssim\rt \lesssim g^2/\Nc \\[0.25cm] \displaystyle \frac{a_\Nc}{g^3}\,\left(\frac{g^2}{\rt}\right)^{2+\beta\Nc/2} & \mbox{for } \rt\gtrsim g^2/\Nc \end{array} \right.$$ where the three coefficients are $$\begin{aligned} \label{eq:CoeffCn} \tilde{c}_\Nc &= \frac{1}{\sqrt{\pi}\,\Gamma(1+\beta\Nc/2)} \:, \\ \label{eq:CoeffBn} b_\Nc &= \frac{1}{\pi}\, \frac{\Gamma(1/2+\beta\Nc/2)}{\Gamma(1+\beta\Nc/2)} \:, \\ \label{eq:CoeffAn} a_\Nc &= \frac{2^{1+\beta\Nc}}{\sqrt\pi}\, \frac{\Gamma(1/2+\beta\Nc/2)}{\Gamma(1+\beta\Nc/2)^2} \:.\end{aligned}$$ The marginal distribution of the proper times is only known in the unitary case : $$\label{eq:LimitsMarginalProperTimes} \hspace{-2cm} {q_{\Nc,2}}(\rt) \simeq \frac{1}{2g} {w_{\Nc,2}}\left(\tau\simeq\frac{t}{2g}\right) \underset{g\to\infty}{\simeq} \left\{ \begin{array}{ll} \displaystyle c_\Nc\,\rt^{-2\Nc-1/2}\,{\mathrm{e}^{-1/\rt}} & \mbox{for } \rt\lesssim 1/\Nc \\[0.25cm] \displaystyle b_\Nc \,\rt^{-3/2} & \mbox{for } 1/\Nc \lesssim\rt \lesssim g^2/\Nc \\[0.25cm] \displaystyle \frac{a_\Nc}{g^3} \left(\frac{g^2}{\rt}\right)^{\Nc+2} & \mbox{for } \rt\gtrsim g^2/\Nc \end{array} \right.$$ where the coefficient obtained above is $$c_\Nc = \frac{2^{2(\Nc-1)}}{\sqrt{\pi}\,\Nc(2\Nc-1)!} \:.$$ ### Moments of partial times and proper times. We have recalled in the introduction the variance of the partial and proper times. In particular, in the unitary case, we have seen that the second moment is ${\left\langle \tilde{\tau}_a^2\right\rangle}\simeq g/\Nc^2$ for weak coupling $g\gg1$ (compared to $\simeq1/\Nc^3$ for perfect coupling $g=1$). We now analyse in more detail the moments of the partial times and of the proper times in the weak coupling limit in the unitary case. #### Positive moments. In the weak coupling regime, the distributions ${w_{\Nc,\beta}}(\tau)$ and ${\tilde{w}_{\Nc,\beta}}(\tau)$ coincide for $\tau\gg1/g$, i.e. the part of the distributions which controls the *positive* moments : $${\langle \tau_a^k\rangle}\simeq{\langle \tilde\tau_a^k\rangle} \hspace{1cm}\mbox{for } k<1+\beta\Nc/2$$ (${\langle \tilde\tau_a^k\rangle}={\langle \tau_a^k\rangle}=\infty$ for $k{\geqslant}1+\beta\Nc/2$). The calculation of the moments is dominated by the $\tau^{-3/2}$ tail, cut off by the faster decay $\tau^{-2-\beta\Nc/2}$ above $\tup$, where the cutoff was determined above. We can estimate the positive moments as $${\left\langle \tau_a^k\right\rangle} \simeq \int^{\tup}\D\tau\,\frac{b_\Nc}{\sqrt{g}}\,\tau^{k-3/2} \sim \frac{b_\Nc}{\sqrt{g}}\,\tup^{k-1/2}$$ leading to the typical scale $${\left\langle \tau_a^k\right\rangle}^{1/k} \sim \frac{1}{\Nc^{1/k}}\, \tup^{1-1/k} \sim \frac{g^{1-1/k}}{\Nc} \sim \frac{1}{\Nc\,\coupl^{1-1/k}}$$ for $k<1+\beta\Nc/2$. #### Negative moments. The *negative* moments are controlled by the lower cutoff introduced above. We can write $${\left\langle \tau_a^{-k}\right\rangle} \simeq \int_{\tlow}\D\tau\,\frac{b_\Nc}{\sqrt{g}}\,\tau^{-k-3/2} \sim \frac{b_\Nc}{\sqrt{g}}\,\tlow^{-k-1/2} \sim (\Nc g)^k$$ i.e.
$${\left\langle \tau_a^{-k}\right\rangle}^{-1/k} \simeq 2^{-1-1/(2k)} {\left\langle \tilde\tau_a^{-k}\right\rangle}^{-1/k} \sim \tlow \sim \frac{1}{\Nc g} \sim \frac{\coupl}{\Nc} \:.$$

[^1]: It is worth noting that the Vandermonde determinant in the denominator can be further simplified as $$\Delta_\Nc\left(\fss\,\frac{1+k^2}{1+\fss^2k^2}\right) = \left[\fss(1-\fss^2)\right]^{\Nc(\Nc-1)/2} \frac{\Delta_\Nc(k^2)}{\prod_n(1+\fss^2k_n^2)^{\Nc-1}}$$ leading to a simpler representation of equation  : $$\begin{aligned} \hspace{-2.5cm} P(\{\einvQ_n\}) \propto \Delta_\Nc(\einvQ) \,\prod_n {\theta_\mathrm{H}}(\einvQ_n)\, \einvQ_n^\Nc \, \int \D k_1\cdots\D k_\Nc\, \frac{\Delta_\Nc(k)^2}{\Delta_\Nc (k^2)} \prod_n \frac{(1+k_n^2)^{\Nc}}{(1+\fss^2k_n^2)^{\Nc+1}} \nonumber \det\left[ \exp\left( -\fss\,\frac{1+k_i^2}{1+\fss^2k_i^2}\,\einvQ_j \right) \right].\end{aligned}$$
--- author: - 'J.G. Martí' - 'C. Beaugé' title: 'Stellar scattering and the origin of the planet around $\gamma$-Cephei-A' --- Introduction ============ The study of planetary formation in binary stellar systems is an intriguing and challenging task. Although the majority of stars are members of such systems (Duquennoy & Mayor 1991), only a handful of exoplanets have so far been detected around compact binaries (i.e. stellar separation below $\sim 50$ AU). Even so, their very existence appears to be inconsistent with models and theories that work for single stars, unless some crucial parameters are pushed to extreme values. The case of the $\gamma$-Cephei binary is particularly interesting because it is one of the most extreme systems that harbors a giant planet. The stellar system is formed by a primary star with a mass of $M_{A} = 1.6 M_{\odot}$ and a secondary with a mass of $M_{B} = 0.34 M_{\odot}$. The $M_{B}$ body orbits around $M_{A}$ with a semimajor axis of $a_{AB} = 18.5 \; \textrm{AU}$ and an eccentricity of $e_{AB} = 0.36$ (Hatzes et al. 2003). The giant planet that orbits $M_{A}$ has been the object of many studies over the past decade (e.g. Marzari & Scholl 2002; Thébault 2004; Thébault 2006; Haghighipour & Raymond 2007; Paardekooper et al. 2008; Beaugé et al. 2010; etc.) and we have yet to find a suitable physical scenario to explain its existence. Artymowicz and Lubow (1994) showed that the circumstellar disk is tidally truncated by the companion star. This truncation occurs at a location close to the outer limit for dynamical stability of solid particles, and therefore safely lies beyond the position of all detected exoplanets. It nevertheless poses a problem, especially for giant planet formation, because it deprives the disk of a large fraction of its mass. Another consequence of disk truncation is that it shortens the lifetime of the disk, and in turn the timespan for gaseous planet formation (Cieza et al. 2009). To complicate the scenario even more, Nelson (2000) suggested that for an equal-mass binary of separation $50$ AU, temperatures in the disk might stay too high to allow grains to condense. Even if grains can condense, binary perturbations might impede their growth by increasing the impact velocities beyond the disruption limit (e.g. Marzari & Scholl 2000). Although gas drag helps to introduce a phase alignment in the pericenter and, thus, to decrease the relative velocities of nearby bodies, this appears to work only for equal-size planetesimals (e.g. Thebault et al. 2006) and circular gas disks. Eccentric and even precessing gas disks complicate the picture still more (Paardekooper et al. 2009; Marzari et al. 2009; Marzari et al. 2012), although it may indeed work in our favor in the outer parts of the disk (Beaugé et al. 2010). In recent years several studies have been performed on the role of mutual inclinations between the planetesimals, the gas disk, and the binary companion (Marzari et al. 2009b; Xie et al. 2010; Thebault et al. 2010; Müller & Kley 2012; Zhao et al. 2012). However, even this additional degree of freedom does not appear to be a solution and, in the particular case of $\gamma$-Cephei, does not seem to favor planetary formation for bodies at $a > 2$ AU, which is expected to be the breeding ground for the planet detected in this system.
In a recent review of the state of the art in this problem, Thebault (2011) wondered whether the whole idea of core-instability planetary formation is feasible in such systems, or if we must resort to gas-instability models (Duchene 2010). However, as Thebault (2011) points out, “the disk instability scenario also encounters severe difficulties in the context of close binaries, and it is too early to know if it can be a viable alternative formation channel”. So, the question remains: How did the planet manage to form around the star $\gamma$-Cephei-A? Here we [**revisit**]{} an alternative route, based on the concept of stellar scattering. We assume that planet $m_{p}$ formed (by one of the accepted models of planetary formation) around $M_{A}$, but in a different dynamical configuration in which the star $M_{A}$ is single or the orbit of the companion $M_{B}$ is sufficiently wide to allow the formation of the planetary system. This scenario is not new. Pfahl (2005) and Portegies Zwart & McMillan (2005) discussed such a mechanism to explain the origin of a planet then believed to exist in the HD188753 system (Konacki 2005), although the existence of this body was subsequently questioned and is so far unconfirmed. A slightly different analysis was later presented by Marzari & Barbieri (2007a, 2007b), where the authors studied the survival of circumstellar planets in dynamically unstable triple stellar systems or in binaries that suffered a close encounter with a third (background) star. These papers explored, for the first time, the possibility that the current configuration of the $\gamma$-Cephei stellar components may not be primordial, and that the exoplanet may have been formed under different circumstances. In this paper we again address the proposal of Marzari & Barbieri (2007a), and study the effects of hyperbolic fly-bys of background stars in a binary system similar to $\gamma$-Cephei. The idea is to see under which initial conditions and system parameters it is possible to transform a primordial accretion-friendly stellar system to the current (accretion-hostile) configuration. Since our study is dedicated to a single binary system, we are able to explore a larger set of parameters and situations, as well as different types of outcomes. We also include a statistical analysis of the results considering distribution probabilities for impacting velocities and stellar masses. The work is organized as follows. In Section 2 we review the idea of gravitational scattering, with special emphasis on hierarchical three-body systems. Section 3 presents our N-body code and some test runs to analyze the importance of the free parameters in the scattering outcomes. Our main numerical simulations are discussed in Section 4, while a probability analysis of positive outcomes is presented in Section 5. Conclusions finally close the paper in Section 6. Gravitational scattering in hierarchical three-body systems {#sec2} =========================================================== The gravitational interaction between a star and a binary system is a classical problem in stellar dynamics (e.g. Heggie 1975; Hills 1975; Hut & Bahcall 1983; Heggie & Hut 1993; Portegies Zwart et al. 1997; etc.), although most of this work has focused on the relationship between binaries and the dynamical evolution of open clusters.
The same principles, however, may also be applied to other problems, such as the origin of exoplanets in multiple stellar systems (Pfahl 2005, Portegies Zwart & McMillan 2005) and the dynamical evolution of “cold” exoplanets with orbits very distant from the central star (e.g. Veras et al. 2009). The hyperbolic two-body problem ------------------------------- We assume a compact stellar binary system with masses $m_A$ and $m_B$, whose orbital separation is small compared with the initial distance of an incoming star $m_C$. In this scenario, and at least for the initial conditions of the hyperbolic encounter, we may approximate the two binary components by a unique body of mass $m_A+m_B$ (see Hut & Bahcall 1983), and refer the orbit of the impactor with respect to the barycenter of the binary. In classical scattering problems, the outcome of the fly-by may be defined by two fundamental parameters: the infall velocity of the incoming body at infinity $v_{\infty} = |\dot{\textbf{r}}(t \to -\infty)|$ and the impact parameter $b$, defined as the minimum distance between the two bodies if there were no deflection. Alternatively, it is possible to substitute $b$ with the minimum distance $r_{min}=a(1-e)$ between the impactor and the center of mass of the pair $m_A+m_B$. Following the approach developed by Hills (1975), we can approximate the total orbital energy of the three-star system, prior to the encounter, by [ $$E_{\rm tot} = E_{AB} + E_C = -\frac{\kappa}{2a_{AB}} + \frac{1}{2} \mu v_\infty^2, \label{eq13}$$ ]{} where both $\kappa$ and $\mu$ are two mass factors defined according to [ $$\kappa = {\cal G}\, m_A m_B \hspace*{0.4cm} ; \hspace*{0.4cm} \mu = \frac{(m_A + m_B)\, m_C}{m_A + m_B + m_C} \label{eq12}.$$ ]{} In equation (\[eq13\]) $E_{AB}$ is the binding energy of the binary and $E_C$ is the kinetic energy of the impactor. $a_{AB}$ is the semimajor axis of $m_B$ with respect to $m_A$. We assume that initially $m_C$ is sufficiently far from the other two that we can neglect its potential energy. The same calculation can also be made after the scattering event has taken place, now in terms of the new semimajor axis of the binary ($a'_{AB}$) and the outgoing velocity of $m_C$ (which we will denote by $v'_\infty$). We can then use the conservation of $E_{\rm tot}$ to write (see Heggie 1975) [ $$\frac{1}{2} \mu {v'}_\infty^2 = \frac{1}{2} \mu v_\infty^2 + \Delta E_{AB}, \label{eq14}$$ ]{} where $\Delta E_{AB}$ is the change in the binding energy of the binary. The minimum velocity $v_c$ of the impactor necessary to disrupt the binary is such that $\Delta E_{AB} = E_{AB}$ or, equivalently, $E_{\rm tot} = 0$. This gives (Hut & Bahcall 1983) [ $$v_c^2 = \frac{\kappa}{\mu} \frac{1}{a_{AB}} = \frac{{\cal G} m_A m_B (m_A + m_B + m_C)}{(m_A+m_B)m_C} \frac{1}{a_{AB}} . \label{eq15}$$ ]{} It will prove useful to write the incoming velocity of the impactor $v_{\infty}$ in units of $v_c$ [ $$v^* = \frac{v_{\infty}}{v_c}, \label{eq16}$$ ]{} which gives a dimensionless expression for the incoming velocity. Furthermore, $v_{\infty}$ can be written in terms of the semimajor axis of the hyperbolic orbit as: [ $$v_{\infty}^2 = \frac{{\cal G}(m_A+m_B+m_C)}{a}, \label{eq17}$$ ]{} where $a$ is the semimajor axis of $m_C$ with respect to the center of mass of the binary $m_A+m_B$. Recalling expression (\[eq15\]), we may now write the relationship between $a$ and $v^*$ as [ $$a = -a_{AB} \frac{m_C(m_A+m_B)}{m_A m_B} \frac{1}{{v^*}^2}.
\label{eq18}$$ ]{} Similarly, we can obtain the eccentricity of the hyperbolic orbit of $m_C$ as [ $$e = 1 - \frac{r_{min}}{a} . \label{eq19}$$ ]{} Finally, if we define $\epsilon_{0}$ as the value of the eccentric anomaly at the beginning of the simulation (when $r = r_0$), we can calculate the initial position and velocity of $m_C$ using the usual expressions for hyperbolic motion (e.g. Butler 2005). Complete set of variables for the scattering problem ---------------------------------------------------- The set $(v^*,r_{min},\epsilon_0)$ defines the initial conditions for the impactor $m_C$, with respect to the center of mass of $m_A + m_B$, assuming that the differential gravitational interactions from both components do not significantly perturb the two-body hyperbolic orbit (see Hut & Bahcall 1983 for a similar approach). However, even if this condition is not satisfied, and the orbit of the incoming star is seriously affected by the two central bodies, the values of $(v^*,r_{min})$ are still indicative of the “unperturbed” motion of $m_C$, and therefore may still be referred to as variables of the problem. To complete the description of the system, we must give the initial conditions for each of the binary components at a given time. These can be uniquely specified by the set of $m_A$-centric orbital elements of $m_B$: $(a_B,e_B,I_B,\lambda_B,\omega_B,\Omega_B)$, where the inclination $I_B$ is given with respect to the orbital plane of the incoming impactor. Since the value of $\epsilon_0$ is already a variable of the system, it appears less complicated to specify the value of the mean longitude $\lambda_B$ at the time of closest approach of the impactor (i.e. $r=r_{min}$). From the two-body solution we can then calculate the value of the mean longitude at the initial time $\epsilon=\epsilon_0$ and calculate the position and velocity vectors of $m_A$ and $m_B$ at the beginning of the simulation. As with the initial conditions for $m_C$, it is not expected that the orbits of the binary stars remain constant in time. Accordingly, the values for the orbital elements adopted as initial conditions are also “unperturbed” values, in the same way as the impact parameter $b$ is the unperturbed minimum distance between the projectile and the target. Last of all, and since our ultimate aim will be to study the exoplanet detected around $\gamma$-Cephei-A, we will also assume that a small planetary mass $m_p$ orbits the central star $m_A$. The initial $m_A$-centric orbit of the planet will be circular and specified by values of $(a_p,I_p,\lambda_p,\Omega_p)$, where $\lambda_p$ is once again given at the time when $r_C=r_{min}$. To summarize, we need to set initial values for 16 independent variables in order to get a full description of the scattering event. These are $$\begin{aligned} && (m_A,m_B,m_C,m_p) \;+\; (r_{min},v^*) \;+\; \\ && (a_B,e_B,I_B,\lambda_B,\omega_B,\Omega_B) \;+\; (a_p,I_p,\lambda_p,\Omega_p) \nonumber. \label{eq21}\end{aligned}$$ We recall that the $m_A$-centric orbital elements of $m_B$ are assumed to correspond to the point at which the impactor would have its closest approach if it had zero mass. Of course, previous knowledge of the dynamical system implies that some variables are already known, such as most of the masses and the main orbital elements. However, this still leaves us with a significant number of degrees of freedom to explore.
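Before moving on, note that the two-body relations (\[eq15\])–(\[eq19\]) that set up the unperturbed orbit of the impactor are straightforward to evaluate numerically. The short sketch below is our own illustration (not part of the original work): it builds the hyperbolic elements from a choice of $(r_{min}, v^*)$ using ${\cal G}=4\pi^2$ in units of AU, $M_\odot$ and years, and the adopted $m_C$, $r_{min}$ and $v^*$ are merely illustrative values taken from the ranges explored later in the test runs.

```python
import numpy as np

G = 4.0 * np.pi**2                 # AU^3 / (M_sun yr^2), so velocities come out in AU/yr

# gamma-Cephei-like parameters adopted later in the paper (masses in M_sun, lengths in AU)
mA, mB, mC = 1.6, 0.4, 0.4
aAB, eAB = 18.5, 0.36

def hyperbolic_elements(r_min, v_star):
    """Unperturbed elements of the impactor orbit from (r_min, v*), eqs. (15)-(19)."""
    vc = np.sqrt(G * mA * mB * (mA + mB + mC) / ((mA + mB) * mC * aAB))   # eq. (15)
    v_inf = v_star * vc                                                   # eq. (16)
    a = -aAB * mC * (mA + mB) / (mA * mB) / v_star**2                     # eq. (18), a < 0
    e = 1.0 - r_min / a                                                   # eq. (19), e > 1
    return vc, v_inf, a, e

QB = aAB * (1.0 + eAB)                      # apocentric distance of the binary
vc, v_inf, a, e = hyperbolic_elements(r_min=2.0 * QB, v_star=0.5)
print(f"v_c = {vc:.2f} AU/yr, v_inf = {v_inf:.2f} AU/yr, a = {a:.1f} AU, e = {e:.2f}")
```

From $(a,e)$ and a chosen initial eccentric anomaly $\epsilon_0$, the starting position and velocity of $m_C$ then follow from the standard hyperbolic two-body formulas mentioned above.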
The numerical code {#sec3} ================== Once we set the values for the stellar masses and initial conditions, we integrated the equations of motion using an N-body code to simulate the scattering event. Dynamically, the problem may be characterized as a hierarchical (or nested) full four-body problem, in which all mutual gravitational interactions are taken into account. For the simulations described in this work we employed a code constructed around a Bulirsch-Stoer integrator with an error tolerance of $ll=13$. Since the scattering event is expected to be extremely sensitive to the initial conditions, especially if the impact parameter is small, we performed a series of preliminary runs that included back integrations from the final configurations. In all cases there was no significant difference between the two branches of the same trajectory. Our working hypothesis is that the current $\gamma$-Cephei system could be the outcome of a stellar scattering event that either modified the orbit of the secondary star or caused an exchange of massive bodies. To study whether this is possible, we employed the concept of [*reverse integration*]{}. In other words, our initial conditions were the present-day orbital configuration of the $\gamma$-Cephei system plus an additional star $m_C$ in an arbitrary outward-bound hyperbolic orbit. We then integrated the system backwards in time to reproduce the hypothetical scattering event and find the “initial conditions” that led to the present system. It is important to stress that this procedure can only be employed in purely conservative systems where the equations of motion are time-reversible. Therefore, we did not consider the effects of any non-conservative force such as gas drag or tidal effects between the bodies. We considered that a simulation yields a positive result when the following two conditions are met: - The planet $m_p$ remains bound to $m_A$ in an orbit with moderate to low eccentricity (e.g. $e_p < 0.1$).\ - The region around $m_p$ becomes compatible with planetary formation according to the standard core-accretion scenario. This could either be a consequence of a significant increase in the semimajor axis of the secondary binary component, or a stellar exchange in which $m_A$ becomes a single star. Since the equations of motion of the N-body problem are time-reversible, the final outcomes of our simulations would then constitute possible initial conditions that could explain the current planetary system. Possible outcomes ----------------- Depending on the relative magnitudes of the kinetic energy $\frac{1}{2}\mu v_\infty^2$ and $\Delta E_{AB}$, we can list five possible outcomes of a hyperbolic fly-by: de-excitation, excitation, resonance, ionization, and exchange. The reader is referred to Heggie (1975) for more details, including explicit conditions for each type. Not all possible outcomes are relevant for the case at hand. For example, [*resonance*]{} is a transitory state, at least in the point-mass approximation, and following complex interactions between all three stars, at least one will leave the system in a hyperbolic orbit. [*Ionization*]{} implies that each of the three stars acquires unbounded motion with respect to the other two; the original binary is then destroyed, but not replaced. Since in our case we worked with a reverse temporal axis, this would mean that the present-day $\gamma$-Cephei is the outcome of a triple encounter of isolated stars, a highly improbable situation, to say the least.
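In practice, these outcomes can be told apart, to a first approximation, by checking which pairs of stars remain gravitationally bound once the bodies are well separated at the end of a run. The sketch below is our own illustration of such a bookkeeping step, not the code used by the authors; a complete classification would also track the planet and verify that the surviving hierarchy is dynamically stable.

```python
import numpy as np

G = 4.0 * np.pi**2            # AU^3 / (M_sun yr^2)

def pair_energy(m1, m2, r1, r2, v1, v2):
    """Two-body orbital energy per unit reduced mass; negative means the pair is bound."""
    dr = np.linalg.norm(np.asarray(r1) - np.asarray(r2))
    dv = np.linalg.norm(np.asarray(v1) - np.asarray(v2))
    return 0.5 * dv**2 - G * (m1 + m2) / dr

def bound_pairs(masses, positions, velocities):
    """Labels of the star pairs that end up bound (positions in AU, velocities in AU/yr)."""
    labels = ("A", "B", "C")
    pairs = []
    for i in range(3):
        for j in range(i + 1, 3):
            if pair_energy(masses[i], masses[j], positions[i], positions[j],
                           velocities[i], velocities[j]) < 0.0:
                pairs.append(labels[i] + labels[j])
    return pairs
```

A run ending with only the pair AB bound corresponds to an excitation or de-excitation, only AC bound to an exchange in which the impactor replaces the secondary, only BC bound to an exchange that leaves the primary single, and no bound pair at all to ionization.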
Since our aims are more restrictive and focused on finding possible formation mechanisms for $\gamma$-Cephei-A-b, we prefer to redefine the possible outcomes according to what Pfahl (2005) calls “scenarios”. These are: - [*Type I*]{}: The impactor $m_C$ captures the secondary component $m_B$ during the encounter, leaving behind $m_A$ as a single star with the planet still bound in a quasi-circular orbit.\ - [*Type II*]{}: While there is no exchange in the memberships, the binary $m_A+m_B$ acquires a sufficiently wide orbit to allow planetary formation in the region around $m_p$.\ - [*Type III*]{}: The incoming star $m_C$ takes the place of $m_B$, which is expelled from the system. The new binary $m_A+m_C$ is sufficiently wide to allow planetary formation around the location of $m_p$. Both types I and III are cataloged as [*exchange*]{} in Heggie’s nomenclature, while type II corresponds to an [*excitation*]{}. Marzari & Barbieri (2007a) only discussed type-II outcomes. In addition to the types listed before, we also found a few cases of three other types of outcomes. Although these were very rare and have only a negligible effect on the statistical analysis performed later on, we will nevertheless include them here for completeness and for a full understanding of the complexity of the problem. These will be referred to as types IV to VI, and correspond to the following scenarios: - [*Type IV*]{}: The impactor $m_C$ not only captures the secondary component $m_B$ during the encounter (which would be cataloged as type I), but also the planet that originally orbited $m_A$. The final configuration then includes a new binary $m_B+m_C$ with the planet orbiting $m_C$.\ - [*Type V*]{}: The impactor $m_C$ carries away both $m_A$ and its planet $m_p$, but in the final configuration the planet now orbits $m_C$.\ - [*Type VI*]{}: The passage of the impactor $m_C$ only captures the planet from the original binary, leaving behind a planet-less $m_A+m_B$ system. As mentioned in the previous section (see equation (\[eq21\])), we need to specify a total of 16 parameters/variables. Of these, however, several will be fixed to represent the $\gamma$-Cephei system. For the masses we adopted the values [ $$m_A = 1.6 m_\odot \hspace*{0.2cm} ; \hspace*{0.2cm} m_B = 0.4 m_\odot \hspace*{0.2cm} ; \hspace*{0.2cm} m_p = 1.7 m_{\rm Jup}, \label{eq22}$$ ]{} while for the $m_A$-centric orbit of $m_B$ we assumed: [ $$a_B = 18.5 \; {\rm AU} \hspace*{0.2cm} ; \hspace*{0.2cm} e_B = 0.36. \label{eq23}$$ ]{} Finally, for the $m_A$-centric semimajor axis of the planet $m_p$ we assumed $a_p=2.1$ AU. The remaining ten parameters were considered to be unknown and free to vary in our series of runs. Test runs --------- Not all variables are created equal, and some are expected to be more influential in determining the outcome of the scattering event. The aim of this preliminary series of runs is to estimate how each parameter affects the dynamics and which values may be preferential. We began with three series of runs, each with a fixed value of the impactor mass. The values adopted for each run are $m_C=0.2$ (series 1), $m_C=0.4$ (series 2) and $m_C=1.0$ (series 3), with values given in solar masses. All orbits are assumed to lie on the same plane (i.e. $I_i = 0$, and $\Omega_i$ indeterminate) and the remaining angles are taken equal to zero.
At the beginning of each integration, the impactor is set at a distance from the binary equal to $r_0=50 Q_B$, where $Q_B = a_B(1+e_B)$ is the apocentric distance between both components. Since the orbital elements of the planet and binary are given at $r=r_{min}$, the outcome of the scattering event is not dependent on the value of $r_0$, as long as it is chosen to be sufficiently large. Thus, in each series we have only two free parameters left: $(r_{min},v^*)$. For each series we chose a total of $2.5 \times 10^{5}$ initial conditions from an equispaced $500 \times 500$ grid within the intervals: $$\begin{aligned} 1.1 \leq & v_0/v_{esc} & \leq 10 \\ 0.1 \leq & r_{min}/Q_B & \leq 4 \nonumber , \label{eq24}\end{aligned}$$ where $v_0$ is the magnitude of the initial velocity at $r=r_0$ and $v_{esc}$ the escape velocity at that point. Thus, we consider initial speeds ranging from almost parabolic orbits to highly hyperbolic fly-bys. After some simple algebraic calculations, we find that the corresponding interval in $v^*$ is approximately $0.08 \leq v^*\leq 1.7$. Figure \[fig2\] shows the post-scattering $m_A$-centric orbital elements of the secondary star of the resulting binary system. Since the central star becomes isolated in type-I interactions, only type II (left) and type III (right) are analyzed. In the first case, $m_B$ remains bound to $m_A$, so the eccentricity and semimajor axis plotted are those of $m_B$. In type-III outcomes, however, the final binary is now $m_A+m_C$, so the orbital elements are those of $m_C$ with respect to $m_A$. Black squares show those runs in which the planetary orbit remained almost circular, while those in which $m_p$ was severely perturbed are shown in brown. In most cases the resulting distribution is V-shaped, roughly delimited on the left by apocentric distances $Q \ge 25$ AU, which corresponds to the apocentric distance of the original binary system. The black squares, on the other hand, are mostly located with $q \ge 12$ AU, which is also the pericentric distance of the original system. It therefore appears that if the new binary has a lower value of $q$, roughly independent of the eccentricity, its gravitational perturbations on the planet would be sufficient to increase its eccentricity beyond the imposed limit $e_p=0.1$. Of course this analysis is qualitative and does not rigorously represent every solution, but it does give an overall picture of the behavior. Not all black squares are categorized as positive solutions. As mentioned previously, the final system must also allow for planetary formation, i.e., an in-situ accretion of $m_p$. Since the present-day $\gamma$-Cephei system is too compact to allow for accretional collisions in the planetesimal swarm (e.g. Thébault et al. 2004; Paardekooper et al. 2008), we must ask about the minimum distance between binary components at which this may be avoided. Unfortunately, there is no easy answer to this question, but we may obtain a rough idea from previous numerical studies of planetesimal dynamics in binary systems. Thébault et al. (2006) presented a detailed calculation of the impact velocity of different-sized planetesimals located at $a = 1$ AU in generic binary systems with $m_A=1$ and $m_B = 0.5$, both in units of solar mass. For every point in a grid of values of $(a_B,e_B)$ these authors calculated the average impact velocity $\Delta v$ between a $2.5$ km and a $5$ km solid body, and drew level curves of constant $\Delta v$.
According to the authors, accretional collisions are guaranteed for $\Delta v < 10$ m/s, denoting regions where the encounter speeds are practically unaffected by the secondary star. Collisions with $10 < \Delta v < 100$ m/s are less certain, but may still allow for planetary formation even if some erosion is expected. Since the impact velocity at any given location is primarily dictated by the forced eccentricity $e_f$ excited by the secondary star on the planetesimal swarm, we can use the results by Thébault et al. (2006) to extrapolate their findings to other binary systems. As determined by Heppenheimer (1978), a first-order expression (in the masses) for the forced eccentricity at a given semimajor axis $a$ may be written as [ $$e_f = \frac{5}{4} \frac{a}{a_B} \frac{e_B}{1-e_B^2}. \label{eq25}$$ ]{} A second-order expression was calculated recently by Giuppone et al. (2011), which depends explicitly on the stellar masses. However, the difference between both values is only relevant for high values of $e_f$, which is beyond the scope of the present study. Applying expression (\[eq25\]) to Thébault’s reference system, we can calculate the critical values of the forced eccentricity associated with $\Delta v = 10$ m/s and $\Delta v = 100$ m/s. We can then use these as reference values and, once again applying (\[eq25\]), estimate the values of $a_B$ and $e_B$ that give the same value of $e_f$ for a given semimajor axis $a$ in our case. Since the forced eccentricity increases with $a$ and $e_B$ but decreases with $a_B$, in principle we should expect accretional collisions for small values of $a$ and $e_B$ and for sufficiently wide binary orbits. To complete this setup, we must specify a value for $a$. Since the exoplanet detected in $\gamma$-Cephei has a semimajor axis of $2.1$ AU, even an in-situ formation with little planetary migration requires a sufficiently large feeding zone that would extend beyond the snow line to accrete the necessary mass. For the present study, we chose $a=4$ AU; in other words, we required that the entire region up to $4$ AU satisfies the conditions established in Thébault et al. (2006). Although this upper limit may appear rather arbitrary, we consider it as illustrative of the dynamics and not as an actual proposal for the formation process itself. The orange curves in each frame of Figure \[fig2\] show the values of the eccentricity and semimajor axis of the binary system that yield (within the approximation described above) the two values of the impact velocities mentioned before. All binaries located to the right and below each curve satisfy this ad-hoc condition of accretional collision. Figure \[fig3\] focuses on the type-II outcomes for $m_C=0.4 M_{\odot}$. The top frame shows the distribution of orbital parameters of the binary as a function of the minimum distance $r_{min}$ of the impactor, in units of $Q_B$. For illustration purposes, we have also drawn the curves of $q_B=a_B(1-e_B)=12$ AU and $Q_B=a_B(1+e_B)=25$ AU, both corresponding to the original values of the system. Thus, the original orbit is located at the intersection point between both curves. Since distant encounters (i.e. large values of $r_{min}$) imply lower perturbations, the final configuration remains close to the original orbit. However, as the distance decreases, the resulting orbits spread significantly, most of them increasing both $q_B$ and $Q_B$.
A comparison with the curves of constant $\Delta v$ also shows that the positive solutions occur primarily for impactors whose minimum distance lies between one and three times the apocentric distance of the original binary. Closer encounters are too destructive, while more distant fly-bys are not able to modify the system sufficiently to overcome the planetary formation constraints. The lower plot shows the distribution of final orbits as a function of the velocity at infinity of $m_C$, in units of the escape velocity of the system. Low incoming speeds cause a more or less random spread of solutions around the original configuration, mainly as a consequence of the resonance interactions. Higher speeds generate more ordered distributions. Once again, positive solutions are mainly found for intermediate values, between two and six times the escape velocity.

                                       $m_C = 0.2$   $0.4$    $0.6$   $0.8$   $1.0$
  ----------------------------------- ------------- -------- ------- ------- -------
  Type I                               –             0.5%     1.9%    3.3%    4.8%
  Type II ($\Delta v < 10$ m/s)        –             0.02%    0.01%   0.01%   0.01%
  Type II ($\Delta v < 100$ m/s)       1.9%          0.9%     0.3%    0.2%    0.2%
  Type III ($\Delta v < 10$ m/s)       –             0.1%     0.2%    0.03%   0.03%
  Type III ($\Delta v < 100$ m/s)      0.01%         1.0%     2.1%    0.9%    0.2%

  : Percentage of positive outcomes in the coplanar test runs, as a function of the impactor mass $m_C$ (in $M_\odot$).[]{data-label="table1"}

Finally, the percentages of positive results are summarized in Table \[table1\], where we have extended the series to include a total of five values of $m_C$ between $0.2 M_{\odot}$ and $1 M_{\odot}$. For most impactor masses, type I positive solutions are the most common because they involve no restrictions on the orbit of the outgoing binary. We recall that in this type of outcome the central star $m_A$ becomes isolated and the new binary is composed of $m_B+m_C$. All other types of solutions are rare, as already noted by Marzari & Barbieri (2007a), usually less than $1 \%$ of all scattering events. If we consider the very restrictive $\Delta v < 10$ m/s limit, only about $0.01 \%$ of the encounters yield final configurations suitable for planetary formation around $m_A$. Full numerical simulations {#sec4} ========================== We now repeated the simulations, this time with random values of all angular variables $(I_B, \lambda_B,\varpi_B,\Omega_B,\lambda_p,\varpi_p)$. In other words, we extended the experiments to the 3D case and considered arbitrary orientations of the orbits of both the binary and the planet. However, in all cases we considered the planet initially in the same orbital plane as the secondary, i.e. $I_p=I_B$ and $\Omega_p=\Omega_B$. One of the aspects we wished to study is the change in the mutual inclination between the planet and the binary perturber after the passage of the impactor. It was already noted in Marzari & Barbieri (2007a,b) that high inclinations between the impactor and the binary may lead to greater disruptions of the system, so these configurations may cause large variations in $I_p$, leaving behind planetary systems with high inclinations with respect to the plane of the binary. Figure \[fig4\] shows the eccentricity and semimajor axis distribution of the resulting binaries, where the left-hand plots show the type-II outcomes and the right-hand graphs correspond to type III. As with Figure \[fig2\], the symbols in brown show all the outcomes while those in black maintained the planet in an almost circular orbit ($e_p < 0.1$).
Compared with Figure \[fig2\], we see a greater proportion of positive results for type-II outcomes, but a significant decrease in positive results for type III (see Table \[table2\]). In fact, the number of type-III outcomes satisfying the $\Delta v < 10$ m/s accretion criterion for small planetesimals is less than $0.01 \%$ for all impactor masses. Results also show a serious reduction in the positive outcomes of type-I encounters, which are now typically only half those found in our test runs. Finally, Figure \[fig5\] shows the distribution of final inclinations $i_{mut}$ of the planetary orbit with respect to that of the binary component. In type-II outcomes (left-hand plots), the secondary remains the original companion $m_B$, therefore $i_{mut}$ measures the perturbation of the fly-by on the original system. Although most orbits remain relatively aligned, especially those corresponding to positive results, we also note a significant number of highly inclined orbits, some even retrograde with respect to the binary orbit. The right-hand plots in Figure \[fig5\] correspond to type-III outcomes, in which the new binary system is now composed of $m_A+m_C$. Because the new companion is captured and not primordial, in principle there should be little correlation between the orbits of $m_p$ and $m_C$; consequently the values of $i_{mut}$ should be fairly random. However, the results of the simulations, especially those classified as positive, show a marked correlation. This seems to indicate that only the fly-bys that occur near the orbital plane of the original binary lead to systems suitable for planetary formation. The difference in the plots between the black and red curves gives the impression that most of the encounters with large $I_B$ lead to highly eccentric planets, and positive results occur primarily with $I_B \sim 0$.

                                       $m_C = 0.2$   $0.4$    $0.6$   $0.8$   $1.0$
  ----------------------------------- ------------- -------- ------- ------- -------
  Type I                               –             0.2%     0.6%    0.9%    2.7%
  Type II ($\Delta v < 10$ m/s)        –             0.1%     0.2%    0.2%    0.4%
  Type II ($\Delta v < 100$ m/s)       0.9%          1.6%     2.3%    2.6%    5.5%
  Type III ($\Delta v < 10$ m/s)       –             –        –       –       –
  Type III ($\Delta v < 100$ m/s)      –             0.1%     0.1%    0.1%    0.1%

  : Same as Table \[table1\], but for the 3D simulations with randomly chosen angular variables.[]{data-label="table2"}

Distribution functions for the free parameters ============================================== To gain a more accurate understanding of the probability of forming the $\gamma$-Cephei system through the process explained in this paper, we must weight the results shown in the previous sections by assigning a distribution function (DF) to each of the free parameters, $r_{min}$, $v^*$ and $m_C$. As mentioned earlier, the parameter $r_{min}$ appears more suitable than the impact parameter $b$ for describing the two-body dynamics of the scattering problem. However, to find the corresponding DF, we must first relate $r_{min}$ to $b$ and $v^*$. For a hyperbolic Keplerian orbit we may write $$\begin{aligned} & r_{min} & = - a (e - 1) \nonumber \\ & b & = - a \sqrt{e^2 - 1} \\ & {v^*}^2 & = -\frac{\Lambda}{a}, \nonumber \label{eq26}\end{aligned}$$ where $a$ and $e$ are the semimajor axis and eccentricity of the orbit of the incoming body $m_C$ and $\Lambda = a_{AB} \; m_C(m_A+m_B)/(m_A m_B)$. Introducing the last two expressions of (\[eq26\]) into the first, and after some simple algebraic manipulations, we obtain [ $$r_{min}(b,v^*) = \frac{\Lambda}{{v^*}^2} \left(\sqrt{\left(\frac{b {v^*}^2}{\Lambda} \right)^2 + 1} - 1 \right).
\label{eq27}$$ ]{} Following Hut & Bahcall (1983) we assumed a distribution of close encounters uniform in $b^2$. To estimate the distribution probability of $v^*$, we first need to determine the DF of the relative encounter velocities between a single star and a binary. Again following Hut & Bahcall (1983), the thermal distribution function of velocities for stars of equal mass $m$ is [ $$f(v^*) = \left( \frac{2}{\pi} \right)^{1/2} \left(\frac{m}{kT} \right)^{3/2} {v^*}^2 \exp{\left(-\frac{m}{2kT}{v^*}^2\right)}, \label{eq28}$$ ]{} where $k$ is Boltzmann’s constant and $T$ is a temperature-like parameter in analogy with a velocity DF of gas particles. Below we will express these variables in terms of the thermal velocity dispersion $v_{th}$. Binaries formed from equal-mass stars have a combined mass $2m$, and a thermal DF [ $$f'(v^*) = \left( \frac{2}{\pi} \right)^{1/2} \left(\frac{2m}{kT'} \right)^{3/2} {v^*}^2 \exp{\left(-\frac{m}{kT'}{v^*}^2\right)}, \label{eq29}$$ ]{} where $T' = T$ for equipartition of translational energy, and $T' = 2T$ if the binaries have the same velocity dispersion as single stars (Hut & Bahcall 1982). The real value for $T'$ lies somewhere in the middle of these extremes. Finally, we can calculate the distribution of relative velocities of single stars in the instantaneous rest frame of a binary. It is again a Maxwellian distribution but with an effective mass $m^*$ and temperature $T^*$: $$\begin{aligned} m^* &=& \frac{2m^2}{m + 2m} = \frac{2}{3}m \\ T^* &=& \frac{m^*}{m}T + \frac{m^*}{2m}T' = \frac{2}{3}T + \frac{1}{3}T' \nonumber. \label{eq30}\end{aligned}$$ And we have for the velocities [ $$f^*(v^*) = \left( \frac{2}{\pi} \right)^{1/2} \left(\frac{m^*}{kT^*} \right)^{3/2} {v^*}^2 \exp{\left(-\frac{m^*}{2kT^*}{v^*}^2\right)}. \label{eq31}$$ ]{} From now on we will characterize the temperature $T^*$ by the thermal velocity dispersion in the relative velocities: [ $$s^2 = \left< {v^*}^2 \right> = 3\frac{kT^*}{m^*}. \label{eq32}$$ ]{} Although these expressions were obtained for equal-mass binaries, the same procedure can be extended to other masses. The velocity dispersion is the classical parameter used to characterize the relative velocities of different types of stellar systems. Typical values for $s$ are extremely dependent on the environment, and may be as low as $0.3$ km/s in open clusters, or up to two orders of magnitude higher in the solar neighborhood (see Binney & Tremaine 2008 for a detailed analysis). From equations (26), (\[eq27\]), (\[eq31\]) and (\[eq32\]), we can calculate the probability function $P(r_{min},v^*) \, dr_{min}\, dv^*$ for a given value of $s$. Results are shown in Figure \[fig6\] for three magnitudes of the velocity dispersion, and adopting values for $dr_{min}$ and $dv^*$ from a $100 \times 100$ grid of initial conditions in the plane. As expected, higher velocity dispersions increase the probability of encounters at higher values of $v^*$. For $s=1$ km/s, only encounters with $v_0 \sim v_{esc}$ are relevant and most of the solutions discussed in the lower plot of Figure \[fig3\] will have negligible weight in the overall outcome probability. On the other hand, for $s=10$ km/s the spread in $v^*$ is much larger and even high incoming speeds are statistically pertinent. We can now incorporate these distribution functions into the results shown in Table \[table2\] to give more realistic probabilities of positive outcomes. Before discussing these results, we must also incorporate a distribution function for the impactor mass. 
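The assembly of this weight is not spelled out explicitly in the paper; the following minimal sketch is our own illustration of one way to evaluate a relative weight on a grid of $(r_{min}, v^*)$ for a given dispersion $s$. It uses the fact that inverting equation (\[eq27\]) gives $b^2 = r_{min}^2 + 2\Lambda\, r_{min}/{v^*}^2$, so that a distribution uniform in $b^2$ translates into a density $2 r_{min} + 2\Lambda/{v^*}^2$ in $r_{min}$ at fixed $v^*$. The grid limits, the impactor mass and the value of $s$ (here quoted in units of $v_c$) are illustrative only.

```python
import numpy as np

# Masses (M_sun) and binary orbit (AU) as in the test runs; m_C is illustrative
mA, mB, mC, aAB, eAB = 1.6, 0.4, 0.4, 18.5, 0.36
Lam = aAB * mC * (mA + mB) / (mA * mB)        # Lambda of equation (26)

def weight(r_min, v_star, s):
    """Relative weight of an encounter (r_min, v*) for a velocity dispersion s.

    Encounters are taken uniform in b^2 (Hut & Bahcall 1983), which gives the
    Jacobian 2*r_min + 2*Lam/v*^2 in r_min; the relative speeds follow the
    Maxwellian of equation (31) with <v*^2> = s^2, i.e. ~ v*^2 exp(-3 v*^2 / 2 s^2).
    """
    jac = 2.0 * r_min + 2.0 * Lam / v_star**2
    maxwell = v_star**2 * np.exp(-1.5 * (v_star / s) ** 2)
    return jac * maxwell

QB = aAB * (1.0 + eAB)                         # apocentric distance of the binary
r_grid = np.linspace(0.1, 4.0, 100) * QB       # same r_min range as in the test runs
v_grid = np.linspace(0.08, 1.7, 100)           # the v* range quoted in the text
W = weight(r_grid[:, None], v_grid[None, :], s=0.5)
W /= W.sum()                                   # normalized weights over the grid
```

An analogous, independent weight for the impactor mass is taken from the stellar initial mass function discussed next.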
For this we have adopted the so-called universal initial mass function proposed by Kroupa (2001) which, in the mass interval between $\sim 0.1 M_{\odot}$ and $\sim 1 M_{\odot}$, can be expressed by a multiple-part power-law $\xi(m) \propto m^{-\alpha_i}$ with $$\begin{aligned} \alpha_1 &=& 1.3 \;\;\; {\rm if} \;\;\; 0.08 \le m/M_{\odot} < 0.5 \\ \alpha_2 &=& 2.3 \;\;\; {\rm if} \;\;\; 0.50 \le m/M_{\odot} \nonumber \label{eq33}\end{aligned}$$ and where $\xi(m) \, dm$ is the number of single stars expected in the mass interval $[m,m+dm]$. This distribution function is shallower than the classical Salpeter (1955) prescription for small stellar masses but, at least for our application, both give similar results. Taking into consideration all distribution functions, we can now estimate the final probability for favorable scattering outcomes, integrated over the impactor mass $m_C$ and initial values for $r_{min}$ and $v^*$. Results are shown in Table \[table3\] (which adds all positive type I + type II + type III outcomes), where we have considered three different possible values for the velocity dispersion $s$ of background stars. We have also performed separate calculations assuming that the maximum impact speed for planetesimals leading to accretional collisions is either $10$ m/s or $100$ m/s. We recall that the first is a very strict limit in which all collisions should be constructive (Thébault et al. 2006), while the higher impact velocity allows some disruptions to take place. Contrary to expectations, results are not extremely sensitive to the adopted value for $s$. For very stringent accretional restrictions, we find that around $1.5 \%$ of all stellar scattering events lead to favorable outcomes, while this number increases to $\sim 5.5 \%$ if we allow higher impact speeds. This seems to imply that given a “normal” planetary system consisting of either (i) a planet around a single star interacting with a binary stellar system, or (ii) a planet in a circumstellar orbit within a wide binary interacting with a single star, the final result could be a system similar to $\gamma$-Cephei in (roughly) $1\% - 5 \%$ of cases.

  ----------------------- ----------------- -------
  $\Delta v < 10$ m/s     $s = 1$ km/s       0.9%
                          $s = 5$ km/s       1.7%
                          $s = 10$ km/s      1.5%
  $\Delta v < 100$ m/s    $s = 1$ km/s       3.3%
                          $s = 5$ km/s       5.4%
                          $s = 10$ km/s      5.8%
  ----------------------- ----------------- -------

  : Percentage of positive outcomes for two limiting impact velocities $\Delta v$ for accretion and three different velocity dispersions $s$ for background stars. Values are integrated over distribution probabilities for stellar mass, impact speeds and cross-sections.[]{data-label="table3"}

Conclusions =========== The discovery of at least four exoplanets in circumstellar orbits in tight binary systems has raised the question of their formation history. So far, all attempts to explain their origin via normal in-situ formation processes have been unsuccessful, and we wonder whether other more exotic mechanisms may have played a role. Following the suggestion of Marzari & Barbieri (2007a), in this work we have analyzed whether these systems, in particular $\gamma$-Cephei, could have begun their lives in other dynamical configurations more suitable for planetary formation, attaining their present structure later on as a consequence of a close encounter with a background singleton.
Following a series of numerical simulations of time-reversed stellar encounters, whose results were later weighted adopting classical expressions for probability distributions for stellar mass, relative velocities and impact cross-sections, we have found that between $1$ and $5$ percent of fly-bys between a planetary system and background stars could lead to planets in tight binary systems. Although this number may not seem high, it could in fact explain why planets in tight binaries are not more numerous, which would indeed be the case if in-situ planetary formation were possible. However, we must remain cautious. Although we have attempted to give some estimate about the probability outcomes, given a sufficiently large number of initial conditions and values in the parameter space, it is possible to obtain almost any result from a scattering event. This does not mean that the event actually happened or that the explanation lies there. However, as Sir Arthur Conan Doyle said through his character Sherlock Holmes: [*“Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth.”*]{} Acknowledgments {#acknowledgments .unnumbered} =============== This work has been partially supported by the Argentinian Research Council -CONICET-, and by the Universidad Nacional de Córdoba (UNC). The authors wish to express their gratitude to Francesco Marzari for helpful suggestions and a detailed review of this work. Artymowicz, P., Lubow, S.H. 1994. ApJ, 421, 651. Beaugé, C., Leiva, A.M., Haghighipour, N., Correa-Otto, J. 2010. MNRAS, 408, 503. Binney, J., Tremaine, S. 2008. “Galactic Dynamics”, Princeton University Press, New Jersey, USA. Butler, G. 2005. “Methods of Celestial Mechanics”, Springer-Verlag, Germany. Cieza, L.A., Padgett, D.L., Allen, L.E., McCabe, C.E., Brooke, T.Y., Carey, S.J., Chapman, N.L., Fukagawa, M., Huard, T.L., Noriga-Crespo, A., Peterson, D.E., Rebull, L.M. 2009. ApJL, 696, L84. Duchene, G.. 2010. Highlights of Astronomy, 15, 764. Duquennoy, A., Mayor, M. 1991. A&A, 248, 485. Giuppone, C.A., Leiva, A.M., Correa-Otto, J., Beaugé, C. 2011. A&A, 530, A103. Haghighipour, N., Raymond, S. 2007. ApJ, 666, 436. Hatzes, A.P., Cochran, W.D., Endl, M., McArthur, B., Paulson, D.B., Walker, G.A.H., Campbell, B., Yang, S. 2003. ApJ, 599, 1383. Heppenheimer, T.A. 1978. A&A, 65, 421. Hills, J.G. 1975. AJ, 80, 809. Hut, P., Bahcall, J.N. 1983. ApJ, 268, 319. Konacki, M. 2005. Nature, 436, 230. Kroupa, P. 2001. MNRAS, 322, 231. Marzari, F., Scholl, H. 2000. ApJ, 543, 328. Marzari, F., Barbieri, M. 2007a. A&A, 467, 347. Marzari, F., Barbieri, M. 2007b. A&A, 472, 643. Marzari, F., Thébault, P., Scholl, H. 2009a. A&A, 507, 505. Marzari, F., Scholl, H., Thébault, P., Baruteau, C. 2009b. A&A, 508, 1493. Marzari, F., Baruteau, C., Scholl, H., Thébault, P. 2012. A&A, 508, 1493. Müller, T.W.A., Kley, W. 2012. A&A, 539 id.A98. Nelson, A.F. 2000. ApJ, 537, L65. Paardekooper, S.-J., Thébault, P., Mellema, G. 2008. MNRAS, 386, 973. Pfahl, E. 2005. ApJ, 635, L89. Portegies Zwart, S.F., McMillan, S.L. 2005. ApJ, 633, L141. Salpeter, E.E. 1955. ApJ, 121, 161. Thébault, P., Marzari, F., Scholl, H., Turrini, D., Barbieri, M. 2004. A&A, 427, 1097. Thébault, P., Marzari, F., Scholl, H. 2006. Icarus, 183, 193. Thébault, P., Marzari, F., Augereau, J. C. 2010. A&A, 524, A13. Thébault, P. 2011. CeMDA, 111, 29. Xie, J., Zhou, J., Ge, J. 2010. ApJ, 708, 1566. Zhao, G., Xie, J., Zhou, J., Lin, D. 2012. ApJ, 749, id. 172.
--- abstract: | In 1912, Otto Sackur and Hugo Tetrode independently put forward an equation for the absolute entropy of a monoatomic ideal gas and published it in “Annalen der Physik.” The grand achievement in the derivation of this equation was the discretization of phase space for massive particles, expressed as $\delta q \delta p = h$, where $q$ and $p$ are conjugate variables and $h$ is Planck’s constant. Due to the dependence of the absolute entropy on Planck’s constant, Sackur and Tetrode were able to devise a test of their equation by applying it to the monoatomic vapor of mercury; from the satisfactory numerical comparison of $h$ obtained from thermodynamic data on mercury with Planck’s value from black-body radiation, they inferred the correctness of their equation. In this review we highlight this almost forgotten episode of physics, discuss the arguments leading to the derivation of the Sackur–Tetrode equation and outline the method how this equation was tested with thermodynamic data.\ PACS: 05.70.-a, 51.30.+i author: - | W. Grimus[^1]\ University of Vienna, Faculty of Physics\ Boltzmanngasse 5, A–1090 Vienna, Austria\ \ date: 23 January 2013 title: | UWThPh-2011-34\ On the 100th anniversary of the\ Sackur–Tetrode equation --- Introduction ============ The formula for the absolute entropy of a monoatomic ideal gas is named after Otto Sackur and Hugo Tetrode who independently derived it in 1912 [@sackur2; @tetrode; @sackur]. In classical thermodynamics the entropy of a monoatomic ideal gas is $$\label{Skl} S(E,V,N) = kN \left( \frac{3}{2} \ln \frac{E}{N} + \ln \frac{V}{N} + s_0 \right),$$ where $E$, $V$ and $N$ are the kinetic energy, the volume and the number of atoms, respectively. In classical physics the constant $s_0$ is undetermined. The achievement of Sackur and Tetrode was to compute $s_0$. At first sight this does not look very exciting, however, in order to compute $s_0$ they had to work out the size of “elementary cells or domains” in phase space. Only with this knowledge it is possible to count the number of states in classical phase space which is a prerequisite for the computation of Boltzmann’s absolute entropy given by [@boltzmann; @planck3] $$\label{SW} S = k \ln W.$$ In this formula, $W$ is the number of possibilities to realize a system compatible with some given boundary conditions. Sackur and Tetrode determined the volume of phase space cells as $h^n$ where $h$ is Planck’s constant and $n$ is the number of degrees of freedom. Until then, $h$ was primarily associated with harmonic oscillators and photons. With the work of Sackur and Tetrode it became clear that Planck’s constant was not only relevant for counting the number of states in the case of photons but also in the case of massive particles. In this way, $h$ became ubiquitous in statistical physics, more than ten years before the advent of quantum mechanics. This was an amazing result because a priori Planck’s constant in the expression $h \nu$ for the energy of a photon has nothing to do with the phase-space volume associated with massive particles. This connection was clarified only later by quantum mechanics. We want to stress that the elegance of the work of Sackur and Tetrode derives from the combination of theoretical considerations and usage of experimental data with which they were able to lend credibility to their result. 
They did so by successfully applying their equation to the then available data on mercury, whose vapor is monoatomic and behaves in good approximation as an ideal gas. Below we list the articles of Sackur and Tetrode and the achievements therein, written in the course of the development of their equation. The titles are literal translations from the German titles. 1. O. Sackur, *The application of the kinetic theory of gases to chemical problems* [@sackur1] (received October 6, 1911): In this paper Sackur develops the formula for the entropy $S$ of a monoatomic ideal gas as a function of the size of the elementary cell. 2. O. Sackur, *The meaning of the elementary quantum of action for gas theory and the computation of the chemical constant* [@sackur2] (no “received date”, must have been written in spring 1912): Here Sackur postulates that the size of the elementary cell is $h^n$ and obtains the absolute entropy $S$ of a monoatomic ideal gas. Using $S$, he computes the vapor pressure over a solid and makes a comparison with data on neon and argon. The numerical results are, however, not completely satisfying. 3. H. Tetrode, *The chemical constant and the elementary quantum of action* [@tetrode] (received March 18, 1912): Tetrode gives an illuminating derivation of $S$, assuming that the size of the elementary cell is $(zh)^n$. He fits the parameter $z$ by using data on the vapor pressure of liquid mercury. Due to some numerical mistakes he obtains $z \approx 0.07$.[^2] 4. H. Tetrode, erratum to *The chemical constant and the elementary quantum of action* [@tetrode] (received July 17, 1912): Tetrode corrects the numerics and now obtains $z \sim 1$. He acknowledges the papers [@sackur2; @sackur1] of Sackur by noting that the formula for $S$ has been developed by both of them at the same time. More precisely, he refers to a formula for the so-called “chemical constant” pioneered by Nernst [@nernst], which we will define later. 5. O. Sackur, *The universal meaning of the so-called elementary quantum of action* [@sackur] (received October 19, 1912): He obtains good agreement ($\pm 30\%$) with the data on the vapor pressure of mercury and comments on the paper by Tetrode. The paper is organized as follows. In section \[derivation\] we describe the different approaches of Sackur and Tetrode to derive their equation and add some comments. Since historically the corroboration of the Sackur–Tetrode equation by using data on the vapor pressure of (liquid) mercury was crucial, we give a detailed account of it in section \[vapor pressure\]. Moreover, we redo the numerics by using modern mercury data in section \[fit\] and obtain a reasonably good value of Planck’s constant. In section \[conclusions\] our conclusions are presented. A derivation of Kirchhoff’s equation, which is used in the numerical computation, is found in the appendix. The Sackur–Tetrode equation {#derivation} =========================== Tetrode’s derivation -------------------- The starting point of Tetrode’s reasoning is the entropy formula (\[SW\]) which should, according to Nernst’s heat theorem [@nernst], give the correct value of the entropy without any additive constant. Then he considers a system with $n$ degrees of freedom and phase space coordinates $q_1, \ldots, p_n$, for which he connects $W$ with the number of configurations of phase space points.
In order to have a finite entropy, it is necessary to discretize phase space, which Tetrode does by introducing “elementary domains” of volume $$\delta q_1\, \delta p_1 \cdots \delta q_n\, \delta p_n = \sigma = (zh)^n,$$ where $h$ is Planck’s constant and $z$ is a dimensionless number. Then he argues that, in a system of $\nu$ identical particles, configurations which are related only by exchange of particles should not be counted as different. Therefore, denoting by $W'$ the number of configurations in phase space, the entropy for such a system is $$S = k \ln \frac{W'}{\nu !}.$$ This is to avoid the Gibbs paradox and to obtain $S$ as an extensive quantity, though Tetrode does not mention Gibbs in this context. Moving on to the monoatomic gas consisting of $\nu \equiv N$ atoms with mass $m$ and spatial volume $V$, the number of degrees of freedom is $n = 3N$ and, for a given maximal energy $E$ of the gas, the volume occupied in phase space is computed by $$\mathcal{V}(E,V,N) = \int {\mathrm{d}}^3 x_1 \int {\mathrm{d}}^3 p_1 \cdots \int {\mathrm{d}}^3 x_N \int {\mathrm{d}}^3 p_N \quad \mbox{with} \quad \frac{1}{2m} \left( {\vec p_1}^{\,2} + \cdots + {\vec p_N}^{\,2} \right) \leq E.$$ Utilizing the gamma function, this phase space volume is expressed as $$\mathcal{V}(E,V,N) = \frac{(2\pi mE)^{\frac{3N}{2}}\, V^N}{\Gamma\left(\frac{3N}{2} + 1 \right)}.$$ According to the arguments above, the entropy is then given by $$\label{S1} S = k \ln \frac{\mathcal{V}(E,V,N)}{(zh)^{3N} N!}.$$ In the last step Stirling’s formula is used, to wit the approximations $$\ln N! \simeq N ( \ln N - 1 ) \quad \mbox{and} \quad \ln \Gamma\left( \frac{3N}{2} + 1 \right) \simeq \frac{3N}{2} \left( \ln \frac{3N}{2} - 1 \right)$$ for large $N$. This leads to Tetrode’s final result $$\label{s-tetrode} S(E,V,N) = kN \left( \frac{3}{2} \ln \frac{E}{N} + \ln \frac{V}{N} + \frac{3}{2} \ln \frac{4\pi m}{3 (zh)^2} + \frac{5}{2} \right)$$ for the entropy of a monoatomic ideal gas. This derivation is amazingly lucid; no wonder that 100 years later it is one of the standard methods in modern textbooks. The only amendment to Tetrode’s derivation comes from quantum mechanics, which fixes the size of the elementary domain to $h^n$, i.e. requires $z=1$; the latter result was obtained by Tetrode through a fit to the data of the vapor pressure of mercury. From equation (\[s-tetrode\]) with $z=1$ we infer that the constant $s_0$ of equation (\[Skl\]) is given by $$s_0 = \frac{3}{2}\, \ln \frac{4\pi m}{3h^2} + \frac{5}{2}.$$ Sackur’s derivation ------------------- It is much harder to follow Sackur’s line of thought. Here we sketch the derivation of the entropy formula in [@sackur], because there he gives the most detailed account of his derivation. In this paper he first derives Planck’s law of radiation by considering a system of radiators, before he moves on to the ideal monoatomic gas. In both cases Sackur defines a time interval $\tau$ in which the system is monitored and an energy interval $\Delta \varepsilon$ for the discretization of energy. For the gas the time $\tau$ is assumed to be so small that during this time collisions between atoms can be neglected. Therefore, during the time interval of length $\tau$, each of the kinetic energies associated with the three directions in space, $\varepsilon_x$, $\varepsilon_y$, $\varepsilon_z$, of every atom can be assumed to lie in a well-defined energy interval of length $\Delta \varepsilon$.
In other words, Sackur imagines a three-dimensional energy space with $x$-, $y$- and $z$-axes referring to the kinetic energies $\varepsilon_x$, $\varepsilon_y$ and $\varepsilon_z$, respectively, and with energy unit $\Delta \varepsilon$ on every axis. In this way, the energy space is divided into cubes of volume $(\Delta \varepsilon)^3$ and the kinetic energy of every particle lies, during the time interval $\tau$, in a well-defined cube. If the $i$-th energy cube is given by $n_k \Delta\varepsilon \leq \varepsilon_k < (n_k + 1) \Delta\varepsilon$ ($k=x,y,z$) with integers $n_k$, the energy $\varepsilon_i$ associated with this cube can, for instance, be defined as $$\varepsilon_i = (n_x + n_y + n_z) \Delta\varepsilon.$$ Sackur further considers the probability $w$ of observing, during the time interval $\tau$, atoms with kinetic energy $\varepsilon_k$ ($k=x,y,z$) lying in a specific energy interval associated with the $k$-axis; he argues that $w$ will be proportional to the product $\tau \Delta \varepsilon$, because the smaller $\tau$ and $\Delta \varepsilon$ are, the smaller $w$ will be. Hence, since there are three directions in space, the number of atoms in the $i$-th energy cube, $N_i$, will be proportional to $(\tau \Delta \varepsilon)^3$. In this way, Sackur justifies the Ansatz $$\label{ansatz} N_i = N f(\varepsilon_i) \left( \tau \Delta \varepsilon \right)^3,$$ where $N$ is the total number of atoms in the volume $V$. He goes on to distribute the $N$ atoms over $r$ energy cubes, in exactly the same way as in the case of harmonic oscillators and photons. The number of possibilities for putting $N_1$ atoms into cube 1, $N_2$ atoms into cube 2, etc. is given by $$\label{WS} W = \frac{N!}{N_1! N_2! \cdots N_r!} \quad \mbox{with} \quad N = N_1 + N_2 + \cdots + N_r.$$ Note that Sackur computes the number of possibilities $W$ for a given decomposition of $N$ into the numbers $N_1, \ldots, N_r$, which clearly implies that he assumes *distinguishable* atoms; for indistinguishable atoms, a fixed decomposition would simply correspond to a *single* state and thus $W=1$. According to Boltzmann and Planck, the entropy is obtained by $$\label{S} S = k \ln W = k N \ln N - k \sum_i N_i \ln N_i = -kN \sum_i \frac{N_i}{N} \ln \frac{N_i}{N}$$ for large numbers $N_i$, and the most probable distribution is given by the maximum of $S$ under the conditions $$\label{NE} \sum_i N_i = \sum_i N f(\varepsilon_i) \left( \tau \Delta \varepsilon \right)^3 = N, \quad \sum_i N_i \,\varepsilon_i = \sum_i N f(\varepsilon_i) \left( \tau \Delta \varepsilon \right)^3 \varepsilon_i = E.$$ This procedure superficially resembles the derivation of the canonical ensemble; however, its spirit is completely different. We know that the Sackur–Tetrode equation is only valid for a dilute gas, and Tetrode’s derivation implicitly assumes that the occupation numbers, i.e. the numbers of particles occupying the energy levels of single-particle states, are very small; otherwise the expression for the number of distinguishable configurations in phase space would be much more complicated than $W'/N!$ and effects of spin and statistics would have to be taken into account. However, Sackur in his derivation assumes the opposite, namely occupation numbers $N_i \gg 1$. Finding the maximum of $S$ of equation (\[S\]) amounts to computing the stationary point of the functional $-\int {\mathrm{d}}\varepsilon f \ln f$, under the conditions of a fixed total number of atoms and a fixed energy, where the function $f$ is defined in the Ansatz (\[ansatz\]).
The sought-for stationary point is obtained from the maximum of $$\Phi(f, \varepsilon) = -f \ln f + \left( \alpha' + 1 \right) f - \beta \varepsilon f,$$ where the parameters $\alpha'$ and $\beta$ are Lagrange multipliers: $$\frac{\partial \Phi}{\partial f} = -\ln f + \alpha' - \beta \varepsilon = 0 \quad \Rightarrow \quad f(\varepsilon) = e^{\alpha' - \beta \varepsilon} = \alpha e^{-\beta \varepsilon} \quad \mbox{with} \quad \alpha = e^{\alpha'}.$$ Eventually, Sackur arrives at the Boltzmann distribution $$f(\varepsilon) = \alpha e^{-\beta \varepsilon}.$$ Plugging $N_i$ with this $f$ into formula (\[S\]) and using equation (\[NE\]), the simple expression $$\label{S2} S = -3kN \ln (\tau \Delta\varepsilon) - kN \ln \alpha + k \beta E$$ for the entropy ensues. In equation (\[S2\]) there are three unknowns: $\tau \Delta\varepsilon$, $\alpha$ and $\beta$. At this point, referring to Sommerfeld [@sommerfeld], Sackur states that the smallest action that can take place in nature is given by Planck’s constant $h$. Therefore, he makes the bold assumption that $$\tau \Delta \varepsilon = h,$$ which he had already made successfully for the derivation of Planck’s law of radiation in the same paper. The other two parameters are in principle determined by equation (\[NE\]). Sackur then argues that, for simplicity, in the two sums of equation (\[NE\]) summation can be replaced by integration. For this purpose he makes the following step: $$\label{pk} \varepsilon_k = \frac{p_k^2}{2m} \;\; (k = x,y,z) \quad \Rightarrow \quad {\mathrm{d}}\varepsilon_k = \frac{p_k}{m}\, {\mathrm{d}}p_k = \frac{\bar x_k}{\tau}\, {\mathrm{d}}p_k,$$ where the $\bar x_k$ are the average Cartesian components of the distance covered by the atoms during the time $\tau$. Then Sackur connects the product of the three average distances with the volume $V$ of the gas by equating it with the volume per atom: $$\label{v/n} \bar x \bar y \bar z = \frac{V}{N}.$$ It is hard to understand why this equation should hold, but with equations (\[pk\]) and (\[v/n\]) he effectively introduces an integration ${\mathrm{d}}^3 x\, {\mathrm{d}}^3 p$ in phase space.[^3] Moreover, since Sackur nowhere introduces the concept of indistinguishable atoms, he needs the factor $1/N$ in equation (\[v/n\]) to avoid the Gibbs paradox, as we will see shortly. So he ends up with $$\tau^3 {\mathrm{d}}\varepsilon_x {\mathrm{d}}\varepsilon_y {\mathrm{d}}\varepsilon_z = \frac{V}{N}\, {\mathrm{d}}p_x {\mathrm{d}}p_y {\mathrm{d}}p_z$$ for the integration in equation (\[NE\]) and obtains $$1 = \frac{\alpha V m^3}{N} \left( \frac{2\pi}{m \beta} \right)^{3/2} \quad \mbox{and} \quad E = \frac{3\alpha V m^3}{2\beta} \left( \frac{2\pi}{m \beta} \right)^{3/2}.$$ These two equations are easily solved for $\alpha$ and $\beta$. Plugging the solution $$\beta = \frac{3N}{2E} \quad \mbox{and} \quad \alpha = \frac{N}{V} \left( \frac{3N}{4\pi mE} \right)^{3/2}$$ into equation (\[S2\]), Sackur arrives at his final result $$\label{s-sackur} S(E,V,N) = kN \left( \frac{3}{2} \ln \frac{E}{N} + \ln \frac{V}{N} + \frac{3}{2} \ln \frac{4\pi m}{3 h^2} + \frac{3}{2} \right).$$ Comparing this expression with Tetrode’s result (\[s-tetrode\]), we see that there is a difference in the last term in parentheses; Sackur has $3/2$ while Tetrode has the correct number $5/2$. Thus $$\left. S(z=1) \right|_\mathrm{Tetrode} - \left. S \right|_\mathrm{Sackur} = kN,$$ which Sackur observed and commented upon in [@sackur].
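As a small consistency check, the quoted solution for $\alpha$ and $\beta$ can be verified against the two constraints directly, since with Sackur’s replacement $\tau^3 {\mathrm{d}}\varepsilon_x {\mathrm{d}}\varepsilon_y {\mathrm{d}}\varepsilon_z = (V/N)\,{\mathrm{d}}^3 p$ the integrals are Gaussian. The following Python sketch (ours, not part of the original analysis) does this numerically; the values of $m$, $V$, $N$ and $E$ are arbitrary dimensionless numbers chosen only for illustration.

```python
# Check that beta = 3N/(2E) and alpha = (N/V) (3N/(4 pi m E))^(3/2) satisfy
# the two constraints of eq. (NE), using Sackur's phase-space measure
# tau^3 d(eps_x) d(eps_y) d(eps_z) = (V/N) d^3p.
# The values of m, V, N, E are arbitrary and purely illustrative.
import math
from scipy.integrate import quad

m, V, N, E = 1.0, 2.0, 1000.0, 1500.0

beta  = 3.0 * N / (2.0 * E)
alpha = (N / V) * (3.0 * N / (4.0 * math.pi * m * E)) ** 1.5

boltz = lambda p: alpha * math.exp(-beta * p ** 2 / (2.0 * m))

# sum_i N_i  ->  V * int d^3p f(eps)              (should give N)
N_check = V * quad(lambda p: 4.0 * math.pi * p ** 2 * boltz(p), 0.0, math.inf)[0]

# sum_i N_i eps_i  ->  V * int d^3p (p^2/2m) f(eps)   (should give E)
E_check = V * quad(lambda p: 4.0 * math.pi * p ** 2 * (p ** 2 / (2.0 * m)) * boltz(p),
                   0.0, math.inf)[0]

print(f"number constraint: {N_check:.4f}  (target {N})")
print(f"energy constraint: {E_check:.4f}  (target {E})")
```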
It is interesting to note that in his previous paper [@sackur2] Sackur actually had the correct number. It is kind of amazing that Sackur, with his line of reasoning, arrives at nearly the correct result, being off only by $kN$. This difference is indeed important for the comparison of the entropy formula with the data from vapor pressure of mercury [@tetrode; @sackur]; anticipating equation (\[vp\]), we see that a determination of Planck’s constant with Sackur’s formula would result in a value which is too low by a factor of $e^{-1/3} \approx 0.72$ where $e$ is Euler’s number. We conclude this section with a comment on equation (\[v/n\]). We know that $S$ is an extensive quantity, i.e. $S(\zeta E, \zeta V, \zeta N) = \zeta S(E,V,N)$ holds for all $\zeta > 0$. If the factor $1/N$ had been absent in equation (\[v/n\]), we would have to replace $V$ by $NV$ in equation (\[s-sackur\]); but then $S$ would not be an extensive quantity, as one can easily check. Discussion ---------- Let us present here, in particular, for comparison with Sackur’s treatment, the derivation of the entropy of a monoatomic ideal gas by using the canonical partition function $Z$. Since we are dealing with non-interacting particles, $Z$ is given by $$Z = \frac{Z_1^N}{N!},$$ where $Z_1$ is the partition function of a single particle. The factor $1/N!$ is present to take into account that the particles are indistinguishable. Then the entropy is given by $$\label{SZ1} S = k \left( \ln Z + \beta E \right) = kN \left( \ln \frac{Z_1}{N} + 1 + \frac{\beta E}{N} \right),$$ where $E$ is the total energy of the $N$ particles and $\beta = 1/(kT)$. Furthermore, Stirling’s formula has been used to replace $\ln N!$ by $N(\ln N - 1)$. If $E/N$ does not depend on $N$, which is the case for the ideal gas, this equation displays the full dependence on $N$. For the monoatomic ideal gas, in the classical approximation, the single-particle partition function is given by the integral $$Z_1 = \frac{1}{h^3} \int_{\mathcal{V}} {\mathrm{d}}^3x \int {\mathrm{d}}^3p\, \exp \left( -\beta \frac{{\vec p}^{\,2}}{2m} \right) = \frac{V}{\lambda^3} \quad \mbox{with} \quad \lambda = \frac{h}{\sqrt{2\pi m kT}}$$ being the thermal de Broglie wave length. The integration domain $\mathcal{V}$ is the space taken by the gas, i.e. the container with volume $V$. Plugging $Z_1$ into equation (\[SZ1\]) yields the desired entropy $$\label{Scan} S(T,V,N) = kN \left( \ln \frac{V}{\lambda^3N} + \frac{5}{2} \right)$$ as a function of temperature, volume and particle number. We compare Tetrode’s and Sackur’s result with the entropy formula (\[Scan\]) by substituting $$E = \frac{3}{2}\,NkT$$ in equations (\[s-tetrode\]) and (\[s-sackur\]).[^4] We find what we have announced earlier: Tetrode’s result exactly agrees with equation (\[Scan\]), while Sackur’s result differs by $kN$. We can easily locate the origin of the difference. Considering the definitions of $\alpha$ and $Z_1$ and taking into account equation (\[NE\]), we find that $$\alpha = \frac{N}{h^3 Z_1}.$$ Insertion of this expression into equation (\[S2\]) leads to the entropy (\[SZ1\]), with the “1” within the parentheses being absent. Effectively Sackur replaces $\ln N! \simeq N(\ln N - 1)$ by $N\ln N$ in his derivation and does, therefore, not fully take into account indistinguishability of the atoms. The entropy of the monoatomic ideal gas as a function of the pressure $p$ instead of the volume $V$ is obtained with the ideal-gas equation by the substitution $V = NkT/p$. 
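The statements of this comparison are easy to verify numerically. In the Python sketch below, the choice of argon at $T = 298.15\,$K and $p = 1\,$bar is ours, made because its tabulated standard molar entropy, about $154.8\,\mathrm{J\,K^{-1}\,mol^{-1}}$, provides an experimental benchmark: the canonical result (\[Scan\]) and Tetrode’s formula (\[s-tetrode\]) with $z=1$ and $E = \frac{3}{2}NkT$ coincide and reproduce this value, while Sackur’s formula (\[s-sackur\]) is smaller by exactly $kN$, i.e. by $R$ for one mole.

```python
# Numerical comparison of eq. (Scan), Tetrode's eq. (s-tetrode) with z = 1
# and E = (3/2) N k T, and Sackur's eq. (s-sackur), for one mole of argon
# at T = 298.15 K and p = 1 bar (an illustrative choice of gas and state).
import math

k  = 1.380649e-23      # Boltzmann constant, J/K
h  = 6.62607015e-34    # Planck constant, J s
u  = 1.66053907e-27    # atomic mass unit, kg
NA = 6.02214076e23     # Avogadro constant, 1/mol
R  = k * NA            # molar gas constant, J/(K mol)

m = 39.948 * u         # mass of an argon atom
T, p = 298.15, 1.0e5
N = NA                 # one mole
V = N * k * T / p      # ideal-gas volume
E = 1.5 * N * k * T    # kinetic energy

lam = h / math.sqrt(2.0 * math.pi * m * k * T)   # thermal de Broglie wavelength

S_can = k * N * (math.log(V / (lam ** 3 * N)) + 2.5)                          # eq. (Scan)
S_tet = k * N * (1.5 * math.log(E / N) + math.log(V / N)
                 + 1.5 * math.log(4.0 * math.pi * m / (3.0 * h ** 2)) + 2.5)  # (s-tetrode), z = 1
S_sac = k * N * (1.5 * math.log(E / N) + math.log(V / N)
                 + 1.5 * math.log(4.0 * math.pi * m / (3.0 * h ** 2)) + 1.5)  # (s-sackur)

print(f"S_canonical      = {S_can:.2f} J/(K mol)")   # about 154.8
print(f"S_Tetrode (z=1)  = {S_tet:.2f} J/(K mol)")   # identical to S_canonical
print(f"S_Tet - S_Sackur = {S_tet - S_sac:.2f} J/(K mol)  (= R = {R:.2f})")
```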
As mentioned in the introduction, Sackur and Tetrode tested their equation on mercury vapor. This element has seven stable isotopes with various nuclear spins $s_k$ [@aw]. Therefore, in principle for mercury one has to add the corresponding residual entropy $$\label{Sres} S_\mathrm{res}(\mbox{Hg}) = Nk\,\sum_{k=1}^7 P_k \left( -\ln P_k + \ln(2s_k + 1) \right),$$ where the $P_k$ are the isotopic abundances ($\sum_k P_k = 1$), to the Sackur–Tetrode formula. Of course, in 1912 the mercury isotopes were not known. However, as we will see in the next section, in the mercury test only the entropy difference between gaseous and liquid phases is relevant. For both phases, however, the same residual entropy is expected and thus $S_\mathrm{res}(\mbox{Hg})$ of equation (\[Sres\]) drops out. The vapor pressure of mercury and Planck’s constant {#vapor pressure} =================================================== How to subject the *absolute* entropy of a monoatomic ideal gas to experimental scrutiny? Sackur and Tetrode applied the following procedure. Consider the latent heat $L(T)$ of a monoatomic substance for the phase transition from the liquid to the gaseous phase. In terms of the absolute molar entropies, the latent heat is given by $$\label{L} L(T) = T \left( s_\mathrm{vapor}(T, \bar p(T)) - s_\mathrm{liquid}(T, \bar p(T)) \right),$$ where $\bar p(T)$ denotes the pressure along the coexistence curve, i.e. the vapor pressure. If the vapor behaves in good approximation like a monoatomic ideal gas, then the Sackur–Tetrode equation in the form $$\label{st-molar} s_\mathrm{vapor} = R \left( \ln \frac{kT}{\bar p \lambda^3} + \frac{5}{2} \right)$$ with the molar gas constant $R$ can be substituted for $s_\mathrm{vapor}(T, \bar p(T))$. For the liquid phase, neglecting the $p$-dependence, the absolute entropy can be expressed as an integral over the heat capacity: $$\label{s-liquid} s_\mathrm{liquid} = \int_0^T {\mathrm{d}}T'\, \frac{c_p(T')}{T'}.$$ Note that here the integration includes the solid and liquid phases, and the latent heat of melting. After insertion of $s_\mathrm{vapor}$ and $s_\mathrm{liquid}$ into equation (\[L\]), one obtains an expression for the vapor pressure: $$\label{vp} \ln \bar p(T) = -\frac{L(T)}{RT} + \ln \frac{(2\pi m)^{3/2} (kT)^{5/2}}{h^3} + \frac{5}{2} - \int_0^T {\mathrm{d}}T'\, \frac{c_p(T')}{RT'}.$$ Similar derivations can be found in [@zemanski; @reif]. Since equation (\[vp\]) is a direct consequence of equation (\[st-molar\]), it serves as a testing ground for the Sackur–Tetrode equation. For this test not only data on the vapor pressure $\bar p(T)$ are needed, but also data on the latent heat $L(T)$ and the heat capacity $c_p(T)$ in the condensed phase must be available. While for $\bar p(T)$ and $L(T)$ it is sufficient to have data in a certain temperature interval, one needs to know $c_p(T)$ as a function of $T$ down to absolute zero. In 1912 the most comprehensive set of data was available on mercury. This was utilized by Sackur and Tetrode to test their equation. In this test they followed slightly different approaches. Both employed the value of Planck’s constant $h$ as determined from black-body radiation and inserted it into equation (\[vp\]). Then Sackur directly computed the vapor pressure of mercury from equation (\[vp\]) and compared his results with the experimental data, whereas Tetrode replaced $h$ in equation (\[vp\]) by $zh$ and carried out a fit of $z$ to the data. 
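Returning briefly to the residual entropy (\[Sres\]): it is straightforward to evaluate once abundances and nuclear spins are given. In the minimal sketch below the mercury abundances and spins are approximate modern literature values, quoted purely for illustration; as explained above, this contribution cancels in the vapor-pressure test anyway.

```python
# Residual entropy per eq. (Sres) for a mixture of isotopes with nuclear spins.
# The mercury abundances and spins below are approximate modern literature
# values, used only for illustration; this term drops out of the mercury test.
import math

R = 8.314462618          # molar gas constant, J/(K mol)

def residual_entropy(isotopes):
    """isotopes: list of (abundance P_k, nuclear spin s_k); returns S_res per mole."""
    return R * sum(P * (-math.log(P) + math.log(2.0 * s + 1.0))
                   for P, s in isotopes if P > 0.0)

# (approximate) abundances and spins of 196, 198, 199, 200, 201, 202, 204 Hg
hg = [(0.0015, 0.0), (0.0997, 0.0), (0.1687, 0.5), (0.2310, 0.0),
      (0.1318, 1.5), (0.2986, 0.0), (0.0687, 0.0)]

print(f"S_res(Hg) = {residual_entropy(hg):.1f} J/(K mol)")   # roughly 2R per mole
```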
Now we want to delineate how Sackur and Tetrode actually performed the numerical evaluation of equation (\[vp\]). We follow the exposition of Sackur in [@sackur] because his account is sufficiently detailed and easy to follow. On the right-hand side of equation (\[vp\]) we have to discuss the term with $L(T)$ and the integral. In treating the latent heat as a function of $T$, Sackur uses Kirchhoff’s equation—see equation (\[kirchhoff\]) in the appendix. Furthermore, he assumes that in the temperature interval he considers, which is from $0^\circ\,\mbox{C}$ to $360^\circ\,\mbox{C}$, the heat capacity in the liquid phase can be regarded as having the constant value $c_p^\mathrm{liquid}$. If at a reference temperature $T_1$ the latent heat is $L_1$, then due to Kirchhoff’s equation $$\label{L1} L(T) = L_1 + \left(\frac{5}{2}\,R - c_p^\mathrm{liquid} \right) (T-T_1).$$ The integral on the right-hand side of equation (\[vp\]) is treated by splitting it into the part in the solid phase, the contribution of the phase transition, and the part in the liquid phase. Denoting the latent heat of melting by $L_m$ and the melting point by $T_m$, this integral reads $$\int_0^T {\mathrm{d}}T'\, \frac{c_p(T')}{T'} = \int_0^{T_m} {\mathrm{d}}T'\, \frac{c_p^\mathrm{solid}(T')}{T'} + \frac{L_m(T_m)}{T_m} + c_p^\mathrm{liquid} \ln \frac{T}{T_m}.$$ Again the approximation that the heat capacity of the liquid is temperature-independent has been used. Implicitly the additional approximation that the melting temperature $T_m$ is independent of the pressure has been made. The final form of the vapor pressure, prepared for the numerical evaluation, is thus $$\begin{aligned} \ln \bar p(T) &=& - \frac{L_1 + \left( c_p^\mathrm{liquid} - \frac{5}{2}R \right)T_1}{RT} + \frac{5}{2} \ln T - \int_0^{T_m} {\mathrm{d}}T'\, \frac{c_p^\mathrm{solid}(T')}{RT'} \nonumber \\[2mm] && \label{vp1} -\frac{L_m(T_m)}{RT_m} - \frac{c_p^\mathrm{liquid}}{R} \ln \frac{T}{T_m} + \ln \frac{(2\pi m)^{3/2} k^{5/2}}{h^3} + \frac{c_p^\mathrm{liquid}}{R}.\end{aligned}$$ This equation corresponds to Sackur’s equation on top of p. 82 of [@sackur] and we have written the terms in the same order as there. We have refrained, however, from converting the natural logarithm to the logarithm to the base of ten, which was used by Sackur. As mentioned earlier, Sackur and Tetrode actually determine the *chemical constant*, defined as $$\label{chem} \mathcal{C} = \frac{1}{\ln 10} \times \ln \frac{(2\pi m)^{3/2} k^{5/2}}{h^3} = \log \frac{(2\pi m)^{3/2} k^{5/2}}{h^3},$$ from the data and compare this value of $\mathcal{C}$ with the value computed with Planck’s constant obtained from black-body radiation. At that time, the chemical constant was a commonly used quantity. It appears not only in the vapor pressure but also in the law of mass action of chemical reactions in the gas phase [@nernst]. Note that the conversion of the logarithm mentioned above brings about a division by $\ln 10 \approx 2.3026$ in many places in the equations in [@tetrode; @sackur]. In equation (\[vp1\]), in the integral over $c_p^\mathrm{solid}(T)/T$ both Sackur and Tetrode use a model by Nernst [@nernst1] for the specific heat of solid mercury. This model is a kind of Einstein model [@einstein], but it sums two frequencies, $\omega$ and $2\omega$. It is interesting to note that the paper of Debye concerning the Debye model [@debye] has a “received date” of July 24, 1912, and is thus prior to Sackur’s paper [@sackur].
Actually, Sackur refers to it in [@sackur], but only in the part concerning Planck’s law of radiation; in the integration over the solid phase of mercury he nevertheless uses Nernst’s model. We conclude this section by summarizing and commenting on the approximations which lead to equation (\[vp1\]). In essence the following approximations have been made: 1. The vapor is treated as a classical ideal gas. 2. The molar volume $v_l$ of the liquid is neglected compared to the molar volume $v_g$ of the vapor. 3. In the liquid phase the dependence on $p$ of the isobaric heat capacity is negligible in the considered temperature interval. 4. \[4\] There are two technical assumptions which facilitate the numerics: the temperature dependence of the heat capacity in the liquid phase is neglected, and the melting temperature $T_m$ is pressure-independent. From the first assumption it follows that the heat capacity of a monoatomic vapor is constant with the value $$\label{cp-v} c^\mathrm{vapor}_p = \frac{5}{2}\,R,$$ which is an important ingredient in equation (\[L1\]). The thermal equation of state, $$p V = n_m R T,$$ where $n_m$ is the number of moles of the gas, has been used in equation (\[st-molar\]) and in the derivation of Kirchhoff’s equation—see appendix. The second assumption, which occurs only in the derivation of Kirchhoff’s equation, is well justified because the order of magnitude of the ratio of the molar volumes is $v_g/v_l \sim 10^3$. To discuss the third assumption we note that via the Gibbs potential we obtain the relation $$\left. \frac{\partial c^\mathrm{liquid}_p}{\partial p} \right|_T = -T v \left( \alpha^2 + \left. \frac{\partial \alpha}{\partial T} \right|_p \right) \quad \mbox{with} \quad \alpha = \frac{1}{v} \left. \frac{\partial v}{\partial T} \right|_p,$$ where $\alpha$ is the thermal expansion coefficient. This equation leads to a linear approximation of the heat capacity with respect to the pressure: $$c^\mathrm{liquid}_p(T,p) \approx c^\mathrm{liquid}_p(T,p_0) - T \left( \alpha^2 + \left. \frac{\partial \alpha}{\partial T} \right|_p \right)_{p=p_0} v(T,p_0)\,(p-p_0).$$ The pressure $p_0$ is a reference pressure. It is well known that the $p$-dependence of $c_p$ for liquids is suppressed for two reasons. First of all, the product $v p \sim 1\,\mbox{J}\, \mbox{mol}^{-1}$, where $v$ is the molar volume of the liquid and $p \sim 1\,\mbox{bar}$, is rather small. Secondly, the thermal expansion coefficient $\alpha$ of a liquid is small as well; for instance, for mercury $\alpha \approx 1.8 \times 10^{-4}$ K$^{-1}$ at 1 bar. Thus, the third assumption is very well justified. However, in general the heat capacity of a liquid depends on the temperature, although not drastically. For mercury it drops by $4\%$ between $-38.84^\circ\,\mbox{C}$, which is the melting point, and $200^\circ\,\mbox{C}$ [@crc1]. Our fit of Planck’s constant to mercury data {#fit} ============================================ It is worthwhile to use the thermodynamic data on mercury available at present and employ a slight variation of the method of Sackur and Tetrode described in the previous section in order to check the accuracy with which Planck’s constant can be determined in this way. We follow Tetrode’s approach in replacing $h$ by $zh$ in equation (\[vp\]). In the following we will plug in the modern value of $h$ and determine $z$ from the data.
The best modern value of $h$, recommended by CODATA [@nist-fc], is $$\label{h} 6.626 069 57(29) \times 10^{-34}\,\mathrm{J}\,\mathrm{s}.$$ In order to account for the slight temperature dependence of the heat capacity of liquid mercury we make the ansatz $$\label{cp-l} c_p^\mathrm{liquid}(T) = a_0 + a_1 T + a_2 T^2$$ and fit the coefficients $a_0$, $a_1$ and $a_2$ to the input data from the table presented in [@crc1]. In this table one can also read off that from the melting point up to a temperature of about $200^\circ\,\mbox{C}$ the heat capacity of gaseous mercury agrees exactly with the ideal-gas value (\[cp-v\]). Thus we confine ourselves to the temperature interval from $-38.84^\circ\,\mbox{C}$ to $200^\circ\,\mbox{C}$, in which the ansatz (\[cp-l\]) should be sufficient. With equations (\[L\]) and (\[cp-l\]), and taking into account Kirchhoff’s equation, we obtain $$L(T) = L_0 + \frac{5}{2}\,R (T-T_0) - a_0 \left( T-T_0 \right) -\frac{1}{2} a_1 \left( T^2 - T_0^2 \right) - \frac{1}{3} a_2 \left( T^3 - T_0^3 \right),$$ while inserting equation (\[cp-l\]) into the entropy formula (\[s-liquid\]) gives $$s_\mathrm{liquid}(T) = s_0 + a_0 \ln \frac{T}{T_0} + a_1 (T-T_0) + \frac{1}{2} a_2 \left( T^2 - T_0^2 \right).$$ As a reference temperature we take $T_0 = 298.15\,\mbox{K}$, which allows us to use the enthalpy of formation and the standard molar entropy from the CODATA Key Values for Thermodynamics [@key]: $$L_0 = 61.38 \pm 0.04\,\, \mbox{kJ}\,\mbox{mol}^{-1}, \quad s_0 = 75.90 \pm 0.12\,\, \mbox{J}\,\mbox{K}^{-1}\,\mbox{mol}^{-1}.$$ The value of $s_0$ saves us from the non-trivial task of determining the integral in equation (\[s-liquid\]) with the boundaries $T=0$ and $T=T_0$. We take the input data for the vapor pressure of mercury from the table in [@crc2]. In the legend of this table estimated uncertainties of the vapor pressure values are given, which we use in the method of least squares in order to fit the parameter $z$. A further input parameter is the atomic weight of mercury, $A = 200.59(2)$ [@aw]. The mass value for mercury is then $m = Au$ where $u$ is the atomic mass unit. For the determination of $h$ from mercury data we can safely neglect errors in the physical constants $R$, $k$ and $u$. With the above input, our best fit value for $z$ is $\bar z = 1.003$ at $\chi^2_\mathrm{min} = 4.2$. Since we have at our disposal vapor pressure measurements at 75 temperatures [@crc2] in the considered interval, but determine only one parameter, the number of degrees of freedom is 74. For such a large number of degrees of freedom the above value of the minimal $\chi^2$ tells us that the fit is perfect. We take into account the following sources of uncertainties in $z$: the statistical error determined by $\chi^2(z) = \chi^2_\mathrm{min} + 1$, the errors in $A$, $L_0$ and $s_0$, and an error in $c_p$. We obtain the uncertainties $\pm 0.0002$ for the statistical error and $\pm 0.0005$ for the error in $A$. These errors are one order of magnitude smaller than the errors originating in $L_0$ and $s_0$, which are $\pm 0.004$ and $\pm 0.005$, respectively. We have no information on the error in the heat capacity of liquid mercury in [@crc1]. Therefore, we simply vary $a_0$ by $\pm 1\%$ as a generous error estimate [@giauque]; the resulting uncertainty, however, is smaller than the statistical error.
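For readers who wish to reproduce the gist of this computation without the full data tables, the sketch below implements the vapor-pressure formula (\[vp\]) with $h$ replaced by $zh$, using the CODATA values of $L_0$ and $s_0$ quoted above. As a simplification of our own, the heat capacity of liquid mercury is taken to be constant, $a_0 \approx 28\,\mathrm{J\,K^{-1}\,mol^{-1}}$ with $a_1 = a_2 = 0$, instead of the fitted polynomial. With $z = 1$ the predicted vapor pressure at the normal boiling point of mercury (about $356.7\,^\circ\mbox{C}$) then comes out close to atmospheric pressure, and the chemical constant (\[chem\]) in SI units is obtained as a by-product. A fit of $z$ to the tabulated vapor pressures, which are not reproduced here, would simply minimize the corresponding weighted sum of squared residuals, for instance with scipy.optimize.

```python
# Vapor pressure of mercury from eq. (vp) with h -> z*h, using the CODATA
# reference values L0 and s0 at T0 = 298.15 K quoted in the text.
# Simplification (ours): constant liquid heat capacity a0 ~ 28 J/(K mol),
# i.e. a1 = a2 = 0, instead of the fitted quadratic.
import math

k  = 1.380649e-23      # J/K
h  = 6.62607015e-34    # J s
u  = 1.66053907e-27    # kg
NA = 6.02214076e23     # 1/mol
R  = k * NA

m_Hg = 200.59 * u      # mass of a mercury atom
T0   = 298.15          # K, reference temperature
L0   = 61.38e3         # J/mol, CODATA enthalpy of vaporization of Hg at T0
s0   = 75.90           # J/(K mol), CODATA standard molar entropy of liquid Hg
a0   = 28.0            # J/(K mol), assumed constant c_p of liquid Hg

def latent_heat(T):
    """Kirchhoff's equation with c_p(vapor) = 5R/2 and c_p(liquid) = a0."""
    return L0 + (2.5 * R - a0) * (T - T0)

def s_liquid(T):
    """Molar entropy of the liquid, eq. (s-liquid) anchored at (T0, s0)."""
    return s0 + a0 * math.log(T / T0)

def vapor_pressure(T, z=1.0):
    """Vapor pressure in Pa from eq. (vp), with h replaced by z*h."""
    ln_p = (-latent_heat(T) / (R * T)
            + math.log((2.0 * math.pi * m_Hg) ** 1.5 * (k * T) ** 2.5 / (z * h) ** 3)
            + 2.5
            - s_liquid(T) / R)
    return math.exp(ln_p)

C = math.log10((2.0 * math.pi * m_Hg) ** 1.5 * k ** 2.5 / h ** 3)
print(f"chemical constant (SI units): {C:.2f}")          # about 6.9
print(f"p(25 C)    = {vapor_pressure(298.15):.2e} Pa")
print(f"p(356.7 C) = {vapor_pressure(629.88):.2e} Pa")   # close to 1 atm
```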
In summary, our value of $z$ is $$z = 1.003 \pm 0.004\,(L_0) \pm 0.005\,(s_0).$$ Of course, the error estimate above is not a sound statistical computation, but we can safely argue that, with existing thermodynamic data on the equilibrium of liquid and gaseous phases of mercury, Planck’s constant can be determined with an accuracy of about one percent. Improving the accuracy of $L_0$ and $s_0$ might improve the determination of $h$, but due to the approximations pointed out in the previous section, thermodynamic data can most probably never compete with quantum physics data for this purpose. Conclusions =========== Planck’s quantum hypothesis in 1900 was a revolutionary step which he justified by referring to Boltzmann, because in this way he could count the number of different photon states and compute the entropy of a photon gas by using formula (\[SW\]). The importance of the quantum hypothesis became clear only gradually. In the beginning, Planck’s constant played a role in loosely connected or seemingly unconnected phenomena. The unified perspective was achieved only later with quantum mechanics and quantum field theory. However, the importance of the quantum hypothesis for atomic and molecular physics, including thermodynamic quantities like heat capacities, was suspected quite early, for instance, by Sommerfeld [@sommerfeld] who connected Planck’s constant with the “action[^5] in pure molecular processes.” At first, apart from black-body radiation, the phenomena to which the quantum hypothesis could be applied were scarce. In 1905 it was used by Einstein to explain the photoelectric effect. A bit later Johannes Stark could interpret features of the light spectrum emitted by canal rays and of the X-ray spectrum produced by the impact of electrons with the help of the quantum hypothesis. In 1907 Einstein put forward the “Einstein model” of the heat capacity of solids, where $h \nu$ was now associated with the energy of vibrations of a crystal; this theory could account for deviations from the Dulong–Petit law at low temperatures but gave the wrong behavior at very low temperatures. This flaw was cured by Debye [@debye], who developed his model practically at the same time as Sackur and Tetrode derived their equation. The Bohr model of the atom was to follow in 1913. As a side remark, Ernest Rutherford’s paper on the atomic nucleus appeared in 1911, in the same year in which Heike Kamerlingh Onnes discovered superconductivity. For an extensive account of the evolution of the “old quantum theory” we refer the reader to [@rechenberg]. Just as Planck had done more than ten years earlier, Sackur and Tetrode referred to Boltzmann in the derivation of their equation. One can view the Sackur–Tetrode equation and its successful test with thermodynamic data as one of the very first confirmations of Planck’s quantum hypothesis. This equation was a quite fundamental step towards modern physics as it demonstrated the ubiquity of Planck’s constant in statistical physics. We stress once more that the outstanding feature of the papers of Sackur and Tetrode was the combination of theoretical ideas with an ingenious use of experimental data. One may speculate why the work of Sackur and Tetrode is not as well known in the physics community as one would expect from its importance in the development of quantum theory and statistical physics. One reason is certainly that both died rather young.
Sackur (1880–1914), who was actually a physical chemist, died in an explosion in the laboratory of Fritz Haber, only two years after the Sackur–Tetrode equation. On the other hand, Tetrode (1895–1931) was a wunderkind who published his first research paper, namely the paper on the Sackur–Tetrode equation, at the age of 17. Later on he rather lived in seclusion, though he did publish a few papers which were appreciated by the community[^6] and kept some contact with eminent contemporary physicists before he prematurely died of tuberculosis. #### Acknowledgements: The author thanks E.R. Oberaigner for useful discussions and P.O. Ludl for a critical reading of the manuscript. Kirchhoff’s equation ==================== For the transition from the liquid (or solid) to the gaseous phase, this equation relates the slope of the latent-heat curve to the difference of the heat capacities across the coexistence curve. We derive it here because it is not that well known and also because we want to give a fairly self-contained account of the physics around the Sackur–Tetrode equation. A derivation of Kirchhoff’s equation is also found in [@zemanski; @blundell]. The starting point is the equation $$\label{Lg} L(T) = T \left( s_2(T, \bar p(T)) - s_1(T, \bar p(T)) \right)$$ for the molar latent heat of the transition from phase 1 to phase 2. In order to simplify the notation we define $$\Delta s(T) = s_2(T, \bar p(T)) - s_1(T, \bar p(T)),$$ and analogously $\Delta v$ and $\Delta c_p$, the molar volume and heat capacity differences, respectively, along the coexistence curve. Taking the derivative of equation (\[Lg\]) with respect to $T$, we obtain $$\label{dL} \frac{{\mathrm{d}}L}{{\mathrm{d}}T} = \Delta s + T \left. \frac{\partial \Delta s}{\partial T} \right|_p + T \left. \frac{\partial \Delta s}{\partial p} \right|_T \frac{{\mathrm{d}}\bar p}{{\mathrm{d}}T}.$$ Though along the coexistence curve the entropy difference is a function of the temperature alone because the pressure along this curve is given by $p = \bar p(T)$, the partial derivatives in equation (\[dL\]) refer to the original dependence of the entropy on temperature and pressure; of course, after performing the derivatives, $p$ has to be replaced by $\bar p(T)$. Next we perform three substitutions in equation (\[dL\]). Firstly, we note that the molar heat capacity is given by $$c_p = T \left. \frac{\partial s}{\partial T} \right|_p.$$ Secondly, we use the Maxwell relation $$\left. \frac{\partial s}{\partial p} \right|_T = -\left. \frac{\partial v}{\partial T} \right|_p.$$ Thirdly, we apply the Clausius–Clapeyron equation $$\label{CC} \frac{{\mathrm{d}}\bar p}{{\mathrm{d}}T} = \frac{L}{T \Delta v}.$$ With these substitutions equation (\[dL\]) reads $$\label{ddL} \frac{{\mathrm{d}}L}{{\mathrm{d}}T} = \frac{L}{T} + \Delta c_p - \left. \frac{1}{\Delta v} \frac{\partial \Delta v}{\partial T} \right|_p L.$$ So far this equation is general for a phase transition of first order. Now we argue that, in the case of the vapor pressure over a liquid (or solid), the third term on the right-hand side of equation (\[ddL\]) cancels the first term to a very good approximation. To this end we consider the thermal expansion coefficient defined by $$\alpha = \frac{1}{v} \left. \frac{\partial v}{\partial T} \right|_p$$ which, for an ideal gas, is simply $1/T$. Then, using $v_l/v_g \ll 1$, we derive $$\left. 
\frac{1}{\Delta v} \frac{\partial \Delta v}{\partial T} \right|_p = \frac{\alpha_g - {\frac{\displaystyle v_l}{\displaystyle v_g}} \alpha_l}{1 - {\frac{\displaystyle v_l}{\displaystyle v_g}}} \approx \alpha_g \approx \frac{1}{T},$$ which proves the cancellation announced above.[^7] With this step we finally end up with Kirchhoff’s equation for the latent heat of vaporization: $$\label{kirchhoff} \frac{{\mathrm{d}}L}{{\mathrm{d}}T} \approx \Delta c_p.$$ [99]{} O. Sackur, *Die Bedeutung des elementaren Wirkungsquantums für die Gastheorie und die Berechnung der chemischen Konstanten*, Festschrift W. Nernst zu seinem 25jährigen Doktorjubiläum (Verlag Wilhelm Knapp, Halle a. d. S., 1912) H. Tetrode, *Die chemische Konstante der Gase und das elementare Wirkungsquantum*, Annalen der Physik 38 (1912) 434; Berichtigung *ibid.* 39 (1912) 255 O. Sackur, *Die universelle Bedeutung des sog. elementaren Wirkungsquantums*, Annalen der Physik 40 (1913) 67 L. Boltzmann, *Über die Beziehung zwischen dem zweiten Hauptsatze der mechanischen Wärmetheorie und der Wahrscheinlichkeitsrechnung respektive den Sätzen über das Wärmegleichgewicht*, Wiener Berichte 76 (1877) 373 M. Planck, *Ueber das Gesetz der Energieverteilung im Normalspectrum*, Annalen der Physik 4 (1901) 553 O. Sackur, *Die Anwendung der kinetischen Theorie der Gase auf chemische Probleme*, Annalen der Physik 36 (1911) 958 W. Nernst, *Über die Berechnung chemischer Gleichgewichte aus thermischen Messungen*, Nachrichten von der Königlichen Akademie der Wissenschaften zu Göttingen 1 (1906) 1 A. Sommerfeld, *Das Plancksche Wirkungsquantum und seine allgemeine Bedeutung für die Molekularphysik*, Physikalische Zeitschrift 12 (1911) 1057 J.K. Tuli, *Nuclear Wallet Cards*, National Nuclear Data Center, Brookhaven National Laboratory (2005)\ URL: http://www.nndc.bnl.gov/wallet/wallet05.pdf M.W. Zemanski, *Heat and thermodynamics* (McGraw-Hill Kogakusha Ltd., Tokyo, 1968) F. Reif, *Fundamentals of Statistical and Thermal Physics* (McGraw-Hill,New York, 1965) W. Nernst, *Der Energieinhalt fester Stoffe*, Annalen der Physik 36 (1911) 395 A. Einstein, *Die Plancksche Theorie der Strahlung und die Theorie der spezifischen Wärme*, Annalen der Physik 22 (1907) 180 P. Debye, *Zur Theorie der spezifischen Wärmen*, Annalen der Physik 39 (1912) 789 *CRC Handbook of Chemistry and Physics*, 92nd Edition, 2011–2012 (Taylor & Francis, 2012), p. 6-179 P.J. Mohr, B.N. Taylor and D.B. Newell (2011), *The 2010 CODATA Recommended Values of the Fundamental Physical Constants*\ URL: http://physics.nist.gov/cuu/Constants/index.html J.D. Cox, D.D. Wagman and V.A. Medvedev, *CODATA Key Values for Thermodynamics*, Hemisphere Publishing Corp. (New York, 1989)\ URL: http://www.codata.org/resources/databases/key1.html *CRC Handbook of Chemistry and Physics*, 92nd Edition, 2011–2012 (Taylor & Francis, 2012), p. 6-148 R.H. Busey and W.F. Giauque, *The heat capacity of mercury from $15$ to $360^\circ\,\mbox{K}$. Thermodynamic properties of solid, liquid and gas. Heat of fusion and vaporization*, J. Am. Chem. Soc. 75 (1953) 806 J. Mehra and H. Rechenberg, *The Historical Development of Quantum Theory*, Vol. 1, *The Quantum theory of Planck, Einstein, Bohr and Sommerfeld: Its Foundations and the Rise of Its Difficulties (1900–1925)* (Springer-Verlag, New York, 1982) S.J. Blundell and K.M. Blundell, *Concepts in Thermal Physics* (Oxford University Press, Oxford, New York, 2006) D. Dieks and W.J. Slooten, *Historic papers in physics – The case of Hugo Martin Tetrode, 1895–1931*, Czech. J. 
Phys. B 36 (1986) 39 [^1]: E-mail: walter.grimus@univie.ac.at [^2]: Actually, from Tetrode’s equations (12) and (13) we would rather deduce $z \approx 0.02$. [^3]: These manipulations introduce an ambiguity in the integration boundaries: In ${\mathrm{d}}\varepsilon_k$ the integration is from zero to infinity, while in ${\mathrm{d}}p_k$ Sackur integrates from minus infinity to plus infinity. [^4]: In Tetrode’s formula we set $z=1$. [^5]: Here action has the usual meaning of the time integral over the Lagrangian. [^6]: Tetrode published a total of six papers [@dieks]. [^7]: Note that for a liquid far below the critical point usually not only $v_l \ll v_g$ holds but also $\alpha_l \ll \alpha_g$.